In this project, an RGB-D camera was used to detect FRANKA and the chessboard. This part of the project allows us to fetch a single image from the camera livestream whenever it is needed. It utilises OpenCV and is interfaced with ROS. The RGB and depth information collected from each image frame is used both in the automatic calibration procedures and within the Perception module for board and game-state recognition.
The external RGB-D camera is connected over USB. To use OpenNI-compliant devices in ROS and start the camera driver, we launch the ROS package for the device:
roslaunch openni2_launch openni2.launch
This must be run on the client machine. Running the camera package on the host computer that runs the FRANKA control loop will cause the controller to crash.
The camera subscriber was initially written to use Python processes. This gave a significant speed boost in our main runtime loop; however, we later switched to a time-synchronised thread in order to simplify the code. Functionally, the two versions produce the same output. The full process-based version can be viewed below under the heading Camera Subscriber with Processes.
In the version without processes, a class was created which initialised the subscribers as members:
# RGB subscriber (topic name assumed from the openni2 defaults)
self.image_sub = Subscriber("/camera/rgb/image_raw", Image)
self.depth_sub = Subscriber("/camera/depth_registered/image_raw", Image)
self.tss = message_filters.ApproximateTimeSynchronizer(
    [self.image_sub, self.depth_sub], queue_size=10, slop=0.5)
self.tss.registerCallback(self.callback)
time.sleep(0.5)
The callback function of the class was then used in the synchronised thread to update the members of the class storing the latest RGB and depth images:
self.rgb_raw = img
self.depth_raw = depth
if self.debug:
    print("New images have been collected.")
This meant that during runtime, whenever the latest image was needed, it could be fetched from the class (after being converted into an OpenCV image):
cv_image = None
depth_image = None
while cv_image is None and depth_image is None:
    try:
        cv_image = self.bridge.imgmsg_to_cv2(self.rgb_raw, "bgr8")
    except CvBridgeError as e:
        print(e)
    try:
        depth_image_raw = self.bridge.imgmsg_to_cv2(self.depth_raw, "passthrough")
        depth_image = (255 * depth_image_raw).astype(np.uint8)
    except CvBridgeError as e:
        print(e)
if self.debug:
    print("Image has been converted.")
return cv_image, depth_image
You can download this simpler version of the camera subscriber.
Camera Subscriber with Processes
This version consists of two classes and a process function. The CameraFeed class creates a separate process running the main() function. This function starts up the ImageConverter class, which handles the subscribers and the conversion of images into OpenCV format. A queue connects the main() function running in the new process with the original CameraFeed class running in the parent process.
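The structure above can be sketched without ROS as follows. This is a simplified stand-in, not the project's real code: the frame payloads and timing are placeholders, and the real ImageConverter additionally wraps the ROS subscribers and CvBridge conversion.

```python
import multiprocessing as mp
import time
from queue import Empty


def main(rgb_q, depth_q):
    """Runs in the child process; stands in for the loop that feeds
    converted images into the queues."""
    frame_id = 0
    while True:
        rgb, depth = "rgb-%d" % frame_id, "depth-%d" % frame_id
        # Drop any stale frame first so each queue holds at most one image.
        for q, item in ((rgb_q, rgb), (depth_q, depth)):
            if q.full():
                try:
                    q.get_nowait()
                except Empty:
                    pass
            q.put(item)
        frame_id += 1
        time.sleep(0.05)


class CameraFeed:
    def __init__(self):
        # maxsize=1 so only the latest image pair is ever buffered
        self.rgb_q = mp.Queue(maxsize=1)
        self.depth_q = mp.Queue(maxsize=1)
        self.proc = mp.Process(target=main,
                               args=(self.rgb_q, self.depth_q),
                               daemon=True)
        self.proc.start()

    def get_frames(self):
        # Wait until the background process has produced something,
        # then hand the latest pair back to the caller.
        while self.rgb_q.empty() and self.depth_q.empty():
            time.sleep(0.1)
        return self.rgb_q.get(), self.depth_q.get()
```

A caller would then create one CameraFeed and call get_frames() whenever the most recent image pair is required.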
Multiprocessing is used to spawn a subprocess so that image collection runs in parallel with the main loop. First, the queues holding the RGB and depth images are initialised with maxsize set to 1, so that only one image can be held at a time.
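A bounded multiprocessing.Queue behaves as described: a second put() on a full maxsize=1 queue blocks until the old item is removed, which is why stale frames must be drained before inserting new ones. A minimal illustration (the frame strings are placeholders, and the sleeps only give the queue's feeder thread time to flush in this same-process demo):

```python
import time
from multiprocessing import Queue

q = Queue(maxsize=1)
q.put("frame-0")           # queue is now full
time.sleep(0.2)            # let the feeder thread flush the item
assert q.full()

# put() on a full queue would block, so drop the stale frame first
q.get_nowait()
q.put("frame-1")           # latest frame replaces the old one
time.sleep(0.2)

assert q.get() == "frame-1"
```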
The get_frames() method is used to retrieve the results from the queues and return them to the caller:
def get_frames(self):
    while self.rgb_q.empty() and self.depth_q.empty():
        time.sleep(0.1)
    # fetch the latest images from the queues
    rgb_frame = self.rgb_q.get()
    depth_frame = self.depth_q.get()
    return rgb_frame, depth_frame
In the background process, the callback function is called whenever a new image arrives from the subscriber, and attempts to add it to the queue for the parent process to collect. If the queue is already full, the stale image is removed first, ensuring there is only ever one image in the queue. This applies to both the RGB image queue and the depth image queue.
def callback(self, img, depth):
    cv_image = None
    depth_image = None
    try:
        cv_image = self.bridge.imgmsg_to_cv2(img, "bgr8")
    except CvBridgeError as e:
        print(e)
    try:
        depth_image_raw = self.bridge.imgmsg_to_cv2(depth, "passthrough")
        depth_image = (255 * depth_image_raw).astype(np.uint8)
    except CvBridgeError as e:
        print(e)
    if self.debug:
        print("Image has been converted.")
    if not self.rgb_q.empty() and not self.depth_q.empty():
        if self.debug:
            print("Queue has item in it.")
        try:
            if self.debug:
                print("Attempting to remove item from queue.")
            self.rgb_q.get(False)
            self.depth_q.get(False)
            if self.debug:
                print("Successfully removed images from queue.")
        except Exception:
            if self.debug:
                print("\033[91m" + "Exception Empty was raised. Could not remove from queue." + "\033[0m")
    if self.debug:
        print("Queue should be empty now, putting in images.")
    self.rgb_q.put(cv_image)
    self.depth_q.put(depth_image)
    if self.debug:
        print("Images are now in queue.")