In order to register the Kinects with the KUKA robot, which is fixed on an optical table, I use three colour markers whose positions are known within the KUKA coordinate system.
Here is a snapshot of the markers:
Figure 1 Three colour markers with three different colours (R, G, B)
As shown in Figure 1, the three markers are painted red, green and blue respectively. These markers are read as an RGB point cloud from the Kinect after combining the depth image with the colour image. However, similar colours described by RGB components are not easy to distinguish from each other when the environment illumination varies from time to time, which makes it difficult to choose thresholds on the colour components for point cloud filtering.
To address the above weakness, instead of using the RGB colour space, we can use the HSV (Hue, Saturation, Value) space for colour representation. The following figure shows how the hue, saturation and value components vary; their ranges are:
Hue: 0 - 360 (°)
Saturation: 0 - 100 (%)
Value: 0 - 100 (%)
Figure 2 HSV colour space can be visualised as a single cone [1]
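In code, PCL provides a direct conversion from an RGB cloud to an HSV one. A minimal sketch (rgb_cloud_ is a placeholder name for the registered colour cloud from the Kinect; HSV_map_ is the cloud used in the conditions below):

#include <pcl/point_types.h>
#include <pcl/point_types_conversion.h>  // pcl::PointCloudXYZRGBtoXYZHSV

pcl::PointCloud<pcl::PointXYZRGB>::Ptr rgb_cloud_(new pcl::PointCloud<pcl::PointXYZRGB>);
pcl::PointCloud<pcl::PointXYZHSV>::Ptr HSV_map_(new pcl::PointCloud<pcl::PointXYZHSV>);

// Convert every point to HSV. In pcl::PointXYZHSV, h is in degrees [0, 360)
// and s, v are in [0, 1].
pcl::PointCloudXYZRGBtoXYZHSV(*rgb_cloud_, *HSV_map_);
// Note: some older PCL versions do not copy the x/y/z fields in this
// conversion, so they may need to be copied over manually.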
For my markers, I used the following conditions to extract them:
RED:
if (HSV_map_->points[i].s <= 0.2 || HSV_map_->points[i].v <= 0.2)
// Not target colour
if (HSV_map_->points[i].s <= 0.4 && HSV_map_->points[i].v <= 0.4)
// Not target colour
if (HSV_map_->points[i].h >= 350 || HSV_map_->points[i].h < 10)
// Can be target colour
GREEN:
if (HSV_map_->points[i].s <= 0.3 || HSV_map_->points[i].v <= 0.3)
// Not target colour
if (HSV_map_->points[i].h >= 100 && HSV_map_->points[i].h < 140)
// Can be target colour
BLUE:
if (HSV_map_->points[i].s <= 0.3 || HSV_map_->points[i].v <= 0.3)
// Not target colour
if (HSV_map_->points[i].h >= 220 && HSV_map_->points[i].h < 260)
// Can be target colour
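Put together, the red-marker test can be wrapped in a small helper. This is only a sketch (the function name and the index-based output are my own choices); the green and blue versions differ only in the tests used:

#include <vector>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>

// Collect the indices of points that can be the red marker,
// using the thresholds listed above.
std::vector<int> extractRedCandidates(
    const pcl::PointCloud<pcl::PointXYZHSV>::ConstPtr& HSV_map_)
{
  std::vector<int> indices;
  for (std::size_t i = 0; i < HSV_map_->points.size(); ++i)
  {
    const pcl::PointXYZHSV& p = HSV_map_->points[i];
    if (p.s <= 0.2 || p.v <= 0.2)
      continue;  // too pale or too dark to be the target colour
    if (p.s <= 0.4 && p.v <= 0.4)
      continue;  // still too weak a colour
    if (p.h >= 350 || p.h < 10)  // red hue wraps around 0 degrees
      indices.push_back(static_cast<int>(i));
  }
  return indices;
}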
After the potential target-colour points are extracted, we need to filter out the outliers caused by noise. Here we make the assumption that the marker is the largest object (the one with the most points) of that specific colour within the field of view. Hence, we can use the Euclidean Cluster Extraction algorithm implemented in PCL (link) to get the largest cluster, which is hopefully our marker. Since the marker has a circular shape, the mean of the extracted points can be treated as the centre of the marker.
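A sketch of this step, assuming filtered_cloud is a pcl::PointCloud<pcl::PointXYZHSV>::Ptr holding the colour-filtered points (the tolerance and size parameters are illustrative, not the values I used):

#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>
#include <pcl/common/centroid.h>

// Cluster the colour-filtered points; the largest cluster is assumed
// to be the marker.
pcl::search::KdTree<pcl::PointXYZHSV>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZHSV>);
tree->setInputCloud(filtered_cloud);

std::vector<pcl::PointIndices> clusters;
pcl::EuclideanClusterExtraction<pcl::PointXYZHSV> ec;
ec.setClusterTolerance(0.01);  // 1 cm neighbour distance (illustrative)
ec.setMinClusterSize(50);      // reject tiny noise clusters (illustrative)
ec.setSearchMethod(tree);
ec.setInputCloud(filtered_cloud);
ec.extract(clusters);          // PCL returns the clusters sorted largest-first

// The mean of the largest cluster is taken as the marker centre.
Eigen::Vector4f centre;
if (!clusters.empty())
  pcl::compute3DCentroid(*filtered_cloud, clusters[0], centre);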
Based on the calculations so far, we can obtain the positions of the three points in 3D space in both coordinate systems. The transformation (rotation + translation only) between the two coordinate systems can then be calculated using the PCL function pcl::registration::TransformationEstimationSVD::estimateRigidTransformation (link).
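As a sketch, assuming kinect_points and kuka_points hold the three marker centres in the same order (red, green, blue):

#include <pcl/point_types.h>
#include <pcl/registration/transformation_estimation_svd.h>

pcl::PointCloud<pcl::PointXYZ> kinect_points;  // centres measured by the Kinect
pcl::PointCloud<pcl::PointXYZ> kuka_points;    // known positions in the KUKA frame

// Least-squares rigid transformation (rotation + translation) via SVD,
// mapping Kinect-frame points into the KUKA frame.
pcl::registration::TransformationEstimationSVD<pcl::PointXYZ, pcl::PointXYZ> svd;
Eigen::Matrix4f kinect_to_kuka;
svd.estimateRigidTransformation(kinect_points, kuka_points, kinect_to_kuka);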
Results:
Experiment 1: Three markers are placed on a flat table. The first coordinate system is simply the Kinect coordinate system. The origin of the second coordinate system is defined as the centre of the red marker, with the red-green line as the x-axis and the red-blue line as the y-axis. Since a right-handed coordinate system was used, the z-axis of the second system points upward out of the table. The lengths of the red-green and red-blue lines are both 30 cm, measured by ruler.
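As a sketch of how such a frame can be built with Eigen from the three measured marker centres r, g and b (re-orthogonalising the y-axis in case the two measured lines are not exactly perpendicular):

#include <Eigen/Dense>

// r, g, b are Eigen::Vector3f centres of the red, green and blue markers.
Eigen::Vector3f x = (g - r).normalized();         // red-green line is the x-axis
Eigen::Vector3f y_raw = (b - r).normalized();     // red-blue line is roughly the y-axis
Eigen::Vector3f z = x.cross(y_raw).normalized();  // right-handed z-axis, out of the table
Eigen::Vector3f y = z.cross(x);                   // re-orthogonalised y-axis

// Homogeneous transform from the marker frame to the Kinect frame:
// the columns are the axes and the translation is the red marker centre.
Eigen::Matrix4f marker_to_kinect = Eigen::Matrix4f::Identity();
marker_to_kinect.block<3,1>(0,0) = x;
marker_to_kinect.block<3,1>(0,1) = y;
marker_to_kinect.block<3,1>(0,2) = z;
marker_to_kinect.block<3,1>(0,3) = r;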
Figure 3 Result of experiment 1
From Figure 3, we can see that the Kinect coordinate system is almost perfectly aligned with the coordinate system we defined ourselves.
Experiment 2: In this experiment, I tried to align two Kinects using this method. The process is basically identical, except that the extra Kinect introduces one more round of colour detection and extraction.
From Figure 4, we can see that the result is quite good, as the top side of the box is aligned without any visible error.
Figure 4 Animation that shows good alignment of point clouds from two Kinects
Reference:
[1] Microsoft Dev Centre - Color: link