To make it easier to work with multiple Kinects, I created a class called DepthSensor that can be used to retrieve OpenCV images directly. Figure 1 shows a UML class diagram for the DepthSensor class. Kinect initialisation is done in the method init(), whose input argument is an index indicating which Kinect is being initialised; it also handles the error case where the index exceeds the number of Kinects detected on the computer. Before each call to getColorImg() or getDepthImg(), processColorFrame() or processDepthFrame() needs to be called first. In the processColorFrame() and processDepthFrame() methods, I retrieve a frame from the corresponding stream and convert it into the OpenCV image format (IplImage).
Figure 1 UML diagram of the DepthSensor class
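As a rough sketch, the DepthSensor declaration looks something like the following. The public methods match the UML above; the headers and private members shown here are only illustrative and may differ from the actual implementation.

// Sketch of the DepthSensor interface (private members are assumptions).
#include <Windows.h>
#include <NuiApi.h>
#include <opencv/cv.h>

class DepthSensor
{
public:
    DepthSensor();
    ~DepthSensor();

    // Initialise the index-th Kinect; returns false if the index exceeds
    // the number of Kinects detected on the computer.
    bool init(int index);

    // Grab the next frame from the colour/depth stream and convert it to
    // an IplImage; call one of these before the matching getter.
    void processColorFrame();
    void processDepthFrame();

    // Return the most recently converted frames.
    IplImage& getColorImg();
    IplImage& getDepthImg();

private:
    INuiSensor* m_pSensor;       // handle to this particular Kinect (assumed)
    HANDLE      m_hColorStream;  // colour stream handle (assumed)
    HANDLE      m_hDepthStream;  // depth stream handle (assumed)
    IplImage*   m_pColorImg;     // colour frame as an OpenCV image (assumed)
    IplImage*   m_pDepthImg;     // depth frame as an OpenCV image (assumed)
};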
The main program can then be as simple as:
int main(int argc, char* argv[])
{
    IplImage *image1, *image2;
    DepthSensor kinect1, kinect2;

    // Initialise both Kinects; bail out if either is missing.
    if (!kinect1.init(1)) { NuiShutdown(); return 1; }
    if (!kinect2.init(2)) { NuiShutdown(); return 1; }

    // Press ESC to quit; otherwise grab and display new frames each loop.
    while (1)
    {
        if (cvWaitKey(30) == 27)
            break;
        else
        {
            kinect1.processDepthFrame();
            kinect2.processDepthFrame();
            kinect1.processColorFrame();
            kinect2.processColorFrame();

            image1 = &kinect1.getColorImg();
            image2 = &kinect2.getColorImg();
            cvShowImage("Color Kinect1", image1);
            cvShowImage("Color Kinect2", image2);

            image1 = &kinect1.getDepthImg();
            image2 = &kinect2.getDepthImg();
            cvShowImage("Depth Kinect1", image1);
            cvShowImage("Depth Kinect2", image2);
        }
    }

    NuiShutdown();
    return 0;
}
The results are shown below:
Figure 2 Two-Kinect colour/depth display with OpenCV
In the coming days, I will continue implementing the DepthSensor class so that it can also handle 3D point cloud data.