28 July 2013

Circular trajectory based on Start and End point

Given the start and end points of a trajectory, we want to create a circular trajectory that avoids the body and provides a smooth path for robot navigation.


Middle Point Generation:

A circle is uniquely determined by three non-collinear points, so we need to calculate a middle point between the start and end points. As shown in Figure 1, we have two points P1 and P4 on a plane assumed to be parallel to the table. To find the middle point, we artificially create two points P2 and P3 that have the same x-axis offset from P4 and P1 respectively. Next, we connect P1 with P2 and P4 with P3, giving two intersecting lines. From the intersection point, we estimate the plane normal and create our middle point by extending the intersection point in the normal direction. Note that the normal must point upward so that the trajectory crosses over the body rather than going under the table.
Figure 1: Schematic of middle point generation based on two end points
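
A minimal sketch of this idea (my own illustration, not the original code; the Eigen types, the upward normal up and the lift height are assumed parameters). It lifts the chord midpoint along the upward normal, which is the simplest version of the construction above:

 #include <Eigen/Dense>

 // Create a middle point above the chord P1-P4 so the arc clears the
 // body. 'up' is the table plane's upward normal and 'height' controls
 // how far the arc bulges; both are application-dependent assumptions.
 Eigen::Vector3f makeMiddlePoint(const Eigen::Vector3f &p1,
                                 const Eigen::Vector3f &p4,
                                 const Eigen::Vector3f &up,
                                 float height)
 {
  Eigen::Vector3f mid = 0.5f * (p1 + p4); // point on the chord
  return mid + height * up.normalized();  // extend in the normal direction
 }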
Circle calculation (centre and radius):
Let's label the start, middle and end points as 0, 1 and 2. Here is a snippet for the circle calculation:
 /*
  Circle calculation algorithm refer to:
  Implementation of arc interpolation in a robot by using three arbitrary points (Chinese), Ye Bosheng, 2007
 */
 float a00, a01, a02, a10, a11, a12, a20, a21, a22, b0, b1, b2;

 // Matrix A (3x3)
 a00 = 2 * (Cstart_[0] - Cmiddle_[0]); // 2 * (X0 - X1)
 a01 = 2 * (Cstart_[1] - Cmiddle_[1]); // 2 * (Y0 - Y1)
 a02 = 2 * (Cstart_[2] - Cmiddle_[2]); // 2 * (Z0 - Z1)

 a10 = 2 * (Cmiddle_[0] - Cend_[0]);  // 2 * (X1 - X2)
 a11 = 2 * (Cmiddle_[1] - Cend_[1]);  // 2 * (Y1 - Y2)
 a12 = 2 * (Cmiddle_[2] - Cend_[2]);  // 2 * (Z1 - Z2)
 
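 // Third row: constrain the centre to lie in the plane of the three
 // points. These coefficients are a scaled version of the plane
 // normal (P0 - P1) x (P1 - P2), following the referenced paper.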
 a20 = a02 * (a02 * a11 - a01 * a12) / 8;
 a21 = a02 * (a00 * a12 - a02 * a10) / 8;
 a22 = - (a00 * (a02 * a11 - a01 * a12) + a01 * (a00 * a12 - a02 * a10)) / 8;

 // Matrix B (3x1)
 // b0 = (X0^2 + Y0^2 + Z0^2) - (X1^2 + Y1^2 + Z1^2)
 b0 = (Cstart_[0]*Cstart_[0] + Cstart_[1]*Cstart_[1] + Cstart_[2]*Cstart_[2])
  - (Cmiddle_[0]*Cmiddle_[0] + Cmiddle_[1]*Cmiddle_[1] + Cmiddle_[2]*Cmiddle_[2]);

 // b1 = (X1^2 + Y1^2 + Z1^2) - (X2^2 + Y2^2 + Z2^2)
 b1 = (Cmiddle_[0]*Cmiddle_[0] + Cmiddle_[1]*Cmiddle_[1] + Cmiddle_[2]*Cmiddle_[2])
  - (Cend_[0]*Cend_[0] + Cend_[1]*Cend_[1] + Cend_[2]*Cend_[2]);

 // b2 = a20*X0 + a21*Y0 + a22*Z0
 b2 = a20 * Cstart_[0] + a21 * Cstart_[1] + a22 * Cstart_[2];

 Eigen::Matrix3f A;
 Eigen::Vector3f B;

 A << a00, a01, a02, 
   a10, a11, a12,
   a20, a21, a22; 
 B << b0, b1, b2;

 /* 
  Centre of circle [Xc Yc Zc] can be calculated as
  A * [Xc Yc Zc]' = B;
  For Eigen library solving Ax = b, refer to:
  http://eigen.tuxfamily.org/dox/TutorialLinearAlgebra.html
 */
 Ccentre_ = A.colPivHouseholderQr().solve(B);
 
 /*
  Radius of circle can be calculated as:
  R = [(X0 - Xc)^2 + (Y0 - Yc)^2 + (Z0 - Zc)^2]^0.5
 */
 float temp_equa;
 temp_equa = (Cstart_[0] - Ccentre_[0]) * (Cstart_[0] - Ccentre_[0]);
 temp_equa += (Cstart_[1] - Ccentre_[1]) * (Cstart_[1] - Ccentre_[1]);
 temp_equa += (Cstart_[2] - Ccentre_[2]) * (Cstart_[2] - Ccentre_[2]);
 radius_ = sqrt(temp_equa);
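
As a quick sanity check (my addition, not part of the original snippet; assumes <cassert> and <cmath> are included), the middle and end points should lie at the same distance from the solved centre:

 // Sanity check: P1 and P2 must also be at distance radius_ from the
 // centre, up to floating-point error.
 float dm = sqrt((Cmiddle_[0] - Ccentre_[0]) * (Cmiddle_[0] - Ccentre_[0])
   + (Cmiddle_[1] - Ccentre_[1]) * (Cmiddle_[1] - Ccentre_[1])
   + (Cmiddle_[2] - Ccentre_[2]) * (Cmiddle_[2] - Ccentre_[2]));
 float de = sqrt((Cend_[0] - Ccentre_[0]) * (Cend_[0] - Ccentre_[0])
   + (Cend_[1] - Ccentre_[1]) * (Cend_[1] - Ccentre_[1])
   + (Cend_[2] - Ccentre_[2]) * (Cend_[2] - Ccentre_[2]));
 assert(fabs(dm - radius_) < 1e-4f && fabs(de - radius_) < 1e-4f);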

Trajectory interpolation:
Here is the snippet for determining the central angle:
 /*
  normal of circle plane
  u = (Y1 - Y0)(Z2 - Z1) - (Z1 - Z0)(Y2 - Y1)
  v = (Z1 - Z0)(X2 - X1) - (X1 - X0)(Z2 - Z1)
  w = (X1 - X0)(Y2 - Y1) - (Y1 - Y0)(X2 - X1)
 */
 Cnu_ = (Cmiddle_[1] - Cstart_[1]) * (Cend_[2] - Cmiddle_[2]) - (Cmiddle_[2] - Cstart_[2]) * (Cend_[1] - Cmiddle_[1]);
 Cnv_ = (Cmiddle_[2] - Cstart_[2]) * (Cend_[0] - Cmiddle_[0]) - (Cmiddle_[0] - Cstart_[0]) * (Cend_[2] - Cmiddle_[2]);
 Cnw_ = (Cmiddle_[0] - Cstart_[0]) * (Cend_[1] - Cmiddle_[1]) - (Cmiddle_[1] - Cstart_[1]) * (Cend_[0] - Cmiddle_[0]);

 // normal vector to determine central angle
 float u1, v1, w1;
 u1 = (Cstart_[1] - Ccentre_[1]) * (Cend_[2] - Cstart_[2]) - (Cstart_[2] - Ccentre_[2]) * (Cend_[1] - Cstart_[1]);
 v1 = (Cstart_[2] - Ccentre_[2]) * (Cend_[0] - Cstart_[0]) - (Cstart_[0] - Ccentre_[0]) * (Cend_[2] - Cstart_[2]);
 w1 = (Cstart_[0] - Ccentre_[0]) * (Cend_[1] - Cstart_[1]) - (Cstart_[1] - Ccentre_[1]) * (Cend_[0] - Cstart_[0]);

 float H, temp;
 H = Cnu_ * u1 + Cnv_ * v1 + Cnw_ * w1;
 temp = (Cend_[0] - Cstart_[0]) * (Cend_[0] - Cstart_[0]) + 
   (Cend_[1] - Cstart_[1]) * (Cend_[1] - Cstart_[1]) +
   (Cend_[2] - Cstart_[2]) * (Cend_[2] - Cstart_[2]);
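 // H is the dot product of the circle-plane normal (oriented by the
 // traversal direction P0 -> P1 -> P2) and the normal of the plane
 // through P0, the centre and P2. Its sign tells whether the arc
 // through the middle point is the minor arc (theta <= PI) or the
 // major arc (theta > PI).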
 if (H >= 0) // theta <= PI
 {
  Ctheta = 2 * asin(sqrt(temp) / (2 * radius_));
 } 
 else  // theta > PI
 {
  Ctheta = 2 * M_PI - 2 * asin(sqrt(temp) / (2 * radius_));
 }

Here is the snippet for calculating the number of points to be interpolated:

int CircleGenerator::CalcNbOfPts(const float &step)
{
 float temp;
 temp = 1 + (step / radius_) * (step / radius_);
 paramG_ = 1 / sqrt(temp);

 temp = Cnu_*Cnu_ + Cnv_*Cnv_ + Cnw_*Cnw_;
 paramE_ = step / (radius_ * sqrt(temp));

 float sigma;
 sigma = step / radius_;

 int N;
 N = (int)(Ctheta / sigma) + 1;

 return (N);
}
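
A hypothetical call site (the 1 cm step is my example value, not from the post):

 int numOfPt = circle_->CalcNbOfPts(0.01f); // interpolate every ~1 cm along the arc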

Here is the snippet for calculating the next point in the trajectory:

 traj->push_back(vector2pcl(circle_->Cstart_));
 pre = circle_->Cstart_;
 for (int i = 1; i < numOfPt; i++)
 {
  /*
   Equation 8:
   mi = v(Zi - Zc) - w(Yi - Yc)
   ni = w(Xi - Xc) - u(Zi - Zc)
   li = u(Yi - Yc) - v(Xi - Xc)
  */
  float mi, ni, li;
  mi = circle_->Cnv_ * (pre[2] - circle_->Ccentre_[2]) - circle_->Cnw_ * (pre[1] - circle_->Ccentre_[1]);
  ni = circle_->Cnw_ * (pre[0] - circle_->Ccentre_[0]) - circle_->Cnu_ * (pre[2] - circle_->Ccentre_[2]);
  li = circle_->Cnu_ * (pre[1] - circle_->Ccentre_[1]) - circle_->Cnv_ * (pre[0] - circle_->Ccentre_[0]);
  
  /*
   X(i+1) = Xc + G * (X(i) + E*mi - Xc)
   Y(i+1) = Yc + G * (Y(i) + E*ni - Yc)
   Z(i+1) = Zc + G * (Z(i) + E*li - Zc)
  */
  curr[0] = circle_->Ccentre_[0] + circle_->paramG_ * (pre[0] + circle_->paramE_ * mi - circle_->Ccentre_[0]);
  curr[1] = circle_->Ccentre_[1] + circle_->paramG_ * (pre[1] + circle_->paramE_ * ni - circle_->Ccentre_[1]);
  curr[2] = circle_->Ccentre_[2] + circle_->paramG_ * (pre[2] + circle_->paramE_ * li - circle_->Ccentre_[2]);
  traj->push_back(vector2pcl(curr));
  pre = curr;
 }
 traj->push_back(vector2pcl(circle_->Cend_));







23 July 2013

Point cloud normal estimation and Correspondence based on Normal shooting

Source cloud: shape model
Target cloud: point cloud from Kinect
Normal cloud: a cloud of normals for the shape model cloud

Step 1: Estimate normals based on the point cloud itself
Step 2: Orient the normal directions based on the viewpoint position
Step 3: Find the correspondence in the target cloud for each point in the source cloud (shape model)
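
A minimal PCL sketch of these three steps (my own reconstruction; the post shows no code, and the k values and sensor viewpoint are example assumptions; method names follow the current PCL API):

 #include <pcl/point_types.h>
 #include <pcl/features/normal_estimation.h>
 #include <pcl/registration/correspondence_estimation_normal_shooting.h>

 pcl::PointCloud<pcl::PointXYZ>::Ptr source(new pcl::PointCloud<pcl::PointXYZ>); // shape model
 pcl::PointCloud<pcl::PointXYZ>::Ptr target(new pcl::PointCloud<pcl::PointXYZ>); // Kinect cloud
 // ... fill source and target ...

 // Steps 1 and 2: estimate normals and orient them using the viewpoint.
 pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
 ne.setInputCloud(source);
 ne.setViewPoint(0.0f, 0.0f, 0.0f); // sensor position (example)
 ne.setKSearch(10);                 // example neighbourhood size
 pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
 ne.compute(*normals);

 // Step 3: shoot each source normal at the target cloud to find the
 // corresponding point.
 pcl::registration::CorrespondenceEstimationNormalShooting
     <pcl::PointXYZ, pcl::PointXYZ, pcl::Normal> ce;
 ce.setInputSource(source);
 ce.setSourceNormals(normals);
 ce.setInputTarget(target);
 ce.setKSearch(10);                 // candidates checked per normal
 pcl::Correspondences correspondences;
 ce.determineCorrespondences(correspondences);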

Results:
Figure 1: Red points: shape model cloud; blue lines: normals starting from the points; coloured points: cloud from Kinect

Figure 2: Side view of Figure 1.

Figure 3: Top view of Figure 1.
Figure 4: Green points are the correspondences obtained from normal shooting






19 July 2013

Calculating Euler angles from a rotation matrix



For rotation with inverse order Rzxy, refer to here.
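
Since the derivation itself was in a figure, here is a minimal sketch for the common Rz*Ry*Rx (yaw-pitch-roll) convention; for other orders such as the Rzxy mentioned above, the element indices change accordingly:

 #include <cmath>

 // Extract Euler angles from a 3x3 rotation matrix r[row][col],
 // assuming R = Rz(yaw) * Ry(pitch) * Rx(roll). Gimbal lock
 // (cos(pitch) == 0) is not handled in this sketch.
 void eulerFromRotation(const float r[3][3],
                        float &roll, float &pitch, float &yaw)
 {
  pitch = -std::asin(r[2][0]);          // r20 = -sin(pitch)
  roll  = std::atan2(r[2][1], r[2][2]); // r21 = cos(pitch)sin(roll), r22 = cos(pitch)cos(roll)
  yaw   = std::atan2(r[1][0], r[0][0]); // r10 = sin(yaw)cos(pitch), r00 = cos(yaw)cos(pitch)
 }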

Finding transformation between corresponding 3D points

With the shape model registered to the point cloud, we need to align it with the mean model so that they are in the same reference frame and comparable.
The transformation between the shape model and the mean model includes translation, scaling and rotation.

Translation:
In order to make the two shapes comparable, I translate both the registered shape model and the mean model to the origin by subtracting their centroids.

Scaling:
A simple way is to normalize the shape itself, which results in a scaled shape of size 1. The size of the shape is calculated with OpenCV's cv::norm(), which computes an absolute array norm. For NORM_L2 this is:

 ||src|| = sqrt( sum_i src(i)^2 )

In my program I use NORM_L2.
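
For example, a minimal OpenCV sketch (treating shape as an N x 3 cv::Mat of float coordinates is my assumption):

 #include <opencv2/core/core.hpp>

 // Scale the (already centred) shape so that its L2 norm is 1.
 cv::Mat shape; // N x 3 matrix of centred point coordinates
 double size = cv::norm(shape, cv::NORM_L2);
 shape /= size;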

Rotation:
Now the registered shape model and the mean shape are almost aligned except for the rotation. As the correspondences between the two shapes are known, the rotation matrix can be solved using SVD (singular value decomposition), which is introduced here.
Firstly, we compose the matrix H. As the centroids of the two shapes have already been subtracted in the translation part, we can use the shapes directly. Writing PA and PB as 3xN matrices of corresponding points:

 H = PA * PB^T

Next, we take the SVD of H:

 [U, S, V] = SVD(H)

Finally, the rotation matrix from shape A to shape B can be calculated from U and V:

 R = V * U^T

If we want the rotation from shape B to A, just swap the shapes and use H = PB * PA^T instead of H = PA * PB^T.
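
A minimal Eigen sketch of this step (my own illustration based on reference [1]; the 3xN matrix layout and the reflection guard are additions not spelled out in the post):

 #include <Eigen/Dense>

 // Rotation from shape A to shape B via SVD (Kabsch algorithm).
 // PA, PB: 3 x N matrices of centred corresponding points.
 Eigen::Matrix3f rotationAtoB(const Eigen::Matrix3Xf &PA,
                              const Eigen::Matrix3Xf &PB)
 {
  Eigen::Matrix3f H = PA * PB.transpose();
  Eigen::JacobiSVD<Eigen::Matrix3f> svd(H, Eigen::ComputeFullU | Eigen::ComputeFullV);
  Eigen::Matrix3f R = svd.matrixV() * svd.matrixU().transpose();
  // Guard against a reflection (det(R) == -1).
  if (R.determinant() < 0)
  {
   Eigen::Matrix3f V = svd.matrixV();
   V.col(2) *= -1;
   R = V * svd.matrixU().transpose();
  }
  return R;
 }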


Reference:
[1] Finding optimal rotation and translation between corresponding 3D points
[2] OpenCV documentation

16 July 2013

Registration using the Normal Distributions Transform

In this post, I experiment with the PCL Normal Distributions Transform (NDT) algorithm for registering two clouds.

Experiment description:

Two Kinects are placed opposite each other, both facing a box of size ~18x40 cm on a table. One Kinect is called the target and the other the source. Our aim is therefore to register the source cloud with the target cloud, or in other words, to find the transformation that best aligns the source cloud to the target cloud.
Both source and target clouds have been cropped to a region of interest for faster processing and less background noise. In addition, the source cloud (coloured green) has been down-sampled to 1/10 of its original density.

As mentioned in the PCL official tutorial (link), these parameters are set according to the application:

The Transformation Epsilon (TE) parameter defines the minimum allowable incremental change of the transformation vector [x, y, z, roll, pitch, yaw], in meters and radians respectively. Once the incremental change dips below this threshold, the alignment terminates.

The Step Size (SS) parameter defines the maximum step length allowed by the Moré-Thuente line search. This line search algorithm determines the best step length below this maximum value, shrinking the step length as it nears the optimal solution. Larger maximum step lengths can clear greater distances in fewer iterations but run the risk of overshooting and ending up in an undesirable local minimum.


The Resolution Parameter (RP) defines the voxel resolution of the internal NDT grid structure. This structure is easily searchable, and each voxel contains the statistical data (mean, covariance, etc.) associated with the points it contains.
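
For reference, a minimal configuration sketch following the PCL tutorial (the cloud variables and initial guess are assumptions, and the 35-iteration cap is an example value; the TE/SS/RP values shown match the first experiment below):

 #include <pcl/point_types.h>
 #include <pcl/registration/ndt.h>

 pcl::PointCloud<pcl::PointXYZ>::Ptr source_cloud(new pcl::PointCloud<pcl::PointXYZ>);
 pcl::PointCloud<pcl::PointXYZ>::Ptr target_cloud(new pcl::PointCloud<pcl::PointXYZ>);
 // ... load, crop and down-sample the clouds ...

 pcl::NormalDistributionsTransform<pcl::PointXYZ, pcl::PointXYZ> ndt;
 ndt.setTransformationEpsilon(0.01); // TE, meters/radians
 ndt.setStepSize(0.01);              // SS, Moré-Thuente maximum step
 ndt.setResolution(0.1f);            // RP, voxel side length in meters
 ndt.setMaximumIterations(35);       // example cap on iterations
 ndt.setInputSource(source_cloud);
 ndt.setInputTarget(target_cloud);

 Eigen::Matrix4f init_guess = Eigen::Matrix4f::Identity(); // or a rough manual guess
 pcl::PointCloud<pcl::PointXYZ>::Ptr output(new pcl::PointCloud<pcl::PointXYZ>);
 ndt.align(*output, init_guess);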
I've adjusted these parameters individually and cross-compared the registration results.

TE: 0.01, SS: 0.01, RP: 0.1
TE: 0.01, SS: 0.01, RP: 0.2
TE: 0.01, SS: 0.01, RP: 0.3
TE: 0.01, SS: 0.05, RP: 0.1
TE: 0.01, SS: 0.05, RP: 0.3
From the above results we can conclude:
1) If the clouds join together rather than align, we can increase the RP in order to include more points in each voxel.
2) With a large SS the registration converges faster, while a small SS gives a finer alignment.

At the end of the experiment, I found that the algorithm is more robust when only regular objects are in the scene: the experiments above all include my head during registration. If I keep myself out of the scene, the registration is much better and faster for reasonable parameters. Here is the result without my head in the scene:
TE: 0.01, SS: 0.01, RP: 0.2



Reference
[1] Merten, H. (Ed.). (2008). The Three-Dimensional Normal-Distributions Transform.

02 July 2013

VOSM installation in VS2010 without CMake

This post is for my own reference only.

Step 1: Download vosm for windows (link)

Step 2: Unzip

Step 3: Open vosm.sln

Step 4: For all projects (comalgs; common; cvcommon; ensembletraining; featureextraction; integraltransform; smbuilding; smfitting; test_ensembletraining; test_featureextraction; test_integraltransform; test_smbuilding; test_smfitting),
configure the Boost and OpenCV lib and include directories for both DEBUG and RELEASE mode. Keep the additional directories under 【C/C++ - General - Additional Include Directories】 in RELEASE mode the same as in DEBUG mode.

Step 5: For projects comalgs; common; cvcommon; ensembletraining; featureextraction; integraltransform; smbuilding; smfitting, change the configuration type under 【Project Defaults - Configuration Type】 to Static library (.lib).

Step 6: For projects test_ensembletraining; test_featureextraction; test_integraltransform; test_smbuilding; test_smfitting, add the .lib file names of the OpenCV and Boost libraries under 【Linker - Input - Additional Dependencies】 in both DEBUG and RELEASE mode.
Boost Debug:
libboost_regex-vc100-mt-gd-1_49.lib
libboost_filesystem-vc100-mt-gd-1_49.lib
libboost_system-vc100-mt-gd-1_49.lib
Boost Release:
libboost_regex-vc100-mt-1_49.lib
libboost_filesystem-vc100-mt-1_49.lib
libboost_system-vc100-mt-1_49.lib

Step 7: Under 【Linker - General - Additional Library Directories】, input ../../Debug and ../../Release for DEBUG and RELEASE mode respectively.

Step 8: Compile all projects (modules then tests) in both Debug and Release mode.