Kinect Point Cloud Registration Method Based on Epipolar and Point-to-Plane Constraints
Abstract: As a lightweight handheld sensor, Kinect is a flexible and efficient tool for indoor scene recovery and model reconstruction. Most existing reconstruction methods rely on either the color images or the depth images alone and rarely combine the two, so they do not make full use of the Kinect data and are neither robust nor accurate enough in many scenes. To address this, this paper proposes a point cloud registration algorithm that couples the color and depth images for indoor model reconstruction; it consists of pairwise registration of adjacent frames followed by global optimization. With the Kinect accurately calibrated, the epipolar constraints formed by corresponding points matched across the color image sequence are combined with the point-to-plane constraints of iterative closest point (ICP) registration of the depth images, which improves the accuracy and robustness of adjacent-frame registration. Because frame-by-frame ICP registration inevitably accumulates error, a coplanarity constraint on points tracked through four adjacent frames is then applied to globally optimize the Kinect positions and orientations and refine the reconstructed model. Model and indoor scene experiments show that the proposed method remains robust even in scenes where KinectFusion fails at tracking and modeling, and that the point cloud registration and modeling accuracy accords with the observation accuracy of Kinect.
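The paper's formulas are not part of this excerpt. As a minimal sketch, the two adjacent-frame constraints described above could be combined into a single weighted least-squares objective as follows; the weight λ, the exact residual forms, and the symbol names are assumptions for illustration, not the paper's notation:

```latex
% Sketch of a combined adjacent-frame registration objective:
% point-to-plane ICP residuals on the depth data plus epipolar
% residuals on the matched color-image points (weight \lambda assumed).
\[
E(\mathbf{R},\mathbf{t}) =
\sum_{i}\Big(\big(\mathbf{R}\mathbf{p}_i+\mathbf{t}-\mathbf{q}_i\big)\cdot\mathbf{n}_i\Big)^{2}
+ \lambda\sum_{j}\big(\tilde{\mathbf{x}}_j'^{\top}\mathbf{F}\,\tilde{\mathbf{x}}_j\big)^{2},
\qquad
\mathbf{F}=\mathbf{K}'^{-\top}[\mathbf{t}]_{\times}\mathbf{R}\,\mathbf{K}^{-1}
\]
% For the global step, four estimates P^(1)..P^(4) of a point tracked
% through four adjacent frames are coplanar exactly when
\[
\det\!\big[\;\mathbf{P}^{(2)}-\mathbf{P}^{(1)}\;\;
\mathbf{P}^{(3)}-\mathbf{P}^{(1)}\;\;
\mathbf{P}^{(4)}-\mathbf{P}^{(1)}\;\big]=0 .
\]
```

Here p_i and q_i are corresponding 3D points from adjacent depth frames, n_i is the surface normal at q_i, x_j and x_j' are homogeneous matched pixels in the two color frames, and K, K' are the calibrated color-camera intrinsics; the fundamental matrix F induced by (R, t) is the standard calibrated-camera form. The determinant test is one common way to express a four-point coplanarity condition; whether it matches the paper's exact formulation cannot be confirmed from this excerpt.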
Keywords: Kinect; epipolar constraints; point-to-plane constraints; four-points coplanar
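On the depth side, the point-to-plane constraint is typically solved with a small-angle linearization of the rotation. The sketch below shows one standard Gauss–Newton step of point-to-plane ICP under that linearization; it is a self-contained illustration under those common assumptions, not the paper's implementation, and all names are illustrative:

```python
import numpy as np

def point_to_plane_icp_step(src, dst, normals):
    """One Gauss-Newton step of point-to-plane ICP.

    src     : (N, 3) source points, already matched to dst
    dst     : (N, 3) corresponding destination points
    normals : (N, 3) unit surface normals at dst
    Returns a 4x4 rigid transform reducing the error
    sum(((R @ p + t - q) . n)^2), using the small-angle
    linearization R ~ I + [w]_x.  Needs at least 6
    well-distributed correspondences.
    """
    # Linearized residual: (p_i x n_i) . w + n_i . t = n_i . (q_i - p_i),
    # i.e. a linear system A x = b with x = (wx, wy, wz, tx, ty, tz).
    c = np.cross(src, normals)                      # rows: p_i x n_i
    A = np.hstack([c, normals])                     # (N, 6)
    b = np.einsum('ij,ij->i', normals, dst - src)   # n_i . (q_i - p_i)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)

    # Re-orthonormalize the linearized rotation via Rodrigues' formula.
    w, t = x[:3], x[3:]
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        R = np.eye(3)
    else:
        k = w / theta
        K = np.array([[0, -k[2], k[1]],
                      [k[2], 0, -k[0]],
                      [-k[1], k[0], 0]])
        R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T
```

In practice this step is iterated: apply the returned transform to the source cloud, re-establish nearest-neighbor correspondences, and repeat until the error stops decreasing.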
Table 1  MSE of target points

Parameter                        P1      P2      P3
Number of consecutive images     9       10      8
Mean depth/m                     1.10    1.51    1.42
Coordinate MSE/mm                11.4    15.2    14.8