Kinect Point Cloud Registration Method Based on Epipolar and Point-to-Plane Constraints

YE Qin, YAO Yahui, GUI Popo

YE Qin, YAO Yahui, GUI Popo. Kinect Point Cloud Registration Method Based on Epipolar and Point-to-Plane Constraints[J]. Geomatics and Information Science of Wuhan University, 2017, 42(9): 1271-1277. DOI: 10.13203/j.whugis20150362



  • CLC number: P234

Kinect Point Cloud Registration Method Based on Epipolar and Point-to-Plane Constraints

Funds: 

Natural Science Foundation of Shanghai (Grant No. 13ZR1444300)

More Information
    Author Bio:

    YE Qin, PhD, associate professor, specializes in digital photogrammetry and computer vision. E-mail: yeqin@tongji.edu.cn

  • Abstract: As a lightweight handheld sensor, Kinect is flexible and efficient for indoor scene recovery and model reconstruction. Most existing reconstruction methods, however, use either the color images or the depth images alone and seldom combine the two; they therefore do not make full use of Kinect data and are not robust or accurate enough in many cases. This paper proposes a point cloud registration algorithm that combines color and depth images and applies it to indoor model reconstruction; the process consists of adjacent-frame registration followed by global optimization. With the Kinect accurately calibrated, epipolar constraints derived from corresponding points matched across the color image sequence are combined with the point-to-plane constraints of iterative closest point (ICP) registration of the depth images, improving the accuracy and robustness of adjacent-frame registration. Because frame-by-frame ICP inevitably accumulates error, a coplanarity constraint on continuous points across four adjacent frames is applied to globally optimize the estimated Kinect positions and orientations and refine the reconstructed model. Model and indoor scene experiments confirm that the method remains robust even in scenes where KinectFusion fails to track and model, and that the point cloud registration and modeling accuracy is consistent with the observation accuracy of Kinect.
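The pairwise step described in the abstract rests on point-to-plane ICP, which minimizes the projection of each point-pair residual onto the target surface normal. A minimal sketch of one linearized solve is shown below; it is illustrative only, not the authors' implementation, and the small-angle rotation update and NumPy least-squares solve are assumptions on my part (the paper additionally blends in epipolar constraints from color image matching, which this sketch omits).

```python
import numpy as np

def point_to_plane_icp_step(src, dst, normals):
    """One linearized point-to-plane ICP step (small-angle approximation).

    src, dst : (N, 3) arrays of matched point pairs.
    normals  : (N, 3) unit normals at the dst points.
    Returns a 4x4 rigid transform moving src toward the planes at dst.
    """
    # Residual per pair: (src_i - dst_i) . n_i
    # Jacobian row w.r.t. [alpha, beta, gamma, tx, ty, tz]: [src_i x n_i, n_i]
    A = np.hstack([np.cross(src, normals), normals])    # (N, 6)
    b = -np.einsum('ij,ij->i', src - dst, normals)      # (N,)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)

    al, be, ga = x[:3]                                  # small rotation angles
    R = np.array([[1.0, -ga,  be],
                  [ ga, 1.0, -al],
                  [-be,  al, 1.0]])                     # linearized rotation
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = x[3:]
    return T
```

In a full pipeline this step would be iterated, re-establishing correspondences (here assumed given) after each update, and the linear system would be augmented with rows encoding the epipolar constraints from the matched color images.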
  • Figure 1. Kinect Coordinate System

    Figure 2. Four-Point Coplanarity

    Figure 3. Color Image and Point Cloud

    Figure 4. Two Point Clouds Directly Overlaid

    Figure 5. Overlaid Point Clouds After Registration

    Figure 6. Overlaid Point Clouds of Ten Frames

    Figure 7. Overlaid Point Clouds After Registration

    Figure 8. Office Scene Recovery Result

    Figure 9. Comparison Between KinectFusion and the Proposed Method

    Table 1. MSE of Target Points

    Parameter              P1     P2     P3
    Consecutive frames      9     10      8
    Mean depth / m       1.10   1.51   1.42
    Coordinate MSE / mm  11.4   15.2   14.8

Publication History
  • Received: 2016-10-16
  • Published: 2017-09-04
