YE Qin, YAO Yahui, GUI Popo. Kinect Point Cloud Registration Method Based on Epipolar and Point-to-Plane Constraints[J]. Geomatics and Information Science of Wuhan University, 2017, 42(9): 1271-1277. DOI: 10.13203/j.whugis20150362

Kinect Point Cloud Registration Method Based on Epipolar and Point-to-Plane Constraints

Funds: The Natural Science Foundation of Shanghai (No. 13ZR1444300)

More Information
  • Author Bio:

    YE Qin, PhD, associate professor, specializes in photogrammetry and computer vision. E-mail: yeqin@tongji.edu.cn

  • Received Date: October 16, 2016
  • Published Date: September 04, 2017
  • Abstract: Most model reconstruction and indoor scene recovery methods based on Kinect use either the depth images or the color images alone, or combine the two only superficially. Such approaches do not make full use of Kinect data and are insufficiently robust and accurate in many use cases. To address this problem, this paper proposes a new method in which the Kinect is calibrated, and epipolar constraints derived from matching the color image sequence are combined with point-to-plane constraints in ICP registration to improve accuracy and robustness. Because ICP registers point clouds frame by frame, error inevitably accumulates; a four-point coplanar method is therefore applied to optimize the Kinect position and orientation and make the reconstructed model more precise. Model and indoor scene experiments demonstrate that the proposed method is effective. Results show that it is more robust, succeeding even in a scene where KinectFusion fails to track and model, and that the registration accuracy of the point clouds accords with Kinect observation accuracy.
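  • To illustrate the point-to-plane ICP component named in the abstract (only that component, not the paper's full epipolar-constrained method), the following is a minimal sketch of one linearized point-to-plane ICP step in Python/NumPy. It assumes point correspondences and destination-surface normals are already available; the function name and structure are illustrative, not the authors' implementation:

```python
import numpy as np

def point_to_plane_icp_step(src, dst, normals):
    """One linearized point-to-plane ICP step.

    Minimizes sum_i ((R @ p_i + t - q_i) . n_i)^2 under the usual
    small-angle approximation R ~ I + skew([a, b, g]).
    src, dst: (N, 3) matched points; normals: (N, 3) unit normals at dst.
    Returns a 4x4 rigid transform moving src toward dst.
    """
    # Each row of the normal-equation design matrix is [p_i x n_i, n_i].
    A = np.hstack([np.cross(src, normals), normals])   # (N, 6)
    b = np.einsum('ij,ij->i', normals, dst - src)      # (N,)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)          # [a, b, g, tx, ty, tz]
    a, bb, g = x[:3]
    R = np.array([[1.0, -g,  bb],
                  [g,   1.0, -a],
                  [-bb, a,  1.0]])
    # Re-orthonormalize the linearized rotation via SVD.
    U, _, Vt = np.linalg.svd(R)
    R = U @ Vt
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = x[3:]
    return T
```

    In a full pipeline this step would be iterated with correspondence re-estimation each round; the paper additionally constrains the solution with epipolar geometry from the matched color images.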
  • [1]
    Zhang Z.Microsoft Kinect Sensor and Its Effect[J]. IEEE MultiMedia, 2012, 19(2):4-10 doi: 10.1109/MMUL.2012.24
    [2]
    MacCormick J.How Does The Kinect Work?[OL].http://users.dickinson.edu/~jmac/selected-talks/kinect.pdf, 2011
    [3]
    Han J, Shao L, Xu D, et al. Enhanced Computer Vision with Microsoft Kinect Sensor:A Review[J]. IEEE Transactions on Cybernetics, 2013, 43(5):1318-1334 doi: 10.1109/TCYB.2013.2265378
    [4]
    Ganganath N, Leung H.Mobile Robot Localization Using Odometry and Kinect Sensor[C].Emerging Signal Processing Applications (ESPA), IEEE International Conference on, Las Vegas, USA, 2012
    [5]
    Correa D S O, Sciotti D F, Prado M G, et al. Mobile Robots Navigation in Indoor Environments Using Kinect Sensor[C].Critical Embedded Systems (CBSEC), Second Brazilian Conference on, Brazil, Campinas, 2012
    [6]
    Newcombe R A, Izadi S, Hilliges O, et al. KinectFusion: Real-Time Dense Surface Mapping and Tracking[C]. Mixed and Augmented Reality (ISMAR), 10th IEEE International Symposium on, Basel, Switzerland, 2011
    [7]
    Besl P J, McKay N D. A Method for Registration of 3-D Shapes[C]. Sensor Fusion IV: Control Paradigms and Data Structures (Proc. SPIE 1611), Boston, USA, 1992
    [8]
    Zheng Shuai, Hong Jun, Zhang Kang, et al. A Multi-frame Graph Matching Algorithm for Low-Bandwidth RGB-D SLAM[J]. Computer-Aided Design, 2016, 1(1):90-103
    [9]
    Wang Yue, Huang Shoudong, Xiong Rong, et al. A Framework for Multi-session RGBD SLAM in Low Dynamic Workspace Environment[C].CAAI Transactions on Intelligence Technology, Beijing, China, 2016
    [10]
    Ye Qin, Gui Popo. A New Calibration Method for Depth Sensor[J]. Journal of Optoelectronics Laser, 2015, 26(6):1146-1151 http://www.cnki.com.cn/Article/CJFDTOTAL-GDZJ201506021.htm
    [11]
    Zhang Z. A Flexible New Technique for Camera Calibration[J].IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(11):1330-1334 doi: 10.1109/34.888718
    [12]
    Abdel-Hakim A E, Farag A A. CSIFT:A SIFT Descriptor with Color Invariant Characteristics[C]. 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), New York, USA, 2006
    [13]
    Bay H, Tuytelaars T, Gool L V. SURF: Speeded Up Robust Features[C]. European Conference on Computer Vision, Berlin/Heidelberg, Germany, 2006
  • Cited by

    Periodical cited type(2)

    1. Lai Xiaoming. Monitoring and Analysis of Minjiang River Embankment Settlement in the South Riverside Area of Fuzhou Based on InSAR Technology. Geomatics & Spatial Information Technology. 2025(02): 184-187.
    2. Ou Shuyuan, Zhang Weixing. Regional CORS Tropospheric Delay Modeling Considering Residual Interpolation Compensation. Journal of Geomatics. 2024(05): 19-23.

    Other cited types(0)
