Kinect Point Cloud Registration Method Based on Epipolar and Point-to-Plane Constraints

  • Abstract: As a lightweight handheld sensor, the Kinect offers a flexible and efficient means of indoor scene recovery and model reconstruction. Most existing reconstruction algorithms rely on either color images or depth images alone and rarely combine the two, so they do not make full use of the Kinect data and are not sufficiently robust or accurate in many scenes. This paper proposes a point cloud registration algorithm that combines color and depth images and applies it to indoor model reconstruction; the process consists of pairwise registration of adjacent frames followed by global optimization. With the Kinect accurately calibrated, epipolar constraints formed by corresponding points matched across the color images are combined with the point-to-plane constraints of iterative closest point (ICP) registration of the depth images, which improves the accuracy and robustness of adjacent-frame registration. Because frame-by-frame registration inevitably accumulates error, a coplanarity constraint on consecutive points across four adjacent frames is then used to globally optimize the pairwise registration results, i.e. the Kinect positions and orientations, and thereby improve the accuracy of the reconstructed model. Building on a theoretical analysis, model and indoor scene experiments verify that the algorithm remains robust even in scenes where KinectFusion fails to track and model, and that the point cloud registration and modeling accuracy is consistent with the Kinect's observation accuracy.
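As an illustrative sketch only (the abstract does not give the objective function, and all symbols below are assumed notation rather than the paper's), the pairwise registration step described above can be viewed as minimizing a combined energy over the relative pose $(\mathbf{R},\mathbf{t})$ between adjacent frames: a point-to-plane term over ICP depth correspondences $(\mathbf{p}_i,\mathbf{q}_i)$ with target normals $\mathbf{n}_i$, plus an epipolar term over normalized color-image matches $(\hat{\mathbf{x}}_j,\hat{\mathbf{x}}'_j)$ made possible by the calibration, balanced by a weight $\lambda$:

$$
E(\mathbf{R},\mathbf{t})=\sum_{i}\Big(\mathbf{n}_i^{\top}\big(\mathbf{R}\,\mathbf{p}_i+\mathbf{t}-\mathbf{q}_i\big)\Big)^{2}+\lambda\sum_{j}\Big(\hat{\mathbf{x}}_j^{\prime\top}\,[\mathbf{t}]_{\times}\mathbf{R}\,\hat{\mathbf{x}}_j\Big)^{2}
$$

The first sum is the standard point-to-plane ICP residual; the second is the algebraic epipolar residual built from the essential matrix $[\mathbf{t}]_{\times}\mathbf{R}$, which in practice would typically be replaced by a normalized (e.g. Sampson) error.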

     
