A Visual-Inertial Collaborative Indoor Localization Method for Multiple Moving Pedestrian Targets

  • Abstract: Continuous localization and tracking of multiple pedestrian targets is a common concern in applications such as safety protection, emergency evacuation, and location-based services in large indoor spaces. Visual monitoring with fixed cameras is an important means of detecting pedestrian flow and localizing pedestrians indoors. However, existing monocular visual pedestrian detection suffers from missed detections, is vulnerable to visual blind areas, and has difficulty determining pedestrian identities. To address these problems, an active-passive collaborative localization method combining visual and inertial information is proposed. The method first uses a visual pedestrian detection algorithm to detect the positions of multiple pedestrian targets in video frames and builds a pixel-to-world coordinate transformation model, achieving passive visual detection and spatial localization of pedestrians. Meanwhile, smartphone inertial sensors are used to sense pedestrian motion behavior. On this basis, pedestrian motion feature sequences are constructed from the visual and inertial features respectively, and matching these feature sequences achieves identity matching of the multiple pedestrian targets as well as collaborative visual-inertial localization. Experimental results show that the proposed visual-inertial collaborative localization method can achieve identity matching of multiple pedestrian targets with an average collaborative localization accuracy of about 25 cm, and that it significantly improves the continuity of purely passive visual localization while reducing the effects of missed detections and visual blind areas.

     

    Abstract:
      Objectives   The localization and tracking of pedestrians is a common concern for various large-scale indoor applications, such as security, emergency evacuation, and location-based services. Camera-based visual monitoring is an important way to detect pedestrian flow and localize people in indoor spaces. However, directly employing state-of-the-art vision-based object recognition methods in indoor security or emergency applications raises several problems, including missed detections, vulnerability to visual blind areas, and difficulty in globally identifying pedestrians.
      Methods  We propose a visual-inertial collaborative indoor localization method. First, a one-stage object recognition model is used to detect pedestrians in camera video frames. Then, to achieve passive visual localization, a coordinate transformation model is built to convert pedestrian positions from the pixel coordinate system to the world coordinate system. Simultaneously, inertial data from the smartphone sensors carried by the pedestrians are used to sense and detect their motion characteristics. Finally, a pedestrian motion feature vector is defined, which can be constructed from both the visual and the inertial localization results. A minimal sketch of the coordinate transformation step is given below.
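      The following sketch maps the foot point of a detected pedestrian bounding box from pixel coordinates to world coordinates. It assumes the transformation model is a ground-plane homography calibrated from a few floor reference points; the reference coordinates, bounding-box format, and function names are illustrative and not taken from the paper.

    # A minimal sketch of the pixel-to-world conversion, assuming the transformation
    # model is a homography of the ground plane calibrated from floor reference points.
    import numpy as np
    import cv2

    # Calibration: at least 4 reference points on the floor, measured in both systems.
    pixel_pts = np.float32([[412, 655], [1160, 640], [1235, 980], [300, 1010]])
    world_pts = np.float32([[0.0, 0.0], [4.0, 0.0], [4.0, 3.0], [0.0, 3.0]])  # metres
    H, _ = cv2.findHomography(pixel_pts, world_pts)

    def detection_to_world(bbox):
        """Map a pedestrian bounding box (x, y, w, h) to world coordinates.

        The bottom-centre of the box is used as the foot point, i.e. the point
        that actually lies on the ground plane modelled by the homography.
        """
        x, y, w, h = bbox
        foot = np.float32([[[x + w / 2.0, y + h]]])          # shape (1, 1, 2)
        wx, wy = cv2.perspectiveTransform(foot, H)[0, 0]
        return float(wx), float(wy)

    # Example: one detector output for a single frame.
    print(detection_to_world((830, 420, 90, 260)))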
      Results  The experimental results show that the identities of pedestrians can be successfully recognized using the proposed method. By matching the visual and inertial motion feature vectors, the method determines the identities of pedestrians in indoor space and achieves visual-inertial collaborative localization, as sketched below. The average accuracy of the collaborative localization method is about 25 cm.
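      The identity-matching step can be sketched as follows: each visually tracked trajectory yields a walking-speed sequence, each smartphone yields a speed sequence estimated from its inertial data, and phones are assigned to tracks by maximizing the similarity of those sequences. The Pearson-correlation metric and the one-to-one Hungarian assignment below are assumptions chosen for illustration; the paper states only that visual and inertial feature sequences are matched.

    # A minimal sketch of the identity-matching step under the assumptions stated above.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def speed_sequence(track_xy, fps=1.0):
        """Per-interval speed (m/s) from a sequence of world coordinates."""
        track_xy = np.asarray(track_xy, dtype=float)
        return np.linalg.norm(np.diff(track_xy, axis=0), axis=1) * fps

    def correlation(a, b):
        """Pearson correlation of two speed sequences, truncated to equal length."""
        n = min(len(a), len(b))
        return float(np.corrcoef(a[:n], b[:n])[0, 1])

    def match_identities(visual_tracks, inertial_speeds):
        """Return {phone_id: track_id} maximising the total sequence correlation."""
        track_ids = list(visual_tracks)
        phone_ids = list(inertial_speeds)
        cost = np.zeros((len(phone_ids), len(track_ids)))
        for i, p in enumerate(phone_ids):
            for j, t in enumerate(track_ids):
                cost[i, j] = -correlation(np.asarray(inertial_speeds[p], dtype=float),
                                          speed_sequence(visual_tracks[t]))
        rows, cols = linear_sum_assignment(cost)   # optimal one-to-one assignment
        return {phone_ids[i]: track_ids[j] for i, j in zip(rows, cols)}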
      Conclusions  Furthermore, the proposed method improves the spatial continuity of indoor visual localization and reduces the negative influence of missed detections and visual blind areas on indoor localization.

     
