Abstract:
Objectives The localization and tracking of pedestrians is a common issue for various large-scale indoor applications, such as security, emergency evacuation, and location-based services. Camera-based visual monitoring is an important way to detect pedestrian flow and localize people in indoor spaces. However, directly employing state-of-the-art vision-based object recognition methods in indoor security or emergency applications raises several problems, including missed detections, vulnerability to visual blind areas, and difficulty in globally identifying pedestrians.
Methods We propose a visual-inertial collaborative indoor localization method. First, a one-stage object recognition model is used to detect pedestrians in camera video frames. Then, to achieve passive visual localization, a coordinate transformation model is built to convert pedestrian coordinates from the pixel coordinate system to a world coordinate system. Simultaneously, inertial data from the smartphone sensors carried by pedestrians are used to sense and detect pedestrian motion characteristics. Finally, a pedestrian motion feature vector is defined, which can be constructed from both the visual and the inertial localization results.
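To illustrate the pixel-to-world conversion step, the sketch below back-projects the foot point of a detected pedestrian onto the floor plane using a ground-plane homography built from the camera's intrinsic and extrinsic parameters. This is a minimal sketch under the common assumption that the foot point lies on the floor (Z = 0); the calibration values and pixel coordinates are hypothetical placeholders, not parameters from this work.

```python
import numpy as np

def pixel_to_world(u, v, K, R, t):
    """Back-project pixel (u, v) onto the ground plane Z = 0.

    Assumes the pedestrian's foot point lies on the floor, so the pinhole
    projection reduces to a 3x3 homography H = K [r1 r2 t].
    K: 3x3 intrinsic matrix; R, t: world-to-camera extrinsics.
    """
    H = K @ np.column_stack((R[:, 0], R[:, 1], t))    # ground-plane homography
    world = np.linalg.solve(H, np.array([u, v, 1.0])) # invert the homography
    world /= world[2]                                 # normalize homogeneous coords
    return world[0], world[1]                         # (X, Y) on the floor plane

# Hypothetical example: map the bottom-center of a detected bounding box.
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])   # assumed intrinsics
R = np.eye(3)                           # assumed camera orientation
t = np.array([0.0, 0.0, 2.5])           # assumed camera translation (meters)
print(pixel_to_world(700.0, 500.0, K, R, t))
```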
Results The experimental results show that the proposed method can successfully recognize pedestrian identities. By matching the visual and inertial motion feature vectors, the method determines the identity of pedestrians in indoor space and achieves visual-inertial collaborative localization. The average localization accuracy of the collaborative localization method is about 25 cm.
Conclusions Furthermore, this method increases the spatial continuity of indoor visual localization and reduces the negative influence of missed detections and visual blind areas on indoor localization.