Abstract:
Objectives Smartphone-based visual global positioning is a research hotspot in location-based services. Existing methods suffer from poor reliability and low computational efficiency, especially when applied to large indoor environments such as airports and shopping malls.
Methods This paper proposes a two-level localization method, comprising coarse localization and accurate localization, which is based on a 3D real-scene map and applied to large indoor scenes such as shopping malls. To reduce the localization computing time, a method is proposed to limit the scope of the image database: a Wi-Fi fingerprint matching algorithm first produces a coarse location estimate, which is then used to restrict the portion of the image database to be searched. To improve positioning accuracy, a new database construction method is proposed: the whole scene is divided into multiple regions, each region builds its database independently, and the regional databases are then spliced together. To reduce localization errors, a scene recognition method is proposed: a deep learning model removes ceiling images, which reduces feature-point matching errors.
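As an illustration of the coarse localization stage, the sketch below shows a weighted k-nearest-neighbour Wi-Fi fingerprint matcher that produces a coarse position and then restricts the image database to entries near that position. All names (`Fingerprint`, `wknn_locate`, `limit_image_database`), the choice of weighted k-NN, and the search radius are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): weighted k-NN Wi-Fi fingerprint
# matching to obtain a coarse position, then limit the image database
# to entries captured near that position.
import math
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Fingerprint:
    position: Tuple[float, float]   # (x, y) in metres, local map frame
    rssi: Dict[str, float]          # AP MAC address -> mean RSSI (dBm)

def rssi_distance(a: Dict[str, float], b: Dict[str, float],
                  missing: float = -100.0) -> float:
    """Euclidean distance between two RSSI vectors over the union of APs."""
    aps = set(a) | set(b)
    return math.sqrt(sum((a.get(ap, missing) - b.get(ap, missing)) ** 2
                         for ap in aps))

def wknn_locate(query: Dict[str, float],
                radio_map: List[Fingerprint], k: int = 3) -> Tuple[float, float]:
    """Weighted k-NN: average the k closest fingerprint positions,
    weighting each by the inverse of its RSSI distance."""
    ranked = sorted(radio_map, key=lambda fp: rssi_distance(query, fp.rssi))[:k]
    weights = [1.0 / (rssi_distance(query, fp.rssi) + 1e-6) for fp in ranked]
    total = sum(weights)
    x = sum(w * fp.position[0] for w, fp in zip(weights, ranked)) / total
    y = sum(w * fp.position[1] for w, fp in zip(weights, ranked)) / total
    return (x, y)

def limit_image_database(coarse_xy: Tuple[float, float],
                         image_db: List[dict], radius: float = 15.0) -> List[dict]:
    """Keep only database images taken within `radius` metres of the coarse
    Wi-Fi estimate, shrinking the search space for image retrieval."""
    cx, cy = coarse_xy
    return [img for img in image_db
            if math.hypot(img["x"] - cx, img["y"] - cy) <= radius]
```

Under this assumed design, the accurate localization stage would match query-image features only against the restricted subset returned by `limit_image_database`.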
Results Comparing performance before and after limiting the scope of the image database, the proposed method improves positioning precision from 1.89 m to 0.45 m and reduces positioning time from 6.113 s to 0.827 s per image.
Conclusions The proposed method achieves sub-meter accuracy for indoor visual global positioning, although feature-point matching errors still limit the positioning precision. In future work, feature lines will be used to further improve positioning accuracy.