Estimating Gaze Directions for Pedestrian Navigation

  • Abstract: Accurately capturing the visual gaze direction can effectively improve the efficiency and safety of pedestrian navigation. Because existing gaze-direction estimation methods cannot meet the portability requirements of pedestrian navigation, a model for estimating pedestrian gaze directions with smart glasses is proposed. First, scale-invariant feature transform (SIFT) features are used to measure the similarity between gaze images and street view images; then, a pedestrian gaze-direction estimation model is built from the positional relationship between pedestrians and street view images. Experiments at 28 test points in two campus scenes show that the estimation accuracy of the proposed model is significantly better than that of a model that ignores the positional relationship, and that, within the same scene, the accuracy is unaffected by changes in pedestrian position.

     

    Abstract:
      Objectives  Accurately capturing the gaze directions of pedestrians can effectively improve the efficiency and safety of pedestrian navigation. However, traditional gaze-direction estimation methods are somewhat invasive in many practical applications, adapt poorly to different users and head-pose variations, and rely on devices that are not portable enough for pedestrian navigation. Accordingly, we propose a model that estimates the gaze directions of pedestrians using smart glasses.
      Methods  Based on scale-invariant feature transform (SIFT) features, we first measure the similarity between gaze photos and street view images. We then propose one estimation method for the case in which the pedestrian and the street view image are disjoint, and another for the case in which they overlap, which accounts for the position error of pedestrians. On this basis, we establish a gaze-direction estimation model that considers the positional relationship between pedestrians and street view images. Finally, we select two real-world scenes and verify the reliability of the proposed model through simulation experiments (an illustrative code sketch of these steps follows the abstract).
      Results  The results show that: (1) The estimation errors of the proposed model (i.e., the gaze-direction estimation model that considers the positional relationship) are significantly lower than those of a model that ignores the positional relationship. (2) Within the same scene, the estimation error does not increase with the test distance. Moreover, the average estimation accuracy of our model is comparable to that of an estimation method based on depth cameras.
      Conclusions  (1) The proposed gaze-direction estimation model is significantly superior to the model that ignores the positional relationship. (2) Pedestrian position variations have little impact on the estimation accuracy of our model within the same scene. Hence, the proposed model is suitable for pedestrian navigation, and portable smart glasses can be used to estimate gaze directions.
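
The following is a minimal sketch, not the authors' implementation, of how the two steps named in the Methods could be realized: measuring gaze-photo/street-view similarity with SIFT features, and deriving a gaze azimuth from the pedestrian's position and the best-matched street view position. OpenCV's SIFT implementation is assumed; the ratio-test threshold (0.75), the match-count similarity score, and the pedestrian-to-image azimuth formula are illustrative assumptions, as the abstract does not specify these details.

```python
import math
import cv2


def sift_similarity(gaze_img_path: str, street_img_path: str) -> int:
    """Similarity score: number of SIFT matches passing Lowe's ratio test."""
    gaze = cv2.imread(gaze_img_path, cv2.IMREAD_GRAYSCALE)
    street = cv2.imread(street_img_path, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    _, desc_gaze = sift.detectAndCompute(gaze, None)
    _, desc_street = sift.detectAndCompute(street, None)
    if desc_gaze is None or desc_street is None:
        return 0

    # Brute-force matcher with the two nearest neighbours per descriptor.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(desc_gaze, desc_street, k=2)

    good = 0
    for pair in matches:
        # Lowe's ratio test: keep a match only if its best distance is
        # clearly smaller than the second-best distance.
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good += 1
    return good


def gaze_azimuth(pedestrian_xy, street_view_xy) -> float:
    """Hypothetical gaze direction: azimuth in degrees, clockwise from north,
    from the pedestrian's position to the matched street view position."""
    dx = street_view_xy[0] - pedestrian_xy[0]  # east offset
    dy = street_view_xy[1] - pedestrian_xy[1]  # north offset
    return math.degrees(math.atan2(dx, dy)) % 360.0


# Usage: pick the street view image most similar to the gaze photo, then
# estimate the gaze direction from the pedestrian's position toward it.
# best = max(street_views, key=lambda sv: sift_similarity("gaze.jpg", sv.path))
# direction = gaze_azimuth(pedestrian_xy, best.xy)
```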

     
