MENG Yiyue, GUO Chi, LIU Jingnan. Deep Reinforcement Learning Visual Target Navigation Method Based on Attention Mechanism and Reward Shaping[J]. Geomatics and Information Science of Wuhan University. DOI: 10.13203/j.whugis20230193

Deep Reinforcement Learning Visual Target Navigation Method Based on Attention Mechanism and Reward Shaping

  • Objectives: As one of the important tasks of visual navigation, visual target navigation requires the agent to explore the environment, navigate to the target, and issue the done action relying only on visual image information and target information. Existing methods usually adopt a deep reinforcement learning framework to solve visual target navigation problems, but they still have two shortcomings: (1) they ignore the relationship between the states of the current and previous time steps, resulting in poor navigation performance; (2) their reward settings are fixed and sparse, so the agent cannot learn a good navigation strategy. To solve these problems, we propose a deep reinforcement learning visual target navigation method based on attention mechanism and reward shaping, which further improves performance on visual target navigation tasks.
  • Methods: The method first obtains the area of the path focused on by the agent at the previous time step via scaled dot-product attention between the previous visual image and the previous action. It then obtains the area of the path focused on at the current time step via scaled dot-product attention between the current visual image and the previous focused path area, thereby introducing the relationship between successive states. Scaled dot-product attention is likewise used to obtain the currently focused area of the target. The current focused areas of the path and the target are concatenated to build a better state representation for the agent. Additionally, we propose a reward shaping rule to address the sparse-reward problem: the cosine similarity between the visual image and the target is used to automatically build a reward space with target preference. Finally, the attention mechanism and the reward shaping rule are combined to form the proposed method.
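The two core ingredients described above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's exact formulation: the feature dimensions, the shaping coefficient `beta`, and the function names are all assumptions made for the example.

```python
import numpy as np

def scaled_dot_product_attention(query, key, value):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.

    Used here to let the agent weight image regions (key/value)
    by their relevance to a query (e.g. the previous action or
    the previously focused path area).
    """
    d_k = query.shape[-1]
    scores = query @ key.T / np.sqrt(d_k)
    # Numerically stable softmax over the key dimension.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ value, weights

def shaped_reward(base_reward, obs_feat, target_feat, beta=0.1):
    """Dense shaping term: cosine similarity between the current
    visual features and the target features, added to the sparse
    environment reward. `beta` is an illustrative scale factor."""
    cos = obs_feat @ target_feat / (
        np.linalg.norm(obs_feat) * np.linalg.norm(target_feat) + 1e-8)
    return base_reward + beta * cos
```

In this sketch, higher visual similarity to the target yields a larger shaped reward, giving the agent a learning signal even on steps where the sparse environment reward is zero.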
  • Results: We conduct experiments on the AI2-THOR dataset and use success rate (SR) and success weighted by path length (SPL) to evaluate navigation performance. The results show that our method improves SR by 7% and SPL by 20%, indicating that the agent learns a better navigation strategy. In addition, the ablation study shows that introducing the state relationship and the reward shaping each improve navigation performance.
  • Conclusions: The proposed method further improves navigation success rate and efficiency by building better states and a better reward space. In future work, we plan to use RGB-D images to obtain depth information and further optimize the navigation path, and to explore curriculum learning and imitation learning to mitigate reward sparsity for faster training and better navigation strategies. More importantly, we will focus on migrating the agent from the simulation environment to real environments for training and testing.
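The SPL metric used in the evaluation is the standard definition from the embodied-navigation literature: each episode contributes its success indicator weighted by the ratio of the shortest-path length to the actual path length. A minimal sketch:

```python
def spl(successes, shortest_lengths, actual_lengths):
    """Success weighted by Path Length:
    SPL = (1/N) * sum_i S_i * l_i / max(p_i, l_i),
    where S_i is the success indicator (0 or 1), l_i the
    shortest-path length, and p_i the agent's path length."""
    n = len(successes)
    return sum(
        s * l / max(p, l)
        for s, l, p in zip(successes, shortest_lengths, actual_lengths)
    ) / n
```

An episode that succeeds along the shortest path contributes 1; one that succeeds with a detour contributes less; a failed episode contributes 0, so SPL rewards both success and efficiency.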