Deep Reinforcement Learning for Autonomous Navigation in GPS-Denied Environments: A Comprehensive Review

S. Ganesh Kumar

Abstract


Autonomous navigation in GPS-denied environments represents a critical challenge for unmanned systems operating in contested, indoor, underground, and extraterrestrial domains. This comprehensive review examines the state-of-the-art deep reinforcement learning (DRL) approaches that enable robust navigation without reliance on global positioning systems. The paper systematically analyzes the evolution of DRL algorithms from value-based methods (DQN, Rainbow) to policy gradient approaches (PPO, SAC, TD3) and their applications in vision-based, LiDAR-based, and multi-modal navigation frameworks. We critically evaluate the performance of model-free versus model-based DRL techniques across diverse operational scenarios including urban canyons, dense forests, subsurface environments, and planetary exploration. The review identifies key architectural innovations such as attention mechanisms, recurrent neural networks for temporal reasoning, and hierarchical reinforcement learning structures that enhance navigation robustness. Particular emphasis is placed on sample efficiency challenges, sim-to-real transfer gaps, and safety guarantees during learning and deployment phases. We analyze benchmark datasets, simulation environments, and evaluation metrics used in the literature, highlighting inconsistencies that hinder comparative assessment. The paper also examines hybrid approaches combining DRL with classical path planning, SLAM integration strategies, and multi-agent coordination in GPS-denied scenarios. Emerging trends including meta-learning for rapid adaptation, curiosity-driven exploration, and physics-informed neural networks are discussed as promising directions. This review concludes by identifying critical research gaps, including generalization across environmental conditions, computational efficiency for embedded deployment, and formal verification methods for safety-critical applications, providing a roadmap for future investigations in DRL-based autonomous navigation.

Keywords


deep reinforcement learning, GPS-denied navigation, autonomous systems, vision-based navigation.

References


Aburaya, Anas, Hazlina Selamat, and Mohd Taufiq Muslim. “Review of Vision-Based Reinforcement Learning for Drone Navigation”. International Journal of Intelligent Robotics and Applications 8, no. 4 (1 December 2024): 974–92. https://doi.org/10.1007/s41315-024-00356-9.

Bai, Wenqi, Xiaohui Zhang, Shiliang Zhang, Songnan Yang, Yushuai Li, and Tingwen Huang. “Long-Distance Geomagnetic Navigation in GNSS-Denied Environments with Deep Reinforcement Learning”. arXiv preprint arXiv:2410.15837 [cs.RO], 2024. http://arxiv.org/abs/2410.15837.

Chang, Yingxiu, Yongqiang Cheng, Umar Manzoor, and John Murray. “A Review of UAV Autonomous Navigation in GPS-Denied Environments”. Robotics and Autonomous Systems 170 (1 December 2023): 104533. https://doi.org/10.1016/j.robot.2023.104533.

Doukhi, Oualid, and Deok-Jin Lee. “Deep Reinforcement Learning for End-to-End Local Motion Planning of Autonomous Aerial Robots in Unknown Outdoor Environments: Real-Time Flight Experiments”. Sensors 21, no. 7 (2021): 2534. https://doi.org/10.3390/s21072534.

Elahian, Samaneh, Mohammad Ali Amiri Atashgah, and Iswanto Suwarno. “Enhancing Autonomous Navigation in GNSS-Denied Environment: Obstacle Avoidance Observability-Based Path Planning for ASLAM”. Journal of Robotics and Control (JRC) 5, no. 6 (November 2024): 2048–59. https://doi.org/10.18196/jrc.v5i6.23519.

Ou, Yang, Yiyi Cai, Youming Sun, and Tuanfa Qin. “Autonomous Navigation by Mobile Robot with Sensor Fusion Based on Deep Reinforcement Learning”. Sensors 24, no. 12 (2024): 3895. https://doi.org/10.3390/s24123895.

Samma, Hussein, and Sami El-Ferik. “Autonomous UAV Visual Navigation Using an Improved Deep Reinforcement Learning”. IEEE Access 12 (2024): 79967–77. https://doi.org/10.1109/access.2024.3409780.

Sivashangaran, Shathushan, Apoorva Khairnar, and Azim Eskandarian. “Exploration Without Maps via Zero-Shot Out-of-Distribution Deep Reinforcement Learning”. arXiv preprint arXiv:2402.05066 [cs.RO], 2024. http://arxiv.org/abs/2402.05066.

Tezerjani, Mohammad Dehghani. “A Survey on Reinforcement Learning Applications in SLAM”. Journal of Machine Learning and Deep Learning 1, no. 1 (2024). https://doi.org/10.64820/AEPJMLDL.11.20.31.122024.

Wang, Junqiao, Zhongliang Yu, Dong Zhou, Jiaqi Shi, and Runran Deng. “Vision-Based Deep Reinforcement Learning of Unmanned Aerial Vehicle (UAV) Autonomous Navigation Using Privileged Information”. Drones 8, no. 12 (2024): 782. https://doi.org/10.3390/drones8120782.


This work is licensed under a Creative Commons Attribution 3.0 License.