Vision-based Navigation Systems for Autonomous Spacecraft Rendezvous: A Comprehensive Review

B. Amutha

Abstract


Autonomous spacecraft rendezvous represents one of the most challenging navigation problems, requiring precise relative state estimation in the absence of GPS, under extreme lighting conditions, and with strict computational and power constraints. This comprehensive review examines vision-based navigation systems that enable autonomous approach and docking operations for on-orbit servicing, debris removal, and formation flying missions. We systematically analyze the evolution of vision-based rendezvous techniques from early marker-based systems to modern learning-based approaches, categorizing methods by their sensing modality: monocular cameras, stereo vision, time-of-flight cameras, and multi-spectral imaging.

The review examines pose estimation algorithms spanning feature-based methods (SIFT, SURF, ORB), model-based approaches utilizing known target geometry, and deep learning techniques including convolutional neural networks for keypoint detection and pose regression. Particular attention is devoted to handling the unique challenges of space environments: extreme illumination variations during orbital day-night transitions, lens flare from direct sunlight, motion blur during rapid relative motion, and the lack of atmospheric scattering that produces harsh shadows. We evaluate robustness to non-cooperative targets including tumbling satellites and debris without fiducial markers, analyzing shape-from-silhouette, photometric stereo, and structure-from-motion techniques.

The review comprehensively examines sensor fusion architectures combining vision with inertial measurements, star trackers, and LiDAR for enhanced accuracy and reliability, assessing Kalman filtering variants, particle filters, and factor graph optimization approaches. Computational efficiency is critically analyzed, comparing algorithm runtime and memory footprint against available spacecraft processors, with emphasis on hardware acceleration through FPGAs and specialized vision processors.
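To make the sensor-fusion discussion concrete, the sketch below shows the simplest member of the Kalman filter family discussed above: a 1-D constant-velocity filter fusing noisy vision-derived range measurements during an approach. All numerical values (time step, process noise `q`, measurement variance `r`) are illustrative assumptions, not parameters from any mission described in the review; operational systems estimate full 6-DOF relative pose and typically use extended or unscented variants.

```python
# Minimal sketch (illustrative parameters): a 1-D constant-velocity Kalman
# filter fusing noisy vision-derived range measurements during rendezvous.
# State x = [range (m), range_rate (m/s)]; 2x2 matrices are hand-rolled
# as lists of lists for clarity rather than using numpy.

def kf_step(x, P, z, dt=1.0, q=1e-4, r=0.25):
    """One predict/update cycle.

    x: state [range_m, range_rate_mps]
    P: 2x2 state covariance (list of lists)
    z: measured range from the vision system (m)
    q: process-noise variance added to the diagonal (assumed value)
    r: measurement variance of the vision range estimate (assumed value)
    """
    # --- Predict: constant-velocity motion model F = [[1, dt], [0, 1]] ---
    x_pred = [x[0] + dt * x[1], x[1]]
    P_pred = [
        [P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
         P[0][1] + dt * P[1][1]],
        [P[1][0] + dt * P[1][1],
         P[1][1] + q],
    ]
    # --- Update: the camera observes range only, so H = [1, 0] ---
    s = P_pred[0][0] + r                        # innovation variance
    k = [P_pred[0][0] / s, P_pred[1][0] / s]    # Kalman gain
    innov = z - x_pred[0]                       # measurement residual
    x_new = [x_pred[0] + k[0] * innov, x_pred[1] + k[1] * innov]
    # Covariance update: P = (I - K H) P_pred
    P_new = [
        [(1 - k[0]) * P_pred[0][0], (1 - k[0]) * P_pred[0][1]],
        [P_pred[1][0] - k[1] * P_pred[0][0],
         P_pred[1][1] - k[1] * P_pred[0][1]],
    ]
    return x_new, P_new
```

Note that even though only range is measured, the off-diagonal covariance terms let the filter infer the closing rate over successive updates; this is the same mechanism by which the full 6-DOF filters surveyed in the review recover unobserved velocity states from pose-only vision measurements.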
We synthesize validation methodologies including hardware-in-the-loop testbeds with robotic manipulators, proximity operations simulators, and on-orbit demonstration missions, identifying gaps between laboratory performance and space-qualified systems. The review examines emerging technologies including event-based cameras for high dynamic range imaging, neuromorphic vision processors for ultra-low power operation, and AI-based anomaly detection for fault tolerance. Application-specific considerations are discussed for different mission profiles: large cooperative spacecraft with docking ports, small satellites with limited computational resources, and debris with unknown or damaged surfaces.

Keywords


vision-based navigation, spacecraft rendezvous, autonomous docking, pose estimation

References


Chang, Liang, Jixiu Liu, Zui Chen, Jie Bai, and Leizheng Shu. “Stereo Vision-Based Relative Position and Attitude Estimation of Non-Cooperative Spacecraft”. Aerospace 8, no. 8 (2021): 230. https://doi.org/10.3390/aerospace8080230.

Duba, Prasanth Kumar, Naga Praveen Babu Mannam, and P. Rajalakshmi. “Stereo Vision Based Object Detection for Autonomous Navigation in Space Environments”. Acta Astronautica 218 (1 May 2024): 326–29. https://doi.org/10.1016/j.actaastro.2024.02.032.

Guo, Dongwen, Shuang Wu, Desheng Weng, Chenzhong Gao, and Wei Li. “Invariant Feature Matching in Spacecraft Rendezvous and Docking Optical Imaging Based on Deep Learning”. Remote Sensing 16, no. 24 (2024): 4690. https://doi.org/10.3390/rs16244690.

Park, Tae Ha, and Simone D’Amico. “Adaptive Neural-Network-Based Unscented Kalman Filter for Robust Pose Tracking of Noncooperative Spacecraft”. Journal of Guidance, Control, and Dynamics 46, no. 9 (September 2023): 1671–88. https://doi.org/10.2514/1.g007387.

Park, Tae Ha, and Simone D’Amico. “Online Supervised Training of Spaceborne Vision during Proximity Operations Using Adaptive Kalman Filtering”. In 2024 IEEE International Conference on Robotics and Automation (ICRA), 11744–52. IEEE, 2024. https://doi.org/10.1109/icra57147.2024.10610138.

Pauly, Leo, Wassim Rharbaoui, Carl Shneider, Arunkumar Rathinam, Vincent Gaudillière, and Djamila Aouada. “A Survey on Deep Learning-Based Monocular Spacecraft Pose Estimation: Current State, Limitations and Prospects”. Acta Astronautica 212 (1 November 2023): 339–60. https://doi.org/10.1016/j.actaastro.2023.08.001.

Sharma, Sumant, Connor Beierle, and Simone D’Amico. “Pose Estimation for Non-Cooperative Spacecraft Rendezvous Using Convolutional Neural Networks”. In 2018 IEEE Aerospace Conference, 1–12, 2018. https://doi.org/10.1109/AERO.2018.8396425.


This work is licensed under a Creative Commons Attribution 3.0 License.