
Image Processing in Optical Guidance for Autonomous Landing of Lunar Probe

Ding Meng, Yun-Feng Cao

Abstract


Because of the communication delay between the Earth and the Moon, guidance, navigation, and control (GNC) technology for lunar probes is more important than ever. Current navigation technology cannot provide precise motion estimation for the probe's landing control system, and computer vision offers a new approach to this problem. This paper introduces an image processing algorithm for computer-vision navigation during the autonomous landing of a lunar probe. The purpose of the algorithm is to detect and track feature points, which serve as inputs to the navigation solution. First, fixation areas are detected as sub-images and matched. Second, feature points are extracted from the sub-images and tracked. Computer simulation demonstrates that the algorithm requires little computation and meets the requirements of the navigation algorithm.
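The abstract only outlines the two-stage pipeline (fixation-area matching, then feature-point extraction and tracking inside the matched sub-images). The sketch below is a minimal reconstruction of that idea using standard OpenCV primitives (template matching, Shi-Tomasi corners, pyramidal Lucas-Kanade flow); the function names, parameter values, and choice of library are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of the two-stage pipeline described in the abstract:
#   1. locate a fixation area in the current frame by template-matching a sub-image
#   2. extract Shi-Tomasi corners inside that sub-image and track them with LK flow
# OpenCV is assumed purely for illustration; the paper does not specify tools.
import cv2
import numpy as np

def match_fixation_area(prev_gray, curr_gray, rect):
    # Locate the fixation-area sub-image rect = (x, y, w, h) of the previous
    # grayscale frame inside the current frame via normalized cross-correlation.
    x, y, w, h = rect
    template = prev_gray[y:y + h, x:x + w]
    scores = cv2.matchTemplate(curr_gray, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, best = cv2.minMaxLoc(scores)        # (x, y) of the best match
    return best[0], best[1], w, h

def track_features(prev_gray, curr_gray, rect, max_corners=50):
    # Extract Shi-Tomasi corners inside the matched sub-image and track them
    # into the current frame with pyramidal Lucas-Kanade optical flow.
    x, y, w, h = rect
    mask = np.zeros_like(prev_gray)
    mask[y:y + h, x:x + w] = 255                 # restrict detection to the fixation area
    pts = cv2.goodFeaturesToTrack(prev_gray, max_corners, 0.01, 7, mask=mask)
    if pts is None:
        return np.empty((0, 1, 2), np.float32), np.empty((0, 1, 2), np.float32)
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    ok = status.ravel() == 1
    return pts[ok], nxt[ok]                      # matched point pairs per frame

In a complete system, the matched point pairs would feed the motion-estimation step of the navigation algorithm, and the fixation rectangle would be updated each frame from the template-matching result.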


DOI: http://dx.doi.org/10.21535%2FProICIUS.2007.v3.575
