Real Time Monocular Visual Odometry using ORB Features for Indoor Environment

Bayu Kanugrahan Luknanto, Adha Imam Cahyadi, Sunu Wibirama, Herianto Herianto


Navigation is a key process in many intelligent systems. The most common approach uses the Global Positioning System (GPS); in indoor environments, however, GPS is inaccurate due to the multi-path problem. A dead reckoning system can compensate for this, and visual odometry is one such dead reckoning method. Among the many types of visual odometry algorithms, this research proposes a feature-based monocular approach. The feature detector used in this work is ORB. Matched features are triangulated, and the resulting 3D structure serves as the reference for pose estimation, which is performed by solving the Perspective-n-Point (PnP) problem. The algorithm is validated by performing six basic motion tests while it is running. The test results show that the visual odometry algorithm determines position and orientation with good accuracy.


Keywords: Navigation; Visual Odometry; Monocular; Feature Detection








This work is licensed under a Creative Commons Attribution 3.0 License.