
Overview of Visual SLAM Technology: From Traditional to Deep Learning Methods

DOI: 10.23977/acss.2023.071011

Author(s)

Keting Huang 1

Affiliation(s)

1 Shenyang Ligong University, Shenyang, Liaoning, China

Corresponding Author

Keting Huang

ABSTRACT

SLAM (simultaneous localization and mapping) refers to the problem of a mobile robot building a map of an unknown environment while simultaneously estimating its own position within that map. With advances in robotics and artificial intelligence, SLAM has become a key enabling technology, widely applied across production and daily life; for autonomous vehicles in particular, its importance is self-evident. Nevertheless, SLAM remains unfamiliar to most people. Amid the current wave of artificial intelligence, a growing number of enterprises and universities have invested in visual SLAM research. This paper introduces several classic visual SLAM algorithms, discusses their applications in robotics, identifies open problems in current SLAM research, and looks ahead to future directions of visual SLAM.
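To make the two halves of the SLAM problem concrete for unfamiliar readers, the sketch below separates localization (propagating the robot's pose from odometry) from mapping (placing an observed landmark into the world frame using that pose). This is a minimal illustrative example, not any algorithm from the paper; the function names and the simple velocity motion model are assumptions for demonstration, and a real SLAM system would additionally model and correct the uncertainty in both estimates.

```python
import math

def integrate_odometry(pose, v, w, dt):
    """Propagate a 2D pose (x, y, theta) with a velocity motion model."""
    x, y, theta = pose
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += w * dt
    return (x, y, theta)

def observe_landmark(pose, rng, bearing):
    """Map a robot-relative range-bearing measurement into world coordinates."""
    x, y, theta = pose
    lx = x + rng * math.cos(theta + bearing)
    ly = y + rng * math.sin(theta + bearing)
    return (lx, ly)

# Drive straight for 1 s at 1 m/s, then observe a landmark 2 m dead ahead.
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = integrate_odometry(pose, v=1.0, w=0.0, dt=0.1)
landmark = observe_landmark(pose, rng=2.0, bearing=0.0)
print(pose)      # approximately (1.0, 0.0, 0.0)
print(landmark)  # approximately (3.0, 0.0)
```

Dead reckoning alone drifts without bound; the SLAM methods surveyed in this paper (filtering, graph optimization, and learned front-ends) exist precisely to correct the pose and map jointly from repeated observations.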

KEYWORDS

Simultaneous localization and mapping; Laser SLAM; Visual SLAM; Deep learning; Map construction

CITE THIS PAPER

Keting Huang, Overview of Visual SLAM Technology: From Traditional to Deep Learning Methods. Advances in Computer, Signals and Systems (2023) Vol. 7: 76-81. DOI: http://dx.doi.org/10.23977/acss.2023.071011.

All published work is licensed under a Creative Commons Attribution 4.0 International License.

Copyright © 2016 - 2031 Clausius Scientific Press Inc. All Rights Reserved.