
A Review of Common Datasets and Advanced Algorithms of Visual SLAM in Dynamic Scenes


DOI: 10.23977/acss.2025.090101

Author(s)

Dazheng Wang 1

Affiliation(s)

1 Yunnan Normal University, Kunming, Yunnan, China

Corresponding Author

Dazheng Wang

ABSTRACT

Simultaneous Localization and Mapping (SLAM) technology enables mobile intelligent robots to perceive and understand unknown environments, so it plays an increasingly important role in fields such as intelligent robots and intelligent vehicles. Because camera sensors are widely applicable, visual SLAM in dynamic scenes has become a popular research direction in recent years. SLAM algorithms must also be tested and validated, which requires choosing appropriate datasets for different application scenarios. This paper therefore provides a comprehensive introduction to the high-quality open-source datasets commonly used in visual SLAM research. Since deep learning networks are often used to improve the performance of SLAM systems, the paper also summarizes recent advanced techniques, based on deep learning networks, for solving the common problems of visual SLAM in dynamic scenes.
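For concreteness, the dataset-based validation mentioned above is most often reported as the absolute trajectory error (ATE) between an estimated camera trajectory and the benchmark's ground truth. The sketch below is not taken from the paper; it is a minimal illustration, assuming time-synchronized position pairs and using synthetic trajectories, of how an estimate can be rigidly aligned to ground truth with Horn's closed-form method and scored by ATE RMSE, the metric commonly used with benchmarks such as TUM RGB-D.

```python
# Minimal ATE evaluation sketch (illustrative only, not from the paper).
# Assumes the estimated and ground-truth trajectories are already
# time-synchronized as Nx3 arrays of positions in meters.
import numpy as np

def align_trajectories(est: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """Rigidly align estimated positions (Nx3) to ground truth (Nx3)
    using Horn's closed-form method (rotation + translation via SVD)."""
    est_c = est - est.mean(axis=0)
    gt_c = gt - gt.mean(axis=0)
    U, _, Vt = np.linalg.svd(est_c.T @ gt_c)
    # Reflection guard keeps the result a proper rotation.
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = (U @ S @ Vt).T                      # rotation mapping est -> gt
    t = gt.mean(axis=0) - R @ est.mean(axis=0)
    return est @ R.T + t

def ate_rmse(est: np.ndarray, gt: np.ndarray) -> float:
    """Root-mean-square absolute trajectory error after alignment."""
    aligned = align_trajectories(est, gt)
    return float(np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1))))

if __name__ == "__main__":
    # Toy example: a circular ground-truth path vs. a noisy, shifted estimate.
    theta = np.linspace(0.0, 2.0 * np.pi, 200)
    gt = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)
    est = gt + 0.02 * np.random.randn(*gt.shape) + np.array([0.5, -0.3, 0.1])
    print(f"ATE RMSE: {ate_rmse(est, gt):.3f} m")
```

In practice, evaluation tools for such benchmarks also handle timestamp association and optional scale alignment for monocular systems; the sketch omits those steps for brevity.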
 

KEYWORDS

visual SLAM, dataset, dynamic scenes, deep learning networks, robots

CITE THIS PAPER

Dazheng Wang, A Review of Common Datasets and Advanced Algorithms of Visual SLAM in Dynamic Scenes. Advances in Computer, Signals and Systems (2025) Vol. 9: 1-7. DOI: http://dx.doi.org/10.23977/acss.2025.090101.


