Visual Semantic SLAM with Compensation of Edge Feature Points in Dynamic Scenes

DOI: 10.23977/acss.2025.090107

Author(s)

Tianwei Liu 1

Affiliation(s)

1 Yunnan Normal University, Kunming, China

Corresponding Author

Tianwei Liu

ABSTRACT

Mobile robots operating in dynamic environments are easily disturbed by moving objects, causing the localization and mapping results of their SLAM systems to deviate significantly from the actual scene. To address this issue, this paper proposes a visual semantic SLAM approach for mobile robots in dynamic scenes. First, a lightweight instance segmentation network, Yolact++, is employed to detect and segment objects in the scene, and feature points falling within the regions of a priori dynamic objects are removed. Second, an LBP (Local Binary Pattern) descriptor that incorporates depth information is designed to keep static edge feature points near dynamic objects from being misclassified as dynamic, thereby preserving static features for the system's pose estimation. A series of simulation experiments validates the performance advantages of the proposed method.
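The paper itself does not include code, but the depth-gated LBP test described above can be illustrated with a short sketch. The Python sketch below is a minimal, hypothetical rendering of the idea: an 8-neighbour LBP code is computed for a feature point near the boundary of a Yolact++ dynamic-object mask, where a neighbour only contributes a bit when its depth agrees with the centre pixel, so the binary pattern is not corrupted by pixels lying across the object edge; the point is then kept as static when its depth-consistent neighbourhood falls mostly on the background. All names (`depth_aware_lbp`, `keep_as_static`), the relative depth threshold, and the majority-voting rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def depth_aware_lbp(gray, depth, y, x, rel_depth_thresh=0.05):
    """8-neighbour LBP code at pixel (y, x), where a neighbour contributes
    a bit only if its depth is consistent with the centre pixel, i.e. the
    two pixels plausibly lie on the same physical surface."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code, consistent = 0, 0
    cd = depth[y, x]
    for bit, (dy, dx) in enumerate(offsets):
        ny, nx = y + dy, x + dx
        # Depth gate: skip neighbours that sit across a depth discontinuity.
        if cd <= 0 or abs(depth[ny, nx] - cd) > rel_depth_thresh * cd:
            continue
        consistent += 1
        if gray[ny, nx] >= gray[y, x]:
            code |= 1 << bit
    return code, consistent

def keep_as_static(gray, depth, dyn_mask, y, x, min_consistent=5):
    """Decide whether a feature point on the boundary of a segmented
    dynamic object should be kept for pose estimation: keep it when it has
    enough depth-consistent neighbours and those neighbours lie mostly
    outside the dynamic-object mask (i.e. on the static background)."""
    _, consistent = depth_aware_lbp(gray, depth, y, x)
    if consistent < min_consistent:
        return False  # ambiguous depth context: discard conservatively
    patch = dyn_mask[y - 1:y + 2, x - 1:x + 2]
    return patch.mean() < 0.5  # neighbourhood mostly background -> static

# Example: a point one pixel outside a dynamic mask, on a flat wall.
gray = np.full((5, 5), 120, dtype=np.uint8)
depth = np.full((5, 5), 2.0)       # metres
mask = np.zeros((5, 5), dtype=bool)
mask[:, 3:] = True                 # a priori dynamic object on the right
print(keep_as_static(gray, depth, mask, 2, 2))  # -> True (kept as static)
```

In the full pipeline described in the abstract, feature points well inside the a priori dynamic masks would be removed outright; only points in a narrow band around the mask boundaries would need a test of this kind before being passed on to pose estimation.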

KEYWORDS

Dynamic Scenes, Mobile Robots, Semantic SLAM

CITE THIS PAPER

Tianwei Liu, Visual Semantic SLAM with Compensation of Edge Feature Points in Dynamic Scenes. Advances in Computer, Signals and Systems (2025) Vol. 9: 42-49. DOI: http://dx.doi.org/10.23977/acss.2025.090107.
