Simultaneous Localization and Mapping (SLAM) in Dynamic Environments: State-of-the-Art and Simulation Benchmarking

Chen-Kim Lim

Abstract


Traditional SLAM algorithms assume static environments, yet real-world unmanned systems must navigate spaces populated by moving objects such as pedestrians, vehicles, and other robots, which demands dynamic SLAM approaches that distinguish static structure from transient elements. This paper provides a comprehensive state-of-the-art review of dynamic SLAM methods and presents extensive simulation benchmarking across diverse scenarios. We systematically categorize dynamic SLAM approaches into geometric methods that detect motion through multi-view geometry, semantic methods that leverage object recognition to identify potentially dynamic entities, and learning-based methods that implicitly model scene dynamics through neural networks. The review examines detection strategies for dynamic objects, including motion segmentation via optical flow, background subtraction, and deep learning-based instance segmentation, analyzing their computational complexity and accuracy trade-offs. We evaluate tracking and mapping strategies that handle dynamic elements: filtering approaches that reject dynamic features, dual-layer maps that separate static and dynamic components, and probabilistic frameworks that model object motion patterns. Particular emphasis is placed on semantic SLAM systems that integrate object detection and instance segmentation networks (YOLO, Mask R-CNN) with traditional SLAM pipelines, examining how semantic understanding improves robustness in crowded environments. The paper presents a comprehensive simulation benchmarking framework implementing twelve prominent dynamic SLAM algorithms across standardized datasets and custom scenarios with controlled dynamic content. Benchmark environments span indoor navigation with pedestrian traffic, urban driving with vehicular motion, and warehouse operations with mobile robots, systematically varying the proportion of dynamic content from 10% to 70% of the visible scene. Performance evaluation encompasses localization accuracy, map consistency, computational efficiency, and robustness to different motion patterns (linear, erratic, occluding). Simulation results reveal that semantic approaches achieve superior accuracy in highly dynamic scenes but require significant computational resources, while geometric methods offer better efficiency yet degrade as dynamic content increases. We investigate failure modes, including dynamic-object misclassification, map corruption from undetected motion, and tracking loss during rapid scene changes. The benchmarking also examines the impact of sensor modality, comparing camera-based, LiDAR-based, and multi-modal dynamic SLAM across varying lighting and weather conditions.
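To make the "filtering" strategy discussed above concrete, the following is a minimal, hypothetical Python sketch (not code from the paper or from any of the reviewed systems) of the semantic mask-out step shared by many semantic dynamic SLAM pipelines: keypoints falling on instances of classes assumed to be dynamic (e.g., person, car) are discarded before pose estimation, so only features presumed static constrain the camera pose. The class list, mask format, and function names are illustrative assumptions.

```python
# Illustrative sketch of semantic dynamic-feature rejection (assumed interface,
# not the paper's implementation). Instance masks are assumed to come from an
# instance-segmentation network such as Mask R-CNN.
import numpy as np
import cv2

DYNAMIC_CLASSES = {"person", "car", "bicycle"}  # assumed set of movable classes


def filter_dynamic_keypoints(keypoints, descriptors, instance_masks, instance_labels):
    """Drop keypoints whose pixel lies inside any dynamic-class instance mask.

    keypoints       : list of cv2.KeyPoint
    descriptors     : (N, D) array aligned with keypoints
    instance_masks  : list of (H, W) boolean arrays, one per detected instance
    instance_labels : list of class-name strings aligned with instance_masks
    """
    # Union of all masks belonging to potentially dynamic classes.
    dynamic_mask = None
    for mask, label in zip(instance_masks, instance_labels):
        if label in DYNAMIC_CLASSES:
            dynamic_mask = mask if dynamic_mask is None else (dynamic_mask | mask)

    if dynamic_mask is None:  # no dynamic objects detected in this frame
        return keypoints, descriptors

    h, w = dynamic_mask.shape
    kept_kps, kept_idx = [], []
    for i, kp in enumerate(keypoints):
        u = min(int(round(kp.pt[0])), w - 1)
        v = min(int(round(kp.pt[1])), h - 1)
        if not dynamic_mask[v, u]:  # keep only features off dynamic objects
            kept_kps.append(kp)
            kept_idx.append(i)
    return kept_kps, descriptors[kept_idx]


if __name__ == "__main__":
    # Toy usage: ORB features on a synthetic frame with one fake "person" mask.
    frame = (np.random.rand(480, 640) * 255).astype(np.uint8)
    orb = cv2.ORB_create(nfeatures=500)
    kps, desc = orb.detectAndCompute(frame, None)

    person_mask = np.zeros((480, 640), dtype=bool)
    person_mask[100:300, 200:400] = True  # stand-in for a segmentation output

    static_kps, static_desc = filter_dynamic_keypoints(kps, desc, [person_mask], ["person"])
    print(f"{len(kps)} keypoints detected, {len(static_kps)} kept as static")
```

The trade-off summarized in the abstract is visible here: masking by semantic class is robust in crowded scenes but adds the cost of running a segmentation network per frame, whereas purely geometric consistency checks avoid that cost but degrade as the share of dynamic content grows.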

Keywords


SLAM, dynamic environments, mobile robotics, localization and mapping.

This work is licensed under a Creative Commons Attribution 3.0 License.