Uncertainty Quantification in Learning-Based Perception for Safety-Critical Autonomous Systems: Methods and Applications

Jueying Li, Dharani Jaganathan

Abstract


Learning-based perception systems built on deep neural networks have achieved remarkable performance in autonomous systems, yet their deployment in safety-critical applications requires rigorous uncertainty quantification to identify when predictions may be unreliable and to trigger appropriate safety responses. This paper comprehensively examines methods for uncertainty quantification in perception systems and their applications across unmanned system domains. We systematically distinguish aleatoric uncertainty, which arises from inherent sensor noise and environmental variability, from epistemic uncertainty, which reflects model limitations and insufficient training data. The review analyzes uncertainty quantification approaches including Bayesian neural networks that place distributions over weights rather than point estimates, Monte Carlo dropout that approximates Bayesian inference through stochastic forward passes at test time, ensemble methods that aggregate predictions from multiple independently trained models, and evidential deep learning that directly predicts the parameters of evidential distributions from which uncertainty is derived. Particular emphasis is placed on computational efficiency for real-time deployment, examining approximation techniques and hardware acceleration strategies. We evaluate calibration quality, the alignment between predicted confidence and empirical accuracy, using reliability diagrams and the expected calibration error metric. Application-specific implementations are analyzed for object detection with uncertainty-aware bounding boxes, semantic segmentation with per-pixel confidence estimates, and depth estimation with uncertainty maps. The paper examines the integration of uncertainty information into downstream decision-making, including risk-aware planning that avoids high-uncertainty regions, active learning that selects informative samples for labeling, and anomaly detection that flags out-of-distribution inputs. Validation methodologies are discussed, including testing on corrupted data, adversarial examples, and domain-shift scenarios. The review identifies research gaps in uncertainty quantification for temporal models and multi-modal fusion, and in providing formal safety guarantees grounded in uncertainty estimates.
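To make two of the surveyed techniques concrete, the sketch below illustrates Monte Carlo dropout and the expected calibration error in PyTorch. It is a minimal, hypothetical example rather than an implementation from the paper: the SmallClassifier model, the random stand-in data, and the parameter choices (30 stochastic passes, 10 calibration bins) are assumptions made purely for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallClassifier(nn.Module):
    """Toy perception head with a dropout layer so MC-dropout sampling is possible."""
    def __init__(self, in_dim=128, n_classes=10, p=0.5):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, 64)
        self.drop = nn.Dropout(p)
        self.fc2 = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.fc2(self.drop(F.relu(self.fc1(x))))

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=30):
    """Approximate the predictive distribution by keeping dropout active at
    test time and averaging softmax outputs over stochastic forward passes."""
    model.train()  # keep dropout stochastic (this toy model has no batch-norm layers)
    probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)  # predictive mean over samples
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(-1)  # per-input uncertainty
    return mean_probs, entropy

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: weighted average gap between confidence and accuracy over confidence bins."""
    ece = torch.zeros(())
    bin_edges = torch.linspace(0, 1, n_bins + 1)
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = correct[in_bin].float().mean()
            conf = confidences[in_bin].mean()
            ece += in_bin.float().mean() * (acc - conf).abs()
    return ece.item()

# Hypothetical usage with random stand-in features and labels.
model = SmallClassifier()
x = torch.randn(256, 128)
y = torch.randint(0, 10, (256,))
mean_probs, entropy = mc_dropout_predict(model, x)
conf, pred = mean_probs.max(dim=-1)
print("mean predictive entropy:", entropy.mean().item())
print("ECE:", expected_calibration_error(conf, pred == y))

Keeping dropout active at test time turns repeated forward passes into samples from an approximate posterior predictive; the entropy of the averaged softmax then serves as a per-input uncertainty score, and the ECE summarizes how well those confidences match empirical accuracy, complementing a reliability diagram.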

Keywords


uncertainty quantification, Bayesian deep learning, safety-critical systems, perception reliability.
