Intelligent vehicles semantic segmentation using evidential deep learning
DOI: https://doi.org/10.60643/urai.v2023p40

Keywords: cross-fusion, evidential deep-learning, perception, uncertainty

Abstract
Autonomous cars face significant challenges in perception tasks. Driving environments are increasingly congested, and weather conditions vary widely. Sensor capabilities have grown considerably, fueling interest in large-scale data processing methods such as artificial intelligence. Neural networks have proved their efficiency, but limitations remain in complex situations. In this work, a cross-fusion technique that combines lidar and camera data using an encoder-decoder-based model is proposed. The multi-modal architecture fuses different sources of information to circumvent these limitations. The considered perception task is semantic segmentation of the different obstacles that may be encountered. The decision-making part of the architecture is extended with evidence theory, introducing belief functions that help handle uncertainties. The evidential formulation is thus versatile, yielding more precise predictions and a better treatment of vacuous (uninformative) data. Experiments are conducted on the KITTI semantic segmentation dataset. The results demonstrate the benefit of integrating evidential theory into neural networks that fuse information from two heterogeneous sensors.
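The abstract does not detail the paper's exact evidential formulation, but the core mechanism of belief-function fusion can be illustrated with Dempster's rule of combination. The sketch below is a generic, minimal example: the frame of discernment, the mass values, and the "camera"/"lidar" labels are hypothetical, standing in for per-pixel outputs of the two sensor branches; mass assigned to the full frame Ω represents the vacuous (uncertain) belief mentioned above.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset -> mass) via Dempster's rule."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:  # mass flows to the intersection of focal sets
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:      # empty intersection: conflicting evidence
            conflict += ma * mb
    k = 1.0 - conflict  # normalization constant
    return {s: v / k for s, v in combined.items()}, conflict

# Hypothetical per-pixel beliefs from the two sensor branches.
# Omega (the whole frame) carries the vacuous, "I don't know" mass.
omega = frozenset({"car", "road"})
camera = {frozenset({"car"}): 0.6, frozenset({"road"}): 0.1, omega: 0.3}
lidar  = {frozenset({"car"}): 0.5, frozenset({"road"}): 0.2, omega: 0.3}

fused, conflict = dempster_combine(camera, lidar)
```

Fusing the two sources sharpens the belief in the agreed-upon class while the conflict value quantifies sensor disagreement, which is the kind of uncertainty signal the evidential layer can expose.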
License
Copyright (c) 2024 Dănuț-Vasile Giurgi, Mihreteab Negash Geletu, Thomas Josso-Laurain, Maxime Devanne, Jean-Philippe Lauffenburger, Mengesha Mamo Wogari

This work is licensed under a Creative Commons Attribution 4.0 International License.