Intelligent vehicles semantic segmentation using evidential deep learning

Authors

  • Dănuț-Vasile Giurgi IRIMAS, Université de Haute-Alsace
  • Mihreteab Negash Geletu Addis Ababa University
  • Thomas Josso-Laurain IRIMAS, Université de Haute-Alsace
  • Maxime Devanne IRIMAS, Université de Haute-Alsace
  • Jean-Philippe Lauffenburger IRIMAS, Université de Haute-Alsace
  • Mengesha Mamo Wogari Addis Ababa University

DOI:

https://doi.org/10.60643/urai.v2023p40

Keywords:

cross-fusion, evidential deep-learning, perception, uncertainty

Abstract

Autonomous cars face significant challenges in perception tasks. Driving environments are increasingly congested, and weather conditions vary widely. Sensor capabilities have grown, driving interest in large-scale data processing methods such as artificial intelligence. Neural networks have proven effective, but limitations remain in complex situations. In this work, a cross-fusion technique that combines lidar and camera data using an encoder-decoder-based model is proposed. The multi-modal architecture fuses different sources of information to circumvent these limitations. The considered perception task is semantic segmentation of the different obstacles that may be encountered. The decision-making part of the architecture is extended with evidence theory, introducing belief functions that help handle uncertainty. The evidential formulation is thus versatile, yielding more precise predictions and a better understanding of vacuous data. This work uses the KITTI semantic segmentation dataset. The results demonstrate the benefit of integrating evidential theory into neural networks that fuse information from two heterogeneous sensors.
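The paper does not give implementation details here, but the core evidential idea — combining belief masses from two heterogeneous sources — is commonly realized with Dempster's rule of combination. The sketch below is purely illustrative (the frame of discernment, the class names, and the example mass values are invented for this example, not taken from the paper): each sensor branch assigns mass to subsets of possible classes, with mass on the full frame expressing ignorance, and the two mass functions are fused per pixel.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Fuse two mass functions (dicts mapping frozenset -> mass)
    with Dempster's rule: multiply masses of intersecting focal
    sets, then renormalize by the total non-conflicting mass."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("sources are totally conflicting")
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

# Hypothetical frame of discernment for one pixel (illustrative classes).
OMEGA = frozenset({"road", "car", "pedestrian"})

# Camera branch: fairly confident the pixel is a car, some ignorance.
m_cam = {frozenset({"car"}): 0.6, OMEGA: 0.4}
# Lidar branch: hesitates between car and pedestrian.
m_lidar = {frozenset({"car", "pedestrian"}): 0.7, OMEGA: 0.3}

m_fused = dempster_combine(m_cam, m_lidar)
```

Mass left on non-singleton sets after fusion (here on {car, pedestrian} and on the full frame) is what lets an evidential head flag ambiguous or vacuous pixels instead of forcing a single-class decision.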

Downloads

Published

13.05.2025