Version 1
Received: 15 September 2022 / Approved: 19 September 2022 / Online: 19 September 2022 (10:27:42 CEST)
How to cite:
Tibebu, H.; De-Silva, V.; Artaud, C.; Pina, R.; Shi, X. Towards Interpretable Camera and LiDAR data fusion for Unmanned Autonomous Vehicles Localisation. Preprints 2022, 2022090276. https://doi.org/10.20944/preprints202209.0276.v1
APA Style
Tibebu, H., De-Silva, V., Artaud, C., Pina, R., & Shi, X. (2022). Towards Interpretable Camera and LiDAR data fusion for Unmanned Autonomous Vehicles Localisation. Preprints. https://doi.org/10.20944/preprints202209.0276.v1
Chicago/Turabian Style
Tibebu, H., V. De-Silva, C. Artaud, R. Pina, and X. Shi. 2022. "Towards Interpretable Camera and LiDAR data fusion for Unmanned Autonomous Vehicles Localisation." Preprints. https://doi.org/10.20944/preprints202209.0276.v1
Abstract
Recent deep learning frameworks have attracted strong research interest for ego-motion estimation, as they demonstrate superior results compared to geometric approaches. However, owing to the scarcity of multimodal datasets, most of these studies have focused on estimation from a single sensor. To address this gap, we collect a unique multimodal dataset, named LboroAV2, using multiple sensors, including a camera, Light Detection and Ranging (LiDAR), ultrasound, an e-compass and a rotary encoder. We also propose an end-to-end deep learning architecture that fuses RGB images and LiDAR laser scans for odometry. The proposed method comprises a convolutional encoder, a compressed representation and a recurrent neural network. Besides extracting features and rejecting outliers, the convolutional encoder produces a compressed representation that is used both to visualise the network's learning process and to pass useful sequential information. The recurrent neural network uses this compressed sequential data to learn the relation between consecutive time steps. We experiment with and evaluate our approach on the LboroAV2 and KITTI VO datasets. In addition to making the network's learning process visible, our approach outperforms comparable methods. The code for the proposed architecture is publicly available on GitHub.
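As an illustration, the sketch below shows one way the pipeline the abstract describes could be laid out in PyTorch: a convolutional encoder per modality, a compressed joint representation, and a recurrent network relating consecutive time steps. All layer sizes, module names and input shapes here are assumptions made for the sketch, not the authors' released implementation (the real code is in their GitHub repository).

# Minimal PyTorch sketch of a camera-LiDAR fusion odometry network of the
# kind the abstract describes. Layer sizes, module names and input shapes
# are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn

class FusionOdometryNet(nn.Module):
    def __init__(self, latent_dim=256, hidden_dim=512):
        super().__init__()
        # Convolutional encoder for stacked consecutive RGB frames (6 channels).
        self.rgb_encoder = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # 1-D convolutional encoder for a LiDAR laser scan (range values).
        self.lidar_encoder = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        # Compressed joint representation of both modalities.
        self.compress = nn.Linear(128 + 64, latent_dim)
        # Recurrent network learning relations between consecutive time steps.
        self.rnn = nn.LSTM(latent_dim, hidden_dim, batch_first=True)
        # Regress a 6-DoF relative pose (translation + rotation).
        self.pose_head = nn.Linear(hidden_dim, 6)

    def forward(self, rgb_seq, lidar_seq):
        # rgb_seq: (B, T, 6, H, W); lidar_seq: (B, T, 1, N)
        b, t = rgb_seq.shape[:2]
        rgb_feat = self.rgb_encoder(rgb_seq.flatten(0, 1))        # (B*T, 128)
        lidar_feat = self.lidar_encoder(lidar_seq.flatten(0, 1))  # (B*T, 64)
        latent = self.compress(torch.cat([rgb_feat, lidar_feat], dim=1))
        latent = latent.view(b, t, -1)   # restore the sequence dimension
        out, _ = self.rnn(latent)        # temporal modelling across steps
        return self.pose_head(out)       # (B, T, 6) relative poses

# Smoke test with random tensors standing in for real sensor data.
if __name__ == "__main__":
    net = FusionOdometryNet()
    poses = net(torch.randn(2, 5, 6, 64, 128), torch.randn(2, 5, 1, 720))
    print(poses.shape)  # torch.Size([2, 5, 6])

The compressed representation produced by self.compress is the point at which such a network could be probed to visualise what it has learned, which is the interpretability angle the abstract emphasises.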
Keywords
Sensor fusion; Camera and LiDAR fusion; Odometry; Explainable AI
Subject
Computer Science and Mathematics, Computer Science
Copyright:
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.