Multimodal fusion for large-scale 3D mapping
Lidar point clouds have been widely used to segment large 3D scenes such as urban areas and vegetated regions (forests, crops, …), and to build elevation profiles. However, efficient point cloud analysis in the presence of complex scenes and partially transparent objects (e.g., forest canopies) remains an unsolved challenge.
In this thesis, you will explore how more efficient 3D characterization can be achieved using multiple modalities, including multiple lidar sensors, coupled with colour and/or multispectral data. While lidar point clouds are generally dense, they do not capture the colour or spectral content of the scene, which can instead be inferred from sets of 2D images.
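As a simple illustration of this kind of lidar/image fusion, the sketch below colorizes a point cloud by projecting each 3D point into a camera image with a pinhole model and sampling the pixel it lands on. This is a minimal toy example, not a method proposed by the thesis: the function name, the assumption that points are already expressed in the camera frame, and the known intrinsic matrix `K` are all hypothetical simplifications.

```python
import numpy as np

def colorize_points(points_cam, image, K):
    """Assign an RGB colour to each 3D point by projecting it into an image.

    Hypothetical helper for illustration only.
    points_cam : (N, 3) points in the camera coordinate frame (z forward)
    image      : (H, W, 3) RGB image
    K          : (3, 3) camera intrinsic matrix
    Returns an (N, 3) array of colours; points that fall outside the
    image or lie behind the camera are filled with NaN.
    """
    H, W, _ = image.shape
    z = points_cam[:, 2]
    # Pinhole projection: pixel ~ K @ (x/z, y/z, 1)
    uv = (K @ (points_cam / z[:, None]).T).T[:, :2]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    # Keep only points in front of the camera and inside the image bounds
    valid = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    colours = np.full((len(points_cam), 3), np.nan)
    colours[valid] = image[v[valid], u[valid]]
    return colours
```

In a realistic pipeline the points would first be transformed from the lidar frame into each camera frame, and occlusion (a closer surface hiding a farther point from the camera) would have to be handled, which is precisely where the fusion problem becomes non-trivial.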
From a methodological point of view, computer graphics, image processing, and machine learning have each been widely used, largely independently, to address such fusion problems. In this thesis, you will develop new solutions that combine tools from these different communities for robust and scalable multimodal fusion.
While this fusion problem arises in a variety of applications, including autonomous driving and indoor robotics, this work will focus on Earth observation.