Multimodal fusion for large-scale 3D mapping

The project will explore the combination of 3D point clouds with imaging modalities (colour, hyperspectral images) via machine learning and computer graphics to improve the characterization of complex 3D scenes.
Description of the Project: 

Lidar point clouds have been widely used to segment large 3D scenes such as urban areas and vegetated regions (forests, crops, …), and to build elevation profiles. However, efficient point cloud analysis in the presence of complex scenes and partially transparent objects (e.g., forest canopy) remains an open challenge.

In this thesis, you will explore how more efficient 3D characterization can be achieved using multiple modalities, including multiple lidar sensors coupled with colour and/or multispectral data. While lidar point clouds are generally dense, they capture the geometry of the scene but not its colour or spectral content, which can instead be inferred from sets of 2D images.
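A minimal sketch of this kind of lidar–image fusion is to project each 3D point into a calibrated camera image and read off the pixel colour. The function below assumes a simple pinhole camera model with known intrinsics and pose (the names and parameters are illustrative, not part of any specific pipeline); real systems would also handle occlusion and lens distortion.

```python
import numpy as np

def colour_points(points, image, K, R, t):
    """Assign each 3D point the colour of the pixel it projects to.

    points: (N, 3) world coordinates; image: (H, W, 3) RGB array;
    K: 3x3 camera intrinsics; R, t: world-to-camera rotation and translation.
    Points behind the camera or projecting outside the image get None.
    Illustrative pinhole model only: occlusion and distortion are ignored.
    """
    cam = points @ R.T + t              # transform world -> camera frame
    colours = [None] * len(points)
    H, W = image.shape[:2]
    for i, (x, y, z) in enumerate(cam):
        if z <= 0:                      # point is behind the camera
            continue
        u, v, w = K @ np.array([x, y, z])
        col, row = int(u / w), int(v / w)   # perspective divide to pixels
        if 0 <= row < H and 0 <= col < W:
            colours[i] = tuple(image[row, col])
    return colours
```

In practice the registration between the lidar frame and each camera (R, t above) is itself uncertain and is one of the quantities that machine-learning-based fusion methods must estimate or refine.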

From a methodological point of view, computer graphics, image processing and machine learning have been widely used independently to address such fusion problems. In this thesis, you will develop new solutions combining tools from these different communities for robust and scalable multimodal fusion.

While this fusion problem arises in a variety of applications, including autonomous cars and indoor robotics, this work will focus on Earth observation.

Project number: 
First Supervisor: 
First supervisor university: 
Heriot-Watt University
Essential skills and knowledge: 
Strong programming skills (Matlab and C++)
Background in data science (optimization, machine learning, applied mathematics)
Desirable skills and knowledge: 
Background in computer graphics
Parallel / CUDA programming
Funding Available: