Robust Scene Perception for Drones in Degraded Visual Environments

Develop a robust metric and semantic mapping system for drones under indoor visual degradation.
Description of the Project: 

Background: The global unmanned aerial vehicle (UAV) market has grown rapidly in recent years. These aircraft are used in a variety of civilian and scientific applications, ranging from disaster response and asset monitoring to geological investigation and archaeology. For example, Parisian firefighters used drones to track the progression of the Notre Dame fire and to find the best positions to aim fire hoses; by flying drones over the cathedral, they obtained essential data for taming the flames. A key enabler of many drone applications is scene perception: the ability of the aerial vehicle to build accurate representations of its surrounding environment from exteroceptive sensors.

Challenges: Thanks to the revival of deep learning, great advances have recently been made in scene perception for drones. Nevertheless, most perception methods are designed for cameras or similar optical sensors, which can be brittle in critical yet visually degraded environments, e.g., smoke-filled firefighting scenes or underground cave exploration in darkness or dimness. Although RF and infrared sensors are known to be insusceptible to visual degradation, the corresponding perception algorithms remain under-explored and are often unsuitable for agile, fast-moving UAVs. Moreover, these issues are exacerbated by the drone's limited payload and tight on-board computation budget.

Goals: To perceive the environment robustly under low visibility, this project will explore a novel scene perception system designed for drones. The new system will feature a set of emerging lightweight sensors that are intrinsically robust to visual degradation (e.g., single-chip millimeter-wave radars, thermopile arrays, event cameras and solid-state LiDARs) and a resource-aware computation pipeline that adapts to the drone's capabilities. Focusing on two pillar tasks of scene perception, we aim for drones to attain reliable performance in (1) 3D metric mapping and (2) semantic mapping under adverse visual conditions. Concretely, this project aims to achieve the following objectives:

  1. Explore and identify a set of sensors that can collectively cope with different types of visual degradation (e.g., airborne particles, darkness or bright stimuli) while remaining payload-friendly.
  2. With the identified sensor set, design a 3D scene reconstruction (metric mapping) solution that is robust across a diversity of degraded visual environments.
  3. Develop a semantic or instance segmentation method for the reconstructed scenes based on deep neural networks and end-to-end sensor fusion.
  4. Optimize the above metric mapping and semantic mapping modules such that the final model can operate adaptively on resource-constrained drones in an anytime prediction fashion (a minimal illustrative sketch follows this list).
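To make the anytime-prediction idea in objective 4 concrete, below is a minimal PyTorch sketch, purely illustrative, of a segmentation network with early-exit heads whose depth can be capped at inference time according to the available on-board compute. The class name, channel sizes, number of exits and fused-input layout are assumptions made for this example, not the project's actual architecture.

```python
# Illustrative sketch only: an anytime semantic segmentation network with
# early exits, so a resource-constrained drone can stop after any stage and
# still obtain a (coarser) semantic map. Names and sizes are assumptions.
import torch
import torch.nn as nn


class AnytimeSegNet(nn.Module):
    def __init__(self, in_channels=4, num_classes=8):
        super().__init__()
        # Backbone stages (input could be fused radar/thermal/LiDAR channels).
        self.stage1 = nn.Sequential(nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.stage3 = nn.Sequential(nn.Conv2d(64, 128, 3, padding=1), nn.ReLU())
        # One lightweight prediction head per stage (the "early exits").
        self.heads = nn.ModuleList([nn.Conv2d(c, num_classes, 1) for c in (32, 64, 128)])

    def forward(self, x, budget=3):
        # 'budget' caps how many stages run; lower it when on-board compute is scarce.
        outputs = []
        for stage, head in zip((self.stage1, self.stage2, self.stage3), self.heads):
            x = stage(x)
            outputs.append(head(x))  # per-pixel class logits at this exit
            if len(outputs) >= budget:
                break
        return outputs  # last element is the best prediction obtained so far


if __name__ == "__main__":
    net = AnytimeSegNet()
    fused_input = torch.randn(1, 4, 64, 64)   # dummy fused sensor tensor
    coarse = net(fused_input, budget=1)[-1]   # fast, coarse semantic map
    full = net(fused_input, budget=3)[-1]     # full-quality prediction
    print(coarse.shape, full.shape)
```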

The proposed methods will be evaluated on the above two tasks in terms of mapping accuracy and real-time performance. Field evaluation in real degraded visual environments will be included in this project.

Resources required: 
A drone equipped with several lightweight sensors and an on-board computer; these are already available in the Robotarium and the supervisor's lab.
Project number: 
100020
First Supervisor: 
University: 
University of Edinburgh
Second Supervisor(s): 
First supervisor university: 
University of Edinburgh
Essential skills and knowledge: 
Skills: Linux, C++, Python, TensorFlow, PyTorch
Knowledge: 3D reconstruction, SLAM, deep neural networks
Desirable skills and knowledge: 
Research experience in UAV applications, including mapping, tracking, detection or segmentation. Skilled in ROS development for real-world robots.
References: 

[1] Lu, C. X., et al. (2020). See Through Smoke: Robust Indoor Mapping with Low-cost mmWave Radar. In Proceedings of ACM MobiSys.

[2] Chen, C., Rosa, S., Miao, Y., Lu, C. X., Wu, W., Markham, A., & Trigoni, N. (2019). Selective Sensor Fusion for Neural Visual-Inertial Odometry. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

[3] Saputra, M. R. U., de Gusmao, P. P., Lu, C. X., et al. (2020). DeepTIO: A Deep Thermal-Inertial Odometry with Visual Hallucination. IEEE Robotics and Automation Letters.

[4] Stambler, A., Spiker, S., Bergerman, M., & Singh, S. (2016). Toward Autonomous Rotorcraft Flight in Degraded Visual Environments: Experiments and Lessons Learned. In Degraded Visual Environments: Enhanced, Synthetic, and External Vision Solutions 2016 (Vol. 9839, p. 983904). International Society for Optics and Photonics.