Robust Ego-Motion Estimation with Dynamic Deep Sensor Fusion

Explore multi-sensor solutions to robust 6-DoF egomotion estimation by combining the strengths of Bayesian learning and adversarial deep neural networks.
Description of the Project: 

Background: Mobile robots are rapidly becoming part of our everyday lives. Today's autonomous vehicles (AVs), such as Google's Waymo One, Baidu's Apollo Go and TuSimple's self-driving trucks, are already operating on public roads and generating revenue. Meanwhile, Amazon and Walmart plan to launch drone delivery services within the year, and unmanned aerial vehicles (UAVs) are widely used in indoor search and rescue to protect first responders from hazards. A key enabler of this autonomous mobility is egomotion estimation: the ability to determine the relative translation and orientation of the ego-vehicle with respect to its own past state over time, purely by analysing sensory data.
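
To make that definition concrete, below is a minimal Python/NumPy sketch of what a single egomotion step looks like: the 6-DoF relative SE(3) transform between two consecutive poses. All names and values here are illustrative, not part of the project.

```python
import numpy as np

def relative_pose(T_prev: np.ndarray, T_curr: np.ndarray) -> np.ndarray:
    """6-DoF egomotion: the SE(3) transform taking the previous pose to the current one."""
    return np.linalg.inv(T_prev) @ T_curr

# Illustrative example: the vehicle moves 1 m forward while yawing 5 degrees.
yaw = np.deg2rad(5.0)
T_prev = np.eye(4)                       # pose at time t (4x4 homogeneous matrix)
T_curr = np.eye(4)                       # pose at time t+1
T_curr[:3, :3] = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                           [np.sin(yaw),  np.cos(yaw), 0.0],
                           [0.0,          0.0,         1.0]])
T_curr[:3, 3] = [1.0, 0.0, 0.0]

T_rel = relative_pose(T_prev, T_curr)    # 3-DoF rotation + 3-DoF translation
```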

Challenges: With the main focus placed on accuracy, multimodal egomotion estimation has been an active research topic for decades and keeps evolving as the sensors carried by mobile platforms grow increasingly diverse. However, prior work on egomotion estimation is either limited by complex sensor-specific domain knowledge and error-prone hand-tuned models (model-based methods), or generalizes poorly to unseen scenes and lacks a proper treatment of uncertainty under dynamics (data-driven methods).

Goals: Towards robust egomotion estimation for mobile robots, this project will explore a dynamic multi-sensor fusion method that draws on both model-based and data-driven approaches. Beyond accuracy, the project advocates a design that is robust against (1) complicated environment dynamics in the wild, (2) widespread sensor malfunctions and failures, and (3) intentional malicious attacks. Concretely, we aim to achieve the following ambitious objectives:

  1. Design an end-to-end trainable model that combines Bayesian filtering and DNNs, avoiding hand-crafted models while encouraging physically-aware model behaviour (objectives 1 and 2 are loosely illustrated by the sketch following this list).
  2. Explicitly model the uncertainty of the subnetworks and propagate that uncertainty as the state evolves.
  3. Study multiple adversarial learning strategies so that the trained model can handle both intentional sensor attacks and unintentional adverse scenarios in the wild.
  4. Derive theoretical guarantees for the final estimate and explore 'fail-safe' solutions where needed.
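
As a rough illustration of objectives 1 and 2 (not the project's actual model), the PyTorch sketch below shows one way the two ingredients can be combined: a small network predicts both an observation and its uncertainty, and a differentiable Kalman-style update fuses them while propagating covariance, so the whole pipeline can be trained end to end. All module and variable names are assumptions made for this example.

```python
import torch
import torch.nn as nn

class MeasurementHead(nn.Module):
    """Maps raw sensor features to a 3-DoF observation and a per-dimension log-variance."""
    def __init__(self, feat_dim: int, state_dim: int = 3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 2 * state_dim))
        self.state_dim = state_dim

    def forward(self, feats):
        out = self.net(feats)
        z = out[..., :self.state_dim]              # predicted observation
        log_var = out[..., self.state_dim:]        # predicted uncertainty
        R = torch.diag_embed(torch.exp(log_var))   # observation covariance
        return z, R

def kf_update(x, P, z, R):
    """Differentiable Kalman update with an identity observation model."""
    S = P + R                                      # innovation covariance
    K = P @ torch.linalg.inv(S)                    # Kalman gain
    x_new = x + (K @ (z - x).unsqueeze(-1)).squeeze(-1)
    P_new = (torch.eye(x.shape[-1]) - K) @ P
    return x_new, P_new

# One fusion step: predict (a trivial random-walk motion model), then update.
head = MeasurementHead(feat_dim=32)
x = torch.zeros(3)                                 # state estimate
P = torch.eye(3)                                   # state covariance
Q = 0.01 * torch.eye(3)                            # process noise
feats = torch.randn(32)                            # stand-in sensor features

P = P + Q                                          # predict step
z, R = head(feats)
x, P = kf_update(x, P, z, R)                       # gradients flow back into `head`
```

Because the update is differentiable, a pose-supervision loss trains the network to report large variances for unreliable sensor inputs, which in turn down-weights them in the fused estimate.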

The usefulness of the proposed method will be evaluated in terms of its ability to cope with challenging environments, using both public self-driving car datasets and an in-house dataset collected with unmanned aerial vehicles.
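
As one hedged example of how such an evaluation is commonly scored (the project's exact protocol is not fixed here), relative pose error separates the translational and rotational components of the residual between estimated and ground-truth motion:

```python
import numpy as np

def relative_pose_error(T_gt: np.ndarray, T_est: np.ndarray):
    """Translational (m) and rotational (rad) error between two relative SE(3) poses."""
    E = np.linalg.inv(T_gt) @ T_est                # residual transform
    t_err = np.linalg.norm(E[:3, 3])
    # Rotation angle of the residual, clipped for numerical safety.
    cos_angle = np.clip((np.trace(E[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    r_err = np.arccos(cos_angle)
    return t_err, r_err
```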

Resources required: 
A drone equipped with several lightweight sensors and an on-board computer; both are already available in the Robotarium and the supervisor's lab.
Project number: 
140028
First Supervisor: 
University: 
University of Edinburgh
First supervisor university: 
University of Edinburgh
Essential skills and knowledge: 
Skills: Linux, C++, Python, TensorFlow, PyTorch.
Knowledge: SLAM, Bayesian filtering, factor graphs, deep neural networks.
Desirable skills and knowledge: 
A solid understanding of, or research experience with, millimetre-wave radars, ultrasonic sensors, or solid-state LiDARs. Proficiency with ROS on real-world robots. Hands-on model design and training experience with CNNs, RNNs, or GCNs.