Robust Ego-Motion Estimation with Dynamic Deep Sensor Fusion
Background: Mobile robots are rapidly becoming part of everyday life. Today's autonomous vehicles (AVs), such as Waymo One, Baidu Apollo Go and TuSimple's self-driving trucks, already operate on public roads and earn revenue. Meanwhile, Amazon and Walmart plan to launch drone delivery services within the year, and unmanned aerial vehicles (UAVs) are widely used in indoor search and rescue to protect first responders from hazards. A key enabler of such autonomous mobility is ego-motion estimation: the ability to determine the translation and rotation of the ego-vehicle relative to its own past states over time, purely by analysing sensory data.
Challenges: With the focus mainly on accuracy, multimodal ego-motion estimation has been an active research topic for more than a decade and keeps evolving with the increasingly diverse sensors on mobile platforms. However, prior work on ego-motion estimation is either limited by complex sensor-specific domain knowledge and error-prone hand-tuned models (model-based methods), or generalizes poorly to unseen scenes and lacks proper treatment of uncertainty under dynamics (data-driven methods).
Goals: Towards robustly estimating the ego-motion of mobile robots, this project will explore a dynamic multi-sensor fusion method that draws on both model-based and data-driven approaches. Beyond accuracy, the design should be robust against (1) complicated environmental dynamics in the wild, (2) widespread sensor malfunctions and failures, and (3) intentional malicious attacks. Concretely, we aim to achieve the following ambitious objectives in this project:
- Design an end-to-end trainable model that combines Bayesian filtering and DNNs, avoiding model handcrafting while encouraging physically plausible model behaviour.
- Explicitly model the uncertainty of the sub-networks and propagate it through the state evolution.
- Study multiple adversarial learning strategies so that the trained model can handle both intentional sensor attacks and unintentional adverse scenarios in the wild.
- Derive theoretical guarantees for the final estimate and explore ‘fail-safe’ solutions where needed.
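To make the first two objectives concrete, the idea can be illustrated with a minimal sketch (not the proposal's actual architecture): a one-dimensional Kalman filter in which a learnable sub-network predicts the measurement-noise variance per sample, so that low-quality sensor readings are automatically down-weighted and uncertainty is propagated through the state evolution. The `noise_net` function and its parameters are hypothetical stand-ins for a trained DNN.

```python
import numpy as np

def noise_net(feature, w=2.0, b=0.1):
    # Hypothetical stand-in for a DNN sub-network: maps a sensor-quality
    # feature to a positive measurement variance (softplus keeps it > 0).
    return np.log1p(np.exp(w * feature + b))

def kf_step(x, P, z, feature, q=0.01):
    # Predict: constant-position motion model with process-noise variance q.
    x_pred, P_pred = x, P + q
    # Update: fuse measurement z using its learned variance r.
    r = noise_net(feature)
    k = P_pred / (P_pred + r)           # Kalman gain
    x_new = x_pred + k * (z - x_pred)   # posterior mean
    P_new = (1.0 - k) * P_pred          # posterior variance (propagated uncertainty)
    return x_new, P_new

# Two trustworthy readings, then an outlier flagged by its quality feature.
x, P = 0.0, 1.0
for z, feat in [(1.0, -3.0), (1.1, -3.0), (5.0, 4.0)]:
    x, P = kf_step(x, P, z, feat)
```

Because the whole pipeline consists of differentiable operations, the same structure could in principle be trained end to end (e.g. in an autodiff framework), which is the appeal of combining Bayesian filtering with DNNs: the filter enforces physically meaningful state dynamics while the network handles the hard-to-model noise characteristics.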
The proposed method will be evaluated on its ability to cope with challenging environments, using public self-driving car datasets and an in-house data collection with unmanned aerial vehicles.