Lifelong Learning for Vision-Based AUV Control
**Note: Project availability subject to collaboration agreement being signed**
Rovco (www.rovco.com) is a subsea services and R&D company with a focus on computer vision, robotics and AI. They develop and deploy live 3D vision and ML systems to deliver state-of-the-art underwater survey services globally.
70% of the Earth is covered with water, yet we have mapped only 5% of it. We have OS maps of the Moon and Mars and almost no maps of the subsea. Every deep-sea cruise discovers new molecules and species living in uncharted areas of the oceans. Inspection of subsea assets, which are forecast to become a primary source of power (offshore wind) and food (subsea agriculture), suffers from the same problem: how can we cheaply map and inspect this environment? The answer lies in autonomous vehicles (drones) equipped with state-of-the-art sensing and data interpretation.
The future of underwater systems is autonomy, and part of this future is control in a challenging, changing environment.
The research area for this project focuses on learnt control for Autonomous Underwater Vehicles (AUVs). For Remotely Operated Vehicles (ROVs), a human operator must learn and adapt to changing water conditions, visibility and changing vehicle dynamics, whether due to system degradation, changing payload or other causes. To enable reliable control of an AUV in real-world scenarios, learnt control policies offer the potential to bring increased robustness and online adaptation to these problems. The focus is on visual sensors because they are common on ROVs/AUVs, provide a rich source of environment and pose information, are required for many close inspection/intervention tasks, and are directly relevant to Rovco’s existing R&D work. Although vision is a focus, other sensors are not excluded.
Learning for robotic control is not a new area of research, but the recent rise of powerful applied machine vision techniques opens up new potential for work combining vision and learnt control in real-world tasks. For example, this recent article (https://alexgkendall.com/reinforcement_learning/now_is_the_time_for_reinforcement_learning_on_real_robots/) summarises many of the challenges and successes in applying reinforcement learning to robotics. Considering vision specifically as an input, the metrics used to evaluate pose estimation, object detection or reconstruction methods may not be the best objectives to optimise when those outputs feed robotic control: it may not be necessary to understand the structure of a scene accurately in order to command the thrusters and move the robot accurately.
The key outputs of this project are envisaged to be algorithms and implementations for controlling AUV/ROV systems from visual inputs, using a learnt approach that brings benefits over a hand-coded control scheme. For example, it could enable accurate, stable motion control that works across a variety of platforms, on platforms with changing payloads, or under different water/current conditions. This could take the form of end-to-end approaches from vision to control given high-level objectives (e.g. “hold position”, “move right” or even “explore”), or approaches that depend on noisy pose/environment sensing from other existing algorithms.
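To make the end-to-end idea concrete, here is a minimal, purely illustrative sketch of a learnt policy that maps a camera frame plus a high-level command directly to thruster demands. Everything here is an assumption for illustration: the command set, the image size, the thruster count and the tiny two-layer network are all hypothetical, and a real system would use a trained convolutional policy rather than random weights.

```python
import numpy as np

# Hypothetical high-level command vocabulary (assumed, not from the project spec).
COMMANDS = ["hold_position", "move_right", "explore"]

class VisionPolicy:
    """Illustrative vision-to-thruster policy: flattened frame + one-hot
    command in, per-thruster demands in [-1, 1] out. Weights are random
    here; in practice they would be learnt (e.g. by reinforcement learning)."""

    def __init__(self, img_dim=64 * 64, n_cmds=len(COMMANDS),
                 hidden=128, n_thrusters=6, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.01, (img_dim + n_cmds, hidden))
        self.W2 = rng.normal(0.0, 0.01, (hidden, n_thrusters))

    def act(self, frame, command_idx):
        # frame: flattened grayscale image with values in [0, 1]
        cmd = np.zeros(self.W1.shape[0] - frame.size)
        cmd[command_idx] = 1.0
        x = np.concatenate([frame, cmd])
        h = np.tanh(x @ self.W1)          # shared visual/command features
        return np.tanh(h @ self.W2)       # bounded thruster demands

policy = VisionPolicy()
frame = np.zeros(64 * 64)                 # stand-in for a camera frame
u = policy.act(frame, COMMANDS.index("hold_position"))
```

The point of the sketch is the interface, not the network: the policy consumes raw pixels and a symbolic objective, and emits low-level actuation, so no hand-coded pose estimator sits in the loop.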
A very successful PhD project would see the development of promising implementations, tested in the real world. These would then provide a clear route to commercialisation by Rovco, leading to eventual deployment.