Explainable AI and Autonomy for the Maritime Domain
**Note: Project availability subject to collaboration agreement being signed**
SeeByte is sponsoring a PhD in the area of Human-Machine Interaction for the safe, innovative and dynamic use of marine autonomous systems. These systems vary in autonomy and communication ability, ranging from tethered, remotely controlled Remotely Operated Vehicles (ROVs) with low autonomy, to fully autonomous systems that can operate deep underwater but with limited communications, to autonomous surface and air vehicles with continuous communications.
This variability poses a challenge for human operators’ situation awareness, trust and mental models of the various vehicles, and for deciding if and when the human is required to take over control or abort the mission. There is an increasing need for unmanned and manned systems to co-operate and team effectively, requiring fluid, adaptive and continuous interaction between operators and robots. This is a significant departure from the current state of the art in Human-Machine Interaction in the maritime robotics domain, where the operator typically pre-plans a fixed mission and is then consigned to the role of an observer.
Explainability of autonomy and AI is a key factor in maintaining trust and facilitating adoption. This PhD will focus on natural language explanations with respect to SeeByte’s autonomy framework, which is deployed on maritime robots worldwide and was the first commercial system to demonstrate multi-vehicle collaboration in the maritime domain.
Research questions include:
- What information should the vehicle convey, and when, given varying communication links?
- How can the systems explain their behaviour, and what they can and cannot do, in easily understood natural language to facilitate operations and training?
- How can we adapt explanations to the current context and user’s mental model?
- How should an operator interact effectively with multiple assets and what information is required before an operator can “delegate authority” to an autonomous system?
Relevant publications:
- Francisco J. Chiyah Garcia, David A. Robb, Atanas Laskov, Xingkun Liu, Pedro Patron, and Helen Hastie (2018). Explainable Autonomy: A Study of Explanation Styles for Building Clear Mental Models through a Multimodal Interface. In Proceedings of the 11th International Conference on Natural Language Generation (INLG), Tilburg, The Netherlands. arXiv preprint.
- D. A. Robb, J. S. Willners, N. Valeyrie, F. J. Chiyah Garcia, A. Laskov, X. Liu, P. Patron, H. Hastie, and Y. Petillot (2018). A Natural Language Interface with Relayed Acoustic Communications for Improved Command and Control of AUVs. In Proceedings of the 2018 IEEE/OES Autonomous Underwater Vehicles Symposium. arXiv preprint.
- David A. Robb, Francisco J. Chiyah Garcia, Atanas Laskov, Xingkun Liu, Pedro Patron, and Helen Hastie (2018). Keep Me in the Loop: Increasing Operator Situation Awareness through a Conversational Multimodal Interface. In Proceedings of the 2018 International Conference on Multimodal Interaction (ICMI ’18), Boulder, CO, USA.