Robotics is spreading to a variety of application domains, from autonomous cars and drones to domestic robots and personal devices. Each domain comes with a rich set of requirements such as legal policies, safety and security standards, company values, or simply public perception. These requirements must be realised as verifiable properties of software and hardware. Consider the following policy: a self-driving car must never break the highway code.
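One way such a policy can be realised as a verifiable property is as a runtime monitor that checks every system state against the rule. The following is a minimal sketch in Python; the class, field names, and the speed-limit reading of the highway-code policy are all illustrative assumptions, not part of any real autonomous-driving stack.

```python
# Minimal sketch: the policy "never break the highway code", narrowed here to
# "never exceed the posted speed limit", expressed as a runtime-checkable
# property. All names and values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class VehicleState:
    speed_kmh: float   # current vehicle speed
    limit_kmh: float   # posted limit for the current road segment

def violates_speed_policy(state: VehicleState) -> bool:
    """True if this state breaches the speed-limit reading of the policy."""
    return state.speed_kmh > state.limit_kmh

# A supervisor could evaluate the property on every control-loop tick:
assert not violates_speed_policy(VehicleState(speed_kmh=95.0, limit_kmh=100.0))
assert violates_speed_policy(VehicleState(speed_kmh=112.0, limit_kmh=100.0))
```

Expressing the policy as an executable predicate is what makes it verifiable: the same predicate can be checked online by a monitor or offline by a model checker over all reachable states.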
AnKobot is sponsoring an exciting PhD project in the field of mobile autonomy using visual Simultaneous Localization and Mapping (SLAM), semantic scene understanding and computer vision.
Deep Neural Network (DNN) technologies coupled with GPU hardware provide practical methods for learning complex functions from vast datasets. However, their architectures are often developed by trial and error, and the resulting systems typically provide ‘black box’ solutions containing many millions of learnt but opaque parameters. They are therefore extremely difficult to interpret and understand, and their accuracy and certainty of prediction, or classification, are normally not known.
Natural Language Generation (NLG) is the task of translating machine-readable representations and data into human language, and is thus vital for accountability in safe human-machine collaboration. Neural network architectures for NLG are promising since they are able to capture linguistic knowledge through latent representations learnt from raw input data. They therefore simplify system design by avoiding costly manual feature engineering, with the potential to scale more easily to new data and domains.
There are many outstanding research problems in developing spoken conversational Natural Language interfaces for effective human-robot interaction (HRI). For example:
Ambiguity & Vagueness are pervasive in human conversation, and their detection and resolution via clarificational dialogue is key to collaborative task success. State-of-the-art HRI systems cannot handle this, increasing the risk of miscommunication in human-machine collaboration, especially in safety-critical environments.
For humans and machines to collaborate effectively and safely on shared tasks, the partners need to develop a trusting working relationship. To do this, they need a mutual understanding of the task and of what the other party can and can’t do. This is known as the ‘Double Empathy Problem’ in autism research (Milton, 2012).
This project will combine advances from epistemic planning (embodied in planners such as PKS) with high-level programming constructs enabling the representation of beliefs and stochastic information (such as the ALLEGRO language) for the purpose of collaborative task planning.
Practical autonomous and semi-autonomous robots need automated plan generation if they are to achieve even simple missions with minimal human intervention. Such plans need to be communicated interactively, simply and quickly to human supervisors in a way that allows them to rapidly assess performance and risk. This is especially the case for mixed robot/human teams.