AnKobot is sponsoring an exciting PhD project in the field of mobile autonomy using visual Simultaneous Localization and Mapping (SLAM), semantic scene understanding and computer vision.
Deep Neural Network (DNN) technologies coupled with GPU-class hardware provide practical methods for learning complex functions from vast datasets. However, their architectures are often developed by trial and error, and the resulting systems normally provide ‘black box’ solutions containing many millions of learnt but abstract parameters. They are therefore extremely difficult to interpret and understand, and the accuracy and certainty of their predictions or classifications are normally not known.
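As a brief illustration of one way such certainty estimates can be attached to an otherwise opaque model, the sketch below uses Monte Carlo dropout: dropout is kept active at test time and several stochastic forward passes are averaged, with their spread serving as a crude uncertainty measure. The model, layer sizes and names are illustrative placeholders, not part of any specific system.

```python
import torch
import torch.nn as nn

class SmallClassifier(nn.Module):
    """A toy classifier; the dropout layer is what enables MC sampling."""
    def __init__(self, n_features=128, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64),
            nn.ReLU(),
            nn.Dropout(p=0.5),   # kept active at test time for MC dropout
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_predict(model, x, n_samples=50):
    """Run several stochastic forward passes; return mean class
    probabilities and their standard deviation as an uncertainty proxy."""
    model.train()  # keep dropout enabled during prediction
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    return probs.mean(dim=0), probs.std(dim=0)

model = SmallClassifier()
x = torch.randn(1, 128)              # a dummy input
mean_p, std_p = mc_dropout_predict(model, x)
print(mean_p.argmax(dim=-1), std_p.max())
```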
Natural Language Generation (NLG) is the task of translating machine-readable representations and data into human language, and it is thus vital for accountability in safe human-machine collaboration. Neural network architectures for NLG are promising because they are able to capture linguistic knowledge through latent representations learnt from raw input data. This simplifies system design by avoiding costly manual feature engineering, with the potential to scale more easily to new data and domains.
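For illustration only, the sketch below shows a toy data-to-text generator in this style: a machine-readable record is encoded into a latent vector that conditions a recurrent decoder over a word vocabulary, with no hand-engineered linguistic features. All names, sizes and the vocabulary are assumed placeholders rather than a real NLG system.

```python
import torch
import torch.nn as nn

class TinyNLG(nn.Module):
    def __init__(self, n_fields=8, vocab_size=100, hidden=64):
        super().__init__()
        self.encode = nn.Linear(n_fields, hidden)   # latent representation of the input record
        self.embed = nn.Embedding(vocab_size, hidden)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, record, tokens):
        # record: (batch, n_fields) numeric encoding of the input data
        # tokens: (batch, seq_len) previously generated word ids
        h0 = torch.tanh(self.encode(record)).unsqueeze(0)  # initial decoder state
        emb = self.embed(tokens)
        out, _ = self.gru(emb, h0)
        return self.out(out)   # logits over the vocabulary at each step

model = TinyNLG()
record = torch.randn(1, 8)                  # a dummy data record
tokens = torch.randint(0, 100, (1, 5))      # a dummy word-id prefix
logits = model(record, tokens)              # shape (1, 5, 100)
```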
The training phase in Deep Learning is highly compute- and data-intensive; its efficiency therefore typically restricts the quality of results that can be achieved within a given time frame.
Many learning algorithms are dominated by the speed at which data can be brought to the CPU, i.e., by the memory bandwidth of the executing hardware. Consequently, techniques based on reduced-precision number representations have been shown to produce faster results without a significant loss of quality.
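As one concrete instance of such techniques, the sketch below uses PyTorch's automatic mixed precision (AMP), which runs most of the forward pass in half precision to reduce memory traffic, while a gradient scaler guards against underflow in the low-precision gradients. The model and data here are placeholders standing in for a real training loop.

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(1024, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
# Scaling is only needed (and enabled) when fp16 is actually in use.
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

for _ in range(10):                      # stand-in for a real data loader
    x = torch.randn(32, 1024, device=device)
    y = torch.randint(0, 10, (32,), device=device)
    optimizer.zero_grad()
    with torch.autocast(device_type=device):
        loss = loss_fn(model(x), y)      # forward pass in reduced precision
    scaler.scale(loss).backward()        # scale loss so fp16 grads stay representable
    scaler.step(optimizer)
    scaler.update()
```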
There are many outstanding research problems in developing spoken conversational Natural Language interfaces for effective human-robot interaction (HRI). For example:
When robots are co-located with people, safe interaction requires that the robot has reliable and up-to-date information about its environment and the locations of all persons and objects within it. The safety of any action undertaken by the robot needs to be assessed against the current state of its world, so the robot’s model of the world must be regularly updated.
Ambiguity & Vagueness are pervasive in human conversation; and their detection and resolution via clarificational dialogue is key to collaborative task success. State-of-the-art HRI cannot handle this, thus increasing risk of miscommunication in human-machine collaboration, especially in safety-critical environments.
For humans to collaborate effectively and safely on shared tasks in human-machine teams, they need to be able to develop a trusting working relationship. To do this, partners need a mutual understanding of the world and of the task at hand, and must understand what the other party can and cannot do. As robotic systems become more human-like and autonomous, this relationship becomes ever more important: once established, human-machine teams must be efficient and robust enough to safely handle problematic situations, for example when failure occurs, be it on the system or the human side.