Natural Language Generation (NLG) is the task of translating machine-readable representations and data into human language, and is thus vital for accountability in safe human-machine collaboration. Neural network architectures for NLG are promising because they are able to capture linguistic knowledge in latent representations learned from raw input data. They therefore simplify system design by avoiding costly manual feature engineering, with the potential to scale more easily to new data and domains.
The training phase in deep learning is very compute- and data-intensive; the efficiency of training therefore typically restricts the quality of results that can be achieved within a given time frame.
Many learning algorithms are dominated by the speed at which data can be brought to the CPU, i.e., by the memory bandwidth of the executing hardware. Consequently, techniques based on reduced-precision number representations have been shown to produce results faster without a significant loss in quality.
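A minimal sketch of why reduced precision helps a bandwidth-bound workload: storing the same weights in half precision halves the bytes that must cross the memory bus, at the cost of a small, bounded rounding error. This is an illustrative NumPy example, not the project's implementation.

```python
import numpy as np

# Toy illustration: the same weight matrix in float16 needs half the
# memory traffic of float32, which matters when training speed is
# limited by memory bandwidth rather than arithmetic throughput.
weights32 = np.random.randn(1024, 1024).astype(np.float32)
weights16 = weights32.astype(np.float16)

print(weights32.nbytes)  # 4194304 bytes (4 MiB)
print(weights16.nbytes)  # 2097152 bytes (2 MiB): half the traffic

# Quality check: the rounding error introduced by float16 is small
# relative to typical weight magnitudes drawn from N(0, 1).
err = np.abs(weights32 - weights16.astype(np.float32)).max()
print(err < 1e-2)  # True
```

In practice, mixed-precision training schemes keep a higher-precision master copy of the weights and use the low-precision copy for the bandwidth-heavy passes, bounding the quality loss the passage refers to.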
There are many outstanding research problems in developing spoken conversational Natural Language interfaces for effective human-robot interaction (HRI). For example:
When robots are co-located with people, safe interaction requires that the robot has reliable and up-to-date information about its environment and the locations of all persons and objects within it. The safety of any action undertaken by the robot needs to be assessed against the current state of its world, so the robot’s model of the world must be regularly updated.
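The requirement above can be sketched as a world model that timestamps each observation, so that the safety check refuses to sanction actions based on stale information. All names and the staleness threshold here are hypothetical, chosen only to illustrate the idea.

```python
# Toy sketch (hypothetical names, assumed threshold): a world model
# that records when each entity was last observed, so action safety is
# assessed only against sufficiently fresh information.
STALENESS_LIMIT = 1.0  # seconds; an assumed bound for this sketch

world = {}  # entity -> (position, timestamp)

def observe(entity, position, now):
    """Record a fresh observation of an entity's position."""
    world[entity] = (position, now)

def safe_to_act(entities, now):
    """An action is only cleared if every relevant entity was seen recently."""
    return all(
        entity in world and now - world[entity][1] <= STALENESS_LIMIT
        for entity in entities
    )

observe("person_1", (2.0, 3.0), now=10.0)
print(safe_to_act(["person_1"], now=10.5))  # True: observation is fresh
print(safe_to_act(["person_1"], now=12.0))  # False: the model is out of date
```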
Ambiguity and vagueness are pervasive in human conversation, and their detection and resolution via clarificational dialogue is key to collaborative task success. State-of-the-art HRI systems cannot yet handle this, increasing the risk of miscommunication in human-machine collaboration, especially in safety-critical environments.
For humans to collaborate effectively and safely on shared tasks in human-machine teams, they need to be able to develop a trusting, working relationship. To do this, partners need to have a mutual understanding of the task and understand what the other party can and can’t do.
For humans to collaborate effectively and safely on shared tasks in human-machine teams, they need to be able to develop a trusting working relationship. To do this, partners need a mutual understanding of the world and of the task at hand. As robotic systems become more human-like and autonomous, this relationship with humans grows more important: once established, human-machine teams must be efficient and robust enough to safely handle problematic situations, for example when failure occurs on either the system or the human side.
This project will combine advances in epistemic planning (embodied in planners such as PKS) with high-level programming constructs for representing beliefs and stochastic information (such as the ALLEGRO language), for the purpose of collaborative task planning.
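The core idea of epistemic planning is to plan over what the robot knows rather than over the literal world state, so that sensing actions have knowledge effects and conditional plans can only branch on facts the robot will actually know. The following is a toy Python sketch of that distinction; the names are invented for illustration and are not the PKS or ALLEGRO APIs.

```python
# Toy sketch of knowledge-level planning state (hypothetical names):
# the planner tracks the robot's knowledge, so a sensing action does
# not change the world, only what the robot knows about it.
known_facts = set()     # facts the robot knows to be true
known_whether = set()   # facts whose truth value will be known after sensing

def sense(fact):
    """A sensing action: afterwards the robot knows *whether* `fact` holds."""
    known_whether.add(fact)

def can_branch_on(fact):
    """Conditional plans may only branch on facts the robot will know."""
    return fact in known_facts or fact in known_whether

sense("door_open")
print(can_branch_on("door_open"))  # True: the plan can branch on the sensed outcome
print(can_branch_on("light_on"))   # False: unknown, so no safe branch exists
```

This mirrors, at a very coarse level, the separation PKS-style planners make between knowing a fact and knowing whether a fact holds; the stochastic-information side that ALLEGRO targets is omitted here.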
The aim of this project is to reduce uncertainty and thereby increase the efficiency of human-robot teams in shared spaces. This will be accomplished through technology such as modelling and predicting typical human behaviour with state-of-the-art machine learning approaches (e.g., deep learning), wearable sensors (IoT), and smart planning strategies (e.g., via a digital twin of the real-world system). By using wearables, the project aims to develop low-cost alternatives to high-precision human tracking with expensive sensors in shared spaces.
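A minimal sketch of the prediction-and-planning loop described above, under assumed inputs: extrapolate a person's next position from recent wearable readings with a simple constant-velocity model, then check whether that prediction intrudes into the robot's planned workspace. This stands in for the deep-learning predictors the project actually envisages; all names and values are hypothetical.

```python
# Toy sketch (assumed setup, not the project's system): predict a
# person's next position from the last two wearable position samples,
# then test the prediction against the robot's planned workspace.
def predict_next(positions):
    """Constant-velocity extrapolation from the last two (x, y) samples."""
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    return (2 * x1 - x0, 2 * y1 - y0)

def conflicts(pred, robot_zone):
    """True if the predicted position lies inside the robot's zone."""
    xmin, ymin, xmax, ymax = robot_zone
    x, y = pred
    return xmin <= x <= xmax and ymin <= y <= ymax

track = [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)]   # person walking along x
pred = predict_next(track)                      # (1.5, 0.0)
print(conflicts(pred, (1.2, -0.5, 2.0, 0.5)))   # True: replan or slow down
```

In a digital-twin setting, a learned model would replace the constant-velocity assumption, but the planning interface (predict, then test against the robot's intended motion) stays the same.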