The most popular examples of conversational AI systems are arguably voice assistants such as Amazon Alexa or Google Assistant. They offer a wide range of functionalities, such as answering questions, controlling smart home devices, playing music, setting alarms, and checking the weather. Other systems are capable of engaging in complex conversations about various topics or are designed to act as speech interfaces for social robots. More advanced systems can also process visual input and talk about items in their physical surroundings. All of these systems, however, among many other limitations, suffer from an inability to interact with multiple people at the same time.
Today's conversational agents are unable to differentiate between human speakers or identify the addressee of a given utterance. They also lack the ability to understand complex social situations and adapt their behaviour accordingly. These are only a few of the numerous challenges still waiting to be overcome. So far, very few papers have been written on multi-party dialogue, which creates an opportunity to make a significant contribution to the field.
In my research, I design, develop, and evaluate multi-party multimodal conversational AI systems with physical embodiments. I particularly focus on the Dialogue Management component and ways to optimise it using Reinforcement Learning algorithms.
In May 2019, I graduated with a Master's degree in Computer Science from Adam Mickiewicz University in Poznań. Back then, I was a regular attendee of the weekly seminar of the Department of Natural Language Processing (since transformed into the Department of Artificial Intelligence) at the Faculty of Mathematics and Computer Science.
By that time, I had also worked for over two years as an Assistant Engineer on various Natural Language Processing tasks for Samsung's voice assistant, Bixby.
In August 2019, I moved to Edinburgh to work as a Research Assistant in the Interaction Lab at Heriot-Watt University (HWU). Currently, I am a member of the HWU team involved in the EU H2020-ICT SPRING (Socially Pertinent Robots in Gerontological Healthcare) research project. The overall objective of this project is to develop Socially Assistive Robots (SARs) capable of multi-person interaction and open-domain dialogue, and to validate them against the needs of gerontological healthcare. My role in the project is to adapt an existing conversational AI system so that it handles multi-party interaction.