Prof. Verena Rieser

Verena Rieser is a Professor of Computer Science at Heriot-Watt University, Edinburgh, where she is affiliated with the Interaction Lab. She is also a co-founder of the conversational AI company Alana AI. Verena received her PhD in 2008 from Saarland University and then joined the University of Edinburgh as a postdoctoral research fellow, before taking up a faculty position at Heriot-Watt in 2011. Her research focuses on machine learning techniques for spoken dialogue systems and language generation, where she has authored over 100 peer-reviewed papers. Verena is the PI of several funded research projects and industry awards. She was recently awarded a Leverhulme Senior Research Fellowship by the Royal Society in recognition of her work in developing multimodal conversational systems.
Her most recent research efforts include the Amazon Alexa Prize, where her team reached the finals twice in a row, and the End-to-End NLG Shared Task, which her team organised. She is currently one of the co-chairs of the BigScience community effort in building large language models.

Potential Supervisor

My interests are interaction learning, speech, and multimodal technology. Please see the following link

I specialize in data-driven machine learning approaches for sequential decision making as applied to interactive systems, including human-robot interaction, spoken dialogue systems (also known as conversational agents), multimodal output generation, decision support, and multi-agent systems for computational sustainability. I currently supervise 5 PhD students and 3 postdoctoral researchers in these areas.

Potential Research topics for MSc/EngD:
1. Safe, trusted and bias-free ML: Develop advanced machine learning methods for end-user interaction.
2. Cognitive Interfaces: Develop human-robot interfaces which can adapt to the cognitive load of the user, as estimated from multimodal sensor data.
3. Deep Learning for Open Domain Interaction: Investigate deep neural networks to learn optimal interaction control from unlabelled data.
4. Situated Output Generation: Develop a data-driven framework for multimodal output generation (including language) for human-robot interaction.


Research keywords: 
NLP, deep learning, reinforcement learning, spoken and multimodal interaction
Vision and Perception
Human Robot Interaction
Machine Learning and AI (inc. multi-agent systems)
0131 451 4192

Related current research projects