Dyson is a leading global consumer product company, with its UK campus based near Bristol. Dyson is currently investing heavily in Robotics and this project would be industrially supervised within Dyson’s Robotics Research group.
Digital twins are increasingly the method of choice, if not a trend in their own right, for forecasting, operating and managing complex systems. However, all digital twins are derived from human-defined models of the environment or the physical asset: the human defines the digital proxies, the virtualised environment, the technology, and so on. This project questions the potential of machines and/or networks to derive their own digital model and workspace. In particular, the emphasis is on systems that operate beyond human visual sight and where communication synchronicity is not guaranteed, i.e.
Human-robot interaction requires building a joint understanding of context, facilitating natural and seamless collaboration on tasks, e.g. by joint goal setting, communicating progress or clarifying the user's intention. Achieving natural command, control and feedback in real-world scenarios requires the construction of user-interaction models, supported by spatial modelling and reasoning, that can link a detailed digital landscape to real-world concepts.
Critical illness can affect individuals at any age and for a wide range of medical and surgical conditions. Recovery can be prolonged, and complicated by fatigue, impaired attention and limited engagement with rehabilitation for physical and mental health reasons. Socially assistive robots provide an opportunity for bespoke rehabilitation programmes to be developed by health care professionals, then delivered by the robot, from the time of recovery from critical care, through the rest of the inpatient journey, to the transition home.
Interactions with current AI agents (e.g., Amazon Alexa, Google Home, Apple Siri) are limited to simple single-turn tasks such as asking about the weather, playing a song or telling a joke. What they currently lack is in-depth multi-turn conversation over wider domains (e.g., talking about the news), which entails answering follow-up questions, retrieving information from knowledge bases (e.g., WikiData) and texts (e.g., news articles), and performing common-sense reasoning (e.g., if-then clauses).
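To make the contrast concrete, the toy sketch below shows the kind of dialogue-state tracking a multi-turn, knowledge-grounded agent needs: a follow-up question that omits its entity is resolved against the previous turn. The knowledge-base contents, entity names and rule format are invented stand-ins for illustration, not a real WikiData interface.

```python
# Toy sketch of multi-turn, knowledge-grounded dialogue. A tiny in-memory
# "knowledge base" stands in for something like WikiData; all names and the
# rule format below are illustrative, not a real API.

KB = {
    "Ada Lovelace": {"occupation": "mathematician", "born": "1815"},
    "Alan Turing": {"occupation": "computer scientist", "born": "1912"},
}

def answer(question_entity, attribute, state):
    """Answer one turn; return (reply, new dialogue state)."""
    # Follow-up questions may omit the entity ("When was she born?").
    # Minimal common-sense rule: if no entity is given, reuse the entity
    # from the previous turn, if there was one.
    entity = question_entity or state.get("last_entity")
    if entity is None or entity not in KB:
        return None, state
    state = {"last_entity": entity}  # update dialogue state for the next turn
    return KB[entity].get(attribute), state

state = {}
reply1, state = answer("Ada Lovelace", "occupation", state)  # first turn
reply2, state = answer(None, "born", state)                  # follow-up turn
print(reply1, reply2)  # -> mathematician 1815
```

Even this toy version shows why multi-turn dialogue is harder than single-turn commands: the agent must carry state across turns rather than treat each utterance in isolation.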
In this project, we will explore a Neurorobotics approach to developing robotic controllers for scenarios in which behavioural responses must be fast and precise while operating under strict energy constraints. This will be accomplished using Evolving Spiking Neural Network (ESNN) models.
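As a rough illustration of the dynamics ESNNs build on, the sketch below simulates a single discrete-time leaky integrate-and-fire (LIF) neuron, a common spiking-neuron building block. All parameter values and names are illustrative assumptions, not taken from the project.

```python
# Minimal discrete-time leaky integrate-and-fire (LIF) neuron -- the basic
# unit underlying spiking neural networks. Parameters are illustrative.

def simulate_lif(input_current, tau=10.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Return the list of time steps at which the neuron spikes."""
    v = v_reset
    spikes = []
    for t, i_in in enumerate(input_current):
        # Leaky integration: the membrane potential decays towards rest
        # and is driven up by the input current.
        v += dt * (-v / tau + i_in)
        if v >= v_thresh:      # threshold crossed: emit a spike
            spikes.append(t)
            v = v_reset        # reset the membrane potential
    return spikes

# A constant supra-threshold input yields a regular spike train, while no
# input yields no spikes at all -- this event-driven behaviour is one reason
# spiking models are attractive under strict energy constraints.
spike_times = simulate_lif([0.2] * 50)
print(spike_times)
```

In an ESNN, many such neurons would be connected and both the weights and the network topology evolved, but the per-neuron dynamics remain this simple and cheap to compute.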
Results from this project would be applicable to robotics in search-and-rescue missions, hazardous environments, critical operations and a wide range of human-robot interaction scenarios where enhanced autonomy, robustness and rapid response are of vital importance.