Dr. Alessandro Suglia

Research interests

How can we teach machines to communicate with humans and learn from language instructions, just like we do? Due to the complexity of this research question, I embrace a multi-disciplinary research agenda based on the following macro themes:

Vision and Perception

I am fascinated by the concept of grounded cognition, according to which conceptual representations arise from fusing multiple sources of perceptual information. Specifically, I’m interested in agents that can learn perceptual representations that are effective in downstream tasks involving higher-order reasoning skills, such as situated dialogue and language-guided task completion for embodied agents.

NLP

My research agenda aims to learn word representations that truly capture their meaning. More broadly, I’m very interested in learning language representations that are grounded in perceptual experience. Such representations can then be transferred to tasks such as language-guided task completion, as well as other downstream tasks that require commonsense knowledge.

Machine Learning and AI (inc. multi-agent systems)

I’m interested in developing robots that can learn multimodal representations from interaction with the world and with other agents. Implementing such agents requires sophisticated learning algorithms that can exploit multiple supervision signals. In this interactive learning paradigm, techniques such as reinforcement learning and continual learning are essential.

Human-Robot Interaction

The fundamental research I conduct at the intersection of Perception, NLP and Machine Learning underpins the development of robots that can form a symbiotic relationship with humans. In particular, I’m interested in pushing the boundaries of HRI by moving towards Human-Robot Collaboration, a field in which humans and robots communicate to achieve common ground and improve each other’s skills. If you think your idea fits well with any of these themes, please feel free to get in touch.

Research keywords: 
Embodied AI, NLP, Language Grounding, Deep Learning, Multimodal Machine Learning, Dialogue Systems
Theme: 
Vision and Perception
Human-Robot Interaction
NLP
Machine Learning and AI (inc. multi-agent systems)