Spatial Modelling to support Contextually Aware User Interfaces

Develop novel spatial modelling algorithms and implementations on the edge for real-time context understanding for robots, including autonomous vehicles.
Description of the Project: 

Human-robot interaction requires building a joint understanding of context, facilitating natural and seamless collaboration on tasks, e.g. by setting goals jointly, communicating progress, or clarifying the user's intention. Achieving natural command, control, and feedback in real-world scenarios requires the construction of user interaction models, supported by spatial modelling and reasoning, that can link a detailed digital landscape to real-world concepts.

This PhD will develop new spatial models which use both map data (e.g. buildings, roads) and sensed data (e.g. traffic flow, weather, camera feeds) of real-world features as reference objects in a joint human-robot understanding of context. The research will include real-time visibility modelling at city scale, combining local high-resolution sensor inputs (e.g. object recognition and depth data from stereo cameras) with more generalised large-extent models (e.g. Digital Surface Models) and vector map products (e.g. OS Master Map). GPGPU computing on the edge will be used to achieve these goals in real time.
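To illustrate the kind of visibility computation involved, the following is a minimal sketch of a single line-of-sight test over a Digital Surface Model held as a NumPy array. The array layout, cell size, and the function name line_of_sight are assumptions made for illustration only; the project would generalise this to full viewsheds over many targets and port it to the GPU (GPGPU) to reach real-time rates on edge hardware such as the Jetson.

import numpy as np

def line_of_sight(dsm, obs, tgt, cell=1.0, obs_height=1.6):
    """Return True if the target cell is visible from the observer cell.

    Illustrative assumptions: dsm is a 2-D NumPy array of elevations in
    metres indexed [row, col], with square cells of `cell` metres;
    obs and tgt are (row, col) cell indices.
    """
    r0, c0 = obs
    r1, c1 = tgt
    steps = int(max(abs(r1 - r0), abs(c1 - c0)))
    if steps == 0:
        return True
    z0 = dsm[r0, c0] + obs_height            # observer eye-point elevation
    max_angle = -np.inf                       # steepest angle seen so far
    for i in range(1, steps):
        t = i / steps
        r = int(round(r0 + t * (r1 - r0)))
        c = int(round(c0 + t * (c1 - c0)))
        dist = t * np.hypot(r1 - r0, c1 - c0) * cell
        max_angle = max(max_angle, (dsm[r, c] - z0) / dist)
    # The target is visible if no intervening cell subtends a steeper angle.
    tgt_dist = np.hypot(r1 - r0, c1 - c0) * cell
    tgt_angle = (dsm[r1, c1] - z0) / tgt_dist
    return tgt_angle >= max_angle

# Example (hypothetical cell indices): line_of_sight(dsm, (500, 500), (620, 470), cell=0.5)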

The overall aim of the project is to develop the software and algorithms, delivered as a service, that can effectively combine this range of data at different dimensionalities and scales to provide a constantly refreshed digital twin, which in turn can support a range of user interfaces. Some of the inputs to the model will come from real-time sensors (e.g. stereo depth camera, LiDAR) and some from more static sources (e.g. Digital Surface Model, map data). The result will be a service to which a range of interfaces can connect, for example a speech-based interface that can generate suitable referring expressions for landmarks and features surrounding the robot / autonomous vehicle.
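As an illustration of how such a service could expose the fused model to an interface, the sketch below returns nearby named map features around a given robot position using GeoPandas, giving a dialogue component candidate reference objects for referring expressions. The file name landmarks.gpkg, the name column, and the describe_nearby function are illustrative assumptions rather than an existing API; in the full system, visibility filtering (as in the sketch above) and live sensor inputs would sit behind the same call.

import geopandas as gpd
from shapely.geometry import Point

# Illustrative landmark layer, e.g. derived from vector map data;
# assumed to have a "name" attribute and projected coordinates in metres.
landmarks = gpd.read_file("landmarks.gpkg")

def describe_nearby(easting, northing, radius=150.0, limit=3):
    """Return the closest named features within `radius` metres of the robot."""
    here = Point(easting, northing)
    nearby = landmarks[landmarks.distance(here) <= radius].copy()
    nearby["dist_m"] = nearby.distance(here)
    nearby = nearby.sort_values("dist_m").head(limit)
    return [
        {"name": row["name"], "distance_m": round(row["dist_m"], 1)}
        for _, row in nearby.iterrows()
    ]

# Example (hypothetical coordinates): describe_nearby(325900.0, 673800.0)
# returns a short list of {"name", "distance_m"} records for a dialogue
# component to turn into a referring expression.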

Autonomous robots that can communicate about their context and local environment would be able to support humans in a range of roles, from rapid delivery of medical equipment via drones, to autonomous cars, to disaster emergency response.

Resources required: 
Jetson Xavier NX dev kit; stereo camera
Project number: 
124026
First Supervisor: 
University: 
Heriot-Watt University
First supervisor university: 
Heriot-Watt University
Essential skills and knowledge: 
Programming (e.g. Python); GIS and spatial analysis (e.g. QGIS, PostgreSQL + PostGIS, GeoPandas, NumPy); machine learning / AI
Desirable skills and knowledge: 
Computer vision (e.g. OpenCV, YOLO); natural language processing (NLP); human-computer interaction (HCI)
References: 

[1] Bartie P., Mackaness W., Lemon O., Dalmas T., Janarthanam S., Hill R., Dickinson A. & Liu X. (2017). A Dialogue Based Mobile Virtual Assistant for Tourists: The SpaceBook Project. Computers, Environment and Urban Systems, 67.

[2] Bartie P. J., Reitsma F., Kingham S. & Mills S. (2010). Advancing Visibility Modelling Algorithms for Urban Environments. Computers, Environment and Urban Systems, 34(5): 518-531.