The vast majority of robotic applications do not involve just one isolated task (such as grasping an object) but instead carefully choreographed sequences of such tasks, with dependencies between tasks not only in terms of what comes after what but also in how the previous task should be performed (in a quantitative sense, at the level of motor control) in order to set up for the next. Moreover, these tasks involve numerous subjective variables, e.g., how close should the robot come to a delicate object, or how hard should it pull on a cable?
Robotic applications span a variety of domains, from autonomous cars and drones to domestic robots and personal devices. Each domain comes with a rich set of requirements, such as legal policies, safety and security standards, company values, or simply public perception. These requirements must be realised as verifiable properties of software and hardware. Consider the following policy: a self-driving car must never violate the highway code.
A large variety of robotic applications involve handling objects as the core process for task completion. To date, most of these jobs are still performed by people. Although some are automated by robots, those solutions rely primarily on pre-designed rules or tele-operation (whose operational time is limited by operator cognitive overload), which unavoidably limits performance in changing environments. This project comprises multiple challenging research topics in robotic manipulation.
Pulmonary infiltrates, substances such as pus, blood, or protein that linger within the parenchyma of the lungs, are hallmark findings of conditions such as pneumonia and tuberculosis. In mechanically ventilated (MV), critically ill patients in the intensive care unit (ICU), pulmonary infiltrates pose a major diagnostic challenge, in part due to the poor sampling methods available.
This project will explore how to model dialogue phenomena in visual dialogue and how these phenomena contribute to task success.
Visual dialogue requires an agent to hold a meaningful conversation with a user in the context of an image. These systems have potential applications in VR technology, dialogue-based image retrieval, and agents that can provide useful information about an image or other visual content to visually impaired people.
There is a critical need for robust and accurate tools to scale up biodiversity monitoring and to manage the impact of anthropogenic change. For example, monitoring individual species that are particularly sensitive to habitat conversion and climate change can serve as an important indicator of ecosystem health. Approaches for collecting data on individual species in the wild have traditionally relied on manual surveys performed by human experts.
The shape and 3D structure of the world provide us with a rich signal that enables us to interact with objects and to navigate in novel and dynamic environments. Despite the importance of this information to human visual reasoning, it remains largely underutilized in the modern deep-learning-based semantic image understanding pipelines commonly used in robotics. For example, the current best-performing approaches for object classification in images are predominantly based on heavily supervised feedforward convolutional neural networks.
The coming decades will see the creation of fully autonomous vehicles (AVs) capable of driving without human intervention. Among the expected benefits of AVs are significant reductions in traffic incidents, congestion, and pollution, as well as dramatic improvements in cost-efficiency.
AnKobot is sponsoring an exciting PhD project in the field of mobile autonomy, drawing on visual Simultaneous Localization and Mapping (SLAM), semantic scene understanding, and computer vision.