Autonomous systems have access to an increasing number of sensors. Think of city-wide networks of cameras, or the volume of sensory data collected by an autonomous car. In many applications it is not feasible to process all of the data streams in real time because of limited computation or network bandwidth.
In my research I intend to examine and advance methods for agents that reason about the costs and benefits of learning. Examples of questions such agents might consider include:
- Which sensory data is most relevant to the learning problem at hand and therefore should be processed first?
- How useful would it be to have a more accurate model of the world, i.e. how much utility could be gained?
- How costly is it to improve the accuracy of a given model, i.e. how much computation or experimentation is required?
- Can we use knowledge in one domain to further our understanding in another, thereby decreasing the amount of data required?
To address these questions, I will explore various subfields of machine learning and artificial intelligence, including:
- representation learning
- active exploration in reinforcement learning
- active learning and sensor management
- lifelong learning
The problems will likely be approached from probabilistic and information-theoretic perspectives. The methods will be tested in increasingly challenging scenarios, starting with simulated worlds, such as those offered by the OpenAI Gym, and moving to real-world problems, such as networks of sensors.
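To make the information-theoretic angle concrete, here is a minimal toy sketch of how an agent might decide which sensory stream to process first: it ranks sensors by expected information gain (expected reduction in posterior entropy) per unit cost. The scenario, sensor names, accuracies, and costs are all hypothetical illustrations, not part of any specific system.

```python
import math

def entropy(p):
    """Binary entropy in bits of a Bernoulli(p) variable."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def expected_info_gain(prior, accuracy):
    """Expected entropy reduction from one reading of a sensor that
    reports a binary world state correctly with the given accuracy."""
    # Marginal probability that the sensor reports "1"
    p1 = prior * accuracy + (1 - prior) * (1 - accuracy)
    # Posterior belief in the state for each possible reading (Bayes' rule)
    post_if_1 = prior * accuracy / p1
    post_if_0 = prior * (1 - accuracy) / (1 - p1)
    expected_posterior_entropy = (
        p1 * entropy(post_if_1) + (1 - p1) * entropy(post_if_0)
    )
    return entropy(prior) - expected_posterior_entropy

# Hypothetical sensors: (name, accuracy, processing cost)
sensors = [("camera", 0.90, 5.0), ("microphone", 0.70, 1.0), ("lidar", 0.95, 10.0)]
prior = 0.5  # agent is maximally uncertain about the binary state

# Process first the sensor with the highest information gain per unit cost
best = max(sensors, key=lambda s: expected_info_gain(prior, s[1]) / s[2])
print(best[0])  # the cheap microphone wins despite its lower accuracy
```

The point of the toy example is that the "most relevant" data stream is not simply the most accurate one: once cost enters the objective, a cheap, noisy sensor can be the rational first choice.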
I have a BEng in Electrical Engineering focused on Signal Processing and Embedded Systems. During my Master's year I focused on expanding my knowledge of Machine Learning -- especially statistical methods.
I am interested in agents reasoning about learning and acquiring information. To explore this, I study AI from different perspectives: both in idealized game-playing environments and simulations, and in realistic robotics problems.
Furthermore, I enjoy thinking and debating about the philosophy of science and morality, especially in the context of AI.