Jun Hao Alvin Ng
Robots operating in dynamic and uncertain environments are involved in complex interactions that are difficult to model, particularly when determining the probabilities in the probabilistic action models used for planning. Incorrect action models may cause execution or plan failure, which can be catastrophic for robots. While failures can be handled by integrating planning with continuous monitoring and feedback to perform plan repair or replanning, this does not solve the problem entirely, as the incorrect action models are still used to generate the next plan.
This motivates our work on model learning, where action models are learned incrementally and continually over the operating lifespan of a robot. The goal of this project is to develop robust and adaptive autonomy for robots operating in dynamic and non-stationary environments. As the robot executes actions and interacts with its environment, multi-modal sensory measurements monitoring the state of the robot are acquired and serve as training data. Model-based reinforcement learning is used to balance exploration and exploitation. Sample-efficient approaches are necessary for practical deployment in the real world; this can be achieved through intrinsic motivation or active learning. Furthermore, assumptions on the structure of the planning problem can reduce the size of the state space. Lastly, reasonable assumptions on the availability of prior knowledge provide a bias for the learning problem.
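As a minimal illustration of the intrinsic-motivation idea mentioned above, one common sample-efficiency technique is a count-based exploration bonus: actions that have been tried rarely in a given state receive an extra reward, steering a model-based learner toward transitions where its action model is least certain. The sketch below is not the project's actual method; the class name `CountBasedExplorer` and the scaling parameter `beta` are hypothetical choices for illustration.

```python
import math
from collections import defaultdict


class CountBasedExplorer:
    """Count-based intrinsic motivation (a generic sketch, not the
    project's method): the bonus beta / sqrt(n(s, a) + 1) decays as
    a state-action pair is visited more often, so under-explored
    transitions look more rewarding to the planner."""

    def __init__(self, beta=1.0):
        # beta is a hypothetical tuning parameter trading off
        # exploration bonus against the task reward.
        self.beta = beta
        self.counts = defaultdict(int)  # (state, action) -> visit count

    def bonus(self, state, action):
        # Intrinsic reward: large for rarely tried actions,
        # shrinking toward zero as counts grow.
        n = self.counts[(state, action)]
        return self.beta / math.sqrt(n + 1)

    def update(self, state, action):
        # Record one executed (state, action) pair.
        self.counts[(state, action)] += 1


# Usage: the bonus for an untried action is beta; after one visit
# it drops to beta / sqrt(2), and so on.
explorer = CountBasedExplorer(beta=1.0)
b0 = explorer.bonus("s0", "grasp")   # 1.0 before any visits
explorer.update("s0", "grasp")
b1 = explorer.bonus("s0", "grasp")   # 1 / sqrt(2) after one visit
```

In a model-based loop, this bonus would be added to the planner's reward for each candidate action, biasing plans toward experiences that best improve the learned action models.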
I am interested in field robotics, machine learning, and autonomous decision making.