Jun Hao Alvin Ng
Robots operating in dynamic and uncertain environments face complex interactions that are difficult to model, particularly when determining the probabilities in the probabilistic action models used for planning. Incorrect action models may cause plan or execution failures, which can be catastrophic for robots. While failures can be handled by integrating planning with continuous monitoring and feedback to perform plan repair or replanning, this does not solve the problem entirely, as the incorrect action models are still used to generate the next plan.
This motivates our work on statistical learning of action models from experiences gained by executing actions in the environment. Action models are learned incrementally as new experiences accumulate. This requires interleaving learning, planning, and acting, where knowledge of observed state transitions is used to update the action models. Furthermore, relational reinforcement learning is used to balance exploration and exploitation: exploration seeks experiences that are useful for learning action models, while exploitation uses the learned action models to achieve the goal.
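The interleaved learn-plan-act loop can be illustrated with a minimal sketch. All names here are hypothetical, and the sketch simplifies heavily: it estimates outcome probabilities for ground (non-relational) actions by maximum likelihood over observed transitions, and stands in for planning with a one-step greedy choice, with epsilon-greedy action selection as a placeholder for the exploration strategy.

```python
import random
from collections import Counter, defaultdict


class ActionModelLearner:
    """Toy incremental learner: estimates outcome probabilities
    for each action from observed state transitions."""

    def __init__(self):
        # counts[action][(state, next_state)] = number of observations
        self.counts = defaultdict(Counter)

    def update(self, action, state, next_state):
        # Record one experience gained from executing the action.
        self.counts[action][(state, next_state)] += 1

    def probability(self, action, state, next_state):
        # Maximum-likelihood estimate of P(next_state | state, action).
        total = sum(n for (s, _), n in self.counts[action].items() if s == state)
        if total == 0:
            return 0.0
        return self.counts[action][(state, next_state)] / total


def choose_action(learner, state, actions, goal, epsilon=0.2):
    # Exploration: execute the least-tried action to gain experience
    # for model learning; exploitation: pick the action the learned
    # model rates most likely to reach the goal (a stand-in for
    # planning with the learned action models).
    if random.random() < epsilon:
        return min(actions, key=lambda a: sum(learner.counts[a].values()))
    return max(actions, key=lambda a: learner.probability(a, state, goal))
```

In a full system, `choose_action` would be replaced by a probabilistic planner querying the learned models, and the flat state/action labels by relational representations.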
I am interested in field robotics, machine learning, and autonomous decision making.