Affordance refers to the action possibilities that an environment offers, suggesting how an agent might interact with the objects in it. Although learning affordance is an important step towards embodied visual intelligence, existing techniques usually suffer from heavy supervision or sensing requirements.
In this research topic, rather than treating affordance as a manually supervised semantic segmentation task, we intend to let agents learn about interactions in two other ways: 1) watching demonstration videos of real human behavior; and 2) exploring for interaction in a new, unmapped 3D environment.
I am a first-year Ph.D. student at The University of Edinburgh, United Kingdom, majoring in Robotics and Autonomous Systems. Before that, I received my master's degree in Electrical and Computer Engineering from Sungkyunkwan University, Suwon, South Korea, in 2020, and my bachelor's degree in Electronic and Information Engineering from Xidian University, Xi'an, China, in 2018. My research interests lie in deep learning and computer vision, particularly image segmentation (semantic, instance, and panoptic) and meta-learning (few-shot learning).