This research project targets real-time learning and its application to learning sign language from a fellow signer while solving a task cooperatively. The learner is expected to learn how to react to the fellow signer's signs, which carry information about the task domain.
The task domain is the HCRC Map Task [1]. In the map task, two subjects with slightly different maps cooperate to find a common route from point A to point B.
The research focuses on the acquisition of vision-based data with depth information to extract, from the fellow signer, the five parameters of sign language [4]: 1) hand shape, 2) palm orientation, 3) hand location, 4) hand movement, and 5) facial expressions.
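As a minimal sketch of what such an acquisition pipeline might produce per depth frame, the five parameters could be bundled into a single feature record. All field names and types below are illustrative assumptions, not part of the actual system:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class SignFeatures:
    """One extracted observation of the fellow signer (hypothetical layout)."""
    hand_shape: str                               # e.g. a handshape class label
    palm_orientation: Tuple[float, float, float]  # unit normal of the palm
    hand_location: Tuple[float, float, float]     # 3D position in camera space
    hand_movement: Tuple[float, float, float]     # velocity between frames
    facial_expression: str                        # e.g. a facial-action label

def movement_between(prev: Tuple[float, float, float],
                     curr: Tuple[float, float, float],
                     dt: float) -> Tuple[float, ...]:
    """Approximate hand movement as a finite-difference velocity."""
    return tuple((c - p) / dt for c, p in zip(curr, prev))
```

Hand movement is the only parameter that needs more than one frame; here it is derived from two consecutive hand locations.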
This research project is a continuation of previous research in gesture recognition with classification [2] and real-time gesture learning using a lookup table [3]. First, it is planned to extend the previous work [2], [3] by selecting a feature extraction technique capable of capturing features as detailed as those found in sign language. Second, different real-time learning techniques will be investigated during the course of this research, together with how they can be applied to sign language learning.
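The lookup-table approach to real-time learning mentioned above can be sketched as follows: a discretised feature vector keys a table of responses, and the table is updated online as corrective feedback arrives. The class and method names are illustrative assumptions, not the authors' implementation:

```python
from collections import defaultdict

class LookupTableLearner:
    """Hypothetical sketch of real-time gesture learning via a lookup table."""

    def __init__(self, default_response="unknown"):
        # Unseen gestures fall back to a default response.
        self.table = defaultdict(lambda: default_response)

    def discretise(self, features, bins=10):
        # Quantise continuous features so similar gestures share a key.
        return tuple(int(f * bins) for f in features)

    def respond(self, features):
        # React to an observed gesture with the currently learned response.
        return self.table[self.discretise(features)]

    def learn(self, features, correct_response):
        # One-shot online update: overwrite the entry with the feedback.
        self.table[self.discretise(features)] = correct_response
```

Because lookups and updates are constant-time, such a table supports learning during the interaction itself, at the cost of generalising only through the coarseness of the discretisation.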
[1] A. H. Anderson, M. Bader, E. G. Bard, E. Boyle, G. Doherty, S. Garrod, S. Isard, J. Kowtko, J. McAllister, J. Miller et al., "The HCRC Map Task corpus," Language and Speech, vol. 34, no. 4, pp. 351–366, 1991.
[2] B. Mocialov, P. A. Vargas, and M. S. Couceiro, "Towards the evolution of indirect communication for social robots," in press.
[3] B. Mocialov, P. A. Vargas, and M. S. Couceiro, "Real-time vision-based gesture learning for human-robot interaction in social humanoid robotics," MSc thesis, 2016.
[4] W. C. Stokoe, "Sign language structure," Annual Review of Anthropology, vol. 9, pp. 365–390, 1980.