Abstract:
The introduction of the assistive robot concept has created numerous ways to restore vital degrees of independence to elderly and disabled people in their Activities of Daily Living (ADL). The most important aspect of an assistive robot is to understand the user's intentions with a minimum number of interactions. Based on these facts, in this study we propose a novel method to recognize the implicit intention of a human user through verbal communication, behavior recognition, and motion recognition, combining machine learning, computer vision, and voice recognition technologies. After recognizing the implicit intention of the user, the system will be able to identify the objects in the domestic environment that can help the human user and point them out to fulfil his/her intention. Overall, this study is expected to simplify human-robot interaction (HRI) while enhancing the adoption of assistive technologies and improving the user's independence in ADL. These findings will help guide future designs in implicit intention recognition and activity recognition toward an accurate intention inference algorithm and intuitive HRI.