Gaze-based, Context-aware Robotic System for Assisted Reaching and Grasping

09/21/2018
by Ali Shafti, et al.

Assistive robotic systems endeavour to support those with movement disabilities, enabling them to move again and regain functionality. The main issue with these systems is the complexity of their low-level control, and how to translate it into simpler, higher-level commands that are easy and intuitive for a human user. We have created a multi-modal system, consisting of different sensing, decision-making and actuating modalities, to create intuitive, human-in-the-loop assistive robotics. The system takes its cue from the user's gaze, decoding their intentions to implement low-level motion actions and achieve higher-level tasks. As a result, the user simply has to look at the objects of interest for the robotic system to assist them in reaching for those objects, grasping them, and using them to interact with other objects. We present our method for 3D gaze estimation, and our action-grammar-based implementation of action sequences through the robotic system. The 3D gaze estimation is evaluated with 8 subjects, showing an overall accuracy of 4.68 ± 0.14 cm. The full system is tested with 5 subjects, successfully implementing 100% of reach-to-gaze-point actions, and fully completing pick-and-place tasks in 96% and pick-and-pour tasks in 76% of cases. Finally, we discuss our results and the future work needed to improve the system.
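To make the gaze-to-action pipeline concrete, the Python sketch below shows one way an action grammar could expand a gaze-selected task into an ordered sequence of low-level primitives parameterised by the estimated 3D gaze point. This is a minimal, hypothetical illustration: the grammar rules, primitive names, and the GazePoint / plan_actions helpers are assumptions for exposition, not the paper's actual implementation.

```python
# Hypothetical sketch: expanding a high-level task into low-level
# primitives via a simple action grammar, driven by a 3D gaze point.
# The grammar, primitive names, and helpers are illustrative only.

from dataclasses import dataclass


@dataclass
class GazePoint:
    """Estimated 3D fixation point in the robot's base frame (metres)."""
    x: float
    y: float
    z: float


# A toy action grammar: each high-level task rewrites to an ordered
# sequence of low-level motion primitives for the robot controller.
ACTION_GRAMMAR = {
    "reach":          ["reach"],
    "pick":           ["reach", "grasp", "lift"],
    "pick_and_place": ["reach", "grasp", "lift", "move", "release"],
    "pick_and_pour":  ["reach", "grasp", "lift", "move", "tilt", "untilt"],
}


def plan_actions(task: str, target: GazePoint) -> list[str]:
    """Expand a task into primitives parameterised by the gaze target."""
    if task not in ACTION_GRAMMAR:
        raise ValueError(f"Unknown task: {task}")
    return [
        f"{primitive}@({target.x:.2f}, {target.y:.2f}, {target.z:.2f})"
        for primitive in ACTION_GRAMMAR[task]
    ]


if __name__ == "__main__":
    # e.g. the output of the 3D gaze estimation stage
    gaze = GazePoint(x=0.42, y=-0.10, z=0.05)
    for step in plan_actions("pick_and_place", gaze):
        print(step)
```

The appeal of a grammar-style decomposition, as the abstract suggests, is that the user only supplies the target (by looking at it) while the system fills in the full sequence of motion actions needed to complete the task.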
