Personalized Dynamics Models for Adaptive Assistive Navigation Interfaces
We explore the role of personalization in assistive navigation systems (e.g., a service robot, wearable system, or smartphone app) that guide visually impaired users through speech-, sound-, and haptic-based instructional guidance. Based on our analysis of real-world users, we show that the dynamics of blind users cannot be captured by a single universal model but must instead be learned on an individual basis. To learn personalized instructional interfaces, we propose PING (Personalized INstruction Generation agent), a model-based reinforcement learning framework that quickly adapts its state-transition dynamics model to match the reactions of the user via a novel end-to-end learned, weighted-majority-based regression algorithm. In our experiments, we show that PING learns dynamics models significantly faster than baseline transfer learning approaches on real-world data. We find that by reasoning over personal mobility nuances, interaction with surrounding obstacles, and the current navigation task, PING improves the performance of instructional assistive navigation at the most crucial junctions, such as turns or veering paths. To allow sufficient planning time over user responses, we emphasize prediction of human motion over long horizons. Specifically, the learned dynamics models consistently improve long-term position prediction by over 1 meter on average (nearly the width of a hallway) compared to baseline approaches, even for a prediction horizon of 20 seconds into the future.
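As a rough illustration of the idea named in the abstract, the following is a minimal sketch of a weighted-majority-style regression ensemble for personalizing a dynamics model, together with a long-horizon rollout. The class and method names, the multiplicative-weights update, and the squared-error loss are illustrative assumptions on our part, not the paper's actual end-to-end learned algorithm.

```python
# Minimal sketch (assumed interface, not the paper's implementation):
# a weighted-majority regression ensemble over candidate dynamics
# models, adapted online to a new user, with long-horizon rollouts.
import numpy as np


class WeightedMajorityDynamics:
    """Mixes next-state predictions from a pool of candidate dynamics
    models (e.g., models fit to previously observed users) using
    multiplicative weights updated from the new user's transitions."""

    def __init__(self, experts, eta=0.5):
        # experts: callables f(state, instruction) -> predicted next state
        self.experts = experts
        self.eta = eta  # learning rate of the multiplicative update
        self.weights = np.full(len(experts), 1.0 / len(experts))

    def predict(self, state, instruction):
        # Regression analogue of weighted-majority voting:
        # a weight-averaged next-state prediction.
        preds = np.stack([f(state, instruction) for f in self.experts])
        return self.weights @ preds

    def update(self, state, instruction, next_state):
        # Penalize each expert by its squared error on the observed
        # transition, then renormalize (multiplicative-weights update).
        preds = np.stack([f(state, instruction) for f in self.experts])
        losses = np.sum((preds - next_state) ** 2, axis=1)
        self.weights *= np.exp(-self.eta * losses)
        self.weights /= self.weights.sum()

    def rollout(self, state, instructions):
        # Long-horizon prediction: feed predictions back in for each
        # planned instruction (e.g., 20 steps at 1 Hz for a 20 s horizon).
        trajectory = [np.asarray(state, dtype=float)]
        for u in instructions:
            trajectory.append(self.predict(trajectory[-1], u))
        return np.stack(trajectory)
```

Under these assumptions, experts that predict the new user's transitions well gain weight quickly, which is one way to get fast per-user adaptation from few interactions, while the rollout loop composes one-step predictions to reach the 20-second horizons evaluated above.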