Learning Bidirectional Translation between Descriptions and Actions with Small Paired Data
This study achieves bidirectional translation between descriptions and actions using a small amount of paired data. The ability to mutually generate descriptions and actions is essential for robots to collaborate with humans in their daily lives. A robot must associate real-world objects with linguistic expressions, and machine learning approaches typically require large-scale paired data; however, paired datasets are expensive to construct and difficult to collect. To address this, we propose a two-stage training method for bidirectional translation. First, we pre-train recurrent autoencoders (RAEs) for descriptions and actions on a large amount of non-paired data. Then, we fine-tune the entire model on a small paired dataset to bind the two intermediate representations. Because pre-training does not require paired data, behavior-only data or a large language corpus can be used. We experimentally evaluated our method on a paired dataset of motion-captured actions and descriptions. The results show that the method performs well even when the amount of paired training data is small. Visualization of the intermediate representations of each RAE shows that similar actions are encoded into clustered positions and that the corresponding feature vectors are well aligned.
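The two-stage scheme described above can be sketched in a few lines of numpy. In this illustration, tied-weight linear autoencoders stand in for the paper's recurrent autoencoders, random vectors stand in for description and action features, and a simple L2 loss binds the two latent spaces; the dimensions, learning rate, and losses are illustrative assumptions, not the authors' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
D_DESC, D_ACT, D_Z = 8, 6, 3  # toy feature/latent sizes (hypothetical)

def encode(W, x):
    return W @ x

def decode(W, z):
    return W.T @ z  # tied weights for brevity

def recon_step(W, x, lr=0.005):
    """One SGD step on the reconstruction loss ||W.T @ W @ x - x||^2."""
    z = W @ x
    err = W.T @ z - x
    grad = 2 * (np.outer(z, err) + W @ np.outer(err, x))
    return W - lr * grad

# Stage 1: pre-train each autoencoder on its own NON-PAIRED data.
W_desc = rng.normal(scale=0.1, size=(D_Z, D_DESC))
W_act = rng.normal(scale=0.1, size=(D_Z, D_ACT))
for x in rng.normal(size=(200, D_DESC)):  # unpaired description features
    W_desc = recon_step(W_desc, x)
for x in rng.normal(size=(200, D_ACT)):   # unpaired action features
    W_act = recon_step(W_act, x)

# Stage 2: fine-tune on a SMALL paired set, pulling the two intermediate
# representations together with an alignment loss ||z_desc - z_act||^2
# (a stand-in for the paper's binding objective).
pairs = [(rng.normal(size=D_DESC), rng.normal(size=D_ACT)) for _ in range(10)]
for xd, xa in pairs:
    diff = encode(W_desc, xd) - encode(W_act, xa)
    W_desc -= 0.005 * 2 * np.outer(diff, xd)
    W_act += 0.005 * 2 * np.outer(diff, xa)

# Bidirectional translation: encode a description into the shared latent
# space, then decode it with the action-side autoencoder (and vice versa).
action = decode(W_act, encode(W_desc, pairs[0][0]))
print(action.shape)  # (6,)
```

In the paper the intermediate representations come from RNN encoders over word and motion sequences, but the control flow is the same: reconstruction-only pre-training on each modality separately, followed by joint fine-tuning that aligns the two latent spaces using the few available pairs.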