Comparing Observation and Action Representations for Deep Reinforcement Learning in MicroRTS

10/26/2019
by Shengyi Huang, et al.

This paper presents a preliminary study comparing different observation and action space representations for Deep Reinforcement Learning (DRL) in the context of Real-time Strategy (RTS) games. Specifically, we compare two representations: (1) a global representation, where the observation captures the whole game state and the RL agent must choose both which unit to issue actions to and which actions to execute; and (2) a local representation, where the observation is taken from the point of view of an individual unit and the RL agent picks actions for each unit independently. We evaluate these representations in MicroRTS, showing that the local representation seems to outperform the global representation when training agents on the task of harvesting resources.
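The abstract describes the two representations only at a high level. The minimal sketch below illustrates how their observation and action spaces differ in shape; the map size, feature planes, window size, and action set used here are assumptions for illustration and are not taken from the paper.

```python
# Illustrative sketch (not the paper's code): contrasting global vs. local
# observation/action representations on a hypothetical MicroRTS map.
import numpy as np

H, W = 8, 8            # assumed map size
N_FEATURES = 5         # assumed per-cell feature planes (owner, unit type, resources, ...)
N_UNIT_ACTIONS = 6     # assumed per-unit action set (no-op, move x4, harvest, ...)

# (1) Global representation: one observation covers the whole map, and a single
#     action must both select which unit (cell) to command and what it should do.
global_observation = np.zeros((H, W, N_FEATURES), dtype=np.float32)
global_action = {
    "unit_cell": 0,    # index into the H*W grid choosing which unit receives the order
    "unit_action": 0,  # which of the N_UNIT_ACTIONS that unit executes
}

# (2) Local representation: one egocentric observation per unit, and the agent
#     only picks that unit's action; each unit is handled independently.
LOCAL_WINDOW = 5       # assumed egocentric window size around the unit
local_observation = np.zeros((LOCAL_WINDOW, LOCAL_WINDOW, N_FEATURES), dtype=np.float32)
local_action = 0       # one of the N_UNIT_ACTIONS for this unit

print("global obs shape:", global_observation.shape,
      "combined global action space size:", H * W * N_UNIT_ACTIONS)
print("local obs shape:", local_observation.shape,
      "per-unit action space size:", N_UNIT_ACTIONS)
```

The key contrast the sketch makes visible is that the global formulation couples unit selection and action choice into one larger action space, while the local formulation keeps the action space small and fixed per unit at the cost of observing only a window around each unit.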
