A Closer Look at Invalid Action Masking in Policy Gradient Algorithms

06/25/2020
by Shengyi Huang, et al.

In recent years, Deep Reinforcement Learning (DRL) algorithms have achieved state-of-the-art performance in many challenging strategy games. Because these games have complicated rules, an action sampled from the full discrete action space will typically be invalid. The usual approach to this problem in policy gradient algorithms is to "mask out" invalid actions and sample only from the set of valid actions. The implications of this process, however, remain under-investigated. In this paper, we show that the standard working mechanism of invalid action masking corresponds to valid policy gradient updates. More importantly, it works by applying a state-dependent differentiable function during the calculation of the action probability distribution, a practice we do not find in any other DRL algorithms. Additionally, we show that it is critical to the performance of policy gradient algorithms. Specifically, our experiments show that invalid action masking scales well as the space of invalid actions grows large, whereas the common alternative of giving negative rewards for invalid actions fails.
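As a rough illustration of the mechanism described above, the sketch below (a minimal example in PyTorch, not code from the paper) applies a state-dependent mask to the policy logits: logits of invalid actions are replaced with a large negative constant before the softmax, so those actions receive negligible probability and contribute no gradient. The function name `masked_categorical` and the constant `-1e8` are illustrative choices, not part of the paper.

```python
import torch
from torch.distributions import Categorical


def masked_categorical(logits: torch.Tensor, action_mask: torch.Tensor) -> Categorical:
    """Return a Categorical distribution in which invalid actions
    (action_mask == 0) receive effectively zero probability.

    The mask is applied as a state-dependent, differentiable operation on the
    logits: invalid entries are replaced with a large negative value, so the
    softmax assigns them negligible probability and gradients only flow
    through the valid logits.
    """
    masked_logits = torch.where(
        action_mask.bool(),
        logits,
        torch.tensor(-1e8, dtype=logits.dtype, device=logits.device),
    )
    return Categorical(logits=masked_logits)


# Example: a state with 4 discrete actions, of which only actions 0 and 2 are valid.
logits = torch.tensor([1.0, 2.0, 0.5, -0.3], requires_grad=True)
mask = torch.tensor([1, 0, 1, 0])

dist = masked_categorical(logits, mask)
action = dist.sample()            # always a valid action (0 or 2)
log_prob = dist.log_prob(action)  # term used in the policy gradient loss
log_prob.backward()               # gradients flow only through valid logits
print(action.item(), dist.probs)
```

In a policy gradient loop, this distribution would be used both for sampling actions during rollouts and for computing `log_prob` in the loss, with the mask recomputed from the environment state at every step.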
