A Lower Bound for the Sample Complexity of Inverse Reinforcement Learning

03/07/2021
by Abi Komanduru, et al.

Inverse reinforcement learning (IRL) is the task of finding a reward function that generates a desired optimal policy for a given Markov Decision Process (MDP). This paper develops an information-theoretic lower bound for the sample complexity of the finite state, finite action IRL problem. A geometric construction of β-strict separable IRL problems using spherical codes is considered. Properties of the resulting ensemble, including its size and the Kullback-Leibler divergence between the trajectories its members generate, are derived. The ensemble is then used, together with Fano's inequality, to derive a sample complexity lower bound of Ω(n log n), where n is the number of states in the MDP.
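For orientation, a Fano-style lower bound typically has the following shape (a generic sketch of the standard technique, not the paper's exact construction or constants). Given an ensemble of M problems whose per-trajectory distributions P_1, …, P_M are pairwise close in KL divergence, and m independent trajectories drawn from one of them, Fano's inequality bounds the error of any estimator \(\hat{\theta}\):

\[
\Pr[\hat{\theta} \neq \theta] \;\geq\; 1 - \frac{I(\theta; X^m) + \log 2}{\log M}
\;\geq\; 1 - \frac{m \, \max_{j \neq k} \mathrm{KL}(P_j \,\|\, P_k) + \log 2}{\log M},
\]

so keeping the error probability below 1/2 forces

\[
m \;\geq\; \frac{\tfrac{1}{2}\log M - \log 2}{\max_{j \neq k} \mathrm{KL}(P_j \,\|\, P_k)}.
\]

As an illustrative instantiation (not necessarily the paper's exact scaling), an ensemble with log M growing linearly in n and pairwise KL divergence on the order of 1/log n yields m = Ω(n log n); supplying an ensemble that controls these two quantities simultaneously is the role of the spherical-code construction.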
