On solutions of the distributional Bellman equation

01/31/2022
by Julian Gerstenberg et al.

In distributional reinforcement learning, not only the expected return but the complete return distribution of a policy is taken into account. The return distribution for a fixed policy is given as the fixed point of an associated distributional Bellman operator. In this note we consider general distributional Bellman operators and study the existence and uniqueness of their fixed points as well as their tail properties. We give necessary and sufficient conditions for the existence and uniqueness of return distributions and identify cases of regular variation. We link distributional Bellman equations to multivariate distributional equations of the form X =_d AX + B, where X and B are d-dimensional random vectors, A is a random d×d matrix, and X is independent of (A, B). We show that any fixed point of a distributional Bellman operator can be obtained as the vector of marginal laws of a solution to such a multivariate distributional equation. This makes the general theory of such equations applicable to the distributional reinforcement learning setting.
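To make the operator concrete, here is a minimal Monte-Carlo sketch (Python with NumPy; the 3-state chain P, reward vector r, and discount gamma are hypothetical choices, not taken from the paper). It approximates the vector of return distributions of a fixed policy by iterating a sample-based distributional Bellman operator. In the multivariate form X =_d AX + B above, this setting corresponds to B = r and A = gamma times a random 0-1 matrix whose i-th row selects the successor state of state i.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state Markov reward process under a fixed policy;
# P, r, and gamma are illustrative choices, not from the paper.
P = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.3, 0.3, 0.4]])   # P[i, j]: probability of moving from state i to j
r = np.array([1.0, 0.0, -1.0])   # r[i]: (deterministic) reward received in state i
gamma = 0.9                      # discount factor; gamma < 1 yields a contraction

n_states, n_samples, n_iters = 3, 10_000, 100

# Represent each marginal return law by an empirical sample set;
# G[i] approximates the return distribution of state i.
G = np.zeros((n_states, n_samples))

for _ in range(n_iters):
    # One application of the distributional Bellman operator:
    #   G(i) =_d r(i) + gamma * G(J),  J ~ P(i, .),  J independent of G.
    G_new = np.empty_like(G)
    for i in range(n_states):
        J = rng.choice(n_states, size=n_samples, p=P[i])
        idx = rng.integers(n_samples, size=n_samples)  # resample within each G(J)
        G_new[i] = r[i] + gamma * G[J, idx]
    G = G_new

# Sanity check: the means of the fixed point must solve v = r + gamma * P v.
print("estimated expected returns:", np.round(G.mean(axis=1), 3))
print("exact expected returns:    ",
      np.round(np.linalg.solve(np.eye(n_states) - gamma * P, r), 3))
```

Since gamma < 1, repeated application of the operator contracts toward the unique fixed point, so the estimated means should approximately match the exact solution (I - gamma P)^{-1} r printed in the last line.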
