Sharp Analysis of Smoothed Bellman Error Embedding
The Smoothed Bellman Error Embedding algorithm <cit.>, known as SBEED, was proposed as a provably convergent reinforcement learning algorithm with general nonlinear function approximation. It has been successfully implemented with neural networks and achieved strong empirical results. In this work, we study the theoretical behavior of SBEED in batch-mode reinforcement learning. We prove a near-optimal performance guarantee that depends on the representation power of the function classes used and on a tight notion of distribution shift. Our results improve upon the prior guarantees for SBEED in <cit.> in terms of the dependence on both the planning horizon and the sample size. Our analysis builds on the recent work of <cit.>, which studies a related algorithm, MSBO, that can be interpreted as a non-smooth counterpart of SBEED.
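For context, here is a minimal sketch of the objective SBEED optimizes, assuming the standard entropy-regularized formulation; the precise setup is in the cited work. Smoothing replaces the max in the Bellman equation with an entropy-regularized max, which yields a temporal-consistency condition for the optimal value function $V^*$ and policy $\pi^*$: for all $(s,a)$,

\[ V^*(s) = R(s,a) + \gamma\, \mathbb{E}_{s' \mid s,a}\big[V^*(s')\big] - \lambda \log \pi^*(a \mid s). \]

To avoid the double-sampling bias of the squared residual, SBEED embeds the squared error via a dual function $\nu$, using $x^2 = \max_{\nu}(2\nu x - \nu^2)$, and solves the saddle-point problem

\[ \min_{V \in \mathcal{V},\, \pi \in \Pi}\; \max_{\nu \in \mathcal{N}}\; \mathbb{E}_{(s,a,s') \sim \mu}\Big[ 2\,\nu(s,a)\,\delta(s,a,s') - \nu(s,a)^2 \Big], \qquad \delta(s,a,s') = R(s,a) + \gamma V(s') - \lambda \log \pi(a \mid s) - V(s), \]

where $\mu$ is the batch data distribution, $\lambda > 0$ is the smoothing parameter, and $\mathcal{V}$, $\Pi$, $\mathcal{N}$ are the (possibly nonlinear) function classes whose representation power enters the guarantee. Taking $\lambda \to 0$ recovers a non-smooth Bellman-error objective of the kind optimized by MSBO.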