Jensen-Shannon Information Based Characterization of the Generalization Error of Learning Algorithms

10/23/2020
by Gholamali Aminian, et al.

Generalization error bounds are critical to understanding the performance of machine learning models. In this work, we propose a new information-theoretic upper bound on the generalization error, applicable to supervised learning scenarios. We show that this general bound can be specialized to recover several previous bounds, and that, under some conditions, it yields a new bound involving the Jensen-Shannon information between a random variable modelling the set of training samples and another random variable modelling the set of hypotheses. We further prove that our bound can be tighter than mutual-information-based bounds under some conditions.
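For reference, the Jensen-Shannon information between a sample variable S and a hypothesis variable W is conventionally defined as I_JS(S; W) = D_JS(P_{S,W} || P_S x P_W), where D_JS(P || Q) = (1/2) D_KL(P || M) + (1/2) D_KL(Q || M) and M = (1/2)(P + Q). Since D_JS is always bounded by log 2, a bound built on it remains finite in regimes where the mutual information I(S; W) can diverge, which gives one intuition for why it can be tighter. The sketch below numerically compares the two quantities on a toy discrete joint distribution; it illustrates the definitions only, not the paper's method, and the distribution in it is hypothetical.

```python
# Minimal sketch (not the paper's implementation): compare the
# Jensen-Shannon information I_JS(S;W) = JS(P_{S,W} || P_S x P_W)
# with the mutual information I(S;W) on a toy discrete joint pmf.
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence (in nats) between discrete pmfs p and q."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def js_information(joint):
    """Jensen-Shannon information of a 2-D joint pmf over (S, W).

    Applies JS(P || Q) = 0.5*KL(P || M) + 0.5*KL(Q || M), M = (P + Q)/2,
    with P the joint and Q the product of its marginals.
    """
    p = joint.ravel()
    product = np.outer(joint.sum(axis=1), joint.sum(axis=0)).ravel()
    m = 0.5 * (p + product)
    return 0.5 * kl(p, m) + 0.5 * kl(product, m)

# Hypothetical joint distribution over (training-set index S, hypothesis index W).
joint = np.array([[0.30, 0.10],
                  [0.05, 0.55]])
product = np.outer(joint.sum(axis=1), joint.sum(axis=0)).ravel()
print("I_JS(S;W):", js_information(joint))        # always <= log 2
print("I(S;W):  ", kl(joint.ravel(), product))    # mutual information, unbounded in general
```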
