Probabilistically Robust PAC Learning
Recently, Robey et al. proposed a notion of probabilistic robustness which, at a high level, requires a classifier to be robust to most, but not all, perturbations. They show that for certain hypothesis classes where proper learning under worst-case robustness is not possible, proper learning under probabilistic robustness is possible with sample complexity exponentially smaller than in the worst-case robustness setting. This motivates the question of whether proper learning under probabilistic robustness is always possible. In this paper, we show that this is not the case. We exhibit examples of hypothesis classes ℋ with finite VC dimension that are not probabilistically robustly PAC learnable by any proper learning rule. However, if we compare the output of the learner to the best hypothesis at a slightly stronger level of probabilistic robustness, we show that not only is proper learning always possible, but that it is achievable via empirical risk minimization.
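For context, the notion of probabilistic robustness can be sketched as follows, following Robey et al.; the perturbation set $\Delta$, tolerance parameter $\rho$, and the uniform distribution over $\Delta$ used below are notational assumptions on our part and are not taken from this abstract.

% Sketch of the probabilistically robust risk (notation assumed, not from the abstract).
A classifier $h$ is $\rho$-probabilistically robust at a point $(x, y)$ if
\[
  \Pr_{\delta \sim \mathrm{Unif}(\Delta)}\bigl[h(x + \delta) \neq y\bigr] \le \rho,
\]
and the corresponding probabilistically robust risk over the data distribution $\mathcal{D}$ is
\[
  R_\rho(h) \;=\; \Pr_{(x, y) \sim \mathcal{D}}\Bigl[\Pr_{\delta \sim \mathrm{Unif}(\Delta)}\bigl[h(x + \delta) \neq y\bigr] > \rho\Bigr].
\]
Under this sketch, $\rho = 0$ essentially recovers worst-case (adversarial) robustness, up to measure-zero sets of perturbations, while larger $\rho$ tolerates misclassification on up to a $\rho$-fraction of perturbations.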