Univariate Mean Change Point Detection: Penalization, CUSUM and Optimality
The problem of univariate mean change point detection and localization based on a sequence of n independent observations with piecewise constant means has been intensively studied for more than half a century, and serves as a blueprint for change point problems in more complex settings. We provide a complete characterization of this classical problem in a general framework in which the upper bound σ^2 on the noise variance, the minimal spacing Δ between two consecutive change points and the minimal magnitude κ of the changes are allowed to vary with n. We first show that consistent localization of the change points is impossible when the signal-to-noise ratio κ√(Δ)/σ is uniformly bounded from above. In contrast, when κ√(Δ)/σ diverges with n at an arbitrarily slow rate, we demonstrate that two computationally efficient change point estimators, one based on the solution to an ℓ_0-penalized least squares problem and the other on the popular wild binary segmentation (WBS) algorithm, are both consistent and achieve a localization rate of the order σ^2/κ^2 log(n). We further show that this rate is minimax optimal, up to a logarithmic term.
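As a rough illustration only (not the authors' procedure or code), the CUSUM statistic underlying binary-segmentation-type estimators such as WBS compares the sample means on the two sides of a candidate split, rescaled by the segment lengths; the split maximizing the statistic is the change point estimate. The helper names `cusum` and `single_change_point` below are hypothetical.

```python
import numpy as np

def cusum(y, s, e, t):
    """CUSUM statistic for splitting the interval y[s:e] at index t (s < t < e)."""
    left, right = y[s:t], y[t:e]
    n_l, n_r = len(left), len(right)
    # Scaling makes the statistic comparable across candidate splits.
    scale = np.sqrt(n_l * n_r / (n_l + n_r))
    return scale * abs(left.mean() - right.mean())

def single_change_point(y, s=0, e=None):
    """Return the split in (s, e) maximizing the CUSUM statistic, and its value."""
    e = len(y) if e is None else e
    stats = [cusum(y, s, e, t) for t in range(s + 1, e)]
    t_star = s + 1 + int(np.argmax(stats))
    return t_star, stats[t_star - s - 1]

# Toy example: a single mean shift of size 1.5 at index 100.
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(1.5, 1.0, 100)])
print(single_change_point(y))  # estimated change point location near 100
```

WBS applies this single-split step over many randomly drawn subintervals and recurses, which is what yields the σ^2/κ^2 log(n) localization rate discussed in the abstract; the sketch above only shows the elementary building block.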