Convergence rates for critical point regularization

02/17/2023
by Daniel Obmann et al.

Tikhonov regularization, the standard approach for solving inverse problems, minimizes the sum of a data-discrepancy term and a regularizing term. Non-convex regularizers, such as those defined by trained neural networks, have proven effective in many applications. However, finding global minimizers of non-convex functionals is challenging, which makes the existing convergence theory inapplicable. A recent development in regularization theory relaxes this requirement and establishes convergence based on critical points instead of strict minimizers. This paper investigates convergence rates for such critical-point regularization, measured in Bregman distances. Furthermore, we show that when the near-minimization is carried out by an iterative algorithm, a finite number of iterations suffices without affecting the convergence rates.
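To fix notation for the statements above, a standard form of the Tikhonov functional and of the Bregman distance in which such rates are typically measured is sketched below. The symbols (forward operator F, regularizer R, noisy data y^δ, element ξ of the (sub)differential of R) follow common conventions and are assumptions, not necessarily the paper's exact definitions.

```latex
% Tikhonov functional: data discrepancy plus weighted regularizer
% (standard form; the notation here is an assumption)
\mathcal{T}_{\alpha}(x) = \lVert F(x) - y^{\delta} \rVert^{2} + \alpha R(x)

% Bregman distance of the regularizer R at x^{\dagger} with respect to
% an element \xi of its (sub)differential, in which rates are measured
D_{\xi}^{R}(x, x^{\dagger}) = R(x) - R(x^{\dagger}) - \langle \xi,\, x - x^{\dagger} \rangle
```

In the critical-point setting, the role of a global minimizer of the functional is played by points at which its (generalized) gradient vanishes, which is exactly what a gradient-based solver can be expected to approximate.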
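To make the finite-iteration claim concrete, here is a minimal sketch, assuming a smooth toy problem: plain gradient descent on a Tikhonov functional with a non-convex regularizer, stopped after a fixed number of steps at an approximate critical point. The operator A, the regularizer, the step size, and the iteration count are all illustrative choices and are not taken from the paper.

```python
# Minimal sketch (not the paper's algorithm): gradient descent on a smooth
# Tikhonov functional T(x) = ||A x - y||^2 + alpha * R(x) with a non-convex
# regularizer R(x) = sum_i log(1 + x_i^2). All concrete choices below are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))   # toy linear forward operator
y = A @ rng.standard_normal(10)     # synthetic data for the demo
alpha = 0.1                         # regularization parameter

def grad_R(x):
    # Gradient of the non-convex regularizer R(x) = sum log(1 + x_i^2)
    return 2.0 * x / (1.0 + x**2)

def grad_T(x):
    # Gradient of the full Tikhonov functional
    return 2.0 * A.T @ (A @ x - y) + alpha * grad_R(x)

# Step size 1/L from a smoothness bound: L <= 2*sigma_max(A)^2 + 2*alpha,
# since grad_R is 2-Lipschitz.
step = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2 + 2.0 * alpha)

x = np.zeros(10)
for _ in range(200):                # a fixed, finite number of iterations
    x = x - step * grad_T(x)

# The stopping point is an approximate critical point of T
print("gradient norm at stopping point:", np.linalg.norm(grad_T(x)))
```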
