A Parameter Choice Rule for Tikhonov Regularization Based on Predictive Risk
In this work, we propose a new criterion for choosing the regularization parameter in Tikhonov regularization when the noise is white Gaussian. The criterion minimizes a lower bound on the predictive risk when both the data norm and the noise variance are known, and the parameter choice reduces to minimizing a function whose solution depends only on the signal-to-noise ratio. When neither the noise variance nor the data norm is available, we propose an iterative algorithm that alternates between a minimization step, which determines the regularization parameter, and an estimation step, which updates the signal-to-noise ratio. Simulation studies on both small- and large-scale datasets suggest that the approach provides accurate and stable regularized inverse solutions and, for small sample sizes, outperforms the discrepancy principle, the balancing principle, the unbiased predictive risk estimator, the L-curve method, generalized cross validation, and the quasi-optimality criterion, while achieving a degree of stability not previously available.
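The following is a minimal sketch of the alternating scheme described above, assuming a placeholder parameter rule and residual-based signal and noise estimates; the paper's actual lower-bound criterion for the predictive risk is not reproduced here, so the update rules (lambda set to the reciprocal of the estimated SNR, variance estimated from the residual) are illustrative assumptions only.

```python
# Hedged sketch: alternate between choosing the Tikhonov parameter from the
# current SNR estimate and re-estimating the SNR from the regularized solution.
# The specific update rules below are illustrative placeholders, not the
# lower-bound criterion proposed in the paper.
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min_x ||Ax - b||^2 + lam * ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def alternating_parameter_choice(A, b, n_iter=20, lam0=1.0):
    """Iterate: solve with current lambda, estimate SNR, update lambda."""
    m = len(b)
    lam = lam0
    for _ in range(n_iter):
        x = tikhonov_solve(A, b, lam)
        # Placeholder noise-variance and signal-energy estimates from the fit.
        sigma2 = np.sum((A @ x - b) ** 2) / m
        signal = np.sum((A @ x) ** 2) / m
        snr = max(signal / max(sigma2, 1e-15), 1e-15)
        # Placeholder rule: regularization strength inversely proportional to SNR.
        lam = 1.0 / snr
    return lam, x
```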