Asymptotic normality of a linear threshold estimator in fixed dimension with near-optimal rate
Linear thresholding models postulate that the conditional distribution of a response variable given covariates differs on the two sides of a (typically unknown) hyperplane in the covariate space. A key goal in such models is to learn about this separating hyperplane. Exact likelihood or least squares methods for estimating the thresholding parameter involve an indicator function, which makes them difficult to optimize; they are therefore often tackled via a surrogate loss that replaces the indicator with a smooth approximation. In this note, we demonstrate that the resulting estimator is asymptotically normal with a near-optimal rate of convergence, n^{-1} up to a log factor, in a classification thresholding model. This is substantially faster than the currently established convergence rates of smoothed estimators for similar models in the statistics and econometrics literatures.
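To make the surrogate-loss idea concrete, the following is a minimal sketch (not the paper's exact construction) of a classification thresholding model in which the success probability jumps across a hyperplane, with the indicator 1{x'θ > 0} in the least-squares criterion replaced by a logistic smoother. The jump probabilities p_minus and p_plus, the bandwidth h, the scale normalization, and the optimizer choice are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data-generating process: P(Y = 1 | X) jumps from p_minus to
# p_plus across the hyperplane {x : x @ theta0 > 0}.
rng = np.random.default_rng(0)
n, d = 2000, 3
theta0 = np.array([1.0, -0.5, 0.25])          # true threshold direction
X = rng.normal(size=(n, d))
p_minus, p_plus = 0.2, 0.8
probs = np.where(X @ theta0 > 0, p_plus, p_minus)
Y = rng.binomial(1, probs)

def smoothed_loss(theta, h=0.05):
    """Least-squares criterion with 1{x'theta > 0} replaced by a logistic
    smoother with bandwidth h (a smooth surrogate for the indicator)."""
    s = 1.0 / (1.0 + np.exp(-(X @ theta) / h))
    return np.mean((Y - p_minus - (p_plus - p_minus) * s) ** 2)

# The hyperplane is identified only up to scale, so fix theta[0] = 1 and
# optimize over the remaining coordinates.
def loss_free(free):
    return smoothed_loss(np.concatenate(([1.0], free)))

res = minimize(loss_free, x0=np.zeros(d - 1), method="Nelder-Mead")
theta_hat = np.concatenate(([1.0], res.x))
print("estimated direction:", theta_hat)
```

Because the logistic smoother is differentiable, the surrogate criterion can also be minimized with gradient-based methods; the bandwidth h controls how closely it approximates the exact (non-smooth) least-squares objective.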