Nonparametric Learning and Optimization with Covariates

05/03/2018
by Ningyuan Chen, et al.

Modern decision analytics frequently involves the optimization of an objective over a finite horizon where the functional form of the objective is unknown. The decision analyst observes covariates and tries to learn and optimize the objective by experimenting with the decision variables. We present a nonparametric learning and optimization policy with covariates. The policy is based on adaptively splitting the covariate space into smaller bins (hyper-rectangles) and learning the optimal decision in each bin. We show that the algorithm achieves a regret of order O((log T)^2 T^((2+d)/(4+d))), where T is the length of the horizon and d is the dimension of the covariates, and show that no policy can achieve a regret less than Ω(T^((2+d)/(4+d))), thus demonstrating the near optimality of the proposed policy. The role of d in the regret is not seen in parametric learning problems: it highlights the complex interaction between the nonparametric formulation and the covariate dimension. It also suggests that the decision analyst should incorporate contextual information selectively.
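To make the binning idea concrete, below is a minimal Python sketch of an adaptive-binning policy of the kind described in the abstract. It is not the authors' exact policy: the class names, the UCB-style exploration rule inside each bin, and the split_after threshold are illustrative assumptions. The sketch partitions the covariate space [0, 1]^d into hyper-rectangles, learns a decision within each bin from noisy rewards, and refines a bin by halving it along its longest edge once it has collected enough samples.

import numpy as np

class Bin:
    """A hyper-rectangular bin of the covariate space with per-decision reward estimates."""
    def __init__(self, lower, upper, n_decisions):
        self.lower, self.upper = lower, upper        # corners of the hyper-rectangle
        self.counts = np.zeros(n_decisions)          # pulls per candidate decision
        self.sums = np.zeros(n_decisions)            # cumulative observed rewards

    def contains(self, x):
        # Boundary handling is simplified; bins may share boundary points.
        return np.all(self.lower <= x) and np.all(x <= self.upper)

    def choose(self, t):
        # UCB-style choice within the bin (illustrative exploration rule).
        means = self.sums / np.maximum(self.counts, 1)
        bonus = np.sqrt(2 * np.log(t + 1) / np.maximum(self.counts, 1))
        return int(np.argmax(means + bonus))

    def update(self, k, reward):
        self.counts[k] += 1
        self.sums[k] += reward

class AdaptiveBinningPolicy:
    """Adaptively splits [0, 1]^d into bins and learns a decision in each (illustrative)."""
    def __init__(self, d, decisions, split_after=200):
        self.decisions = decisions
        self.split_after = split_after               # sample budget before refining a bin
        self.bins = [Bin(np.zeros(d), np.ones(d), len(decisions))]

    def act(self, x, t):
        b = next(bn for bn in self.bins if bn.contains(x))
        return b, b.choose(t)

    def update(self, b, k, reward):
        b.update(k, reward)
        if b.counts.sum() >= self.split_after:
            self._split(b)

    def _split(self, b):
        # Halve the bin along its longest edge; the two children restart estimation.
        self.bins.remove(b)
        axis = int(np.argmax(b.upper - b.lower))
        mid = (b.lower[axis] + b.upper[axis]) / 2
        for lo, hi in [(b.lower[axis], mid), (mid, b.upper[axis])]:
            lower, upper = b.lower.copy(), b.upper.copy()
            lower[axis], upper[axis] = lo, hi
            self.bins.append(Bin(lower, upper, len(self.decisions)))

# Illustrative usage on a synthetic reward (all quantities assumed for the sketch):
policy = AdaptiveBinningPolicy(d=2, decisions=[0.0, 0.5, 1.0])
rng = np.random.default_rng(0)
for t in range(1000):
    x = rng.random(2)                                # observed covariates
    b, k = policy.act(x, t)
    reward = -abs(policy.decisions[k] - x[0]) + 0.1 * rng.standard_normal()
    policy.update(b, k, reward)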
