Orthogonal Projection in Linear Bandits
The expected reward in a linear stochastic bandit model is an unknown linear function of the chosen decision vector. In this paper, we consider the case where the expected reward is an unknown linear function of the projection of the decision vector onto a subspace. We call this the projection reward. Unlike the classical linear bandit problem, we assume that the projection reward is unobservable. Instead, the observed "reward" at each time step is the projection reward corrupted by another linear function of the decision vector projected onto a subspace orthogonal to the first. Such a model is useful in recommendation applications where the observed reward is corrupted by each individual user's biases. In the case where there are finitely many decision vectors, we develop a strategy to achieve O(√T) regret, where T is the number of time steps. In the case where the decision vector is chosen from an infinite compact set, our strategy achieves O(T^(2/3) (log T)^(1/2)) regret. Simulations verify the effectiveness of our strategy.
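To make the observation model concrete, the following is a minimal NumPy sketch of the setup described in the abstract, not the paper's algorithm. All names (d, k, theta, phi, observe) and the Gaussian noise are illustrative assumptions: the observed reward is the (unobservable) projection reward plus a corruption term that is linear in the component of the decision vector lying in the orthogonal subspace.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 5, 2  # ambient dimension and reward-subspace dimension (illustrative)

# Orthonormal basis U for the reward subspace; P projects onto it,
# Q projects onto the orthogonal complement.
U, _ = np.linalg.qr(rng.standard_normal((d, k)))
P = U @ U.T
Q = np.eye(d) - P

theta = P @ rng.standard_normal(d)  # unknown reward parameter (lies in the subspace)
phi = Q @ rng.standard_normal(d)    # corruption parameter (orthogonal complement)

def observe(x, noise_std=0.1):
    """Observed 'reward' for decision vector x (hypothetical helper)."""
    projection_reward = theta @ (P @ x)  # what the learner wants, never seen directly
    corruption = phi @ (Q @ x)           # bias from the orthogonal subspace
    return projection_reward + corruption + noise_std * rng.standard_normal()

x = rng.standard_normal(d)  # a candidate decision vector
print(observe(x))
```

Under this sketch, pulling the same arm repeatedly cannot separate the two terms by averaging alone, which is why the learner must exploit the orthogonal structure of the subspaces when estimating theta.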