Delayed Feedback in Kernel Bandits
Black-box optimisation of an unknown function from expensive and noisy evaluations is a ubiquitous problem in machine learning, academic research, and industrial production. An abstraction of this problem can be formulated as a kernel-based bandit problem (also known as Bayesian optimisation), where a learner aims to optimise a kernelised function through sequential noisy observations. Existing work predominantly assumes that feedback is immediately available, an assumption that fails in many real-world settings, including recommendation systems, clinical trials, and hyperparameter tuning. We consider a kernel bandit problem under stochastically delayed feedback and propose an algorithm with $\tilde{\mathcal{O}}\big(\sqrt{\Gamma_k(T)\,T}+\mathbb{E}[\tau]\big)$ regret, where $T$ is the number of time steps, $\Gamma_k(T)$ is the maximum information gain of the kernel with $T$ observations, and $\tau$ is the delay random variable. This represents a significant improvement over the state-of-the-art regret bound of $\tilde{\mathcal{O}}\big(\Gamma_k(T)\sqrt{T}+\mathbb{E}[\tau]\,\Gamma_k(T)\big)$ reported in Verma et al. (2022). In particular, for very non-smooth kernels, the information gain grows almost linearly in time, trivialising the existing results. We also validate our theoretical results with simulations.
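A short worked comparison may help clarify the "trivialising" remark; the Matérn example below is an illustration, not a claim from the abstract itself, and assumes the standard information-gain rate for the Matérn-$\nu$ kernel in $d$ dimensions (Vakili et al., 2021):
\[
  \Gamma_k(T) = \tilde{\mathcal{O}}\!\left(T^{\frac{d}{2\nu+d}}\right)
  \quad\text{(Mat\'ern-$\nu$ kernel, dimension $d$)},
\]
\[
  \underbrace{\Gamma_k(T)\sqrt{T}}_{\text{prior bound}}
    = \tilde{\mathcal{O}}\!\left(T^{\frac{1}{2}+\frac{d}{2\nu+d}}\right),
  \quad\text{superlinear in } T \text{ (hence vacuous) whenever } d > 2\nu,
\]
\[
  \underbrace{\sqrt{\Gamma_k(T)\,T}}_{\text{new bound}}
    = \tilde{\mathcal{O}}\!\left(T^{\frac{1}{2}\left(1+\frac{d}{2\nu+d}\right)}\right),
  \quad \tfrac{1}{2}\!\left(1+\tfrac{d}{2\nu+d}\right) < 1 \text{ for all } \nu > 0.
\]
Under these assumed rates, the prior bound exceeds the trivial $\mathcal{O}(T)$ regret once the kernel is sufficiently non-smooth relative to the dimension, whereas the new bound's exponent stays strictly below $1$ and therefore remains sublinear.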