Modeling Human Decision-making in Generalized Gaussian Multi-armed Bandits

07/23/2013
by Paul Reverdy, et al.

We present a formal model of human decision-making in explore-exploit tasks using the context of multi-armed bandit problems, where the decision-maker must choose among multiple options with uncertain rewards. We address the standard multi-armed bandit problem, the multi-armed bandit problem with transition costs, and the multi-armed bandit problem on graphs. We focus on the case of Gaussian rewards in a setting where the decision-maker uses Bayesian inference to estimate the reward values, and we model the decision-maker's prior knowledge with the Bayesian prior on the mean reward. We develop the upper credible limit (UCL) algorithm for the standard multi-armed bandit problem and show that this deterministic algorithm achieves logarithmic cumulative expected regret, which is the optimal rate for uninformative priors. We show how good priors and good assumptions on the correlation structure among arms can greatly enhance decision-making performance, even over short time horizons. We extend the deterministic UCL algorithm to a stochastic UCL algorithm and draw several connections to human decision-making behavior. We present empirical data from human experiments and show that human performance is efficiently captured by the stochastic UCL algorithm with appropriate parameters. For the multi-armed bandit problem with transition costs and the multi-armed bandit problem on graphs, we generalize the UCL algorithm to the block UCL algorithm and the graphical block UCL algorithm, respectively. We show that these algorithms also achieve logarithmic cumulative expected regret and require only a sub-logarithmic expected number of transitions among arms. We further illustrate the performance of these algorithms with numerical examples.
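
To make the decision rule concrete, here is a minimal Python sketch of a UCL-style policy for Gaussian rewards with known sampling noise, in the spirit of the deterministic algorithm described above. The specific credibility schedule alpha_t = 1/(K*t) with K = sqrt(2*pi*e), the prior parameters mu0 and sigma0, and the helper name run_ucl are illustrative assumptions, not details taken from the abstract.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def run_ucl(true_means, sigma_s, mu0, sigma0, horizon,
            K=np.sqrt(2 * np.pi * np.e)):
    """Deterministic UCL sketch: at each time t, play the arm with the
    largest (1 - 1/(K*t))-upper credible limit of its posterior mean."""
    n_arms = len(true_means)
    delta = np.full(n_arms, 1.0 / sigma0**2)  # posterior precisions
    mu = np.full(n_arms, float(mu0))          # posterior means
    total = 0.0
    for t in range(1, horizon + 1):
        alpha = 1.0 / (K * t)                            # shrinking tail mass
        q = mu + norm.ppf(1.0 - alpha) / np.sqrt(delta)  # upper credible limits
        arm = int(np.argmax(q))
        r = rng.normal(true_means[arm], sigma_s)         # observe a reward
        # Conjugate normal update with known noise variance sigma_s**2.
        delta[arm] += 1.0 / sigma_s**2
        mu[arm] += (r - mu[arm]) / (sigma_s**2 * delta[arm])
        total += r
    return total, mu

total_reward, posterior_means = run_ucl(
    true_means=[0.0, 0.5, 1.0], sigma_s=1.0,
    mu0=0.0, sigma0=10.0, horizon=1000)
```

The stochastic UCL variant mentioned in the abstract would replace the argmax with softmax selection over the same upper credible limits, with selection noise decreasing over time; it is that softening of the maximization that the authors connect to human decision-making behavior.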
