Risk-Sensitive Variational Bayes: Formulations and Bounds

03/12/2019
by   Prateek Jaiswal, et al.

We study data-driven decision-making problems in a parametrized Bayesian framework. We adopt a risk-sensitive approach to modeling the interplay between statistical estimation of parameters and optimization, computing a risk measure of a loss/disutility function with respect to the posterior distribution over the parameters. While this is the standard Bayesian decision-theoretic approach, we focus on problems where computing the posterior distribution is intractable, a typical situation in modern applications with heterogeneity due to observed covariates and latent group structure. The key methodological innovation in this paper is to leverage a dual representation of the risk measure, yielding an optimization-based framework for approximately computing the posterior risk-sensitive objective, in contrast to standard sampling-based methods such as Markov chain Monte Carlo. Our analytical contributions include rigorous finite-sample bounds on the 'optimality gap' between the optimizers obtained via the computational methods in this paper and the 'true' optimizers of a given decision-making problem. We illustrate our results by comparing the theoretical bounds with simulations of a newsvendor problem for two methods extracted from our computational framework.
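To make the setup concrete, the following is a minimal sketch of a risk-sensitive Bayesian decision problem of the kind the abstract describes: an entropic risk measure of a newsvendor loss is evaluated against (samples from) a posterior over the demand parameter, and the order quantity is chosen to minimize that posterior risk. All names, cost parameters, and the Gamma/Poisson demand model are illustrative assumptions, not the paper's exact formulation; the paper's contribution is an optimization-based (variational) approximation, whereas this sketch simply uses posterior samples directly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior over a Poisson demand rate lambda, e.g. a Gamma
# posterior after conjugate updating (parameters chosen for illustration).
posterior_lams = rng.gamma(shape=20.0, scale=1.0, size=2000)  # mean ~ 20

c, p = 1.0, 2.0   # illustrative unit cost and selling price
theta = 0.1       # risk-sensitivity parameter of the entropic risk measure

def newsvendor_loss(q, lam):
    """Loss for order quantity q given one Poisson demand draw at rate lam."""
    demand = rng.poisson(lam)
    return c * q - p * min(q, demand)

def entropic_risk(q):
    """(1/theta) * log E_posterior[exp(theta * loss)], via Monte Carlo."""
    losses = np.array([newsvendor_loss(q, lam) for lam in posterior_lams])
    return np.log(np.mean(np.exp(theta * losses))) / theta

# Choose the risk-sensitive order quantity by grid search.
grid = np.arange(5, 40)
risks = [entropic_risk(q) for q in grid]
q_star = int(grid[int(np.argmin(risks))])
print("risk-sensitive order quantity:", q_star)
```

In this toy version the posterior is sampled directly; the framework studied in the paper instead replaces the intractable posterior with a variational approximation and exploits the dual representation of the risk measure, but the objective being minimized is of the same form.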
