Efficient Stochastic Gradient Descent for Distributionally Robust Learning

05/22/2018
by Soumyadip Ghosh, et al.

We consider a new stochastic gradient descent algorithm for efficiently solving the general min-max optimization problems that arise naturally in distributionally robust learning. Current approaches operate on the entire dataset at every iteration and therefore do not scale well. We address this issue by initially focusing on a small subset of the data and progressively increasing this support until it statistically covers the entire dataset.
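The idea of a growing data support can be illustrated with a minimal sketch. The code below is not the paper's algorithm; it is an assumed gradient descent-ascent scheme for a simple distributionally robust least-squares problem, where an adversary reweights the current subset of the data (exponentiated-gradient ascent on the simplex) while the learner descends on the reweighted loss, and the subset grows each epoch. All names, learning rates, and the growth schedule are illustrative choices.

```python
import numpy as np

# Synthetic regression data (illustrative, not from the paper)
rng = np.random.default_rng(0)
n, d = 200, 3
X = rng.normal(size=(n, d))
theta_true = np.array([1.0, -2.0, 0.5])
y = X @ theta_true + 0.1 * rng.normal(size=n)

theta = np.zeros(d)
lr_theta, lr_q = 0.05, 0.1
support = 20  # start from a small subset of the data

for epoch in range(50):
    idx = np.arange(support)               # current data support
    q = np.full(support, 1.0 / support)    # adversary's distribution over it
    for _ in range(10):
        resid = X[idx] @ theta - y[idx]
        losses = 0.5 * resid ** 2
        # Adversary: exponentiated-gradient ascent step on the simplex
        q *= np.exp(lr_q * losses)
        q /= q.sum()
        # Learner: gradient descent step on the q-weighted loss
        grad = X[idx].T @ (q * resid)
        theta -= lr_theta * grad
    # Progressively grow the support toward the full dataset
    support = min(n, int(support * 1.3) + 1)

print(np.round(theta, 2))
```

Early epochs touch only a few samples, so each inner iteration is cheap; the support only reaches the full dataset once the iterate is already close to a solution, which is the scalability argument the abstract makes.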
