Amos: An Adam-style Optimizer with Adaptive Weight Decay towards Model-Oriented Scale
We present Amos, a stochastic gradient-based optimizer designed for training deep neural networks. It can be viewed as an Adam optimizer with theoretically supported, adaptive learning-rate decay and weight decay. A key insight behind Amos is that it leverages model-specific information to determine the initial learning-rate and decaying schedules. When used for pre-training BERT variants and T5, Amos consistently converges faster than the state-of-the-art settings of AdamW, achieving better validation loss within <=70% training steps and time, while requiring <=51% memory for slot variables. Our code is open-sourced at: https://github.com/google-research/jestimator
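To make the abstract's description concrete, the sketch below illustrates an Adam-style update in which both the learning rate and the decoupled weight decay shrink over training, scaled by a per-tensor factor standing in for "model-oriented scale". It is not the Amos algorithm from the paper: the function name amos_like_update, the hyperparameters eta, base_lr, beta, and the specific decay schedules are illustrative assumptions; the actual implementation lives in the linked jestimator repository.

```python
import jax.numpy as jnp


def amos_like_update(p, g, v, step, base_lr=1e-3, eta=1.0, beta=0.999, eps=1e-8):
    """One illustrative update for a single parameter tensor.

    p    -- parameter tensor
    g    -- gradient of the loss w.r.t. p
    v    -- Adam-like second-moment slot variable (same shape as p)
    step -- current training step (starting from 1)
    eta  -- stand-in for a model-oriented, per-tensor scale (assumption)
    """
    # Adam-like running average of the squared gradient.
    v = beta * v + (1.0 - beta) * jnp.square(g)

    # Hypothetical schedules: both the learning rate and the weight decay
    # decay as training proceeds, rather than staying constant.
    decay = 1.0 / jnp.sqrt(1.0 + 1e-3 * step)
    lr = base_lr * eta * decay          # adaptive learning-rate decay
    wd = 1e-2 * decay                   # adaptive (decaying) weight decay

    # Normalized gradient step plus decoupled weight decay.
    p = p - lr * g / (jnp.sqrt(v) + eps) - lr * wd * p
    return p, v
```

In this sketch the only per-parameter state kept besides the parameter itself is the second-moment accumulator v, which is one way an optimizer in this family can use less slot-variable memory than a full AdamW setup that also stores a first-moment accumulator; whether and how the real Amos achieves its reported memory savings is detailed in the paper, not here.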