Finding Fast Transformers: One-Shot Neural Architecture Search by Component Composition

08/15/2020
by Henry Tsai, et al.

Transformer-based models have achieved state-of-the-art results on many natural language processing tasks. However, such models are usually slow at inference time, which makes them difficult to deploy. In this paper, we develop an efficient algorithm that searches for fast models while maintaining model quality. We describe a novel approach that decomposes the Transformer architecture into smaller components, and we propose a sampling-based one-shot architecture search method to find an optimal model for inference. The search process is more efficient than alternatives, adding only a small overhead to training time. By applying our methods to BERT-base architectures, we achieve a 10% speedup for pre-trained BERT and a 70% speedup for a distilled BERT model on Cloud TPU-v2, with a generally acceptable drop in performance.
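The abstract only sketches the search procedure, so below is a minimal, hypothetical PyTorch sketch of the general idea behind sampling-based one-shot search with weight sharing: a supernet holds candidate components at each layer position, samples one at random during training, and later evaluates fixed component assignments without retraining. The `OneShotLayer` and `Supernet` classes, the feed-forward candidate widths, and the uniform sampling rule are illustrative assumptions, not the paper's actual search space or sampling distribution.

```python
import random

import torch
import torch.nn as nn


class OneShotLayer(nn.Module):
    """One position in a weight-sharing supernet: several candidate
    components, one of which is used per forward pass."""

    def __init__(self, d_model, widths=(512, 1024, 2048)):
        super().__init__()
        # Illustrative candidates: feed-forward blocks of different widths
        # plus an identity "skip this layer" option (assumed, not from the paper).
        self.candidates = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, w), nn.GELU(), nn.Linear(w, d_model))
             for w in widths]
            + [nn.Identity()]
        )

    def forward(self, x, choice=None):
        if choice is None:
            # Training mode: sample a component uniformly at random,
            # so all candidates share weights and training signal.
            choice = random.randrange(len(self.candidates))
        return self.candidates[choice](x)


class Supernet(nn.Module):
    def __init__(self, d_model=768, num_layers=12):
        super().__init__()
        self.layers = nn.ModuleList(OneShotLayer(d_model) for _ in range(num_layers))

    def forward(self, x, arch=None):
        # `arch` is a list of per-layer component indices; None = random sampling.
        arch = arch if arch is not None else [None] * len(self.layers)
        for layer, choice in zip(self.layers, arch):
            x = layer(x, choice)
        return x


if __name__ == "__main__":
    net = Supernet()
    x = torch.randn(2, 16, 768)         # (batch, sequence length, d_model)
    _ = net(x)                          # training-style pass with random components
    fast_arch = [len(net.layers[0].candidates) - 1] * len(net.layers)
    _ = net(x, arch=fast_arch)          # evaluate one fixed (here: all-skip) sub-model
```

After the supernet is trained once, candidate architectures (lists of per-layer choices) can be scored for latency and quality using the shared weights rather than being trained from scratch, which is what keeps the search overhead small relative to ordinary training.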
