FuncPipe: A Pipelined Serverless Framework for Fast and Cost-efficient Training of Deep Learning Models
Training deep learning (DL) models has become the norm. With the emergence of serverless computing and its benefits of true pay-as-you-go pricing and scalability, systems researchers have recently started to provide support for serverless-based training. However, the ability to train deep learning models on serverless platforms is hindered by the inherent limitations of today's serverless infrastructure and the explosive memory and bandwidth requirements of DL models. For example, existing AWS serverless functions offer at most 10GB of memory and about 70MB/s of bandwidth, while training an AmoebaNet-D model with batch size 64 can require 45GB of memory and transfer 900MB of data per iteration. This stark resource mismatch between serverless functions and DL models has stalled progress on serverless-based training. In this paper, we present FuncPipe, the first pipelined serverless framework that enables fast and low-cost training of DL models. FuncPipe is designed with the key insight that model partitioning can be leveraged to bridge both the memory and bandwidth gaps between the capacity of serverless functions and the requirements of DL training. Though conceptually simple, this approach raises a number of design questions, including how to partition the model, how to configure each serverless function, and how to exploit each function's uplink/downlink bandwidth. We co-optimize model partitioning and resource allocation with a Mixed-Integer Quadratic Programming formulation and redesign storage-based communication for efficient bandwidth usage. We implement FuncPipe on AWS and AliCloud and show that it achieves up to 77% improvement compared to state-of-the-art serverless-based frameworks.
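To make the co-optimization idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual MIQP formulation): given a per-layer profile of a model, it exhaustively searches over contiguous partitions and function memory tiers, takes pipeline iteration time as the time of the slowest stage, and picks the cheapest feasible plan. All numbers (layer profiles, memory tiers, bandwidth, price) are illustrative assumptions; a real solver would replace the brute-force enumeration.

```python
from itertools import combinations

# Hypothetical per-layer profile: memory footprint (GB), compute time
# per micro-batch (s), and activation size shipped to the next stage (GB).
LAYERS = [
    {"mem": 3.0, "time": 0.20, "act": 0.10},
    {"mem": 4.0, "time": 0.35, "act": 0.08},
    {"mem": 2.5, "time": 0.15, "act": 0.12},
    {"mem": 5.0, "time": 0.40, "act": 0.05},
]
MEM_TIERS = [2, 4, 8, 10]   # allowed function memory sizes (GB)
BANDWIDTH = 0.07            # per-function bandwidth, GB/s (~70 MB/s)
PRICE = 0.0000167           # illustrative $ per GB-second

def partitions(n, k):
    """Yield all ways to cut n layers into k contiguous stages."""
    for cuts in combinations(range(1, n), k - 1):
        bounds = (0,) + cuts + (n,)
        yield [list(range(bounds[i], bounds[i + 1])) for i in range(k)]

def plan(num_stages):
    """Brute-force co-optimization of partition + memory tier: return
    the cheapest plan whose stages all fit in their function's memory."""
    best = None
    for part in partitions(len(LAYERS), num_stages):
        stages, feasible = [], True
        for idx in part:
            mem = sum(LAYERS[i]["mem"] for i in idx)
            tier = next((t for t in MEM_TIERS if t >= mem), None)
            if tier is None:  # stage too large for any function size
                feasible = False
                break
            comp = sum(LAYERS[i]["time"] for i in idx)
            # Non-final stages ship their last layer's activations.
            last = idx[-1]
            xfer = LAYERS[last]["act"] / BANDWIDTH if last < len(LAYERS) - 1 else 0.0
            stages.append((tier, comp + xfer))
        if not feasible:
            continue
        # Pipeline throughput is bounded by the slowest stage; every
        # function is billed for the full iteration.
        iter_time = max(t for _, t in stages)
        cost = sum(tier * iter_time * PRICE for tier, _ in stages)
        if best is None or cost < best[0]:
            best = (cost, iter_time, part)
    return best

if __name__ == "__main__":
    cost, iter_time, part = plan(num_stages=2)
    print(f"partition={part} iter_time={iter_time:.2f}s cost=${cost:.6f}")
```

The sketch illustrates why partitioning and resource allocation must be decided jointly: a partition that balances compute can still be infeasible or expensive once each stage's memory tier and activation-transfer time are taken into account.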