torchgpipe: On-the-fly Pipeline Parallelism for Training Giant Models

04/21/2020
by Chiheon Kim, et al.

We design and implement a ready-to-use library in PyTorch for performing micro-batch pipeline parallelism with checkpointing, as proposed by GPipe (Huang et al., 2019). In particular, we develop a set of design components to enable pipeline-parallel gradient computation in PyTorch's define-by-run and eager execution environment. We show that each component is necessary to fully benefit from pipeline parallelism in such an environment, and we demonstrate the efficiency of the library by applying it to various network architectures, including AmoebaNet-D and U-Net. Our library is available at https://github.com/kakaobrain/torchgpipe .
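To make the "ready-to-use" claim concrete, here is a minimal usage sketch following the README at the repository linked above: a `torch.nn.Sequential` model is wrapped in `GPipe`, which partitions it across devices and splits each mini-batch into micro-batches. The layer sizes, `balance=[3, 2]`, and `chunks=4` are illustrative choices of ours, and the snippet assumes at least two CUDA devices are visible.

```python
import torch
from torch import nn
from torchgpipe import GPipe

# A toy model; GPipe requires nn.Sequential so it can be
# partitioned into consecutive pipeline stages.
model = nn.Sequential(
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

# Split the five layers into two stages (balance=[3, 2]) and run each
# mini-batch as four micro-batches (chunks=4). By default, checkpointing
# is applied to every micro-batch except the last. Assumes >= 2 GPUs.
model = GPipe(model, balance=[3, 2], chunks=4)

# Inputs go to the first stage's device; outputs come out on the last.
inputs = torch.rand(64, 128).to(model.devices[0])
outputs = model(inputs)
loss = outputs.sum()
loss.backward()  # pipeline-parallel backward pass
```

Since `GPipe` is itself an `nn.Module`, the wrapped model drops into an ordinary PyTorch training loop; only the device placement of inputs and targets changes.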
