XPipe: Efficient Pipeline Model Parallelism for Multi-GPU DNN Training
We propose XPipe, an efficient asynchronous pipeline model parallelism approach for multi-GPU DNN training. XPipe is designed to make use of multiple GPUs to concurrently and continuously train different parts of a DNN model. To improve GPU utilization and achieve high throughput, it splits a mini-batch into a set of micro-batches and allows the pipelines of multiple micro-batches, including those belonging to different mini-batches, to overlap. Most importantly, the weight prediction strategy adopted by XPipe enables it to effectively address the weight inconsistency and staleness issues incurred by asynchronous pipeline parallelism. As a result, XPipe combines the advantages of both synchronous and asynchronous pipeline parallelism approaches: it achieves high throughput while attaining comparable (or even slightly better) model quality than its synchronous counterparts. Experimental results show that XPipe outperforms existing synchronous and asynchronous model parallelism approaches.
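To illustrate the idea of weight prediction in asynchronous pipelines, the sketch below predicts the weights a stage will see several optimizer steps in the future by extrapolating along the momentum (velocity) direction. This is a minimal illustration of the general technique, not the paper's exact predictor; the update form `w - lr * steps_ahead * v`, and the function and parameter names, are assumptions for this example.

```python
def predict_weights(weights, velocities, lr, steps_ahead):
    """Sketch of momentum-based weight prediction for pipeline parallelism.

    Each pipeline stage computes its forward/backward pass with predicted
    weights approximating those that will exist `steps_ahead` optimizer
    steps later, when its gradients are actually applied. This mitigates
    the weight staleness and inconsistency of asynchronous pipelines.

    Assumes an SGD-with-momentum style update: w_hat = w - lr * s * v.
    """
    return [w - lr * steps_ahead * v for w, v in zip(weights, velocities)]


# Example: a stage whose gradients will be applied 2 steps from now
# extrapolates its current weights along the velocity direction.
weights = [1.0, -0.5]
velocities = [0.5, 0.2]
predicted = predict_weights(weights, velocities, lr=0.1, steps_ahead=2)
```

In a real pipeline, `steps_ahead` would be determined by the stage's position, since earlier stages experience more staleness between their forward pass and the corresponding weight update.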