HetuMoE: An Efficient Trillion-scale Mixture-of-Expert Distributed Training System

03/28/2022
by   Xiaonan Nie, et al.

As giant dense models advance quality but require large-scale, expensive GPU clusters for training, the sparsely gated Mixture-of-Experts (MoE), a kind of conditional computation architecture, has been proposed to scale models while keeping the computation constant. Specifically, the input data is routed by a gate network and activates only a part of the expert network. Existing MoE training systems support only some of the mainstream MoE models (e.g., Top-K) and assume expensive high-bandwidth GPU clusters. In this paper, we present HetuMoE, a high-performance large-scale sparse MoE training system built on Hetu. HetuMoE provides multiple gating strategies and efficient GPU kernel implementations. To further improve training efficiency on commodity GPU clusters (e.g., with only one NIC per node), we introduce hierarchical AllToAll communication, which combines hierarchical networking with message aggregation. Compared with existing state-of-the-art MoE systems, HetuMoE obtains at least a 15% speedup under the switch gate with a batch size of 32. The code is available at: https://github.com/PKU-DAIR/Hetu.
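To illustrate the gating mechanism the abstract describes, below is a minimal sketch of Top-K routing written in plain PyTorch. The module name `TopKGate`, the dimensions, and the softmax-over-selected-experts weighting are assumptions chosen for clarity; they do not reflect HetuMoE's actual kernels or API.

```python
# Minimal Top-K gating sketch (illustrative only, not HetuMoE's implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKGate(nn.Module):
    def __init__(self, hidden_size: int, num_experts: int, k: int = 2):
        super().__init__()
        self.k = k
        # The gate network: a single linear layer scoring each expert per token.
        self.wg = nn.Linear(hidden_size, num_experts, bias=False)

    def forward(self, tokens: torch.Tensor):
        # tokens: (num_tokens, hidden_size)
        logits = self.wg(tokens)                 # (num_tokens, num_experts)
        topk_vals, topk_idx = logits.topk(self.k, dim=-1)
        # Normalize only over the selected experts, so each token's routing
        # weights sum to 1 while all other experts stay inactive.
        weights = F.softmax(topk_vals, dim=-1)   # (num_tokens, k)
        return topk_idx, weights

# Usage: route a batch of 32 tokens of width 1024 to 2 of 64 experts.
gate = TopKGate(hidden_size=1024, num_experts=64, k=2)
idx, w = gate(torch.randn(32, 1024))
print(idx.shape, w.shape)  # torch.Size([32, 2]) torch.Size([32, 2])
```

With k = 1 this reduces to the switch gate mentioned in the evaluation; in a full MoE layer, the returned indices drive an AllToAll exchange that dispatches each token to its selected experts.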
