Designing a Micro-Benchmark Suite to Evaluate gRPC for TensorFlow: Early Experiences

04/03/2018
by Rajarshi Biswas, et al.

Remote procedure call (RPC) is the backbone of many modern distributed systems. Google's gRPC is one of the most popular open-source RPC frameworks in the community, and it serves as the main communication engine for Google's deep learning framework, TensorFlow. TensorFlow primarily uses gRPC to communicate tensors and administrative tasks among different processes. Tensor updates during the training phase are communication intensive, so TensorFlow's performance depends heavily on the underlying network and the efficiency of the communication engine. Training deep learning models on TensorFlow can take significant time, ranging from several minutes to several hours or even days, and system researchers therefore need to devote considerable effort to understanding the impact of communication on overall performance. However, there is a lack of benchmarks available for this purpose. We therefore propose TF-gRPC-Bench, a micro-benchmark suite that enables system researchers to quickly understand the impact of the underlying network and communication runtime on deep learning workloads. To achieve this, we first analyze the characteristics of TensorFlow workloads over gRPC by training popular deep learning models. We then propose three micro-benchmarks that take these workload characteristics into account. Finally, we comprehensively evaluate gRPC with the TF-gRPC-Bench micro-benchmark suite on different clusters over Ethernet, IPoIB, and RDMA, and present the results.
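For intuition, the sketch below shows what a minimal point-to-point gRPC micro-benchmark might look like in Python. It is not the paper's actual TF-gRPC-Bench code: the service name (bench.Echo), port, payload size, and iteration count are all illustrative assumptions. The client repeatedly sends a fixed-size byte buffer, a stand-in for a serialized tensor, to an echo server and reports average round-trip latency and throughput.

# Minimal gRPC echo micro-benchmark (a sketch; names and sizes are assumptions).
import time
from concurrent import futures

import grpc


def _echo(request, context):
    # Echo the raw request bytes back, mimicking a tensor-transfer round trip.
    return request


def start_server(port=50051):
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
    # With no (de)serializers supplied, requests and responses are raw bytes.
    handler = grpc.method_handlers_generic_handler(
        "bench.Echo",  # hypothetical service name
        {"Ping": grpc.unary_unary_rpc_method_handler(_echo)},
    )
    server.add_generic_rpc_handlers((handler,))
    server.add_insecure_port(f"[::]:{port}")
    server.start()
    return server


def run_client(port=50051, payload_bytes=1 << 20, iters=100):
    channel = grpc.insecure_channel(f"localhost:{port}")
    ping = channel.unary_unary("/bench.Echo/Ping")  # raw-bytes pass-through
    payload = b"\x00" * payload_bytes  # stand-in for a serialized tensor
    ping(payload)  # warm-up call to establish the HTTP/2 connection
    start = time.perf_counter()
    for _ in range(iters):
        ping(payload)
    elapsed = time.perf_counter() - start
    print(f"avg latency: {elapsed / iters * 1e3:.3f} ms, "
          f"throughput: {payload_bytes * iters / elapsed / 1e6:.1f} MB/s")


if __name__ == "__main__":
    server = start_server()
    run_client()
    server.stop(0)

Sweeping the payload size and the number of concurrent client threads in such a harness would approximate the kind of workload characteristics (payload distribution, parallelism) that the paper analyzes from real TensorFlow training runs.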
