Distributed Training and Optimization of Neural Networks

12/03/2020
by Jean-Roch Vlimant, et al.

Deep learning models are yielding increasingly better performance thanks to multiple factors. To be successful, a model may need a large number of parameters or a complex architecture and be trained on a large dataset. This leads to substantial requirements on computing resources and turnaround time, even more so when hyper-parameter optimization is performed (e.g., a search over model architectures). While this challenge goes beyond particle physics, we review the various ways to perform the necessary computations in parallel and put them in the context of high-energy physics.
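As a concrete illustration of the data-parallel approach surveyed in the review, the sketch below trains a toy model with gradients synchronized across several worker processes. It assumes PyTorch's torch.distributed package and the DistributedDataParallel wrapper; the model, data, and hyper-parameters are placeholders for illustration only and are not taken from the paper.

    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def train():
        # One process per worker; rank and world size are usually provided by the
        # launcher (e.g. torchrun). "gloo" is the CPU backend; "nccl" is used on GPUs.
        dist.init_process_group(backend="gloo")

        model = torch.nn.Linear(10, 1)   # stand-in for a real network
        model = DDP(model)               # gradients are all-reduced across workers
        opt = torch.optim.SGD(model.parameters(), lr=0.01)
        loss_fn = torch.nn.MSELoss()

        for step in range(100):
            # In practice each worker reads its own shard of the dataset;
            # random tensors stand in for that here.
            x = torch.randn(32, 10)
            y = torch.randn(32, 1)
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()              # DDP averages gradients across processes here
            opt.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        train()

Launched, for example, with "torchrun --nproc_per_node=2 train.py", each process holds a full replica of the model and the gradients are averaged with an all-reduce after every backward pass, so all replicas stay in sync.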
