Communication-efficient Distributed Newton-like Optimization with Gradients and M-estimators

07/13/2022
by Ziyan Yin, et al.

In modern data science, it is common for large-scale data to be stored and processed in parallel across a large number of locations. For reasons including confidentiality, only limited information from each data center can be transferred. To solve such problems efficiently, a family of communication-efficient methods is being actively developed. We propose two communication-efficient Newton-type algorithms that combine the M-estimator and the gradient collected from each data center. They are built by constructing two global Fisher information estimators from these communication-efficient statistics. Enjoying a faster rate of convergence, this framework improves upon existing Newton-like methods. Moreover, we present two bias-adjusted one-step distributed estimators. When the square of the center-wise sample size is of a larger order of magnitude than the total number of centers, they are asymptotically as efficient as the global M-estimator. The advantages of our methods are illustrated by extensive theoretical and empirical evidence.
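To make the general recipe concrete, the sketch below illustrates a one-step Newton-type update built only from per-center M-estimators and gradients, using distributed logistic regression as a stand-in model. It is a minimal, hypothetical illustration, not the authors' algorithm: the function names, the simple-averaging initializer, and the gradient-covariance plug-in for the Fisher information are assumptions for illustration; the paper constructs its own Fisher information estimators and bias adjustments.

import numpy as np
from scipy.optimize import minimize

def neg_loglik(theta, X, y):
    # Average negative log-likelihood of logistic regression on one center's data.
    z = X @ theta
    return np.mean(np.logaddexp(0.0, z) - y * z)

def gradient(theta, X, y):
    # Gradient of the average negative log-likelihood.
    p = 1.0 / (1.0 + np.exp(-(X @ theta)))
    return X.T @ (p - y) / X.shape[0]

def local_m_estimator(X, y):
    # Each center solves its own M-estimation problem (a local MLE here).
    d = X.shape[1]
    res = minimize(neg_loglik, np.zeros(d), args=(X, y), jac=gradient, method="BFGS")
    return res.x

def one_step_distributed_newton(centers):
    # centers: list of (X_k, y_k) pairs held at K data centers.
    # Round 1: each center transmits only its local M-estimator.
    thetas = np.array([local_m_estimator(X, y) for X, y in centers])
    theta_bar = thetas.mean(axis=0)             # simple-averaging initializer
    # Round 2: each center transmits its gradient evaluated at theta_bar.
    grads = np.array([gradient(theta_bar, X, y) for X, y in centers])
    g_bar = grads.mean(axis=0)                  # global gradient estimate
    # Plug-in Fisher information from the transmitted gradients: each
    # per-center gradient averages m per-observation scores, so its
    # covariance is roughly I(theta)/m.  (Assumed construction; it needs the
    # number of centers K to exceed the dimension, else add a small ridge.)
    m = np.mean([X.shape[0] for X, _ in centers])
    fisher_hat = m * np.cov(grads, rowvar=False)
    # One Newton-like step from the averaged M-estimator (no bias adjustment).
    return theta_bar - np.linalg.solve(fisher_hat, g_bar)

# Toy usage on synthetic data (K centers, m observations each, dimension d).
rng = np.random.default_rng(0)
K, m, d = 50, 500, 5
theta_true = rng.normal(size=d)
centers = []
for _ in range(K):
    X = rng.normal(size=(m, d))
    y = (rng.random(m) < 1.0 / (1.0 + np.exp(-X @ theta_true))).astype(float)
    centers.append((X, y))
print(one_step_distributed_newton(centers))

Only two rounds of communication are needed: one vector (the local M-estimator) and one vector (the local gradient) per center, which is the communication pattern the abstract describes.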
