A communication-efficient distributed learning framework for smart environments
Due to the pervasive diffusion of personal mobile and IoT devices, many “smart environments” (e.g., smart cities and smart factories) will, among other things, generate huge amounts of data. Currently, analysing these data typically relies on centralised, cloud-based data analytics services. However, according to many studies, this approach may raise significant issues in terms of data ownership, as well as wireless network capacity. One way to cope with these shortcomings is to move data analytics closer to where data is generated. In this paper, we tackle this issue by proposing and analysing a distributed learning framework, whereby data analytics are performed at the edge of the network, i.e., at locations very close to where data is generated. Specifically, in our framework, partial data analytics are performed directly on the nodes that generate the data, or on nodes close by (e.g., some of the data generators can take this role on behalf of subsets of other nodes nearby). Nodes then exchange the partial models and refine them accordingly. Our framework is general enough to host different analytics services. In the specific case analysed in this paper, we focus on a learning task and consider two distributed learning algorithms. Using an activity recognition and a pattern recognition task, both on reference datasets, we compare the two learning algorithms with each other and with a centralised cloud solution (i.e., one that has access to the complete datasets). Our results show that, by using distributed machine learning techniques, it is possible to drastically reduce the network overhead while obtaining performance comparable to the cloud solution in terms of learning accuracy. The analysis also shows when each distributed learning approach is preferable, based on the specific distribution of the data across the nodes.
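To make the "exchange partial models instead of raw data" idea concrete, the following is a minimal, illustrative sketch of one possible scheme of this kind (federated-averaging-style parameter averaging over simple logistic-regression models). It is an assumption for illustration only, not the specific algorithms evaluated in the paper: each node refines the shared parameters on its own local data, and only the parameter vectors are exchanged and averaged, so raw samples never leave the nodes.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One node refines the shared parameters on its own data only
    (logistic regression trained with plain gradient descent)."""
    w = w.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))    # local predictions
        grad = X.T @ (p - y) / len(y)       # gradient of the log-loss
        w -= lr * grad
    return w

def federated_round(w_global, node_data):
    """Nodes send back refined parameter vectors, never raw samples;
    a coordinator (or a peer acting as one) averages them."""
    local_models = [local_update(w_global, X, y) for X, y in node_data]
    return np.mean(local_models, axis=0)

# Toy usage: three edge nodes, each holding its own local dataset.
rng = np.random.default_rng(0)
node_data = []
for _ in range(3):
    X = rng.normal(size=(200, 4))
    y = (X @ np.array([1.0, -2.0, 0.5, 0.0]) > 0).astype(float)
    node_data.append((X, y))

w = np.zeros(4)
for round_id in range(10):                  # communication rounds
    w = federated_round(w, node_data)
print("learned weights:", w)
```

In such a scheme, the per-round traffic scales with the size of the model rather than with the size of the local datasets, which is the source of the network-overhead reduction discussed above.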