SEIFER: Scalable Edge Inference for Deep Neural Networks

10/21/2022
by Arjun Parthasarathy, et al.

Edge inference is becoming ever more prevalent, with applications ranging from retail to wearable technology. Clusters of networked, resource-constrained edge devices are becoming common, yet there is no production-ready orchestration system for deploying deep learning models over such edge networks with the robustness and scalability of the cloud. We present SEIFER, a framework that uses a standalone Kubernetes cluster to partition a given DNN and place the resulting partitions across an edge network, with the goal of maximizing inference throughput. The system is tolerant of node failures and automatically updates deployments when the model version changes. We provide a preliminary evaluation of a partitioning and placement algorithm that works within this framework, and show that it can improve inference pipeline throughput for networks of 200 resource-constrained nodes. We have implemented SEIFER as open-source software that is publicly available to the research community.
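The abstract's goal of maximizing pipeline throughput can be illustrated with a toy version of the underlying optimization: a pipelined deployment's steady-state throughput is limited by its slowest stage, so partitioning a DNN's layers into contiguous stages amounts to minimizing the bottleneck stage cost. The sketch below is not SEIFER's actual algorithm; it is a minimal illustration using binary search over the bottleneck value with a greedy feasibility check, and the per-layer costs are hypothetical.

```python
def min_bottleneck(costs, k):
    """Split `costs` (per-layer latencies) into at most k contiguous
    stages, minimizing the largest stage cost (the pipeline bottleneck).
    Binary-searches the bottleneck value; a greedy packer checks
    whether a candidate cap is feasible with <= k stages."""
    def stages_needed(cap):
        stages, cur = 1, 0.0
        for c in costs:
            if c > cap:                 # a single layer exceeds the cap
                return float("inf")
            if cur + c > cap:           # close the current stage
                stages += 1
                cur = c
            else:
                cur += c
        return stages

    lo, hi = max(costs), sum(costs)     # bottleneck is bounded by these
    while hi - lo > 1e-9:
        mid = (lo + hi) / 2
        if stages_needed(mid) <= k:
            hi = mid                    # feasible: try a smaller cap
        else:
            lo = mid
    return hi

# Hypothetical per-layer latencies in milliseconds, placed on 3 nodes.
costs = [3.0, 1.0, 2.0, 4.0, 2.0]
bottleneck = min_bottleneck(costs, 3)   # -> 6.0 (stages [3,1,2] and [4,2])
throughput = 1000.0 / bottleneck        # steady-state inferences/sec
```

Throughput here improves only as far as the bottleneck stage allows, which is why balanced partitioning matters more than the raw node count.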
