Homunculus: Auto-Generating Efficient Data-Plane ML Pipelines for Datacenter Networks
Support for Machine Learning (ML) applications in networks has improved significantly over the last decade. The availability of public datasets and programmable switching fabrics (including low-level languages to program them) presents a full stack to the programmer for deploying in-network ML. However, the diversity of tools involved, coupled with the complex optimization tasks of ML model design and hyperparameter tuning under network constraints (like throughput and latency), puts the onus on the network operator to be an expert in ML, network design, and programmable hardware. This multi-faceted nature of in-network tooling, and the combined ML and hardware expertise it demands, is a roadblock to ML becoming mainstream in networks today. We present Homunculus, a high-level framework that enables network operators to specify their ML requirements in a declarative, rather than imperative, way. Homunculus takes as input the training data and accompanying network constraints, and automatically generates and installs a suitable model onto the underlying switching hardware. It performs model design-space exploration, training, and platform code generation as compiler stages, leaving network operators free to focus on acquiring high-quality network data. Our evaluations on real-world ML applications show that Homunculus's generated models achieve up to 12% better performance while requiring only 30 lines of single-script code on average. We further demonstrate the performance of the generated models on emerging per-packet ML platforms to showcase Homunculus's timely and practical significance.
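The abstract does not reproduce Homunculus's actual interface, but as a rough illustration of the declarative workflow it describes (training data plus network constraints in, a deployable model out), the sketch below mimics such a specification in Python. All names here (Constraints, Candidate, explore) and the toy cost models are invented for illustration and are not part of Homunculus.

```python
# Hypothetical sketch of a declarative in-network ML specification.
# None of these names come from Homunculus itself; they only illustrate
# the "declare data + constraints, let the compiler search" workflow.
from dataclasses import dataclass
from itertools import product

@dataclass
class Constraints:
    max_latency_us: float       # per-packet inference budget
    min_throughput_mpps: float  # packet rate the pipeline must sustain

@dataclass
class Candidate:
    hidden_layers: int
    neurons_per_layer: int

    def est_latency_us(self) -> float:
        # Toy cost model: latency grows with total neuron count.
        return 0.05 * self.hidden_layers * self.neurons_per_layer

    def est_throughput_mpps(self) -> float:
        # Toy cost model: throughput shrinks as the model grows.
        return 100.0 / (1 + 0.01 * self.hidden_layers * self.neurons_per_layer)

def explore(constraints: Constraints) -> list[Candidate]:
    """Enumerate a small design space and keep only candidates that satisfy
    the declared network constraints (a stand-in for the design-space
    exploration stage the abstract describes)."""
    space = [Candidate(h, n) for h, n in product([1, 2, 3], [16, 32, 64, 128])]
    return [c for c in space
            if c.est_latency_us() <= constraints.max_latency_us
            and c.est_throughput_mpps() >= constraints.min_throughput_mpps]

if __name__ == "__main__":
    feasible = explore(Constraints(max_latency_us=5.0, min_throughput_mpps=50.0))
    for c in feasible:
        print(c, f"latency={c.est_latency_us():.2f}us",
              f"throughput={c.est_throughput_mpps():.1f}Mpps")
```

In the workflow the abstract outlines, the surviving candidates would then be trained on the operator-supplied data and lowered to platform code for the target switching hardware; the sketch stops at the constraint-driven search step.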