Towards Designing a Self-Managed Machine Learning Inference Serving System in Public Cloud
We are witnessing an increasing trend towards using Machine Learning (ML) based prediction systems across different application domains, including product recommendation systems, personal assistant devices, and facial recognition. These applications typically have diverse requirements in terms of accuracy and response latency, which directly impact the cost of deploying them in a public cloud. Furthermore, the deployment cost also depends on the type of resources procured, which are themselves heterogeneous in terms of provisioning latencies and billing complexity. Thus, it is strenuous for an inference serving system to choose from this confounding array of resource types and model types to provide low-latency and cost-effective inferences. In this work, we quantitatively characterize the cost, accuracy, and latency implications of hosting ML inferences on different public cloud resource offerings. In addition, we comprehensively evaluate prior work that tries to achieve cost-effective prediction serving. Our evaluation shows that prior work does not address both dimensions of the problem, namely model and resource heterogeneity. Hence, we argue that addressing this problem requires holistically solving the issues that arise when combining model and resource heterogeneity while optimizing for application constraints. Towards this, we envision a self-managed inference serving system that can optimize for application requirements based on public cloud resource characteristics. To solve this complex optimization problem, we explore the high-level design of a reinforcement-learning-based system that can efficiently adapt to the changing needs of the system at scale.
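To make the envisioned optimization concrete, the core decision can be framed as a reinforcement-learning agent choosing a (model variant, resource type) pairing that minimizes cost subject to latency and accuracy constraints. The sketch below uses a simple one-step (bandit-style) value update; the option catalog, model names, SLO thresholds, and all numbers are hypothetical illustrations, not measurements or the paper's actual design.

```python
import random

# Hypothetical catalog of deployment options: each pairs an ML model variant
# with a cloud resource type. Latency, cost, and accuracy figures are
# illustrative placeholders, not measured values.
OPTIONS = {
    ("mobilenet", "cpu_vm"): {"latency_ms": 120, "cost": 1.0, "accuracy": 0.71},
    ("mobilenet", "gpu_vm"): {"latency_ms": 30,  "cost": 4.0, "accuracy": 0.71},
    ("resnet50",  "cpu_vm"): {"latency_ms": 400, "cost": 1.0, "accuracy": 0.76},
    ("resnet50",  "gpu_vm"): {"latency_ms": 60,  "cost": 4.0, "accuracy": 0.76},
}

LATENCY_SLO_MS = 50   # assumed application latency constraint
MIN_ACCURACY = 0.70   # assumed application accuracy constraint


def reward(option):
    """Negative cost when both constraints are met; large penalty otherwise."""
    m = OPTIONS[option]
    if m["latency_ms"] > LATENCY_SLO_MS or m["accuracy"] < MIN_ACCURACY:
        return -100.0
    return -m["cost"]


def train(episodes=500, eps=0.1, alpha=0.5, seed=0):
    """Epsilon-greedy value learning over the discrete option catalog."""
    rng = random.Random(seed)
    q = {opt: 0.0 for opt in OPTIONS}
    for _ in range(episodes):
        # Explore occasionally; otherwise pick the current best estimate.
        if rng.random() < eps:
            opt = rng.choice(list(OPTIONS))
        else:
            opt = max(q, key=q.get)
        # One-step incremental update towards the observed reward.
        q[opt] += alpha * (reward(opt) - q[opt])
    return q


q_values = train()
best = max(q_values, key=q_values.get)
```

Under these illustrative numbers, the agent converges on the cheapest option that satisfies both constraints. A full system would extend the state with workload signals (request rate, queue depth) and account for provisioning latency when switching resources.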