A Two-Sided Matching Model for Data Stream Processing in the Cloud-Fog Continuum

05/17/2021
by Narges Mehran, et al.

Latency-sensitive and bandwidth-intensive stream processing applications are dominant traffic generators over the Internet. A stream consists of a continuous sequence of data elements that require processing in nearly real-time. To improve communication latency and reduce network congestion, Fog computing complements Cloud services by moving computation towards the edge of the network. Unfortunately, the heterogeneity of the new Cloud-Fog continuum raises important challenges related to deploying and executing data stream applications. In this work, we explore a two-sided stable matching model called Cloud-Fog to data stream application matching (CODA) for deploying a distributed application, represented as a workflow of stream processing microservices, on heterogeneous Cloud-Fog computing resources. In CODA, the application microservices rank the continuum resources based on their stream processing time, while the resources rank the stream processing microservices based on their residual bandwidth. A stable many-to-one matching algorithm assigns microservices to resources according to their mutual preferences, aiming to optimize the complete stream processing time on the application side and the total streaming traffic on the resource side. We evaluate the CODA algorithm using simulated and real-world Cloud-Fog scenarios, achieving improvements of 11 to 45% compared to related state-of-the-art approaches.
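The many-to-one stable matching described in the abstract follows the general structure of a deferred-acceptance scheme: microservices propose to their most-preferred resources, and capacity-limited resources keep only the proposals they rank highest. The sketch below illustrates this idea only; it is not the authors' implementation, and the function and parameter names (match, proc_time, demand, capacity) as well as the toy instance values are assumptions introduced for demonstration.

```python
# Illustrative sketch of a many-to-one deferred-acceptance matching between
# stream processing microservices and Cloud-Fog resources (not the CODA
# implementation). Following the abstract, microservices prefer resources
# with lower estimated stream processing time, while resources prefer the
# microservices that leave them the most residual bandwidth, which within a
# single resource reduces to preferring lower bandwidth demands.

from dataclasses import dataclass, field

@dataclass
class Resource:
    name: str
    capacity: int                        # max microservices it can host (assumed)
    hosted: list = field(default_factory=list)

def match(microservices, resources, proc_time, demand):
    """Many-to-one deferred acceptance.

    microservices: list of microservice names
    resources:     dict mapping resource name -> Resource
    proc_time:     dict (microservice, resource name) -> estimated processing time
    demand:        dict microservice -> bandwidth demand
    """
    # Each microservice ranks resources by ascending processing time.
    prefs = {m: sorted(resources, key=lambda r: proc_time[(m, r)])
             for m in microservices}
    next_choice = {m: 0 for m in microservices}
    free = list(microservices)

    while free:
        m = free.pop()
        if next_choice[m] >= len(prefs[m]):
            continue                     # m exhausted its list and stays unmatched
        r = resources[prefs[m][next_choice[m]]]
        next_choice[m] += 1
        r.hosted.append(m)
        if len(r.hosted) > r.capacity:
            # Keep the microservices leaving the most residual bandwidth,
            # i.e. reject the one with the highest bandwidth demand.
            r.hosted.sort(key=lambda x: demand[x])
            free.append(r.hosted.pop())

    return {r.name: r.hosted for r in resources.values()}

if __name__ == "__main__":
    # Hypothetical toy instance: two Fog nodes and one Cloud VM (assumed values).
    res = {
        "fog1":  Resource("fog1", capacity=1),
        "fog2":  Resource("fog2", capacity=1),
        "cloud": Resource("cloud", capacity=2),
    }
    ms = ["source", "filter", "sink"]
    pt = {("source", "fog1"): 2.0, ("source", "fog2"): 3.0, ("source", "cloud"): 5.0,
          ("filter", "fog1"): 1.5, ("filter", "fog2"): 2.5, ("filter", "cloud"): 4.0,
          ("sink", "fog1"): 6.0,   ("sink", "fog2"): 7.0,   ("sink", "cloud"): 1.0}
    dem = {"source": 40.0, "filter": 20.0, "sink": 10.0}
    print(match(ms, res, pt, dem))  # -> {'fog1': ['filter'], 'fog2': ['source'], 'cloud': ['sink']}
```

A deferred-acceptance scheme of this kind terminates after at most one proposal per microservice-resource pair and yields a matching that is stable in the sense that no microservice and resource would mutually prefer each other over their final assignment.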
