Many resource management techniques for task scheduling, energy and carb...
Selecting the right resources for big data analytics jobs is hard becaus...
Federated Learning (FL) is a decentralized machine learning approach whe...
Stream processing has become a critical component in the architecture of...
To mitigate the growing carbon footprint of computing systems, there has...
Distributed dataflow systems such as Apache Spark or Apache Flink enable...
Federated Learning (FL) is an emerging machine learning technique that e...
Embedded real-time devices for monitoring, controlling, and collaboratio...
Due to the complexity of modern IT services, failures can be manifold, o...
With increasingly more computation being shifted to the edge of the netw...
Scientific workflows consist of thousands of highly parallelized tasks e...
Scientific workflows are designed as directed acyclic graphs (DAGs) and ...
Dynamic random access memory failures are a threat to the reliability of...
Choosing a good resource configuration for big data analytics applicatio...
Enabled by the increasing availability of sensor data monitored from pro...
Selecting appropriate computational resources for data processing jobs o...
In Earth Systems Science, many complex data pipelines combine different ...
Use-cases in the Internet of Things (IoT) typically involve a high numbe...
Scientific workflows typically comprise a multitude of different process...
The growing research and industry interest in the Internet of Things and...
Distributed file systems are widely used nowadays, yet using their defau...
Artificial Intelligence for IT Operations (AIOps) describes the process ...
Distributed dataflow systems like Apache Spark and Apache Hadoop enable ...
Distributed Stream Processing systems have become an essential part of b...
Many organizations routinely analyze large datasets using systems for di...
Many scientific workflow scheduling algorithms need to be informed about...
The growing electricity demand of cloud and edge computing increases ope...
When IP-packet processing is unconditionally carried out on behalf of an...
The reliability of cloud platforms is of significant relevance because s...
In the current IT world, developers write code while system operators ru...
The increasing use of Internet of Things devices coincides with more com...
In highly distributed environments such as cloud, edge and fog computing...
Log data anomaly detection is a core component in the area of artificial...
Scientific workflow management systems like Nextflow support large-scale...
Anomaly detection is increasingly important to handle the amount of sens...
Edge and fog computing architectures utilize container technologies in o...
Anomaly detection becomes increasingly important for the dependability a...
Distributed Stream Processing systems are becoming an increasingly essen...
Distributed dataflow systems like Spark and Flink enable the use of clus...
Distributed dataflow systems enable the use of clusters for scalable dat...
Distributed dataflow systems enable data-parallel processing of large da...
In this paper we introduce our vision of a Cognitive Computing Continuum...
Operation and maintenance of large distributed cloud applications can qu...
Anomalies or failures in large computer systems, such as the cloud, have...
Distributed data processing systems like MapReduce, Spark, and Flink are...
Edge computing was introduced as a technical enabler for the demanding r...
Fault tolerance is a property which needs deeper consideration when deal...
The emergence of the Internet of Things has seen the introduction of num...
Embedded systems have been used to control physical environments for dec...
Rotating machines like engines, pumps, or turbines are ubiquitous in mod...