Minimizing Event-Handling Latencies in Secure Virtual Machines
Virtualization, having found widespread adoption in the server and desktop arena, is poised to change the architecture of embedded systems as well. The benefits afforded by virtualization, namely enhanced isolation, manageability, flexibility, and security, could be instrumental for developers of embedded systems as an answer to the rampant increase in complexity. However, while mature desktop and server solutions exist, they cannot easily be reused on embedded systems because the requirements differ markedly. Optimizations aimed at throughput, which are important for servers, often sacrifice predictable real-time behavior, which is crucial to many embedded systems. Likewise, the requirements for a small trusted computing base, lightweight inter-VM communication, and a small footprint are often not accommodated. This observation suggests that virtual machines for embedded systems should be constructed from scratch, with particular attention paid to these specific requirements. In this paper, we start with a virtual machine designed for security-conscious workloads and describe the steps necessary to achieve good event-handling latencies. This evolution is possible because the underlying microkernel is well suited to satisfying real-time requirements. As the guest system we chose Linux with the PREEMPT_RT configuration, which itself was developed in an effort to reduce event-handling latencies in a general-purpose system. Our results indicate that the event-handling latencies of a guest running in a virtual machine do not exceed twice those of native execution.