Timing Cache Accesses to Eliminate Side Channels in Shared Software
Timing side channels have been used to extract cryptographic keys and sensitive documents, even from trusted enclaves. In this paper, we focus on cache side channels created by accesses to shared code or data in the memory hierarchy. This vulnerability is exploited by several known attacks, e.g., Evict+Reload for recovering an RSA key and Spectre variants that leak data through speculative accesses. The key insight in this paper is the importance of the first access to the shared data after a victim brings the data into the cache. To eliminate the timing side channel, we ensure that the first access by a process to any cache line loaded by another process results in a miss. We accomplish this goal with a combination of timestamps and a novel hardware design that allows efficient parallel comparisons of the timestamps. The solution works at all cache levels and defends against an attacker process running on another core, the same core, or another hyperthread. Our design retains the benefits of a shared cache: processes can utilize the entire cache for their execution, and a single copy of shared code and data is retained (data deduplication). Our implementation in the gem5 simulator demonstrates that the system is able to defend against RSA key extraction. We evaluate performance using SPEC CPU2006 and observe the overhead due to the first-access delay to be 2.17%, while the overhead of the security context bookkeeping is of the order of 0.3%.
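To make the first-access rule concrete, the following is a minimal, hypothetical software model of the check described above: an access by a security context to a line that another context installed, and that this context has not touched since the fill, is charged miss latency even though the data is present. The class and field names (SecureCache, CacheLineMeta, fillCtx) are illustrative only; the paper's actual design uses dedicated hardware for parallel timestamp comparisons rather than the per-context maps used here for clarity.

```cpp
// Simplified sketch (not the paper's gem5 code) of timestamp-based
// first-access detection for cross-context cache accesses.
#include <cstdint>
#include <unordered_map>

using ContextId = uint32_t;   // security context (process) identifier
using Addr      = uint64_t;   // cache line address
using Tick      = uint64_t;   // timestamp

struct CacheLineMeta {
    ContextId fillCtx;   // context that installed the line
    Tick      fillTime;  // timestamp of the fill
};

class SecureCache {
public:
    // Returns true if the access may be served at hit latency,
    // false if it must be delayed as though it were a miss.
    bool accessIsFast(ContextId ctx, Addr lineAddr, Tick now) {
        auto line = lines_.find(lineAddr);
        if (line == lines_.end())
            return false;  // genuine miss: line not present

        auto &touched = lastAccess_[ctx];
        auto it = touched.find(lineAddr);
        Tick lastTouch = (it != touched.end()) ? it->second : 0;

        // First access by this context since a *different* context filled
        // the line: expose miss timing even though the data is cached.
        bool firstCrossCtxAccess =
            line->second.fillCtx != ctx &&
            lastTouch < line->second.fillTime;

        touched[lineAddr] = now;  // bookkeeping for subsequent accesses
        return !firstCrossCtxAccess;
    }

    void fill(ContextId ctx, Addr lineAddr, Tick now) {
        lines_[lineAddr] = {ctx, now};
    }

private:
    std::unordered_map<Addr, CacheLineMeta> lines_;
    std::unordered_map<ContextId, std::unordered_map<Addr, Tick>> lastAccess_;
};
```

Under this model, a spy that waits for the victim to reload a shared line and then probes it observes miss latency on its first probe, which removes the timing signal that attacks such as Evict+Reload rely on, while later accesses by the same context still enjoy hit latency.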