On Demand Video

Data Infra Meetup | FIFO Queues are All You Need for Cache Eviction

As a cache eviction algorithm, FIFO has many attractive properties, such as simplicity, speed, scalability, and flash-friendliness. The most prominent criticism of FIFO is its low efficiency (high miss ratio). In this talk, Juncheng Yang describes a simple, scalable FIFO-based algorithm with three static queues (S3-FIFO). Evaluated on 6594 cache traces from 14 datasets, S3-FIFO has lower miss ratios than state-of-the-art algorithms across traces. Moreover, S3-FIFO’s efficiency is robust: it has the lowest mean miss ratio on 10 of the 14 datasets. FIFO queues also enable S3-FIFO to achieve good scalability, with 6× higher throughput than optimized LRU at 16 threads. The key insight is that most objects in skewed workloads are accessed only once within a short window, so it is critical to evict them early (also called quick demotion). The core of S3-FIFO is a small FIFO queue that keeps most objects from entering the main cache, providing a guaranteed demotion speed and high demotion precision.
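To make the quick-demotion idea concrete, below is a minimal, illustrative Python sketch of an S3-FIFO-style cache. It is not the speaker's reference implementation: the class name S3FIFOSketch, the 10% small-queue sizing, the capped frequency counters, and the ghost queue used for re-admission are assumptions drawn from the published S3-FIFO design rather than from this abstract.

```python
from collections import OrderedDict

class S3FIFOSketch:
    """Simplified S3-FIFO-style cache (illustrative, not the reference code).

    A small FIFO queue filters one-hit objects, a main FIFO queue holds
    objects that showed reuse, and a ghost FIFO remembers keys recently
    evicted from the small queue so they can be re-admitted to main.
    """

    def __init__(self, capacity, small_ratio=0.1):  # 10% small queue is an assumption
        self.small_cap = max(1, int(capacity * small_ratio))
        self.main_cap = max(1, capacity - self.small_cap)
        self.small = OrderedDict()   # key -> freq, insertion order = FIFO order
        self.main = OrderedDict()    # key -> freq
        self.ghost = OrderedDict()   # keys only, bounded like the main queue
        self.data = {}               # key -> value for resident objects

    def get(self, key):
        if key in self.small:
            self.small[key] = min(self.small[key] + 1, 3)  # cap the counter
            return self.data[key]
        if key in self.main:
            self.main[key] = min(self.main[key] + 1, 3)
            return self.data[key]
        return None  # miss

    def put(self, key, value):
        if key in self.small or key in self.main:
            self.data[key] = value
            return
        if key in self.ghost:
            # Seen recently: admit straight into the main queue.
            del self.ghost[key]
            self._evict_main_if_full()
            self.main[key] = 0
        else:
            # New object: enter through the small filter queue.
            self._evict_small_if_full()
            self.small[key] = 0
        self.data[key] = value

    def _evict_small_if_full(self):
        while len(self.small) >= self.small_cap:
            key, freq = self.small.popitem(last=False)  # FIFO head
            if freq > 0:
                # Reused while in the small queue: promote to main.
                self._evict_main_if_full()
                self.main[key] = 0
            else:
                # One-hit wonder: demote quickly, remember it in the ghost queue.
                del self.data[key]
                self.ghost[key] = None
                while len(self.ghost) > self.main_cap:
                    self.ghost.popitem(last=False)

    def _evict_main_if_full(self):
        while len(self.main) >= self.main_cap:
            key, freq = self.main.popitem(last=False)
            if freq > 0:
                self.main[key] = freq - 1  # reinsert with one fewer chance
            else:
                del self.data[key]
```

In this sketch, an object touched only once passes briefly through the small queue and is then demoted, mirroring the quick-demotion behavior described above, while re-referenced objects are promoted into the larger main queue.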


Juncheng Yang (https://junchengyang.com) is a 5th-year Ph.D. student in the Computer Science Department at Carnegie Mellon University. His research interests broadly cover the efficiency, performance, and sustainability of large-scale data systems, with a current focus on caching systems. Juncheng’s papers have received best paper awards at NSDI’21, SOSP’21, and SYSTOR’16. His OSDI’20 paper was recognized as one of the best storage papers at the conference and was invited to ACM TOS’22. One of the caching systems he designed and built (Segcache, NSDI’21) has been adopted for production deployment at Twitter and Momento. Juncheng received a Facebook Ph.D. Fellowship (2020-22), was recognized as a Rising Star in machine learning and systems in 2023, and was named a Google Cloud Research Innovator in 2023. He was an invited speaker at SNIA SDC 2020 and QConSF 2022.