Memory is the key to fast big data processing. Many have recognized this, and frameworks such as Spark and Shark already leverage in-memory computation for performance. As data sets continue to grow, however, storage is increasingly becoming a critical bottleneck in many workloads.
To address this need, we have developed Tachyon, a memory-centric, fault-tolerant distributed storage system that enables reliable file sharing at memory speed across cluster frameworks such as Spark and MapReduce. The result of over three years of research and development, Tachyon achieves both memory-speed performance and fault tolerance.
Tachyon is Hadoop-compatible: existing Spark and MapReduce programs can run on top of it without any code changes. Tachyon is also the default off-heap storage option in Spark, which means RDDs can automatically be stored inside Tachyon to make Spark more resilient and to avoid GC overheads. The project is open source and is already deployed at multiple companies. In addition, Tachyon has more than 80 contributors from over 30 institutions, including Yahoo, Tachyon Nexus, Red Hat, Nokia, Intel, and Databricks. The project is the storage layer of the Berkeley Data Analytics Stack (BDAS) and is also part of the Fedora distribution.
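To illustrate the two integration points above, here is a minimal Scala sketch of a Spark job that reads a file through Tachyon instead of HDFS and persists an RDD off-heap. It assumes a Spark 1.x-era setup with a Tachyon master at tachyon://localhost:19998 and a hypothetical input path; the off-heap configuration key varies across Spark 1.x releases (spark.tachyonStore.url in earlier versions, spark.externalBlockStore.url later).

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel

object TachyonSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("TachyonSketch")
      // Point Spark's off-heap (external block) store at the Tachyon master.
      // The master address is an assumption; use your cluster's URI.
      .set("spark.externalBlockStore.url", "tachyon://localhost:19998")
    val sc = new SparkContext(conf)

    // Hadoop compatibility: swap an hdfs:// path for a tachyon:// path,
    // with no other code changes. The input path here is hypothetical.
    val lines = sc.textFile("tachyon://localhost:19998/data/input.txt")

    // Persist the RDD off-heap in Tachyon rather than on the JVM heap,
    // avoiding GC pressure on the executors.
    val words = lines.flatMap(_.split("\\s+"))
    words.persist(StorageLevel.OFF_HEAP)

    println(s"word count: ${words.count()}")
    sc.stop()
  }
}
```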
In this talk, we introduce Tachyon. We will present its architecture and performance evaluation, as well as several real-world use cases we have seen.