ALLUXIO COMMUNITY OFFICE HOUR
Accessing data to run analytic workloads in Spark across data centers and/or clouds can be challenging. Additionally, network I/O can bottleneck Spark jobs that need to read large amounts of data. A common solution is to deploy an HDFS cluster closer to Spark as a caching layer and manually copy the input data into HDFS first, purging it afterward. But this ETL process can be both time-consuming and error-prone.
A simpler and more efficient solution is to run Spark on Alluxio as a distributed cache on top of the remote data source. By caching data transparently based on access patterns and keeping the working set close to compute, Alluxio provides Spark jobs with much higher I/O throughput and enhanced data locality. In addition, Alluxio provides data accessibility and abstraction for deployments in hybrid and multi-cloud environments.
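For illustration, here is a minimal PySpark sketch of pointing a Spark job at Alluxio instead of the remote store. The client jar path, master hostname, and dataset path are assumptions, not values from the talk; 19998 is Alluxio's default master RPC port.

```python
# Hedged sketch (assumed jar path, hostname, and dataset): configuring Spark
# to read through Alluxio rather than directly from remote storage.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("spark-on-alluxio")
    # Put the Alluxio Hadoop-compatible client on the driver and executors:
    .config("spark.driver.extraClassPath", "/opt/alluxio/client/alluxio-client.jar")
    .config("spark.executor.extraClassPath", "/opt/alluxio/client/alluxio-client.jar")
    .getOrCreate()
)

# Instead of copying remote data into a dedicated HDFS cluster, read it
# through Alluxio, which caches it transparently from the remote source.
df = spark.read.csv(
    "alluxio://alluxio-master:19998/datasets/events.csv", header=True
)
print(df.count())
```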
In this Office Hour, we will go over how to:
- Burst on-prem Spark workloads to the cloud with Alluxio so Spark can seamlessly read from and write to remote data storage
- Use Alluxio as the input/output for Spark applications
- Save and load Spark RDDs and DataFrames with Alluxio (a brief sketch follows this list)
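As an example of the last two bullets, the sketch below saves and loads an RDD and a DataFrame through Alluxio. The master address and paths are illustrative assumptions; any `alluxio://` URI works once the client is on Spark's classpath.

```python
# Minimal sketch: Alluxio as Spark's input/output (host and paths assumed).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("alluxio-io-example").getOrCreate()

# Save an RDD to Alluxio, then load it back.
rdd = spark.sparkContext.parallelize(range(100))
rdd.saveAsTextFile("alluxio://alluxio-master:19998/tmp/rdd-out")
loaded_rdd = spark.sparkContext.textFile("alluxio://alluxio-master:19998/tmp/rdd-out")

# Save a DataFrame as Parquet in Alluxio and read it back; files written
# this way can also be shared across separate Spark jobs.
df = spark.range(1000)
df.write.parquet("alluxio://alluxio-master:19998/tmp/df-out")
loaded_df = spark.read.parquet("alluxio://alluxio-master:19998/tmp/df-out")
loaded_df.show(5)

spark.stop()
```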

Fireworks AI is a leading inference cloud provider for Generative AI, powering real-time inference and fine-tuning services for customers' applications that require minimal latency, high throughput, and high concurrency. Their GPU infrastructure spans 10+ clouds and 15+ regions, serving enterprises and developers deploying production AI workloads at scale.
With model sizes reaching 70GB+, Fireworks AI faced critical challenges: eliminating cold-start delays, managing highly concurrent model downloads across GPU clusters, reducing tens of thousands of dollars in annual cloud egress costs, and automating manual pipeline management that consumed 4+ hours weekly. They chose Alluxio to scale with their hyper-growth without requiring dedicated infrastructure resources.
In this tech talk, Akram Bawayah, Software Engineer at Fireworks AI, and Bin Fan, VP of Technology at Alluxio, share how Fireworks AI uses Alluxio to power their multi-cloud inference infrastructure.
They discuss:
- How Fireworks AI uses Alluxio in its high-performance model distribution system to deliver fast, reliable inference across multiple clouds
- How implementing Alluxio distributed caching achieved 1TB/s+ model deployment throughput, reducing model loading from hours to minutes while significantly cutting cloud egress costs
- How to simplify infrastructure operations and seamlessly scale model distribution across multi-cloud GPU environments (a rough sketch of the access pattern follows this list)
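As a rough illustration of the access pattern discussed, and not Fireworks AI's actual implementation, the sketch below reads model weights through a hypothetical Alluxio POSIX (FUSE) mount, so repeated loads across the cluster are served from the distributed cache instead of incurring cloud egress. The mount point, model path, and helper function are all assumptions for illustration.

```python
# Hypothetical sketch: serving nodes read model weights through an
# Alluxio-FUSE mount instead of downloading from object storage directly.
# The mount point and model path below are illustrative, not Fireworks'.
import os

ALLUXIO_MOUNT = "/mnt/alluxio"  # assumed Alluxio POSIX (FUSE) mount point
MODEL_PATH = os.path.join(ALLUXIO_MOUNT, "models/example-70b/weights.bin")

def load_model_bytes(path: str, chunk_size: int = 64 * 1024 * 1024) -> int:
    """Stream the model file in large chunks; the first read warms the
    cache, and later reads on any node hit cache instead of remote storage."""
    total = 0
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            total += len(chunk)
    return total

print(f"read {load_model_bytes(MODEL_PATH)} bytes from {MODEL_PATH}")
```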

