The hybrid cloud model, where cloud resources run Spark or Presto jobs against data stored on-premises, is an appealing way to reduce resource contention in on-premises environments while also reducing overall costs. One key drawback of a hybrid model is the overhead of transferring data between the two environments. Data and metadata locality within the compute application must be achieved in order to keep analytics job performance comparable to running the entire workload on-premises.
In this office hour, we demonstrate how a “zero-copy burst” solution speeds up Spark and Presto queries in the public cloud while eliminating the need to manually copy and synchronize data from the on-premises data lake to cloud storage. This approach allows compute frameworks to decouple from on-premises data sources and scale efficiently by leveraging Alluxio and public cloud resources such as AWS.
We will cover:
- Typical challenges of moving data to the cloud and expanding compute capacity
- Details of the “zero-copy” hybrid cloud solution for burst computing
- A demo of running Presto analytic queries against remote on-prem HDFS data with Alluxio deployed on AWS EMR (a minimal sketch of this pattern follows the list)
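The sketch below illustrates the zero-copy pattern under stated assumptions, not the exact demo setup: an Alluxio cluster runs on AWS EMR alongside Presto, the on-prem HDFS namespace has been mounted into Alluxio (for example, `alluxio fs mount /onprem hdfs://namenode:8020/warehouse`), and a Hive table `orders` has its location set to the corresponding `alluxio://` path. Hostnames, ports, and the table name are hypothetical.

```python
# Minimal sketch: querying on-prem HDFS data through Alluxio from Presto on EMR,
# with no manual data copy into cloud storage. All endpoints below are assumptions.
import prestodb  # pip install presto-python-client

# Connect to the Presto coordinator on the EMR master node.
conn = prestodb.dbapi.connect(
    host="emr-master.example.com",  # hypothetical EMR coordinator host
    port=8889,                      # default Presto port on EMR
    user="analyst",
    catalog="hive",
    schema="default",
)
cur = conn.cursor()

# The table location points at Alluxio (e.g. alluxio://alluxio-master:19998/onprem/orders),
# so Presto reads the remote HDFS data through Alluxio's cache on first access
# and from the cache on repeat access.
cur.execute("""
    SELECT orderstatus, count(*) AS cnt
    FROM orders
    GROUP BY orderstatus
""")
for row in cur.fetchall():
    print(row)
```

Repeat runs of the same query benefit most: once Alluxio has cached the working set in the cloud, Presto no longer reaches back to the on-prem cluster for it.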
Videos:
Presentation Slides:

Fireworks AI is a leading inference cloud provider for Generative AI, powering real-time inference and fine-tuning services for customer applications that require low latency, high throughput, and high concurrency. Their GPU infrastructure spans 10+ clouds and 15+ regions, serving enterprises and developers deploying production AI workloads at scale.
With model sizes reaching 70GB+, Fireworks AI faced critical challenges: eliminating cold start delays, managing highly concurrent model downloads across GPU clusters, cutting tens of thousands of dollars in annual cloud egress costs, and automating manual pipeline management that consumed 4+ hours weekly. They chose Alluxio to scale with this hyper-growth without requiring dedicated infrastructure resources.
In this tech talk, Akram Bawayah, Software Engineer at Fireworks AI, and Bin Fan, VP of Technology at Alluxio, share how Fireworks AI uses Alluxio to power their multi-cloud inference infrastructure.
They discuss:
- How Fireworks AI uses Alluxio in its high-performance model distribution system to deliver fast, reliable inference across multiple clouds
- How Alluxio distributed caching achieved 1TB/s+ model deployment throughput, reducing model loading from hours to minutes while significantly cutting cloud egress costs (see the sketch after this list)
- How to simplify infrastructure operations and seamlessly scale model distribution across multi-cloud GPU environments
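As a rough illustration of the caching pattern discussed in the talk, the sketch below reads model shards from an Alluxio cache exposed to a GPU node as a POSIX path (for example, via an Alluxio FUSE mount) instead of downloading them from remote object storage on every cold start. The mount path, model name, and shard layout are hypothetical, not Fireworks' actual configuration.

```python
# Minimal sketch, assuming Alluxio is mounted at /mnt/alluxio on the GPU node
# and the model shards were cached there ahead of time. Paths are illustrative.
import time
from pathlib import Path

MODEL_DIR = Path("/mnt/alluxio/models/example-70b")  # hypothetical FUSE mount path


def load_shards(model_dir: Path) -> float:
    """Read every model shard and return aggregate read throughput in GB/s."""
    total_bytes = 0
    start = time.monotonic()
    for shard in sorted(model_dir.glob("*.safetensors")):
        # A warm Alluxio cache serves these reads from local or cluster cache,
        # avoiding a repeated cold download (and egress) from the object store.
        with shard.open("rb") as f:
            while chunk := f.read(64 * 1024 * 1024):  # 64 MiB reads
                total_bytes += len(chunk)
    elapsed = time.monotonic() - start
    return total_bytes / 1e9 / elapsed if elapsed > 0 else 0.0


if __name__ == "__main__":
    print(f"Observed read throughput: {load_shards(MODEL_DIR):.2f} GB/s")
```

Because the cache is shared across the cluster, many GPU nodes can pull the same 70GB+ model concurrently without each one re-downloading it from the origin store, which is where the cold-start and egress savings come from.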

