If you’re a MapR user, you might have concerns about your existing data stack. Whether it’s the complexity of Hadoop, financial uncertainty and the lack of a future MapR product roadmap, or the inflexibility of co-located storage and compute, MapR may no longer be working for you.
Alluxio can help you migrate to a modern, disaggregated data stack built on any object store, with performance comparable to Hadoop and significant cost savings.
Join us for this tech talk where we’ll discuss how to separate your compute and storage on-prem and architect a new data stack that makes your object store the core. We’ll show you how to offload your MapR/HDFS compute to any object store and how to run all of your existing jobs as-is on Alluxio + object store.
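To make the “jobs as-is” idea concrete, here is a minimal PySpark sketch (not taken from the talk) of what the migration typically looks like: with an object store mounted into the Alluxio namespace and the Alluxio client jar on the Spark classpath, the job logic stays the same and only the storage URIs change. The master address alluxio-master:19998 and the paths below are hypothetical placeholders.

```python
from pyspark.sql import SparkSession

# Hypothetical example: the job logic is unchanged; only the input/output
# URIs move from maprfs:// or hdfs:// to the Alluxio namespace, which is
# backed by an object store (e.g. an S3 bucket mounted into Alluxio).
spark = SparkSession.builder.appName("existing-etl-job").getOrCreate()

# Before: spark.read.parquet("hdfs://namenode:8020/warehouse/events")
events = spark.read.parquet("alluxio://alluxio-master:19998/warehouse/events")

daily_counts = events.groupBy("event_date").count()

# Before: .parquet("hdfs://namenode:8020/warehouse/daily_counts")
daily_counts.write.mode("overwrite").parquet(
    "alluxio://alluxio-master:19998/warehouse/daily_counts"
)
```

The same substitution applies to other frameworks (Hive, Presto, MapReduce) whose jobs currently point at maprfs:// or hdfs:// paths.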

Fireworks AI is a leading inference cloud provider for Generative AI, powering real-time inference and fine-tuning services for customers' applications that require minimal latency, high throughput, and high concurrency. Their GPU infrastructure spans 10+ clouds and 15+ regions, serving enterprises and developers deploying production AI workloads at scale.
With model sizes reaching 70GB+, Fireworks AI faced critical challenges: eliminating cold-start delays, managing highly concurrent model downloads across GPU clusters, cutting tens of thousands of dollars in annual cloud egress costs, and automating manual pipeline management that consumed 4+ hours each week. They chose Alluxio to scale with their hyper-growth without requiring dedicated infrastructure resources.
In this tech talk, Akram Bawayah, Software Engineer at Fireworks AI, and Bin Fan, VP of Technology at Alluxio, share how Fireworks AI uses Alluxio to power their multi-cloud inference infrastructure.
They discuss:
- How Fireworks AI uses Alluxio in its high-performance model distribution system to deliver fast, reliable inference across multiple clouds
- How implementing Alluxio distributed caching achieved 1TB/s+ model deployment throughput, reducing model loading from hours to minutes while significantly cutting cloud egress costs (see the sketch after this list)
- How to simplify infrastructure operations and seamlessly scale model distribution across multi-cloud GPU environments
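As a simplified illustration of the caching pattern (not Fireworks’ actual pipeline), the sketch below assumes the Alluxio namespace is exposed to GPU nodes through a POSIX (FUSE) mount at a hypothetical path /mnt/alluxio, with model shards stored in a backing object store. The first load pulls data through Alluxio; subsequent loads on the same cluster are served from the cache instead of re-downloading the model and incurring egress.

```python
import time
from pathlib import Path

# Hypothetical paths: the Alluxio namespace is assumed to be mounted via
# alluxio-fuse at /mnt/alluxio, backed by an object store holding the models.
MODEL_DIR = Path("/mnt/alluxio/models/llama-70b")

def load_shards(model_dir: Path) -> dict[str, bytes]:
    """Read all weight shards with plain POSIX I/O.

    The first read on a cluster pulls data from the backing object store;
    later reads are served from the Alluxio cache, avoiding repeated
    cross-cloud downloads of the same 70GB+ model.
    """
    shards = {}
    start = time.time()
    for shard in sorted(model_dir.glob("*.safetensors")):
        shards[shard.name] = shard.read_bytes()
    total_gb = sum(len(b) for b in shards.values()) / 1e9
    print(f"Loaded {total_gb:.1f} GB in {time.time() - start:.1f}s")
    return shards

if __name__ == "__main__":
    load_shards(MODEL_DIR)
```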

