Spark is a widely adopted open source framework that provides a unified interface for analytics and machine learning workloads. Alluxio, which originated in the UC Berkeley AMPLab (the same lab that created Spark), is an open source data orchestration platform that empowers compute frameworks like Spark by providing stateful caching for efficient data sharing between jobs, improving resilience against job failures, and bringing together data from many different sources, whether remote HDFS or cloud object stores.
Alluxio partnered with IBM to deliver a Spark-based solution for fast data analytics. Integrated with IBM Spectrum Conductor, an advanced workload and resource management platform that maximizes hardware utilization to speed results and cut infrastructure costs, the solution powers a leading telecom company's applications serving 320 million subscribers. In this online meetup, we will present the benefits of the fast analytics stack of Spark on Alluxio with IBM Spectrum Conductor and dive into the telecom's use case of leveraging Spark and Alluxio to process massive amounts of mobile data.
In this online meetup, you will learn about:
- Why leading companies are moving toward a decoupled compute and storage architecture, and the associated challenges and requirements
- Why Spark and Alluxio together can solve those challenges and fulfill those requirements
- How a leading telecom leverages Spark with Alluxio for fast data processing at scale on top of object stores and HDFS (a minimal sketch of this pattern follows the list)
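To make the last point concrete, here is a minimal PySpark sketch of reading and writing data through Alluxio rather than hitting remote HDFS or an object store directly. The master hostname, port, paths, and column names are placeholder assumptions, and the Alluxio client jar is assumed to be on the Spark classpath; this is an illustrative sketch, not the telecom's actual pipeline.

```python
# Minimal sketch: a Spark job reading and writing data through Alluxio.
# Hostname, port, paths, and column names below are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("spark-on-alluxio-sketch")
    .getOrCreate()
)

# Read a dataset that Alluxio mounts from an under store (e.g. HDFS or S3).
# The first job pulls the data into Alluxio's cache; later jobs read the
# cached copy instead of going back to remote storage.
events = spark.read.parquet("alluxio://alluxio-master:19998/data/events")

# A typical aggregation over the cached data.
daily_counts = events.groupBy("event_date").count()

# Writing back through Alluxio keeps the result cached for downstream jobs.
daily_counts.write.mode("overwrite").parquet(
    "alluxio://alluxio-master:19998/output/daily_counts"
)

spark.stop()
```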

Fireworks AI is a leading inference cloud provider for Generative AI, powering real-time inference and fine-tuning services for customers' applications that require minimal latency, high throughput, and high concurrency. Their GPU infrastructure spans 10+ clouds and 15+ regions, serving enterprises and developers deploying production AI workloads at scale.
With model sizes reaching 70GB+, Fireworks AI faced critical challenges: cold start delays, highly concurrent model downloads across GPU clusters, tens of thousands of dollars in annual cloud egress costs, and manual pipeline management that consumed 4+ hours per week. They chose Alluxio to scale with their hyper-growth without requiring dedicated infrastructure resources.
In this tech talk, Akram Bawayah, Software Engineer at Fireworks AI, and Bin Fan, VP of Technology at Alluxio, share how Fireworks AI uses Alluxio to power their multi-cloud inference infrastructure.
They discuss:
- How Fireworks AI uses Alluxio in its high-performance model distribution system to deliver fast, reliable inference across multiple clouds
- How implementing Alluxio distributed caching achieved 1TB/s+ model deployment throughput, reducing model loading from hours to minutes while significantly cutting cloud egress costs
- How to simplify infrastructure operations and seamlessly scale model distribution across multi-cloud GPU environments (a minimal sketch of the caching pattern follows the list)
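For illustration only, the sketch below shows the general pattern of serving model weights through an Alluxio (FUSE) mount so that repeated loads on GPU nodes hit the cache instead of re-downloading from object storage. The mount point, directory layout, and model name are assumptions, and this is not Fireworks AI's actual implementation.

```python
# Minimal sketch: resolve model weights via an Alluxio FUSE mount so GPU
# nodes read cached files instead of re-downloading from the cloud.
# The mount point, layout, and model name are hypothetical.
import os
import time

ALLUXIO_MOUNT = "/mnt/alluxio/models"  # assumed alluxio-fuse mount point


def resolve_model_path(model_name: str) -> str:
    """Return the directory of model weights served through Alluxio.

    The first read of each file populates Alluxio's cache from the under
    store; subsequent reads on any node hit the cache, which is what
    removes cold-start downloads and repeated egress.
    """
    path = os.path.join(ALLUXIO_MOUNT, model_name)
    if not os.path.isdir(path):
        raise FileNotFoundError(f"model not found under Alluxio mount: {path}")
    return path


if __name__ == "__main__":
    start = time.time()
    weights_dir = resolve_model_path("llama-70b")  # hypothetical model name

    # An inference server would hand this directory to its model loader,
    # which reads the weight shards as ordinary files through the mount.
    total_bytes = sum(
        os.path.getsize(os.path.join(root, name))
        for root, _, files in os.walk(weights_dir)
        for name in files
    )
    elapsed = time.time() - start
    print(f"found {total_bytes / 1e9:.1f} GB of weights in {elapsed:.2f}s")
```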

