On-Demand Videos

Unlock the full performance of your AI/ML infrastructure on Oracle Cloud Infrastructure (OCI).
Join Oracle's Master Principal Cloud Architect Xinghong He and Alluxio's VP of Technology Bin Fan for an in-depth technical session exploring how modern tiered caching, optimized storage integration, and smart deployment choices can deliver sub-millisecond latency and up to 5× faster data access on OCI — at scale.
You'll learn about:
- Architectural insights: How Alluxio’s tiered caching architecture works with OCI Object Storage and BM.DenseIO compute instances to eliminate data access bottlenecks.
- Benchmark-proven results: See real MLPerf Storage 2.0 and Warp benchmark outcomes demonstrating sub-millisecond latency and dramatic throughput gains.
- Deployment strategies: Compare deployment options — dedicated mode for peak performance vs. co-located mode for cost-efficient scale.
- Practical, actionable guidance: Implementation best practices you can apply directly to your AI/ML workloads on OCI.

Fireworks AI is a leading inference cloud provider for Generative AI, powering real-time inference and fine-tuning services for customers' applications that require minimal latency, high throughput, and high concurrency. Their GPU infrastructure spans 10+ clouds and 15+ regions, serving enterprises and developers deploying production AI workloads at scale.
With model sizes reaching 70GB+, Fireworks AI faced critical challenges: eliminating cold start delays, managing highly concurrent model downloads across GPU clusters, reducing tens of thousands of dollars in annual cloud egress costs, and automating manual pipeline management that consumed 4+ hours weekly. They chose Alluxio as a solution that scales with their hyper-growth without requiring dedicated infrastructure resources.
In this tech talk, Akram Bawayah, Software Engineer at Fireworks AI, and Bin Fan, VP of Technology at Alluxio, share how Fireworks AI uses Alluxio to power their multi-cloud inference infrastructure.
They discuss:
- How Fireworks AI uses Alluxio in its high-performance model distribution system to deliver fast, reliable inference across multiple clouds
- How implementing Alluxio distributed caching achieved 1TB/s+ model deployment throughput, reducing model loading from hours to minutes while significantly cutting cloud egress costs
- How to simplify infrastructure operations and seamlessly scale model distribution across multi-cloud GPU environments

In this talk, Eric Wang, Senior Staff Software Engineer at Uber, introduces Michelangelo, Uber's end-to-end ML lifecycle management platform.
ALLUXIO DAY III 2021
April 27, 2021
RAPIDS is a set of open-source libraries enabling GPU-aware scheduling and memory representation for analytics and AI. Spark 3.0 uses RAPIDS for GPU computing to accelerate various jobs, including SQL and DataFrame workloads. With compute acceleration from massive parallelism on GPUs, there is a corresponding need to accelerate data access, and this is what Alluxio enables for compute in any cloud. In this talk, you will learn how to use Alluxio and Spark with the RAPIDS Accelerator on NVIDIA GPUs without any application changes.
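As a rough sketch of this setup (the jar names, versions, class paths, and the Alluxio master address below are placeholders for illustration, not details from the talk):

```shell
# Submit a Spark job with the RAPIDS Accelerator plugin enabled, reading
# input through Alluxio. Only the input path points at Alluxio; the
# application code itself is unchanged.
spark-submit \
  --conf spark.plugins=com.nvidia.spark.SQLPlugin \
  --conf spark.rapids.sql.enabled=true \
  --conf spark.executor.resource.gpu.amount=1 \
  --jars rapids-4-spark_2.12-<version>.jar,cudf-<version>.jar \
  --conf spark.executor.extraClassPath=/opt/alluxio/client/alluxio-client.jar \
  etl_job.py alluxio://alluxio-master:19998/datasets/sales/
```

Because Alluxio exposes data through a filesystem URI, pointing the job at an `alluxio://` path (or using Alluxio's transparent URI support) is typically the only change needed.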
ALLUXIO COMMUNITY OFFICE HOUR
We are thrilled to announce the release of Alluxio 2.5!
Alluxio 2.5 focuses on improving interface support to broaden the set of data-driven applications that can benefit from data orchestration. The POSIX and S3 client interfaces have greatly improved in performance and functionality as a result of the widespread usage and demand from AI/ML workloads and system administration needs. Alluxio is rapidly evolving to meet the needs of enterprises that are deploying it as a key component of their AI/ML stacks.
At the same time, Alluxio continues to integrate with the latest cloud and cluster orchestration technologies. In 2.5, Alluxio has new connectors for Google Cloud Storage and Azure Data Lake Storage Gen 2 as well as better operability functionality for Kubernetes environments.
In this Office Hour, we will go over:
- JNI Based POSIX API
- S3 Northbound API
- ADLS Gen 2 Connector
- GCSv2 Connector
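To illustrate how the two new interfaces expose the same namespace (hostnames, ports, and paths below are assumptions based on Alluxio defaults, not details from the talk):

```shell
# POSIX API: mount the Alluxio namespace locally through the
# JNI-based FUSE client, then use ordinary filesystem tools.
alluxio-fuse mount /mnt/alluxio /
ls /mnt/alluxio/datasets

# S3 API: address the same files through the REST proxy with any
# S3-compatible client; top-level directories appear as buckets.
aws s3 ls s3://datasets/ \
  --endpoint-url http://alluxio-proxy:39999/api/v1/s3
```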
Many companies we talk to have on-premises data lakes and use one or more clouds to burst compute. Many are also establishing new object data lakes. As a result, analytics workloads such as Hive, Spark, and Presto, as well as machine learning, experience sluggish response times when data and compute live in multiple locations. We also see an immense and growing data management burden to support these workflows.
In this talk, we will walk through what Alluxio’s Data Orchestration for the hybrid cloud era is and how it solves the performance and data management challenges we see.
In this tech talk, we’ll go over:
- What is Alluxio Data Orchestration?
- How does it work?
- Alluxio customer results
Alluxio is an open-source data orchestration platform that can be deployed on many platforms. However, integrating Alluxio into an existing data architecture while adhering to DevOps principles and organizational standards takes careful thought and experience.
This presentation covers best practices and techniques for building a cluster with open-source Alluxio on AWS EKS for one of our clients, making it scalable, reliable, and secure by adopting Kubernetes RBAC.
Our speaker Vasista Polali will show you how to:
- Bootstrap an EKS cluster in AWS with Terraform.
- Deploy open-source Alluxio in a namespace with persistence in AWS EFS.
- Scale the Alluxio worker nodes up and down as DaemonSets by scaling the EKS nodes with Terraform.
- Access data with an S3 mount.
- Control access to Alluxio with Kubernetes port-forwarding, “setfacl” functionality, and Kubernetes service accounts.
- Re-use the data/metadata in the persistence layer on a new cluster.
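In practice, a few of these steps might look roughly like the following (the directory layout, Helm release, namespace, and resource names are illustrative assumptions, not taken from the presentation):

```shell
# Bootstrap the EKS cluster from Terraform configs (assumed under ./eks)
terraform -chdir=eks init && terraform -chdir=eks apply

# Deploy Alluxio into its own namespace with the official Helm chart
helm repo add alluxio-charts \
  https://alluxio-charts.storage.googleapis.com/openSource/2.5.0
kubectl create namespace alluxio
helm install alluxio -n alluxio alluxio-charts/alluxio

# Reach the master UI without exposing a public endpoint
kubectl -n alluxio port-forward pod/alluxio-master-0 19999:19999 &

# Restrict access to a path with POSIX-style ACLs
kubectl -n alluxio exec alluxio-master-0 -- \
  alluxio fs setfacl -m user:analyst:r-x /datasets
```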
ALLUXIO DAY 2021
March 11, 2021
ALLUXIO DAY 2021
March 9, 2021
Nowadays, cloud-native environments host many data-intensive applications, thanks to the ease of deployment and maintenance offered by platforms and frameworks such as Docker and Kubernetes. However, cloud-native frameworks do not natively provide a data abstraction to applications. To address this, we built the Fluid project, which co-orchestrates data and containers, using Alluxio as the cache runtime inside Fluid to warm up hot data. In this talk, we introduce the design and results of the Fluid project.
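A minimal Fluid manifest illustrating this co-orchestration might look like the following (the dataset name, source path, and cache sizing are illustrative assumptions, not details from the talk):

```shell
# Declare a Dataset and an Alluxio-backed cache runtime for it;
# Fluid then schedules the cache alongside the consuming pods.
cat <<'EOF' | kubectl apply -f -
apiVersion: data.fluid.io/v1alpha1
kind: Dataset
metadata:
  name: demo-data
spec:
  mounts:
    - mountPoint: s3://example-bucket/training-data
      name: demo-data
---
apiVersion: data.fluid.io/v1alpha1
kind: AlluxioRuntime
metadata:
  name: demo-data
spec:
  replicas: 2
  tieredstore:
    levels:
      - mediumtype: MEM
        path: /dev/shm
        quota: 2Gi
EOF
```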
ALLUXIO DAY 2021
January 19, 2021