Alluxio Product School Webinar – Distributed Caching for Generative AI: Optimizing LLM Data Pipeline
May 24, 2023
As the AI landscape rapidly evolves, advancements in generative AI technologies such as ChatGPT are driving the need for robust data infrastructure tailored for large language model (LLM) training and inference in the cloud. To effectively leverage these breakthroughs, organizations must ensure low latency, high concurrency, and scalability in production environments.
In this Alluxio-hosted webinar, Shouwei presented the design and implementation of a distributed caching system that addresses the I/O challenges of LLM training and inference. He explored the unique requirements of LLM data access patterns and offered practical best practices for optimizing the data pipeline through distributed caching in the cloud. The session featured insights from real-world deployments at Microsoft, Tencent, and Zhihu, as well as from the open-source community. Watch this recording to gain a deeper understanding of how to harness scalable, efficient, and robust data infrastructure for LLM training and inference.
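To make the caching idea concrete, here is a minimal sketch (not from the webinar) of how a training loop's I/O path changes when the dataset is read through a distributed cache mounted as a local filesystem, such as an Alluxio FUSE mount. The paths, bucket name, and shard naming scheme below are illustrative assumptions.

```python
import os

# Illustrative paths only: the same corpus addressed directly in S3 versus
# through a distributed cache exposed as a POSIX mount (e.g. Alluxio FUSE).
S3_PREFIX = "s3://example-bucket/llm-corpus"             # remote object store
CACHE_PREFIX = "/mnt/alluxio/example-bucket/llm-corpus"  # cache-backed mount

def shard_paths(prefix, num_shards):
    """Yield shard file paths; the shard-NNNNN.jsonl naming is assumed."""
    for i in range(num_shards):
        yield os.path.join(prefix, f"shard-{i:05d}.jsonl")

def read_shard(path):
    # Because the cache is mounted as an ordinary filesystem, the training
    # loop keeps using plain file I/O; hot shards are served from
    # cluster-local cache instead of repeated, high-latency S3 GETs.
    with open(path, "rb") as f:
        return f.read()

for path in shard_paths(CACHE_PREFIX, num_shards=8):
    batch = read_shard(path)
    # ...tokenize and feed `batch` to the trainer...
```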
Videos
AI/ML Infra Meetup | Bringing Data to GPUs Anywhere + Get Low-Latency on Object Store with Alluxio

In this talk, Bin Fan, VP of Technology at Alluxio, explores how to enable efficient data access across distributed GPU infrastructure, achieving low-latency performance for feature stores and RAG workloads.
November 13, 2025
AI/ML Infra Meetup | SkyPilot: Open-source System to Scale AI across Clusters, Hyperscalers, and Neoclouds

Hear from Zongheng Yang, Co-Creator of SkyPilot, as he explores how to simplify AI deployment across clouds and on-premises infrastructure with automated resource provisioning and cost optimization.
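As a taste of what that automation looks like, below is a minimal sketch using SkyPilot's Python API; the cluster name, accelerator type, and training commands are assumptions for illustration, not from the talk.

```python
import sky

# Define a task: `setup` runs once when the node is provisioned,
# `run` is the actual job. Commands and accelerator are illustrative.
task = sky.Task(
    setup="pip install -r requirements.txt",
    run="python train.py --epochs 1",
)
task.set_resources(sky.Resources(accelerators="A100:1"))

# SkyPilot picks a cloud/region with available capacity, provisions the
# VM(s), syncs the working directory, and runs the task.
sky.launch(task, cluster_name="demo-cluster")
```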
November 13, 2025
Bridging Speed and Scale: AWS S3 Data Caching for Low-Latency, Semantically-Rich AI Workloads

Amazon S3 and other cloud object stores have become the de facto storage layer for organizations large and small, and it's no wonder why: cloud object stores deliver unprecedented flexibility, with unlimited capacity that scales on demand and out-of-the-box data durability at unbeatable prices.
Yet as workloads shift toward real-time AI, inference, feature stores, and agentic memory systems, S3's latency and limited semantics become bottlenecks. In this webinar, you'll learn how to augment, rather than replace, S3 with a tiered architecture that restores sub-millisecond performance, richer semantics, and high throughput, all while preserving S3's advantages of low-cost capacity, durability, and operational simplicity.
We’ll walk through:
- The key challenges posed by latency-sensitive, semantically rich workloads (e.g. feature stores, RAG pipelines, write-ahead logs)
- Why “just upgrading storage” isn’t sufficient — the bottlenecks in metadata, object access latency, and write semantics
- How Alluxio transparently layers on top of S3 to provide ultra-low-latency caching, append semantics, and zero data migration, with both FSx-style POSIX access and S3 API access (see the sketch after this list)
- Real-world results: achieving sub-ms TTFB, 90%+ GPU utilization in ML training, 80X faster feature store query response times, and dramatic cost savings from reduced S3 operations
- Trade-offs, deployment patterns, and best practices for integrating this tiered approach in your AI/analytics stack
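As a sketch of the S3 API access path mentioned above: an S3-compatible cache endpoint can be used with standard clients simply by overriding the endpoint URL. The host, port, path, credentials, and bucket/key below are assumptions for illustration, not a documented configuration.

```python
import boto3

# Point a standard S3 client at an S3-compatible cache endpoint instead of
# AWS directly; the endpoint URL here is an illustrative assumption.
s3 = boto3.client(
    "s3",
    endpoint_url="http://cache.internal:39999/api/v1/s3",
    aws_access_key_id="anything",       # credentials per your deployment
    aws_secret_access_key="anything",
)

# Reads hit the cache tier first; misses are fetched from the backing
# S3 bucket and cached for subsequent, low-latency access.
resp = s3.get_object(Bucket="training-data", Key="features/part-00000.parquet")
payload = resp["Body"].read()
```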
October 28, 2025