
Dear Alluxio Community,
Welcome to our March newsletter! We've packed this edition with valuable insights, upcoming events, and on-demand content to keep you at the forefront of AI and Analytics.
📖 Good Reads
Dive into these inspiring articles and updates from the Alluxio team and beyond:

Cache Me If You Can: Building a Lightning-Fast Analytics Cache at Terabyte Scale
In this Medium post, Suresh Kumar Veerapathiran and Anudeep Kumar from Uptycs detail their journey optimizing data pipelines to accelerate their AI-powered analytics. Uptycs replaced Redis with Alluxio after proving Alluxio’s distributed cache outperformed Redis and other solutions.

Alluxio Partners with vLLM Production Stack to Accelerate LLM Inference
The vLLM Production Stack is an open-source implementation of a cluster-wide full-stack vLLM serving system developed by LMCache Lab at the University of Chicago. This joint solution provides high-throughput, low-latency data access by extending multi-tiered KV Cache management beyond GPU and CPU to enable faster, more scalable, and cost-efficient AI deployments.

AiThority Interview with Haoyuan Li, Founder and CEO, Alluxio
In this quick interview from AiThority, Haoyuan Li, Founder and CEO of Alluxio, highlights the benefits of AI and big data workloads, innovative ways to boost AI and ML initiatives, and the importance of data infrastructure.
🗓️ Upcoming Events
Mark your calendar for our exciting upcoming events!

TUESDAY, APRIL 1 | 11:00 AM PT
Inside DeepSeek 3FS: A Deep Dive into AI-Optimized Distributed Storage
Join our live webinar for a deep dive into DeepSeek 3FS, the newly released file system purpose-built for AI. We'll break down how 3FS optimizes data access and speeds up AI workloads, as well as the design tradeoffs made to maximize throughput for AI. Presented by Alluxio Staff Engineer Stephen Pu.
🎥 AI/ML Infra Meetup at Uber Seattle | Slides & Recordings Now Available
Couldn't attend the live AI/ML Infra Meetup in Seattle last week, or want to revisit your favorite talk? We've got you covered! Check out these on-demand videos and slides:
- AI/ML Infra Meetup: Building Production Platform for Large-Scale Recommendation Applications -- By Xu Ning
- AI/ML Infra Meetup: How Uber Optimizes LLM Training and Finetune -- By Chongxiao Cao
- AI/ML Infra Meetup: Optimizing ML Data Access with Alluxio: Preprocessing, Pretraining, & Inference at Scale -- By Bin Fan
- AI/ML Infra Meetup: Deployment, Discovery and Serving of LLMs at Uber Scale -- By Sean Po & Tse-Chi Wang
Thanks for reading. We're excited to see you at one of our events or in the next newsletter in April!