Blog

Make Multi-Cloud GPU AI a Reality
If you’re building large-scale AI, you’re already multi-cloud, whether by choice (to avoid lock-in) or by necessity (to access scarce GPU capacity). Teams frequently chase capacity bursts (“we need 1,000 GPUs for eight weeks”) across whichever regions or providers can deliver. What slows you down isn’t GPUs; it’s data. Simply accessing the data needed to train, deploy, and serve AI models at the required speed and scale, wherever AI workloads and GPUs are deployed, is in fact not simple at all. In this article, learn how Alluxio brings Simplicity, Speed, and Scale to multi-cloud GPU deployments.

Alluxio's Strong Q2: Sub-Millisecond AI Latency, 50%+ Customer Growth, and Industry-Leading MLPerf Results
Alluxio's strong Q2 featured the launch of Enterprise AI 3.7 with sub-millisecond latency (45× faster than S3 Standard), 50%+ customer growth including Salesforce and Geely, and MLPerf Storage v2.0 results showing 99%+ GPU utilization, positioning the company as a leader in maximizing AI infrastructure ROI.

Modernize your analytics workloads with NetApp and Alluxio
This blog was originally published on NetApp's website: https://www.netapp.com/blog/modernize-analytics-workloads-netapp-alluxio/
Imagine, as an IT leader, having the flexibility to choose any of the services available in the public cloud and on premises. And imagine being able to scale the storage for your data lakes with control over data locality and protection for your organization. With these goals in mind, NetApp and Alluxio are joining forces to help our customers adapt to new requirements for modernizing data architecture with low-touch operations for analytics, machine learning, and artificial intelligence workflows.
Hybrid Multi-Cloud
Data Platform Modernization
Large Scale Analytics Acceleration
Designing the Presto Local Cache at Uber: A Collaboration Between Uber and Alluxio (Part 2)
In the previous blog, we introduced Uber’s Presto use cases and how we collaborated to implement an Alluxio local cache to overcome various challenges in accelerating Presto queries. This second part discusses the improvements to the local cache metadata.
Large Scale Analytics Acceleration
Speed Up Uber's Presto with Alluxio: A Collaboration Between Uber and Alluxio (Part 1)
This article shares how Uber and Alluxio collaborated to design and implement a Presto local cache to reduce HDFS latency.
Hybrid Multi-Cloud
Large Scale Analytics Acceleration
Deep Dive into the Implementation of Alluxio Metadata Storage
This article introduces the design and implementation of metadata storage in the Alluxio Master, both on heap and off heap (based on RocksDB).
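To make the on-heap versus off-heap distinction concrete, here is a minimal, hypothetical Java sketch (not Alluxio's actual metastore API; the paths, keys, and values are invented for illustration). It contrasts keeping file metadata in an in-memory Java map, which is fast but bounded by the JVM heap and garbage collection, with serializing it into RocksDB on local disk, which lets metadata grow well beyond memory.

```java
// Hypothetical sketch, not Alluxio's metastore API: contrasting on-heap and
// off-heap (RocksDB-backed) storage for file metadata. Keys and values are invented.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class MetadataStoreSketch {
    public static void main(String[] args) throws RocksDBException {
        byte[] key = "/data/file1".getBytes();
        byte[] value = "len=128,owner=alice".getBytes();

        // On heap: every inode is a live Java object; lookups are cheap, but the
        // total metadata size is limited by the JVM heap and adds GC pressure.
        Map<String, String> heapStore = new ConcurrentHashMap<>();
        heapStore.put(new String(key), new String(value));

        // Off heap: inodes are serialized into RocksDB on local disk, so the
        // metadata set can grow well beyond what fits in memory.
        RocksDB.loadLibrary();
        try (Options options = new Options().setCreateIfMissing(true);
             RocksDB db = RocksDB.open(options, "/tmp/metastore-sketch")) {
            db.put(key, value);
            System.out.println(new String(db.get(key)));
        }
    }
}
```

The rough trade-off is lookup latency versus metadata capacity, which is why making the metadata store configurable between the two is attractive.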

What's New in Alluxio 2.8: Enhanced S3 API Functionality, Enterprise-grade Security, and Data Migration with Better Usability and Low Cost
From Zookeeper to Raft: How Alluxio Stores File System State with High Availability and Fault Tolerance
Raft is an algorithm for state machine replication, a common way to ensure high availability (HA) and fault tolerance. This blog shares how Alluxio moved to a Zookeeper-less, built-in Raft-based journal system as its HA implementation; a minimal sketch of the replicated-journal idea appears below.
Large Scale Analytics Acceleration
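As a rough illustration of state machine replication (a hypothetical sketch, not Alluxio's journal code; leader election and quorum replication are elided, and all class and field names are invented), the core idea is that every metadata mutation becomes a journal entry that is committed and then applied in log order to a deterministic state machine, so any node replaying the same committed log converges to the same file system state.

```java
// Hypothetical sketch, not Alluxio's journal code: a committed log of journal
// entries is applied in order to a deterministic state machine, so every
// replica that replays the same log reaches the same file system state.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class JournalEntry {
    final String path;      // invented example field
    final String metadata;  // invented example field
    JournalEntry(String path, String metadata) {
        this.path = path;
        this.metadata = metadata;
    }
}

class FileSystemStateMachine {
    private final Map<String, String> inodes = new HashMap<>();

    // Apply an entry only after it is committed (acknowledged by a quorum in Raft).
    void apply(JournalEntry entry) {
        inodes.put(entry.path, entry.metadata);
    }

    Map<String, String> snapshot() {
        return new HashMap<>(inodes);
    }
}

public class RaftJournalSketch {
    public static void main(String[] args) {
        List<JournalEntry> committedLog = new ArrayList<>();
        committedLog.add(new JournalEntry("/data/file1", "len=128"));
        committedLog.add(new JournalEntry("/data/file2", "len=256"));

        // Two replicas replaying the same committed log converge to the same state.
        FileSystemStateMachine leader = new FileSystemStateMachine();
        FileSystemStateMachine follower = new FileSystemStateMachine();
        for (JournalEntry e : committedLog) {
            leader.apply(e);
            follower.apply(e);
        }
        System.out.println(leader.snapshot().equals(follower.snapshot())); // true
    }
}
```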
Recommendations to Level Up Your Machine Learning Platform
With machine learning (ML) and artificial intelligence (AI) applications becoming more business-critical, organizations are racing to advance their AI/ML capabilities. To realize the full potential of AI/ML, having the right underlying machine learning platform is a prerequisite.
Data Migration
GPU Acceleration
Model Training Acceleration
Orchestrating Data for Machine Learning Pipelines
This article discusses a new approach to orchestrating data for end-to-end machine learning pipelines. I will outline common challenges and pitfalls, then propose a new technique, data orchestration, to optimize the data pipeline for machine learning.
GPU Acceleration
Model Training Acceleration
From Cache to Cash: Introducing NFT for Data Orchestration
Today, we are excited to announce the launch of Non-Fungible Token (NFT) support as a new feature in our leading data orchestration platform.
Improving Presto Architectural Decisions with Alluxio Shadow Cache at Meta (Facebook)
Through a collaboration between Meta (Facebook), Princeton University, and Alluxio, we have developed “Shadow Cache”, a lightweight Alluxio component that tracks the working set size and the infinite cache hit ratio. Shadow Cache dynamically tracks the working set size over a recent time window and is implemented as a series of bloom filters. It is deployed in Presto at Meta (Facebook) and is used to understand system bottlenecks and inform routing design decisions; a minimal sketch of the bloom-filter approach appears below.
Large Scale Analytics Acceleration
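As a minimal sketch of the bloom-filter approach (an illustrative example built on Guava's BloomFilter, not the actual Shadow Cache implementation; the bucket count, expected key count, and false-positive rate are arbitrary assumptions), each filter covers one time bucket, rotating buckets approximates a sliding window over recent accesses, and summing the filters' approximate element counts estimates the working set size.

```java
// Illustrative sketch built on Guava's BloomFilter, not the actual Shadow Cache code.
// A ring of bloom filters approximates the working set over a sliding time window:
// each filter covers one time bucket, and old buckets are dropped as the window slides.
import com.google.common.hash.BloomFilter;
import com.google.common.hash.Funnels;
import java.nio.charset.StandardCharsets;
import java.util.ArrayDeque;
import java.util.Deque;

public class ShadowCacheSketch {
    private static final int BUCKETS = 4;                // window length in buckets (assumption)
    private static final int EXPECTED_KEYS = 1_000_000;  // expected keys per bucket (assumption)
    private final Deque<BloomFilter<CharSequence>> window = new ArrayDeque<>();

    public ShadowCacheSketch() {
        rotate(); // start with one open bucket
    }

    // Record an access in the current time bucket.
    public void access(String key) {
        window.peekLast().put(key);
    }

    // Slide the window: open a fresh bucket and evict the oldest one.
    public void rotate() {
        window.addLast(BloomFilter.create(
            Funnels.stringFunnel(StandardCharsets.UTF_8), EXPECTED_KEYS, 0.01));
        if (window.size() > BUCKETS) {
            window.removeFirst();
        }
    }

    // Rough working set size over the window (an upper bound, since a key that
    // recurs in several buckets is counted more than once).
    public long workingSetEstimate() {
        long total = 0;
        for (BloomFilter<CharSequence> bucket : window) {
            total += bucket.approximateElementCount();
        }
        return total;
    }

    // Would this request hit an "infinite" cache, i.e. has the key (probably)
    // been seen anywhere in the window?
    public boolean wouldHit(String key) {
        for (BloomFilter<CharSequence> bucket : window) {
            if (bucket.mightContain(key)) {
                return true;
            }
        }
        return false;
    }
}
```

Comparing would-be hits against this window with the real cache's hits shows how far a size-limited cache is from the "infinite cache" ceiling, which is the signal used to reason about bottlenecks and routing.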

Accelerate Auto Data Tagging with Alluxio and Spark in a Hybrid Cloud: A Practice at WeRide
This blog shares the practice of using Alluxio and Spark to accelerate the auto data tagging system at WeRide, an autonomous driving technology company.
Hybrid Multi-Cloud
GPU Acceleration
Large Scale Analytics Acceleration
Model Training Acceleration