Tech Talk: From limited Hadoop compute capacity to increased data scientist efficiency
October 17, 2019
Using “zero-copy” hybrid bursting with Spark to solve capacity problems
Want to leverage your existing investment in Hadoop, with your data on-premises, and still benefit from the elasticity of the cloud?
Like many Hadoop users, you are probably running very large, busy clusters and hitting the limits of their compute capacity. Bursting HDFS data to the cloud brings its own challenges: network latency hurts performance, copying data with DistCp means maintaining duplicate datasets, and you may have to change your applications to accommodate S3.
“Zero-copy” hybrid bursting with Alluxio keeps your data on-prem and syncs it on demand to compute in the cloud, so you can expand capacity, particularly for ephemeral Spark jobs.
In this tech talk, we’ll discuss:
- Approaches to burst data to the cloud
- How Alluxio can enable “zero-copy” bursting of Spark workloads to cloud data services like EMR and Dataproc (a minimal sketch follows this list)
- How DBS Bank uses Alluxio to work around limited on-prem compute capacity by zero-copy bursting Spark workloads to AWS EMR
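To make the pattern concrete, here is a minimal PySpark sketch of what a zero-copy burst can look like, assuming an Alluxio cluster colocated with the cloud Spark compute (for example on EMR) and the on-prem HDFS namespace mounted into Alluxio. All hostnames and paths are hypothetical placeholders, not part of the talk itself:

```python
# A minimal sketch of the "zero-copy" pattern: the Spark job runs on a
# cloud cluster (e.g. EMR) while the data stays in on-prem HDFS, which
# is mounted into an Alluxio namespace colocated with the cloud compute.
# Assumes the Alluxio client jar is on Spark's classpath; hostnames and
# paths below are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("zero-copy-burst-sketch")
    .getOrCreate()
)

# Instead of reaching back to hdfs://onprem-namenode:8020/warehouse/events,
# the job reads through Alluxio. Alluxio fetches and caches only the blocks
# the job actually touches, so no bulk DistCp copy has to be maintained.
events = spark.read.parquet("alluxio://alluxio-master:19998/warehouse/events")

events.groupBy("event_type").count().show()
```

In this sketch the only application change is the URI scheme; the Spark logic itself is untouched.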
Videos
Bridging Speed and Scale: AWS S3 Data Caching for Low-Latency, Semantically-Rich AI Workloads

Amazon S3 and other cloud object stores have become the de facto storage system for organizations large and small. And it’s no wonder why. Cloud object stores deliver unprecedented flexibility with unlimited capacity that scales on demand and ensures data durability out-of-the-box at unbeatable prices.
Yet as workloads shift toward real-time AI, inference, feature stores, and agentic memory systems, S3’s latency and limited semantics become a bottleneck. In this webinar, you’ll learn how to augment, rather than replace, S3 with a tiered architecture that restores sub-millisecond performance, richer semantics, and high throughput, all while preserving S3’s advantages of low-cost capacity, durability, and operational simplicity.
We’ll walk through:
- The key challenges posed by latency-sensitive, semantically rich workloads (e.g. feature stores, RAG pipelines, write-ahead logs)
- Why “just upgrading storage” isn’t sufficient — the bottlenecks in metadata, object access latency, and write semantics
- How Alluxio transparently layers on top of S3 to provide ultra-low latency caching, append semantics, and zero data migration with both FSx-style POSIX access and S3 API access (see the sketch after this list)
- Real-world results: achieving sub-ms TTFB, 90%+ GPU utilization in ML training, 80x faster feature store query response times, and dramatic cost savings from reduced S3 operations
- Trade-offs, deployment patterns, and best practices for integrating this tiered approach in your AI/analytics stack
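As a rough illustration of the S3-API access path, here is a hedged boto3 sketch, assuming an Alluxio deployment with its S3-compatible endpoint enabled. The endpoint URL, credentials, bucket, and key are hypothetical placeholders for your own deployment:

```python
# A sketch of keeping the S3 API while reading through a cache tier: the
# client code is unchanged except for the endpoint, which points at an
# Alluxio-hosted S3-compatible endpoint instead of AWS S3 directly.
# Endpoint URL, credentials, bucket, and key are hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://alluxio-proxy:39999/api/v1/s3",  # assumed Alluxio S3 endpoint
    aws_access_key_id="placeholder",      # whether credentials are checked
    aws_secret_access_key="placeholder",  # depends on your Alluxio auth setup
)

# Hot objects are served from the cache tier; cold reads fall back to the
# backing S3 bucket mounted behind Alluxio.
obj = s3.get_object(Bucket="feature-store", Key="features/user/123.parquet")
payload = obj["Body"].read()
print(len(payload))
```

The same tiering applies to the POSIX path: applications that mount the namespace via FUSE read identical data through file semantics rather than the S3 API.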
October 28, 2025
AI/ML Infra Meetup | AI at Scale: Architecting Scalable, Deployable, and Resilient Infrastructure

Pratik Mishra delivered insights on architecting scalable, deployable, and resilient AI infrastructure at scale. His discussion on fault tolerance, checkpoint optimization, and the democratization of AI compute through AMD's open ecosystem resonated strongly with the challenges teams face in production ML deployments.
September 30, 2025
AI/ML Infra Meetup | Alluxio + S3: A Tiered Architecture for Latency-Critical, Semantically-Rich Workloads

In this talk, Bin Fan, VP of Technology at Alluxio, presents how to build tiered architectures that bring sub-millisecond latency to S3-based workloads. His comparison, showing Alluxio delivering a 45x performance improvement over S3 Standard and 5x over S3 Express One Zone, demonstrates the critical role the performance and caching layer plays in modern AI infrastructure.
September 30, 2025