New York Meetup Recap: September 2018
September 18, 2018
On September 13th, we held our first New York City Alluxio Meetup! Work-Bench was very generous in hosting the Alluxio meetup in Manhattan. This was the first US Alluxio meetup outside of the Bay Area, so it was extremely exciting to meet Alluxio enthusiasts on the east coast! The meetup focused on Alluxio users with different applications, from Hive to Presto.

As an introduction, Haoyuan Li (creator and founder of Alluxio) and Bin Fan (founding engineer of Alluxio) gave an overview of Alluxio and the new features and enhancements in the v1.8.0 release.

Next, Tao Huang and Bing Bai from JD.com, one of the largest e-commerce companies in China, shared how they have been running Presto and Alluxio in production for almost a year. Their big data platform runs Alluxio on over 100 machines and can achieve speedups of over 10x. They also discussed their open source contributions to the Alluxio community and their plans for future work.

Thai Bui from Bazaarvoice, a digital marketing company in Texas, presented how they effectively cache S3 data with Alluxio for Hive queries. By using Alluxio to serve their S3 data, they saw 5x-10x speedups in their Hive queries.

The talk slides are online:
- Alluxio: An overview and what's new in 1.8 (Haoyuan Li, Bin Fan)
- Using Alluxio as a fault-tolerant pluggable optimization component of JD.com's compute frameworks (Tao Huang and Bing Bai)
- Hybrid collaborative tiered-storage with Alluxio (Thai Bui)
We had a great time learning more about Alluxio use cases, and interacting with Alluxio users on the east coast! We look forward to the next chance to hold another NYC Alluxio meetup!