Blog

Alluxio's strong Q2 featured the launch of Enterprise AI 3.7 with sub-millisecond latency (45× faster than S3 Standard), 50%+ customer growth including Salesforce and Geely, and MLPerf Storage v2.0 results showing 99%+ GPU utilization, positioning the company as a leader in maximizing AI infrastructure ROI.

In this blog, Greg Lindstrom, Vice President of ML Trading at Blackout Power Trading, an electricity trading firm operating in North American power markets, shares how the firm leverages Alluxio to power its offline feature store. This approach delivers multi-join query performance in the double-digit millisecond range while retaining the cost and durability benefits of Amazon S3 for persistent storage. As a result, the firm achieved a 22 to 37x reduction in large-join query latency for training and a 37 to 83x reduction for inference.
This article was initially posted on Solutions Review.
Artificial Intelligence (AI) has consistently been in the limelight as the precursor to the next technological era. Its limitless applications, ranging from simple chatbots to intricate neural networks capable of deep learning, promise a future where machines understand and replicate complex human processes. Yet at the heart of this technological marvel is something foundational yet often overlooked: data.
This article was initially posted on ITOpsTimes.
Unless you’ve been living off the grid, the hype around Generative AI has been impossible to ignore. A critical component fueling this AI revolution is the underlying computing power: GPUs. Lightning-fast GPUs enable speedy model training, but a hidden bottleneck can severely limit their potential – I/O. If data can’t reach the GPU fast enough to keep up with its computations, those precious GPU cycles end up wasted, waiting around for something to do. This is why we need to bring more awareness to the challenges of I/O bottlenecks.