Recap: Presto Summit SF 2019
July 1, 2019
What’s Presto Summit? It’s the leading Presto conference co-organized by our partner Starburst Data and the Presto Software Foundation.
Overview of the Summit
- Presto is among the fastest-growing open source analytical query frameworks, with production use cases across industries such as retail, telecom, tech, and more
- This was a full-house event at Twitter HQ, with more than 150 attendees
- Excellent keynote on the future of Presto delivered by Martin Traverso, Dain Sundstrom, and David Phillips – the co-creators of Presto and co-founders of the Presto Software Foundation (slides)
What We Learned
- Presto delivers high performance and rich functionality designed for interactive SQL queries
- Cloud deployments of Presto, both all-cloud and hybrid, are on the rise. There is a need to simplify hybrid deployments, which currently involve copying data into the cloud (and in some cases into HDFS) and managing a Hadoop cluster.
- Big data workloads increasingly call for simpler compute orchestration and are adopting Kubernetes. Starburst announced a new Kubernetes operator to simplify deploying and scaling Presto clusters.
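To make the Kubernetes point concrete: even without the operator, running a Presto coordinator on Kubernetes is a standard Deployment. The sketch below is illustrative only; the image tag, port, and resource figures are assumptions, and the Starburst operator manages the equivalent of this (plus workers, configuration, and scaling) through a single custom resource rather than hand-written manifests:

```yaml
# Hypothetical sketch: a bare-bones Presto coordinator on Kubernetes.
# Image name, port, and resources are illustrative assumptions,
# not Starburst's actual operator manifests.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: presto-coordinator
spec:
  replicas: 1
  selector:
    matchLabels:
      app: presto-coordinator
  template:
    metadata:
      labels:
        app: presto-coordinator
    spec:
      containers:
        - name: presto
          image: prestosql/presto:latest   # assumed image tag
          ports:
            - containerPort: 8080          # Presto's default HTTP port
          resources:
            requests:
              cpu: "2"
              memory: 8Gi
```

The appeal of the operator is precisely that scaling workers becomes a one-line change to a custom resource instead of juggling Deployments like this one by hand.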
Reasons to try the Presto, Alluxio, and Any Storage Stack
- High query performance without the operational overhead of data copying or ETL
- Query data anywhere: hybrid, public cloud, or on-premises
- Consistent, low latency
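As a concrete sketch of "query data anywhere": with Presto's Hive connector, a table can point at data served through Alluxio simply by using an `alluxio://` location, so the data stays in the underlying store while Alluxio caches it close to the Presto workers. The catalog, schema, host, and path names below are hypothetical:

```sql
-- Hypothetical names throughout (catalog "hive", schema "default",
-- Alluxio master at alluxio-master:19998, path /datasets/events).
CREATE TABLE hive.default.events (
    user_id BIGINT,
    event_type VARCHAR,
    ts TIMESTAMP
)
WITH (
    format = 'ORC',
    external_location = 'alluxio://alluxio-master:19998/datasets/events'
);

-- Queries then run against the Alluxio-backed table like any other:
SELECT event_type, count(*) AS n
FROM hive.default.events
GROUP BY event_type;
```

No ETL or data copy is required: repeated queries hit Alluxio's cache rather than reaching back to the remote store each time.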
Learn more: Starburst Presto and Alluxio announce strategic OEM partnership | Presto with Alluxio | Download Alluxio
Additional resources:
- Community office hour (virtual): Building Fast SQL Analytics with Presto, Alluxio, and S3
- Got questions? Chat with Alluxio experts on Slack