SAN MATEO, CA – August 19, 2024 – Alluxio, the AI and data acceleration platform, today announced its Founder and CEO Haoyuan (HY) Li has been named to the BigDATAwire People to Watch for 2024 published by Datanami, the leading publication covering AI, big data, and analytics news. This feature highlights key community members who are poised to drive the industry forward in the coming year.
For the past nine years, Datanami has scoured the landscape annually in search of exemplary individuals who have made a big difference in the wider big data community. And 2024 is no different. As in previous years, the 2024 People to Watch puts the spotlight on 12 individuals who have done extraordinary work in the areas of big data, advanced analytics, and AI. This year, Haoyuan joins other prominent recipients such as Vinoth Chandar, Founder & CEO of Onehouse; Renen Hallak, CEO & Founder of VAST Data; and Victor Peng, President of AMD.
According to Datanami, “From virtual file systems like Alluxio to embedded databases like DuckDB, the big data world never stays still. Behind these groundbreaking technologies are the people who develop them and make them work in the real world. This is the essence of the People to Watch program, and what inspires the Datanami editorial team to highlight the great work of exceptional individuals every year.”
“I’m deeply honored to be recognized by Datanami as one of the People to Watch for 2024. This recognition is a testament to the hard work and dedication of the entire Alluxio team and the vibrant open-source community that drives innovation in AI infrastructure,” said Li. “As we look ahead, I’m excited to continue pushing the boundaries of what’s possible in AI and data, helping organizations unlock the full potential of their data.”
To read Datanami’s exclusive interview with Haoyuan Li, please visit: https://www.datanami.com/people-to-watch-2024-haoyuan-li/
About Alluxio
Alluxio is a leading provider of accelerated data access platforms for AI workloads. Alluxio’s distributed caching layer accelerates AI and data-intensive workloads by enabling high-speed data access across diverse storage systems. By creating a global namespace, Alluxio unifies data from multiple sources—on-premises and in the cloud—into a single, logical view, eliminating the need for data duplication or complex data movement.
Designed for scalability and performance, Alluxio brings data closer to compute frameworks like TensorFlow, PyTorch, and Spark, significantly reducing I/O bottlenecks and latency. Its intelligent caching, data locality optimization, and seamless integration with modern data platforms make it a powerful solution for teams building and scaling AI pipelines across hybrid and multi-cloud environments. Backed by leading investors, Alluxio powers technology, internet, financial services, and telecom companies, including 9 out of the top 10 internet companies globally. To learn more, visit www.alluxio.io.
Media Contact:
Beth Winkowski
Winkowski Public Relations, LLC for Alluxio
978-649-7189
beth@alluxio.com
AMSTERDAM, NETHERLANDS, JUNE 10, 2025 — In today’s confusing and messy enterprise software market, innovative technology solutions that realize real customer results are hard to come by. As an industry analyst firm that focuses on enterprise digital transformation and the disruptive vendors that support it, Intellyx interacts with numerous innovators in the enterprise IT marketplace.
Alluxio, supplier of open-source virtual distributed file systems, announced Alluxio Enterprise AI 3.6. The release delivers capabilities for model distribution, optimized checkpoint writing during model training, and enhanced multi-tenancy support. It can, we’re told, accelerate AI model deployment cycles, reduce training time, and ensure data access across cloud environments. The new release uses Alluxio Distributed Cache to accelerate model distribution workloads; by placing the cache in each region, model files need only be copied from the Model Repository to the Alluxio Distributed Cache once per region rather than once per server.
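The copy-count arithmetic behind that last claim can be sketched in a few lines. This is a minimal illustration of the per-region caching idea, not Alluxio's actual API; the region names and server counts below are invented for the example.

```python
# Hypothetical sketch: compare how many cross-region copies of a model file
# are pulled from a central repository under two distribution strategies.

def copies_per_server(regions: dict[str, int]) -> int:
    """Without a regional cache, every server fetches the model directly."""
    return sum(regions.values())

def copies_per_region(regions: dict[str, int]) -> int:
    """With a cache in each region, the model is fetched once per region;
    servers then read from their local regional cache."""
    return len(regions)

# Invented example: 3 regions with 40, 25, and 15 servers respectively.
regions = {"us-east": 40, "eu-west": 25, "ap-south": 15}
print(copies_per_server(regions))  # 80 cross-region copies
print(copies_per_region(regions))  # 3 cross-region copies
```

With per-server fetching, traffic from the repository grows with the fleet size; with per-region caching, it grows only with the number of regions, which is the reduction the release describes.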