SAN MATEO, CA – June 15, 2022 – Alluxio, the developer of the open source data orchestration platform for data-driven workloads such as large-scale analytics and AI/ML, today announced it will present a session at the Linux Foundation’s Open Source Summit about strategies for building super-contributors in an open source community. The event is being held June 20 – 24 in Austin, TX, and virtually.
Session Title: “Building Super-Contributors in Alluxio Open Source Community”
Session Time: Friday, June 24 at 2:50 pm – 3:20 pm CT
Session Presenters: Bin Fan, Founding Engineer & VP of Open Source, Alluxio; Jasmine Wang, Community Manager & DevRel, Alluxio
Session Details: The lack of community engagement is one of the most significant barriers to the survival of open source projects. The Alluxio open source community has experimented with different approaches to nurturing its community. In this talk, Bin Fan will share the story and findings from engaging the Alluxio community over the past six years. For example, rather than simply awarding points and handing out badges, well-designed gamification has proven highly effective for understanding and influencing contributor behavior. There is a delicate balance of triggers, ability, and motivation in finding the “happy path” for contributors: the right amount of challenge and competition to keep them interested while preventing boredom. He will also discuss other pillars of community building (e.g., localization) and how to bring these pillars together to build a lasting, vibrant community. With innovative techniques, open source projects can create deeper engagement and turn ordinary community members into super-contributors.
To register for Open Source Summit, please visit the event’s registration page to purchase a pass.
TWEET THIS: @Alluxio to present at #OpenSourceSummit about building super-contributors in open source community https://bit.ly/3NR4KVk #OpenSource #Analytics #Cloud
About Alluxio
Alluxio is a leading provider of accelerated data access platforms for AI workloads. Alluxio’s distributed caching layer accelerates AI and data-intensive workloads by enabling high-speed data access across diverse storage systems. By creating a global namespace, Alluxio unifies data from multiple sources—on-premises and in the cloud—into a single, logical view, eliminating the need for data duplication or complex data movement.
Designed for scalability and performance, Alluxio brings data closer to compute frameworks like TensorFlow, PyTorch, and Spark, significantly reducing I/O bottlenecks and latency. Its intelligent caching, data locality optimization, and seamless integration with modern data platforms make it a powerful solution for teams building and scaling AI pipelines across hybrid and multi-cloud environments. Backed by leading investors, Alluxio powers technology, internet, financial services, and telecom companies, including 9 out of the top 10 internet companies globally. To learn more, visit www.alluxio.io.
Media Contact:
Beth Winkowski
Winkowski Public Relations, LLC for Alluxio
978-649-7189
beth@alluxio.com
AMSTERDAM, NETHERLANDS, JUNE 10, 2025 — In today’s confusing and messy enterprise software market, innovative technology solutions that realize real customer results are hard to come by. As an industry analyst firm that focuses on enterprise digital transformation and the disruptive vendors that support it, Intellyx interacts with numerous innovators in the enterprise IT marketplace.
Alluxio, supplier of open source virtual distributed file systems, announced Alluxio Enterprise AI 3.6. This delivers capabilities for model distribution, model training checkpoint writing optimization, and enhanced multi-tenancy support. It can, we’re told, accelerate AI model deployment cycles, reduce training time, and ensure data access across cloud environments. The new release uses Alluxio Distributed Cache to accelerate model distribution workloads; by placing the cache in each region, model files need only be copied from the Model Repository to the Alluxio Distributed Cache once per region rather than once per server.
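The copy economics behind that last claim can be sketched with a toy calculation. All names and numbers below are hypothetical illustrations, not Alluxio’s actual deployment figures or API:

```python
# Hypothetical illustration: how many reads the Model Repository serves
# with and without a per-region distributed cache in front of it.

def repo_reads_without_cache(regions: int, servers_per_region: int) -> int:
    """Every server pulls the model file directly from the repository."""
    return regions * servers_per_region

def repo_reads_with_regional_cache(regions: int) -> int:
    """Each region's cache pulls the model once; servers read from the cache."""
    return regions

# Assumed example topology: 3 regions, 50 GPU servers per region.
regions = 3
servers_per_region = 50

print(repo_reads_without_cache(regions, servers_per_region))  # 150
print(repo_reads_with_regional_cache(regions))                # 3
```

In this toy setup the repository serves 3 reads instead of 150; the remaining fan-out to individual servers is absorbed by the regional caches.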