November 7 Event to Be Held in Mountain View, CA; Call for Papers Is Open
SAN MATEO, CA – September 18, 2019 – Alluxio, the developer of open source data orchestration technology used by seven of the world’s top ten Internet companies, today announced it will host the inaugural Data Orchestration Summit on Thursday, November 7, 2019 at the Computer History Museum in Mountain View, CA. The one-day event brings together data engineers, cloud engineers, data scientists, AI engineers, and industry thought leaders who are solving data problems at the intersection of cloud, AI, data, and orchestration.
“We’re very excited to announce and host the first Data Orchestration Summit,” said Haoyuan Li, founder and CTO of Alluxio. “This event will bring together practitioners and leaders in these areas to share their experiences and learnings in building their cloud native analytics and machine learning platforms using data orchestration technology.”
The rise of the cloud and advances in open source data technologies like Alluxio, Apache Spark, Kubernetes, Presto, and TensorFlow are fundamentally transforming how organizations build modern data analytics and AI platforms. Attendees will learn from companies such as Netflix, Development Bank of Singapore, and Tencent about their data architectures and see real-world use cases, live demos, and practitioner best practices. The event also brings together creators of open source technologies and leaders in cloud computing to discuss the latest solutions to today’s biggest data problems. Other speakers hail from leading companies in data and analytics, including Amazon Web Services, Starburst, the Presto Company, O’Reilly, and many more.
Call for Papers
Alluxio is inviting practitioners and thought leaders to submit speaking proposals at https://bit.ly/2kp1ng3.
To register for the Data Orchestration Summit, visit https://www.alluxio.io/data-orchestration-summit-2019/#
Tweet this: .@Alluxio announces inaugural #DataOrchestrationSummit #cloud #AI #Data https://bit.ly/2lYkGxe
About Alluxio
Alluxio, a leading provider of the high-performance data platform for analytics and AI, accelerates time-to-value of data and AI initiatives and maximizes infrastructure ROI. Uniquely positioned at the intersection of compute and storage systems, Alluxio has a universal view of workloads on the data platform across the stages of a data pipeline. This enables Alluxio to provide high-performance data access regardless of where the data resides, simplify data engineering, optimize GPU utilization, and reduce cloud and storage costs. With Alluxio, organizations can achieve orders of magnitude faster model training and serving without the need for specialized storage, and build AI infrastructure on existing data lakes. Backed by leading investors, Alluxio powers technology, internet, financial services, and telecom companies, including 9 out of the top 10 internet companies globally. To learn more, visit www.alluxio.io.
Media Contact:
Beth Winkowski
Winkowski Public Relations, LLC for Alluxio
978-649-7189
beth@alluxio.com