Cloud-native model training jobs require fast data access to achieve shorter training cycles. Accessing data can be challenging when your datasets are distributed across different regions and clouds. Additionally, because GPUs remain scarce and expensive, it has become common to set up training clusters far from where the data resides. This multi-region, multi-cloud scenario sacrifices data locality, resulting in operational overhead, higher latency, and increased cloud costs.
In the third session of the multi-cloud webinar series, ChanChan and Shawn dive deep into:
- The data locality challenges in the multi-region/cloud ML pipeline
- Using a cloud-native distributed caching system to overcome these challenges
- The architecture and integration of PyTorch/Ray + Alluxio + S3 using POSIX or RESTful APIs (a POSIX access sketch follows this list)
- Live demo with ResNet and BERT benchmark results showing performance gains, plus a cost savings analysis
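
As a rough illustration of the POSIX access path mentioned above, the sketch below assumes an Alluxio FUSE mount at a hypothetical path (`/mnt/alluxio-fuse/imagenet/train`) backed by an S3 bucket; the mount point and dataset are placeholders, not the exact configuration used in the demo. The idea is that PyTorch reads training data with ordinary file I/O, while hot data is served from the Alluxio cache instead of remote S3.

```python
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Hypothetical FUSE mount point: Alluxio exposes the S3 bucket as a local POSIX path,
# so existing PyTorch data-loading code works unchanged against it.
ALLUXIO_MOUNT = "/mnt/alluxio-fuse/imagenet/train"

transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

# Standard torchvision ImageFolder reads images through the mounted path.
dataset = datasets.ImageFolder(ALLUXIO_MOUNT, transform=transform)
loader = DataLoader(dataset, batch_size=64, num_workers=8, shuffle=True)

for images, labels in loader:
    # Training step goes here; repeated reads hit the Alluxio cache rather than S3.
    pass
```

Because the data appears as a local filesystem, no S3-specific client code is needed in the training loop; caching and data movement are handled by Alluxio underneath.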
Shawn Sun is a Tech Lead of Cloud Native at Alluxio. He is an open-source contributor to Alluxio and a committer of Fluid. He currently works on the containerization of Alluxio, including its integration with Docker, Kubernetes, and CSI. Before joining Alluxio, he received his Master’s degree in Computer Science from Duke University.
ChanChan Mao is a Developer Advocate at Alluxio. She holds a Bachelor’s degree in Computer Science from UC Santa Barbara and has turned her technical background toward growing and supporting the open-source community. She focuses on fostering relationships with open-source users and raising awareness of Alluxio’s brand and technology by maintaining Alluxio’s Slack community, producing short-form video content, and organizing events with adjacent ecosystem communities.