Increasingly powerful compute accelerators and large training datasets have made the storage layer a potential bottleneck in deep learning training and inference. An offline inference job typically consumes and produces tens of terabytes of data while running for more than 10 hours. At large scale, this creates high IO pressure, increases the job failure rate, and poses many challenges to system stability. We adopt Alluxio as an intermediate storage tier between the compute tier and cloud storage to optimize the IO throughput of deep learning inference jobs. On our production workload, performance improved by 18%, and we rarely see job failures caused by storage issues.
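The sketch below illustrates the data path described above: an offline inference job reads inputs from and writes outputs to an Alluxio FUSE mount instead of hitting cloud storage directly. It is a minimal sketch, assuming a mount point of /mnt/alluxio, a simple directory layout, and a placeholder run_model function; none of these come from the talk itself.

```python
# Minimal sketch: an offline inference loop whose IO goes through an
# Alluxio FUSE mount rather than cloud storage directly.
# The mount point, directory layout, and run_model() are assumptions
# for illustration, not the presenters' actual code.
from pathlib import Path

ALLUXIO_MOUNT = Path("/mnt/alluxio")            # assumed FUSE mount of the cloud bucket
INPUT_DIR = ALLUXIO_MOUNT / "inference/input"   # assumed input prefix
OUTPUT_DIR = ALLUXIO_MOUNT / "inference/output" # assumed output prefix


def run_model(payload: bytes) -> bytes:
    """Placeholder for the real model; just reports the payload size."""
    return f"processed {len(payload)} bytes\n".encode()


def main() -> None:
    OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
    for input_path in sorted(INPUT_DIR.glob("*.bin")):
        # Hot reads are served from Alluxio's cache, easing pressure
        # on the underlying cloud storage.
        payload = input_path.read_bytes()
        result = run_model(payload)
        # Writes land in the intermediate tier; depending on the configured
        # Alluxio write type, they are persisted to cloud storage as well.
        (OUTPUT_DIR / f"{input_path.stem}.out").write_bytes(result)


if __name__ == "__main__":
    main()
```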
ALLUXIO DAY III 2021
April 27, 2021
Videos
In the rapidly evolving landscape of AI and machine learning, Platform and Data Infrastructure Teams face critical challenges in building and managing large-scale AI platforms. Performance bottlenecks, platform scalability, and GPU scarcity pose significant challenges for supporting large-scale model training and serving.
In this talk, we introduce how Alluxio helps Platform and Data Infrastructure teams deliver faster, more scalable platforms to ML Engineering teams developing and training AI models. Alluxio’s highly-distributed cache accelerates AI workloads by eliminating data loading bottlenecks and maximizing GPU utilization. Customers report up to 4x faster training performance with high-speed access to petabytes of data spread across billions of files regardless of persistent storage type or proximity to GPU clusters. Alluxio’s architecture lowers data infrastructure costs, increases GPU utilization, and enables workload portability for navigating GPU scarcity challenges.
In this talk, Zhe Zhang (NVIDIA, ex-Anyscale) introduced Ray and its applications in the LLM and multi-modal AI era. He shared his perspective on ML infrastructure, noting that it presents more unstructured challenges, and recommended using Ray and Alluxio as solutions for increasingly data-intensive multi-modal AI workloads.
As large-scale machine learning becomes increasingly GPU-centric, modern high-performance hardware like NVMe storage and RDMA networks (InfiniBand or specialized NICs) are becoming more widespread. To fully leverage these resources, it’s crucial to build a balanced architecture that avoids GPU underutilization. In this talk, we will explore various strategies to address this challenge by effectively utilizing these advanced hardware components. Specifically, we will present experimental results from building a Kubernetes-native distributed caching layer, utilizing NVMe storage and high-speed RDMA networks to optimize data access for PyTorch training.
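As a rough illustration of the data-loading side only, the sketch below shows a PyTorch Dataset and DataLoader reading pre-serialized samples from a cache-backed mount such as a volume served by a Kubernetes-native caching layer over NVMe. The mount path, file layout, and loader settings are assumptions; the caching layer itself is outside the snippet.

```python
# Minimal sketch, assuming the distributed cache is exposed to the pod as a
# local mount at /mnt/cache/train containing pre-serialized .pt tensors.
from pathlib import Path

import torch
from torch.utils.data import DataLoader, Dataset

CACHE_MOUNT = Path("/mnt/cache/train")  # assumed mount of the cache volume


class CachedFileDataset(Dataset):
    """Loads pre-serialized tensors from the cache-backed mount."""

    def __init__(self, root: Path):
        self.files = sorted(root.glob("*.pt"))

    def __len__(self) -> int:
        return len(self.files)

    def __getitem__(self, idx: int) -> torch.Tensor:
        # Each read is served by the cache tier instead of remote storage.
        return torch.load(self.files[idx])


def main() -> None:
    loader = DataLoader(
        CachedFileDataset(CACHE_MOUNT),
        batch_size=64,
        num_workers=8,    # parallel readers help saturate NVMe / RDMA bandwidth
        pin_memory=True,  # faster host-to-GPU copies, keeping the GPU busy
    )
    for batch in loader:
        pass  # replace with the actual training step


if __name__ == "__main__":
    main()
```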