On-Demand Videos
In this talk, Ojus Save walks you through a demo of how to build AI applications on Zoom. The demo shows an AI agent that receives transcript data from RTMS and then decides whether to create action items based on the incoming transcripts.
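As a minimal sketch of the decision step such an agent might implement, here is a keyword-based heuristic over plain-text transcript chunks. The `TranscriptChunk` shape, trigger phrases, and handler are illustrative assumptions, not Zoom's actual RTMS payload format; a production agent would likely call an LLM here instead.

```python
from dataclasses import dataclass

# Hypothetical transcript chunk; the real RTMS payload shape may differ.
@dataclass
class TranscriptChunk:
    speaker: str
    text: str

# Illustrative trigger phrases; a real agent might use an LLM call instead.
ACTION_TRIGGERS = ("we should", "let's", "can you", "action item", "follow up")

def extract_action_items(chunks: list[TranscriptChunk]) -> list[str]:
    """Decide which transcript lines look like action items."""
    items = []
    for chunk in chunks:
        lowered = chunk.text.lower()
        if any(trigger in lowered for trigger in ACTION_TRIGGERS):
            items.append(f"{chunk.speaker}: {chunk.text}")
    return items

if __name__ == "__main__":
    demo = [
        TranscriptChunk("Ana", "Can you send the report by Friday?"),
        TranscriptChunk("Ben", "Sure, sounds good."),
    ]
    print(extract_action_items(demo))  # -> ["Ana: Can you send the report by Friday?"]
```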
In this talk, Sandeep Joshi, Senior Manager at NVIDIA, shares how to accelerate data access between GPU and storage for AI. Sandeep dives into two options: CPU-initiated GPUDirect Storage and GPU-initiated SCADA.
Bin Fan, VP of Technology at Alluxio, introduces how Alluxio, a software layer that sits transparently between applications and S3 (or other object stores), provides a sub-millisecond time-to-first-byte (TTFB) solution, with up to 45x lower latency.
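To make the TTFB comparison concrete, here is a minimal measurement sketch, assuming the same object is reachable both directly from S3 (via boto3, with AWS credentials configured) and through an Alluxio FUSE mount. The bucket name, key, and mount path are hypothetical.

```python
import time

import boto3  # assumes boto3 is installed and AWS credentials are configured

def ttfb_s3(bucket: str, key: str) -> float:
    """Time from issuing the request to receiving the first byte from S3."""
    s3 = boto3.client("s3")
    start = time.perf_counter()
    body = s3.get_object(Bucket=bucket, Key=key)["Body"]
    body.read(1)  # read exactly one byte
    return time.perf_counter() - start

def ttfb_local(path: str) -> float:
    """Time to first byte through a local path, e.g. an Alluxio FUSE mount."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        f.read(1)
    return time.perf_counter() - start

# Hypothetical names: bucket "my-bucket", key "data/part-0.parquet",
# and an Alluxio FUSE mount at /mnt/alluxio.
print("S3 TTFB:     ", ttfb_s3("my-bucket", "data/part-0.parquet"))
print("Alluxio TTFB:", ttfb_local("/mnt/alluxio/data/part-0.parquet"))
```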
In this session, Adit Madan, Director of Product Management at Alluxio, presents an overview of using distributed caching to accelerate model training and serving. He explores the requirements of data access patterns in the ML pipeline and offers practical best practices for using distributed caching in the cloud. This session features insights from real-world examples, such as AliPay, Zhihu, and more.
Machine learning models power Uber’s everyday business. However, developing and deploying a model is not a one-time event but a continuous process that requires careful planning, execution, and monitoring. In this session, Sally (Mihyong) Lee, Senior Staff Engineer & TLM @ Uber, highlights Uber’s practice on the machine learning lifecycle to ensure high model quality.
In this talk, Wanchao Liang, Software Engineer on Meta's PyTorch team, explores the technology advancements of PyTorch Distributed and dives into the details of how multi-dimensional parallelism for training large language models is made possible by composing different PyTorch-native distributed training APIs.
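As a rough sketch of that composition, assuming 8 GPUs and a recent PyTorch release (2.2 or later, where device-mesh slicing and FSDP's `device_mesh` argument are available): a toy MLP stands in for a transformer block, sharded tensor-parallel on one mesh dimension and wrapped with FSDP on the other. The mesh shape and model are placeholders.

```python
import os

import torch
import torch.nn as nn
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.tensor.parallel import (ColwiseParallel,
                                               RowwiseParallel,
                                               parallelize_module)

# Launch with: torchrun --nproc_per_node=8 this_script.py
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

# Toy two-layer MLP standing in for a transformer feed-forward block.
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024)).cuda()

# Arrange 8 GPUs as a 2x4 mesh: data-parallel ("dp") x tensor-parallel ("tp").
mesh_2d = init_device_mesh("cuda", (2, 4), mesh_dim_names=("dp", "tp"))

# Shard the two linear layers column-/row-wise across the tensor-parallel dim,
model = parallelize_module(model, mesh_2d["tp"],
                           {"0": ColwiseParallel(), "2": RowwiseParallel()})

# then shard parameters with FSDP across the data-parallel dimension.
model = FSDP(model, device_mesh=mesh_2d["dp"])
```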
ChatGPT and other massive models represent an amazing step forward in AI, yet they do not, by themselves, solve real-world business problems. In this session, Jordan Plawner, Global Director of AI Product Management and Strategy at Intel, surveys how the AI ecosystem has worked non-stop over the last year to take these all-purpose, multi-task models and optimize them so they can be used by organizations to address domain-specific problems. He explains these new AI-for-the-real-world techniques and methods, such as fine-tuning, and how they can be applied to deliver results that are highly performant with state-of-the-art accuracy while also being economical to build and deploy everywhere to enhance products and services.
This hands-on session discusses best practices for using PyTorch and Alluxio during model training on AWS. Shawn and Lu provide a step-by-step demonstration of how to use Alluxio on EKS as a distributed cache to accelerate computer vision model training jobs that read datasets from S3. This architecture significantly improves GPU utilization from 30% to 90%+, achieves ~5x faster training, and lowers cloud storage costs.
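A minimal sketch of the data-loading side of such a setup, assuming the S3 dataset is exposed to training pods through an Alluxio FUSE mount, so PyTorch reads it like a local filesystem. The mount point and dataset layout below are hypothetical.

```python
import os

from PIL import Image
from torch.utils.data import DataLoader, Dataset
from torchvision import transforms

# Hypothetical path: an Alluxio FUSE mount inside the pod exposing
# s3://my-bucket/imagenet/train as a local directory.
DATA_ROOT = "/mnt/alluxio/imagenet/train"

class CachedImageDataset(Dataset):
    """Map-style dataset reading samples through the Alluxio cache,
    so hot data is served from the cluster instead of S3 on every read."""

    def __init__(self, root: str):
        self.paths = [os.path.join(root, n) for n in sorted(os.listdir(root))]
        self.tf = transforms.Compose(
            [transforms.Resize((224, 224)), transforms.ToTensor()]
        )

    def __len__(self) -> int:
        return len(self.paths)

    def __getitem__(self, idx: int):
        with Image.open(self.paths[idx]) as img:
            return self.tf(img.convert("RGB"))

loader = DataLoader(CachedImageDataset(DATA_ROOT), batch_size=64, num_workers=8)
```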
Model training requires extensive computational and GPU resources. When training models on AWS, loading data from S3 often becomes a major bottleneck, wasting valuable GPU cycles. Optimizing data loading can greatly reduce GPU idle time and increase GPU utilization.
In this webinar, Greg Palmer will discuss best practices for efficient data loading during model training on AWS. He will demonstrate how to use Alluxio on EKS as a distributed cache to accelerate PyTorch training jobs that read datasets from S3. This architecture significantly improves GPU utilization from 30% to 90%+, achieves ~5x faster training, and lowers cloud storage costs.
What you will learn:
- The challenges of feeding data-hungry GPUs in the cloud
- How to accelerate model training by optimizing data loading on AWS
- The reference architecture for running PyTorch jobs with Alluxio cache on EKS while reading data from S3, with benchmark results of training ResNet50 and BERT
- How to use TensorBoard to identify bottlenecks in GPU utilization (see the profiling sketch after this list)
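One common way to surface such bottlenecks is the PyTorch profiler's TensorBoard integration; a minimal sketch follows, with a placeholder model and step loop standing in for the real training job.

```python
import torch
from torch.profiler import (ProfilerActivity, profile, schedule,
                            tensorboard_trace_handler)

# Placeholder model and batch; swap in the real training step.
model = torch.nn.Linear(1024, 1024).cuda()
data = torch.randn(64, 1024, device="cuda")

with profile(
    activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
    schedule=schedule(wait=1, warmup=1, active=3),
    on_trace_ready=tensorboard_trace_handler("./tb_logs"),
) as prof:
    for step in range(8):
        loss = model(data).sum()
        loss.backward()
        prof.step()  # marks a step boundary for the profiler schedule

# Then run: tensorboard --logdir ./tb_logs
# and inspect GPU utilization and data-loading gaps in the profiler view.
```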
As enterprises race to roll out artificial intelligence, the infrastructure needed to support scalable ML model development and deployment is often overlooked. Efforts to access and utilize GPUs effectively often require extensive data engineering to manage data copies or specialized storage, driving cloud and infrastructure costs out of control.
To address these challenges, enterprises need a new data access layer to connect compute engines to data stores wherever they reside in distributed environments.
Join this webinar with Kevin Petrie, Eckerson Group VP of Research, and Sridhar Venkatesh, Alluxio SVP of Product, to explore tools, techniques, and best practices to remove data access bottlenecks and accelerate AI/ML model training. You will learn:
- Modern requirements for AI/ML model training and data engineering
- The challenges of GPU utilization in machine learning and the need for specialized hardware
- How a new data access layer connects compute to data stores across environments
- Best practices for optimizing ML training and guiding principles for success
Organizations are retooling their enterprise data infrastructure in the race for AI/ML. However, growing datasets, extensive data engineering overhead, high GPU costs, and expensive specialized storage can make it difficult to get fast results from model development.
The data access layer is the key to accelerating your path to AI/ML. In this webinar, Roland Theron, Senior Solutions Engineer at Alluxio, discusses how the data access layer can help you:
- Build AI architecture on your existing data lake without the need for specialized hardware.
- Streamline the time-consuming process of managing data copies in data engineering.
- Speed up training workloads with high GPU utilization.
- Achieve optimal concurrency to deliver models to inference clusters for demanding applications.
Join us with David Loshin, President of Knowledge Integrity, and Sridhar Venkatesh, SVP of Product at Alluxio, to learn more about the infrastructure hurdles associated with AI/ML model training and deployment and how to overcome them. Topics include:
- The challenges of AI and model training
- GPU utilization in machine learning and the need for specialized hardware
- Managing data access and maintaining a source of truth in data lakes
- Best practices for optimizing ML training
When training models on ultra-large datasets, one of the biggest challenges is low GPU utilization. These powerful processors are often underutilized due to inefficient I/O and data access. This mismatch between computation and storage leads to wasted GPU resources, low performance, and high cloud storage costs. The rise of generative AI and GPU scarcity is only making this problem worse.
In this webinar, Tarik and Beinan discuss strategies for transforming idle GPUs into optimal powerhouses. They focus on cost-effective management of ultra-large datasets for AI and analytics.
What you will learn:
- The challenges of I/O stalls leading to low GPU utilization for model training
- High-performance, high-throughput data access (I/O) strategies (one such strategy is sketched after this list)
- The benefits of using an on-demand data access layer over your storage
- How Uber addresses managing ultra-large datasets using high-density storage and caching
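One concrete, framework-level example of such an I/O strategy, complementary to a caching layer, is tuning the PyTorch DataLoader to overlap data loading with GPU compute. The dataset and parameter values below are illustrative starting points, not universal recommendations.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in dataset; in practice this would read the ultra-large dataset
# through a cache layer rather than from in-memory random tensors.
dataset = TensorDataset(torch.randn(10_000, 3, 224, 224))

loader = DataLoader(
    dataset,
    batch_size=256,
    num_workers=8,            # parallel workers hide per-sample read latency
    prefetch_factor=4,        # batches each worker stages ahead of the GPU
    pin_memory=True,          # page-locked buffers enable async host-to-GPU copies
    persistent_workers=True,  # keep workers alive across epochs
)
```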
As the AI landscape rapidly evolves, the advancements in generative AI technologies, such as ChatGPT, are driving a need for robust data infrastructures tailored for large language model (LLM) training and inference in the cloud. To effectively leverage the breakthroughs in LLMs, organizations must ensure low latency, high concurrency, and scalability in production environments.
In this Alluxio-hosted webinar, Shouwei presented the design and implementation of a distributed caching system that addresses the I/O challenges of LLM training and inference. He explored the unique requirements of data access patterns and offered practical best practices for optimizing the data pipeline through distributed caching in the cloud. The session featured insights from real-world examples, such as Microsoft, Tencent, and Zhihu, as well as from the open-source community. Watch this recording to get a deeper understanding of how to harness scalable, efficient, and robust data infrastructures for LLM training and inference.
Shawn Sun, Alluxio’s software engineer, shares how to get started with Alluxio on Kubernetes in April’s Product School Webinar.
To simplify DevOps for a stack of Alluxio with a query engine, Alluxio provides two ways to deploy on Kubernetes: a Helm chart and an operator. Both significantly simplify the deployment, configuration, and lifecycle management of Alluxio resources on Kubernetes.
Through this webinar, you will learn step-by-step how to deploy and run Alluxio on Kubernetes to accelerate analytics workloads.
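For orientation, the Helm route typically amounts to a couple of commands. The chart repository URL, chart name, and pod label below are assumptions to verify against the Alluxio documentation for your version.

```
# Assumed chart repo URL and chart name; check the Alluxio docs for your version.
helm repo add alluxio-charts https://alluxio-charts.storage.googleapis.com/openSource/2.9.0
helm install alluxio -f config.yaml alluxio-charts/alluxio

# Verify the master and worker pods came up (the label selector is an assumption).
kubectl get pods -l app=alluxio
```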