On-Demand Videos

Coupang is a leading e-commerce company in South Korea, with over 50,000 employees and $20+ billion in annual revenue. Coupang's AI platform team builds and manages a large-scale AI platform in AWS for machine learning engineers to train models that enhance and customize product search results and product recommendations for its 100+ million customers.
As the search and recommendation models evolve, optimizing the underlying infrastructure for AI/ML workloads is essential for the e-commerce business. Coupang's platform team actively sought to improve their model training pipeline to boost machine learning engineers' productivity, publish models to production faster, and reduce operational costs.
Coupang focused on addressing several key areas:
- Shortening data preparation and model training time
- Improving GPU utilization in training clusters in different regions
- Reducing S3 API and egress costs incurred from copying large training datasets across regions
- Simplifying the operational complexity of storage system management
In this tech talk, Hyun Jung Baek, Staff Backend Engineer at Coupang, will share best practices for leveraging distributed caching to power search and recommendation model training infrastructure.
Hyun will discuss:
- How Coupang builds a world-class large-scale AI platform for machine learning engineers to deliver better search and recommendation models
- How adding distributed caching to their multi-region AI infrastructure improves GPU utilization, accelerates end-to-end training time, and significantly reduces cross-region data transfer costs
- How to simplify platform operations and easily deploy the same architecture to new GPU clusters
About the Speaker
Hyun Jung Baek is a Staff Backend Engineer at Coupang.
DeepSeek's recent announcement of the Fire-Flyer File System (3FS) has sparked excitement across the AI infra community, promising a breakthrough in how machine learning models access and process data.
In this webinar, an expert in distributed systems and AI infrastructure will take you inside DeepSeek 3FS, the purpose-built file system for handling large files and high-bandwidth workloads. We'll break down how 3FS optimizes data access and speeds up AI workloads, as well as the design tradeoffs made to maximize throughput.
You'll learn how 3FS works under the hood, including:
✅ The system architecture
✅ Core software components
✅ Read/write flows
✅ Data distribution/placement algorithms
✅ Cluster/node management and disaster recovery
Whether you’re an AI researcher, ML engineer, or infrastructure architect, this deep dive will give you the technical insights you need to determine if 3FS is the right solution for you.
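As a taste of the placement topic, here is a minimal sketch of replica placement with consistent hashing. This is a generic illustration of the problem space, not 3FS's actual placement algorithm, and all node and chunk names are made up:

```python
import hashlib
from bisect import bisect_right

# Generic consistent-hashing placement sketch (illustrative only; not 3FS's
# actual algorithm). Each storage node gets several points on a hash ring,
# and a chunk is assigned to the next `replicas` distinct nodes clockwise.
class HashRing:
    def __init__(self, nodes, vnodes=64):
        self.ring = sorted(
            (self._hash(f"{n}#{i}"), n) for n in nodes for i in range(vnodes)
        )
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(key: str) -> int:
        return int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")

    def place(self, chunk_id: str, replicas: int = 3):
        """Return the distinct nodes that should hold this chunk's replicas."""
        idx = bisect_right(self.keys, self._hash(chunk_id)) % len(self.ring)
        chosen, seen = [], set()
        while len(chosen) < replicas:
            node = self.ring[idx % len(self.ring)][1]
            if node not in seen:
                seen.add(node)
                chosen.append(node)
            idx += 1
        return chosen

ring = HashRing([f"storage-node-{i}" for i in range(10)])
print(ring.place("file42/chunk-0007"))  # three distinct storage nodes
```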
With the advent of the Big Data era, it is usually computationally expensive to calculate the resource usage of a SQL query. Can we estimate the resource usage of SQL queries more efficiently, without any computation in a SQL engine kernel? In this session, Chunxu and Beinan will introduce how Twitter's data platform leverages a machine learning-based approach in Presto and BigQuery to estimate query utilization with 90%+ accuracy.
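To make the idea concrete, here is a minimal sketch of learned query cost estimation: extract cheap features from the SQL text and fit a regressor on historical resource measurements. The feature set and training data below are hypothetical stand-ins, not Twitter's actual model:

```python
# Minimal sketch of ML-based query cost estimation: featurize a SQL string
# and fit a regressor that predicts CPU seconds without running the query.
# Features and training data here are hypothetical stand-ins.
import re
from sklearn.ensemble import GradientBoostingRegressor

def featurize(sql: str) -> list:
    s = sql.lower()
    return [
        len(s),                         # query text length
        s.count("join"),                # number of joins
        s.count("where"),               # filter clauses
        s.count("group by"),            # aggregations
        len(re.findall(r"select", s)),  # rough subquery proxy
    ]

# Historical queries with measured CPU seconds (toy data).
history = [
    ("SELECT * FROM t WHERE d = '2021-01-01'", 12.0),
    ("SELECT a, COUNT(*) FROM t GROUP BY a", 95.0),
    ("SELECT * FROM t1 JOIN t2 ON t1.k = t2.k WHERE t1.d > '2021-01-01'", 430.0),
]
X = [featurize(q) for q, _ in history]
y = [cpu for _, cpu in history]

model = GradientBoostingRegressor().fit(X, y)
print(model.predict([featurize("SELECT b, SUM(v) FROM t GROUP BY b")]))
```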
Streaming systems form the backbone of the modern data pipeline as the stream processing capabilities provide insights on events as they arrive. But what if we want to go further than this and execute analytical queries on this real-time data? That’s where Apache Pinot comes in.
OLAP databases used for analytical workloads have traditionally executed queries on yesterday's data, with query latencies in the tens of seconds. The emergence of real-time analytics has changed all this: the expectation now is that we should be able to run thousands of queries per second on fresh data, with query latencies typically seen on OLTP databases.
Apache Pinot is a real-time distributed OLAP datastore used to deliver scalable real-time analytics with low latency. It can ingest data from streaming sources like Kafka as well as from batch data sources (S3, HDFS, Azure Data Lake, Google Cloud Storage), and it provides a layer of indexing techniques that can be used to maximize query performance.
Come to this talk to learn how you can add real-time analytics capability to your data pipeline.
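For a flavor of what querying Pinot from application code looks like, here is a minimal sketch using the pinotdb DB-API client; the broker address, table, and column names are hypothetical:

```python
# Minimal sketch of querying Pinot from Python via the pinotdb DB-API client.
# Broker address, table name, and columns are hypothetical.
from pinotdb import connect

conn = connect(host="pinot-broker", port=8099, path="/query/sql", scheme="http")
cur = conn.cursor()

# Aggregate fresh events from the last five minutes.
cur.execute(
    """
    SELECT country, COUNT(*) AS views
    FROM pageviews
    WHERE ts > ago('PT5M')
    GROUP BY country
    ORDER BY views DESC
    LIMIT 10
    """
)
for row in cur.fetchall():
    print(row)
```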
As more and more companies turn to AI/ML/DL to unlock insights, AI has become a mythical word that puts unnecessary barriers in front of new adopters. It is often regarded as a luxury reserved for big tech companies, but this should not be the case.
In this talk, Jingwen will first dissect the ML life cycle into five stages: data collection, data cleansing, model training, model validation, and model inference/deployment. For each stage, Jingwen will go over its concept, functionality, characteristics, and use cases to demystify ML operations. Finally, Jingwen will showcase how Alluxio, a virtual data lake, can help simplify each stage.
Alluxio foresaw the need for agility when accessing data across silos separated from compute engines like Spark, Presto, TensorFlow, and PyTorch. Embracing the separation of storage from compute, the Alluxio data orchestration platform simplifies adoption of the data lake and data mesh paradigms for analytics and AI/ML. In this talk, Bin Fan will share observations that help identify ways to use the platform to meet the needs of your data environment and workloads.
More and more enterprise architectures are shifting to hybrid-cloud and multi-cloud environments. While this shift brings greater flexibility and agility, it also means separating compute from storage, which raises new challenges for managing and orchestrating data across frameworks, clouds, and storage systems. This session will give the audience a deep look at how Alluxio's data orchestration approach decouples storage and compute in the enterprise data platform, and at the innovative architecture data orchestration proposes for compute-storage separation. Drawing on typical application scenarios from the finance, telecom, and Internet industries, it will show how Alluxio delivers real acceleration for big data computing and how data orchestration technology can be applied to AI model training.
*This is a bilingual presentation.
As data stewards and security teams provide broader access to their organization’s data lake environments, having a centralized way to manage fine-grained access policies becomes increasingly important. Alluxio can use Apache Ranger’s centralized access policies in two ways: 1) directly controlling access to virtual paths in the Alluxio virtual file system or 2) enforcing existing access policies for the HDFS under stores. This presentation discusses how the Alluxio virtual filesystem can be integrated with Apache Ranger.
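As a conceptual sketch of the path-based policy model this integration relies on, consider the following; all names are hypothetical, and this is not the actual Ranger or Alluxio plugin API:

```python
# Conceptual sketch of centralized, path-prefix access policies like those
# Ranger applies to Alluxio virtual paths. Names are hypothetical; this is
# not the actual Ranger or Alluxio plugin API.
from dataclasses import dataclass

@dataclass
class Policy:
    path_prefix: str  # virtual path the policy covers
    group: str        # group the policy grants access to
    perms: set        # e.g. {"read", "write"}

POLICIES = [
    Policy("/data/finance", "finance-analysts", {"read"}),
    Policy("/data/finance/raw", "etl-jobs", {"read", "write"}),
]

def is_allowed(user_groups: set, path: str, perm: str) -> bool:
    """Longest-prefix match wins, mirroring hierarchical path policies."""
    matches = [p for p in POLICIES if path.startswith(p.path_prefix)]
    if not matches:
        return False
    best = max(matches, key=lambda p: len(p.path_prefix))
    return best.group in user_groups and perm in best.perms

print(is_allowed({"finance-analysts"}, "/data/finance/q3.parquet", "read"))  # True
print(is_allowed({"finance-analysts"}, "/data/finance/raw/x.csv", "write"))  # False
```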
Shopee is the leading e-commerce platform in Southeast Asia. In this presentation, Tianbao Ding and Haoning Sun from Shopee will share their Data Infra team's recent project on acceleration with Presto and storage servitization. They will share the details of how Shopee leverages Alluxio to accelerate Presto queries and provide a standardized method of accessing data through Alluxio-Fuse and Alluxio-S3.
Shawn Sun from Alluxio will present the journey of using Alluxio as the storage system for Kubernetes through the Container Storage Interface (CSI) plugin and the Alluxio CSI driver. This talk will cover the challenges of the traditional setup for AI/ML training jobs and how the Alluxio CSI driver addresses them. It will also cover a recent change to the driver that made it sturdier and more robust.
This talk will discuss the process and technical details behind a responsible vulnerability disclosure of an issue detected in Alluxio recently. I will share some of the lessons I’ve learned as a security researcher dealing with multiple open-source vendors and my thoughts about the actions organizations and projects should take to ensure successful vulnerability management and disclosure programs. Learn more about creating more secure software.
This presentation will cover how Alluxio and NetApp StorageGRID help enterprises accelerate cloud adoption and optimize their resource spend on a modern hybrid big data architecture. The conversation will include use cases and architecture information from a variety of enterprises, along with some of the high-level technical details of how these business solutions are constructed.
Chen Liang from Uber and Beinan Wang from Alluxio will present the practical problems and interesting findings from the launch of Alluxio Local Cache. Their talk covers how Uber's Presto team implements cache invalidation and dashboards for Alluxio's Local Cache. Chen Liang will also share his experience using a customized cache filter to resolve performance degradation caused by a large working set.
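To illustrate the cache-filter idea, an admission filter can decline to cache partitions outside a hot window so that a very large working set of cold data cannot churn the cache. The sketch below is hypothetical, not Uber's actual implementation:

```python
# Hypothetical sketch of a cache admission filter for a Presto local cache:
# only recent partitions are admitted, so a large working set of cold
# historical data cannot evict the hot set. Not Uber's actual implementation.
from datetime import date, timedelta
from typing import Optional

HOT_WINDOW_DAYS = 7  # assumed tuning knob, not a real Presto/Alluxio setting

def should_cache(partition_date: date, today: Optional[date] = None) -> bool:
    """Admit only partitions inside the hot window; everything else
    bypasses the local cache and reads straight from remote storage."""
    today = today or date.today()
    return today - partition_date <= timedelta(days=HOT_WINDOW_DAYS)

print(should_cache(date.today()))                       # True: hot partition
print(should_cache(date.today() - timedelta(days=90)))  # False: bypass cache
```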
Within Alluxio, the master processes keep track of global metadata for the file system. This includes file system metadata, block cache metadata, and worker metadata. When a client interacts with the file system, it must first query or update the metadata on the master processes. Given their central role in the system, master processes can be backed by a highly available, fault-tolerant replicated journal. This talk will introduce and compare the two available implementations of this journal in Alluxio, the first using Zookeeper and the more recent version using Raft.
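The core idea behind both implementations is the same: a metadata update becomes durable only once a quorum of journal replicas has acknowledged it. Here is a minimal sketch of that idea (hypothetical code, not Alluxio's implementation):

```python
# Minimal sketch of quorum-committed journal writes, the idea underlying
# both the Zookeeper- and Raft-based journals. Not Alluxio's actual code.
class JournalReplica:
    def __init__(self):
        self.log = []

    def append(self, entry) -> bool:
        self.log.append(entry)  # a real replica would fsync before acking
        return True             # ack

class ReplicatedJournal:
    def __init__(self, replicas):
        self.replicas = replicas
        self.quorum = len(replicas) // 2 + 1  # majority

    def commit(self, entry) -> bool:
        """An entry is committed (and only then applied to the master's
        metadata state machine) after a majority of replicas ack it."""
        acks = sum(1 for r in self.replicas if r.append(entry))
        return acks >= self.quorum

journal = ReplicatedJournal([JournalReplica() for _ in range(3)])
print(journal.commit({"op": "createFile", "path": "/data/model.ckpt"}))  # True
```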
In this talk, Lei Li and Zifan Ni share their experience of applying Alluxio in bilibili's AI platform to increase training efficiency. The talk also covers the technical architecture and the specific issues addressed.