On-Demand Videos

Coupang is a leading e-commerce company in South Korea, with over 50,000 employees and $20+ billion in annual revenue. Coupang's AI platform team builds and manages a large-scale AI platform in AWS for machine learning engineers to train models that enhance and customize product search results and product recommendations for its 100+ million customers.
As the search and recommendation models evolve, optimizing the underlying infrastructure for AI/ML workloads is essential for the e-commerce business. Coupang's platform team actively sought to improve their model training pipeline to boost machine learning engineers' productivity, publish models to production faster, and reduce operational costs.
Coupang focused on addressing several key areas:
- Shortening data preparation and model training time
- Improving GPU utilization in training clusters in different regions
- Reducing S3 API and egress costs incurred from copying large training datasets across regions
- Simplifying the operational complexity of storage system management
In this tech talk, Hyun Jung Baek, Staff Backend Engineer at Coupang, will share best practices for leveraging distributed caching to power search and recommendation model training infrastructure.
Hyun will discuss:
- How Coupang builds a world-class large-scale AI platform for machine learning engineers to deliver better search and recommendation models
- How adding distributed caching to their multi-region AI infrastructure improves GPU utilization, accelerates end-to-end training time, and significantly reduces cross-region data transfer costs
- How to simplify platform operations and easily deploy the same architecture to new GPU clusters
About the Speaker
Hyun Jung Baek is a Staff Backend Engineer at Coupang.
DeepSeek’s recent announcement of the Fire-Flyer File System (3FS) has sparked excitement across the AI infra community, promising a breakthrough in how machine learning models access and process data.
In this webinar, an expert in distributed systems and AI infrastructure will take you inside DeepSeek 3FS, the purpose-built file system for handling large files and high-bandwidth workloads. We’ll break down how 3FS optimizes data access and speeds up AI workloads, as well as the design tradeoffs made to maximize throughput.
In this webinar, you’ll learn how 3FS works under the hood, including:
✅ The system architecture
✅ Core software components
✅ Read/write flows
✅ Data distribution/placement algorithms
✅ Cluster/node management and disaster recovery
Whether you’re an AI researcher, ML engineer, or infrastructure architect, this deep dive will give you the technical insights you need to determine if 3FS is the right solution for you.
Shawn Sun, a software engineer at Alluxio, shares how to get started with Alluxio on Kubernetes in April’s Product School webinar.
To simplify the DevOps of running Alluxio together with a query engine, Alluxio provides two ways to deploy on Kubernetes: a Helm chart and an operator. Both significantly simplify the deployment, configuration, and lifecycle management of Alluxio resources on Kubernetes.
Through this webinar, you will learn step-by-step how to deploy and run Alluxio on Kubernetes to accelerate analytics workloads.
In March’s Product School session, Beinan, an Alluxio tech lead, Presto committer, and Trino contributor, shares expert tips for tuning Trino performance. In addition, he demonstrates how to integrate Trino with Alluxio as a caching layer using connectors for Hive, Iceberg, Hudi, or Delta Lake.
In February’s Product School session, Greg Palmer, Lead Solutions Engineer at Alluxio, will present a live demo featuring Transparent URI, a key feature in Alluxio Enterprise Edition that integrates Alluxio into your existing data stack without any changes to the location metadata in the Hive Metastore. Join us to learn the configurations and other advanced settings for using Transparent URI to simplify the DevOps of an Alluxio deployment, allowing users to access their existing storage systems without changing URIs at the application level.
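For illustration, here is a minimal application-side PySpark sketch of the idea, assuming Transparent URI has already been enabled on the compute cluster (that cluster-side configuration is not shown); the table path and column name are hypothetical.

```python
# A minimal sketch, assuming Transparent URI is enabled on the compute cluster.
# The table location and column name below are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("transparent-uri-demo").getOrCreate()

# The location keeps its original s3a:// scheme, exactly as recorded in the
# Hive Metastore; with Transparent URI the read is served via Alluxio without
# rewriting the path to an alluxio:// URI.
df = spark.read.parquet("s3a://analytics-bucket/warehouse/events/")
df.groupBy("event_type").count().show()
```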
In November’s Product School, Adit Madan, Director of Product Management at Alluxio, will highlight the new features, enhanced manageability, and improved security and performance in the Alluxio 2.9 release.
Today, data engineering in modern enterprises has become increasingly complex and resource-consuming, particularly because (1) the wealth of organizational data is often distributed across data centers, cloud regions, or even cloud providers, and (2) the complexity of the big data stack has grown quickly over the past few years, with an explosion of big-data analytics and machine-learning engines (MapReduce, Hive, Spark, Presto, TensorFlow, and PyTorch, to name a few).
To address these challenges, it is critical to provide a single, logical namespace that federates different storage services, on-prem or cloud-native, to abstract away the data heterogeneity while providing data locality to improve computation performance. Bin Fan will share his observations and lessons learned in designing, architecting, and implementing such a system, the Alluxio open-source project, since 2015.
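As a rough illustration of the single-namespace idea, the sketch below assumes Alluxio exposes its namespace through a hypothetical FUSE mount at /mnt/alluxio, under which an S3 bucket and an HDFS path have been mounted side by side; all paths are illustrative, not a documented layout.

```python
# A minimal sketch of a unified namespace exposed through a hypothetical
# Alluxio FUSE mount; the mount point and paths are assumptions.
from pathlib import Path

ALLUXIO_ROOT = Path("/mnt/alluxio")  # hypothetical FUSE mount point

def read_head(relative_path: str, num_bytes: int = 1024) -> bytes:
    """Read the first bytes of a file through the unified namespace."""
    with open(ALLUXIO_ROOT / relative_path, "rb") as f:
        return f.read(num_bytes)

# The application sees one logical tree; whether the bytes ultimately come
# from S3 or HDFS (and whether they are cached locally) is hidden by Alluxio.
print(read_head("s3_data/train/part-00000.parquet")[:16])
print(read_head("hdfs_data/warehouse/events/part-00000.orc")[:16])
```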
Alluxio originated from the UC Berkeley AMPLab (it was originally called Tachyon) and was initially proposed as a daemon service to enable Spark to share RDDs across jobs for performance and fault tolerance. Today, it has become a general-purpose, high-performance, and highly available distributed file system that provides a generic data service to abstract away complexity in data and I/O. Many companies and organizations, such as Uber, Meta, Tencent, TikTok, and Shopee, use Alluxio in production as a building block in their data platforms to create a data abstraction and access layer. We will talk about the journey of this open source project, especially its design challenges in tiered metadata storage (based on RocksDB), the embedded replicated state machine for HA (based on Raft), and the evolution of its RPC framework (based on gRPC).
Distributed systems are made up of many components such as authentication, a persistence layer, stateless services, load balancers, and stateful coordination services. These coordination services are central to the operation of the system, performing tasks such as maintaining system configuration state, ensuring service availability, name resolution, and storing other system metadata. Given their central role in the system, it is essential that these services remain available, fault-tolerant, and consistent. By providing a highly available, file system-like abstraction as well as powerful recipes such as leader election, Apache ZooKeeper is often used to implement these services. This talk will go over a generic example of a stateful coordination service moving from ZooKeeper to Raft.
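To give a concrete flavor of the ZooKeeper recipes mentioned above, here is a minimal leader-election sketch using the kazoo Python client; the connection string, election path, and node identifier are hypothetical.

```python
# A minimal leader-election sketch using kazoo; hosts, path, and identifier
# are hypothetical values for illustration.
from kazoo.client import KazooClient

zk = KazooClient(hosts="zk1:2181,zk2:2181,zk3:2181")
zk.start()

election = zk.Election("/myservice/leader", identifier="node-42")

def lead():
    # Runs only while this process holds leadership; returning releases it.
    print("node-42 is now the leader; serving coordination duties")

# Blocks until leadership is acquired, then calls lead(); if the leader
# crashes or disconnects, ZooKeeper's ephemeral nodes let another contender
# take over automatically.
election.run(lead)
```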
Data platform teams are increasingly challenged with accessing multiple data stores that are separated from compute engines, such as Spark, Presto, TensorFlow or PyTorch. Whether your data is distributed across multiple datacenters and/or clouds, a successful heterogeneous data platform requires efficient data access.
In October’s Product School, Alluxio’s Lead Solutions Engineer Greg Palmer will present and demo how Alluxio enables you to embrace a cloud migration strategy or multi-cloud architecture for large-scale analytics and AI workloads. Alluxio also helps scale out your platform adoption for analytics and AI across multiple tenants and application teams.
In this presentation, Yingjun Wu, Founder of RisingWave Labs, will talk about the birth, growth, and prosperity of the modern data stack. He will show you why the modern data stack is more than a buzzword, and how it may evolve in the next couple of years.
Apache Hudi’s open-source community is very active and healthy. In this talk, an overview of major community-driven features will be presented, followed by a deep dive into two of those features, the metastore and the table management service, driven by ByteDance, to illustrate Hudi’s platform vision.
Shopee is the leading e-commerce platform in Southeast Asia. In this presentation, Luo Li from Shopee will share their Data Infra team’s recent project on acceleration with Presto and storage servitization. He will share the details of how Shopee leverages Alluxio to accelerate Presto queries and provide standardized methods of accessing data through Alluxio-Fuse and Alluxio-S3.
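As a rough sketch of what S3-style access through Alluxio can look like, the example below uses boto3 against an assumed Alluxio S3-compatible endpoint; the endpoint address, bucket, keys, and credentials are all hypothetical, and this is not Shopee’s actual setup.

```python
# A rough sketch of S3-style access through an assumed Alluxio S3-compatible
# endpoint; endpoint, bucket, keys, and credentials are hypothetical.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://alluxio-proxy.internal:39999/api/v1/s3",  # assumed endpoint
    aws_access_key_id="alluxio-user",      # treated as the filesystem user
    aws_secret_access_key="unused",
)

# List objects under a mounted path and fetch one of them, exactly as an
# application would against plain S3; only the endpoint differs.
resp = s3.list_objects_v2(Bucket="warehouse", Prefix="orders/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])

body = s3.get_object(Bucket="warehouse", Key="orders/part-00000.parquet")["Body"].read()
```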
OceanBase Database is an open-source, distributed Hybrid Transactional/Real-time Operational Analytics (HTAP) database management system that has set new world records in both the TPC-C and TPC-H benchmark tests. OceanBase Database started in 2010, and it has been serving all of the critical systems in Alipay. Besides Alipay, OceanBase has also been serving customers from a variety of sectors, including internet, financial services, telecommunications, and retail.
In this tech talk, we will cover the architecture of OceanBase and some typical use cases, including technical topics such as Paxos replication, two-phase commit (2PC), LSM-Tree-like storage, the SQL optimizer and executor, and city-level disaster recovery.
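For readers unfamiliar with 2PC, here is a generic, textbook-style coordinator sketch of the protocol; it is illustrative only and is not OceanBase’s implementation.

```python
# A generic two-phase commit (2PC) coordinator sketch; illustrative only,
# not OceanBase's implementation.
from typing import List, Protocol

class Participant(Protocol):
    def prepare(self, txn_id: str) -> bool: ...  # vote yes/no and persist intent
    def commit(self, txn_id: str) -> None: ...
    def abort(self, txn_id: str) -> None: ...

def two_phase_commit(txn_id: str, participants: List[Participant]) -> bool:
    # Phase 1: ask every participant to prepare (write redo/undo logs, take locks).
    votes = [p.prepare(txn_id) for p in participants]

    # Phase 2: commit only if every vote was yes; otherwise roll everyone back.
    if all(votes):
        for p in participants:
            p.commit(txn_id)
        return True
    for p in participants:
        p.abort(txn_id)
    return False
```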
This talk introduces a three-level, game-style progression for using Alluxio to speed up your cloud training, with production use cases from Microsoft, Alibaba, and BossZhipin; a minimal data-loading sketch follows the list.
- Level 1: Speed up data ingestion from cloud storage
- Level 2: Speed up data preprocessing and training workloads
- Level 3: Speed up full training workloads with a unified data orchestration layer
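As a minimal sketch of Levels 1 and 2, the following PyTorch-style data loader reads training files through an assumed Alluxio FUSE mount (the path and file layout are hypothetical), so repeated epochs and co-located jobs are served from cache rather than from cloud storage on every read.

```python
# A minimal sketch: a PyTorch DataLoader reading training data through an
# assumed Alluxio FUSE mount; the mount path and file layout are hypothetical.
from pathlib import Path
from torch.utils.data import Dataset, DataLoader

class CachedFileDataset(Dataset):
    def __init__(self, root: str):
        self.files = sorted(Path(root).glob("**/*.bin"))  # illustrative layout

    def __len__(self) -> int:
        return len(self.files)

    def __getitem__(self, idx: int) -> bytes:
        # The first access pulls the object from cloud storage into the cache;
        # later epochs and co-located jobs read it locally.
        return self.files[idx].read_bytes()

loader = DataLoader(CachedFileDataset("/mnt/alluxio/training/dataset"),
                    batch_size=32, num_workers=8)
for batch in loader:
    pass  # feed the batch into the training step
```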