Speed up Large-scale ML/DL Offline Inference Jobs with Alluxio at Microsoft Bing

January 6, 2022 By Binyang Li and Qianxi Zhang

Running inference at scale is challenging. In this blog, we share our observations and our practice of using Alluxio to speed up I/O performance for large-scale ML/DL offline inference at Microsoft Bing.
