Speeding up large-scale ML/DL offline inference jobs with Alluxio


We adopted Alluxio as an intermediate storage tier between the compute tier and cloud storage to improve the I/O throughput of our deep learning offline inference jobs. On our production workload, end-to-end performance improved by 18%, and job failures caused by storage issues became rare.
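One common way to put such a caching tier in front of inference code is to mount Alluxio via FUSE and have workers read local-looking paths instead of cloud URIs. The sketch below illustrates this idea; the mount point, bucket name, and path layout are assumptions for illustration, not details from our deployment.

```python
# Illustrative sketch: with Alluxio mounted via FUSE, inference workers can
# swap cloud-storage URIs for paths under the mount with no other code changes.
# The mount point and bucket layout here are assumed, not prescribed.

ALLUXIO_FUSE_MOUNT = "/mnt/alluxio"  # assumed FUSE mount point

def to_alluxio_path(s3_uri: str) -> str:
    """Map an s3:// URI onto the Alluxio FUSE namespace.

    Assumes the bucket is mounted at the root of the Alluxio namespace,
    e.g. s3://my-bucket/data/x.npy -> /mnt/alluxio/my-bucket/data/x.npy.
    """
    prefix = "s3://"
    if not s3_uri.startswith(prefix):
        raise ValueError(f"expected an s3:// URI, got {s3_uri!r}")
    return f"{ALLUXIO_FUSE_MOUNT}/{s3_uri[len(prefix):]}"

# An inference worker would then open the mapped path with ordinary file I/O,
# letting Alluxio serve cached data instead of hitting cloud storage directly.
print(to_alluxio_path("s3://my-bucket/features/part-0001.npy"))
```

Reads that hit the Alluxio cache are served from the compute cluster's local tier, which is where the throughput gain and the reduced exposure to cloud-storage flakiness come from.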