How T3Go built a high-performance data lake with Apache Hudi and Alluxio, cutting the time to ingest data into the lake by up to half. Data analysts using Presto, Hudi, and Alluxio together to query data on the lake saw their queries run up to 10 times faster.
Alluxio enables compute

Data locality
Bring your data close to compute.
Make your data local to compute workloads for Spark caching, Presto caching, Hive caching and more.

Data Accessibility
Make your data accessible.
No matter if it sits on-prem or in the cloud, HDFS or S3, make your files and objects accessible in many different ways.

Data On-Demand
Make your data as elastic as compute.
Effortlessly orchestrate your data for compute in any cloud, even if data is spread across multiple clouds.
“Zero-Copy” Burst User Spotlight: Walmart

Why Walmart chose Alluxio’s “Zero-Copy” burst solution:
- No requirement to persist data into the cloud
- Improved query performance and no network hops on recurrent queries
- Lower costs without the need for creating data copies
See more on how Alluxio powers Walmart’s “zero-copy” burst solution in their presentation >
Featured Use Cases and Deployments
Managing data copies/app changes when bursting compute to cloud?
Zero-copy hybrid bursting intelligently makes remote data accessible in the public cloud, with no app changes.
Expanding compute capacity across geo-distributed data centers?

Zero-copy bursting lets Presto, Spark, and Hive run across data centers against data stored in HDFS, with no app changes.
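As a rough sketch of this pattern (the hostnames, ports, and paths here are placeholders, not a prescribed setup): an Alluxio cluster deployed alongside the cloud compute mounts the remote HDFS directory into its namespace, and the query engine simply addresses alluxio:// paths, so data is fetched and cached on first access instead of being copied in advance.

# Mount an on-prem HDFS directory into the cloud-side Alluxio namespace (no upfront data copy)
$ ./bin/alluxio fs mount alluxio://master:19998/onprem hdfs://onprem-namenode:8020/warehouse

# Point the schema location at Alluxio; hot data is cached in the cloud after the first read
presto> CREATE SCHEMA hive.onprem WITH (location = 'alluxio://master:19998/onprem');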
Interact with Alluxio in any stack
Pick a compute. Pick a storage. Alluxio just works.
-- Pointing table location to Alluxio
CREATE SCHEMA hive.web
WITH (location = 'alluxio://master:port/my-table/');
// Using Alluxio as input and output for an RDD
scala> val rdd = sc.textFile("alluxio://master:19998/Input")
scala> rdd.saveAsTextFile("alluxio://master:19998/Output")

// Using Alluxio as input and output for a DataFrame
scala> val df = sqlContext.read.parquet("alluxio://master:19998/Input.parquet")
scala> df.write.parquet("alluxio://master:19998/Output.parquet")
-- Pointing table location to Alluxio
hive> CREATE TABLE u_user (userid INT, age INT)
      ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
      LOCATION 'alluxio://master:port/table_data';
# Create and query a table stored in Alluxio
hbase(main):001:0> create 'test', 'cf'
hbase(main):002:0> list 'test'
# Accessing Alluxio after mounting the Alluxio service to the local file system
$ ls /mnt/alluxio_mount
$ cat /mnt/alluxio_mount/mydata.txt
$ ./bin/alluxio fs mount \
    --option aws.accessKeyId=<AWS_ACCESS_KEY_ID> \
    --option aws.secretKey=<AWS_SECRET_KEY_ID> \
    alluxio://master:port/s3 s3a://<S3_BUCKET>/<S3_DIRECTORY>

$ ./bin/alluxio fs mount \
    alluxio://master:port/hdfs hdfs://namenode:port/dir/

$ ./bin/alluxio fs mount \
    --option fs.azure.account.key.<AZURE_ACCOUNT>.blob.core.windows.net=<AZURE_ACCESS_KEY> \
    alluxio://master:port/azure wasb://<AZURE_CONTAINER>@<AZURE_ACCOUNT>.blob.core.windows.net/<AZURE_DIRECTORY>/

$ ./bin/alluxio fs mount \
    --option fs.gcs.accessKeyId=<GCS_ACCESS_KEY_ID> \
    --option fs.gcs.secretAccessKey=<GCS_SECRET_ACCESS_KEY> \
    alluxio://master:port/gcs gs://<GCS_BUCKET>/<GCS_DIRECTORY>

$ ./bin/alluxio fs mount \
    --option aws.accessKeyId=<AWS_ACCESS_KEY_ID> \
    --option aws.secretKey=<AWS_SECRET_KEY_ID> \
    --option alluxio.underfs.s3.endpoint=http://<rgw-hostname>:<rgw-port> \
    --option alluxio.underfs.s3.disable.dns.buckets=true \
    alluxio://master:port/ceph s3a://<S3_BUCKET>/<S3_DIRECTORY>
$ ./bin/alluxio fs mount alluxio://master:port/nfs /mnt/nfs
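After any of these mounts, the external data appears under the corresponding Alluxio path immediately; a quick way to verify is sketched below (the /s3 path and file name are illustrative examples, not from a specific deployment):

# List the current mount points in the Alluxio namespace
$ ./bin/alluxio fs mount

# Browse and read the mounted data through Alluxio; files are fetched and cached on first access
$ ./bin/alluxio fs ls /s3
$ ./bin/alluxio fs cat /s3/sample.txt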
Powered by Alluxio
What’s Happening
This blog explores an innovative platform with Presto as the compute engine and Alluxio as a data orchestration layer between Presto and S3 storage, built to support online services in the gaming industry that demand near-instantaneous responses. The preliminary results show that Presto with Alluxio significantly outperforms Presto on S3 alone in all cases; with metadata caching enabled, Alluxio delivers up to a 5.9x performance gain when handling large numbers of small files.
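As a minimal illustration of that kind of setup (not the post's exact configuration; the file path and metastore address are assumptions), the Presto Hive catalog keeps pointing at the existing metastore while table locations resolve to alluxio:// URIs, so reads flow through Alluxio's cache rather than hitting S3 directly:

# Illustrative Presto Hive catalog; table locations registered in the metastore use alluxio:// URIs
$ cat etc/catalog/hive.properties
connector.name=hive-hadoop2
hive.metastore.uri=thrift://metastore-host:9083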
This article describes how engineers at datasapiens brought down S3 API costs by 200x by implementing Alluxio as a data orchestration layer between S3 and Presto.
As the third largest e-commerce site in China, Vipshop processes large amounts of data collected daily to generate targeted advertisements for its consumers. In this article, Gang Deng from Vipshop describes how the team met SLAs by speeding up struggling Spark jobs on HDFS by up to 30x and optimizing hot data access with Alluxio to create … Continued
Migrating SQL workloads from a fully on-premise environment to cloud infrastructure has numerous benefits, including alleviating resource contention and reducing costs by paying for computation resources on an on-demand basis. In the case of Presto running on data stored in HDFS, the separation of compute in the cloud and storage on-premises is apparent since Presto’s … Continued
Alluxio, the developer of open source cloud data orchestration software, today announced it has been named to the Computer Reseller News (CRN) Big Data 100 list – “The Coolest Data Management and Integration Tool Companies,” chosen a 2020 Data Breakthrough Awards “Best Data Access Solution of the Year” winner, and awarded an honorable mention on InsideBIGDATA “IMPACT 50 List for Q2 2020.”