Bring your data to compute with open source

Data orchestration for analytics and machine learning in the cloud


Alluxio enables compute

Data Locality

Bring your data close to compute.
Make your data local to compute workloads for Spark caching, Presto caching, Hive caching and more.

Data Accessibility

Make your data accessible.
Whether it sits on-prem or in the cloud, in HDFS or S3, make your files and objects accessible in many different ways.

Data On-Demand

Make your data as elastic as compute.
Effortlessly orchestrate your data for compute in any cloud, even if data is spread across multiple clouds.

alluxio for data engineers

Are your Presto/Spark queries slow on S3?

Do your Presto/Spark queries have inconsistent performance?

Are your metadata operations slow on S3?

Are your egress costs too high?

See how Alluxio helps >

alluxio for data architects

Can you share data across your application frameworks?

Do you have problems working with remote or multiple storage systems?

Is running HDFS in the cloud for temporary storage expensive?

Do you have a directive to use the cloud for analytics?

See how Alluxio helps >

Interact with Alluxio in any stack

Pick a compute. Pick a storage. Alluxio just works.
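The examples below assume a running Alluxio cluster. As a minimal sketch, here is how you might bring up a single-node cluster from a local install (exact scripts and flags can vary by version):

# Format the Alluxio master (first run only)
$ ./bin/alluxio format

# Start a single-node cluster; SudoMount lets Alluxio mount a RAMFS for worker storage
$ ./bin/alluxio-start.sh local SudoMount

# Sanity-check the cluster with the bundled tests
$ ./bin/alluxio runTests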

Tutorial > Full Docs >

-- Pointing table location to Alluxio
CREATE SCHEMA hive.web
WITH (location = 'alluxio://master:port/my-table/')

Full Docs

// Using Alluxio as input and output for RDDs
scala> val rdd = sc.textFile("alluxio://master:19998/Input")
scala> rdd.saveAsTextFile("alluxio://master:19998/Output")

// Using Alluxio as input and output for DataFrames
scala> val df = sqlContext.read.parquet("alluxio://master:19998/Input.parquet")
scala> df.write.parquet("alluxio://master:19998/Output.parquet")

Full Docs

-- Pointing Table location to Alluxio
hive> CREATE TABLE u_user (
userid INT,
age INT)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '|'
LOCATION 'alluxio://master:port/table_data';

Full Docs

# Creating and querying a table stored in Alluxio
# (assumes hbase.rootdir in hbase-site.xml points to an alluxio:// URI)
hbase(main):001:0> create 'test', 'cf'
hbase(main):002:0> list 'test'
hbase(main):003:0> put 'test', 'row1', 'cf:a', 'value1'
hbase(main):004:0> get 'test', 'row1'

Full Docs

# Running a wordcount using Alluxio as input and output
$ bin/hadoop jar hadoop-mapreduce-examples-2.7.3.jar wordcount \
  -libjars /<ALLUXIO_HOME>/client/alluxio-<VERSION>-client.jar \
  alluxio://master:19998/wordcount/input.txt \
  alluxio://master:19998/wordcount/output

Full Docs

# Accessing Alluxio after mounting the Alluxio namespace on the local file system (e.g. via Alluxio FUSE)
$ ls /mnt/alluxio_mount
$ cat /mnt/alluxio_mount/mydata.txt
ALLUXIO

Full Docs
$ ./bin/alluxio fs mount \
--option aws.accessKeyId=<AWS_ACCESS_KEY_ID> \
--option aws.secretKey=<AWS_SECRET_KEY_ID> \
alluxio://master:port/s3 s3a://<S3_BUCKET>/<S3_DIRECTORY>

Full Docs

$ ./bin/alluxio fs mount \
alluxio://master:port/hdfs hdfs://namenode:port/dir/

Full Docs

$ ./bin/alluxio fs mount \
--option fs.azure.account.key.<AZURE_ACCOUNT>.blob.core.windows.net=<AZURE_ACCESS_KEY> \
alluxio://master:port/azure \
wasb://<AZURE_CONTAINER>@<AZURE_ACCOUNT>.blob.core.windows.net/<AZURE_DIRECTORY>/

Full Docs

$ ./bin/alluxio fs mount \
--option fs.gcs.accessKeyId=<GCS_ACCESS_KEY_ID> \
--option fs.gcs.secretAccessKey=<GCS_SECRET_ACCESS_KEY> \
alluxio://master:port/gcs gs://<GCS_BUCKET>/<GCS_DIRECTORY>

Full Docs

$ ./bin/alluxio fs mount \
--option aws.accessKeyId=<AWS_ACCESS_KEY_ID> \
--option aws.secretKey=<AWS_SECRET_KEY_ID> \
--option alluxio.underfs.s3.endpoint=http://<rgw-hostname>:<rgw-port> \
--option alluxio.underfs.s3.disable.dns.buckets=true \
alluxio://master:port/ceph s3a://<S3_BUCKET>/<S3_DIRECTORY>

Full Docs

$ ./bin/alluxio fs mount alluxio://master:port/nfs /mnt/nfs

Full Docs
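
Once mounted, all of these systems appear under a single Alluxio namespace, so any of the compute frameworks above can reach them through one interface. A minimal sketch using the Alluxio shell (the /s3 path and file name are placeholders standing in for whatever you mounted above):

# Browse the unified namespace; each mounted store shows up as a directory
$ ./bin/alluxio fs ls /

# Read a file from any mount through the same interface
$ ./bin/alluxio fs cat /s3/<S3_DIRECTORY>/mydata.txt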

Featured Use Cases and Deployments

Data in the public cloud slowing your compute down?

Get in-memory access by caching Spark and Presto data from AWS S3, Google Cloud Platform, or Microsoft Azure.

Can’t burst HDFS in your hybrid cloud environment?

Simplify Hadoop for the hybrid cloud by making on-prem HDFS accessible to any compute in the cloud.

Data in on-premise object stores not fast enough?

Accelerate your Spark, Presto, and TensorFlow workloads on object stores on-premise or in the cloud.


powered by alluxio

What’s Happening

Event
Enabling Big Data and AI workloads on the Object Store at DBS Bank

In this presentation, Vitaliy Baklikov from DBS Bank and Dipti Borkar from Alluxio share how DBS Bank built a modern big data analytics stack that uses an object store as persistent storage, even for data-intensive workloads, and how it uses Alluxio to orchestrate data locality and data access for Spark workloads. Deploying Alluxio for data access also solves many of the challenges that cloud deployments with separated compute and storage bring.

Strata Data Conference New York
Event
Accelerating Hive with Alluxio on S3

Hear about Bazaarvoice's use case leveraging Apache Spark, Hive, and Alluxio on S3, and learn how to set up Hive with Alluxio so that Hive jobs can seamlessly read from and write to S3.

Alluxio Community Office Hour
Blog
2.0 is here! Embrace silos, orchestrate data, accelerate innovation!

Here in New York at the AWS Summit, we are excited to announce that Alluxio 2.0 is here, our biggest release since Alluxio first launched. A couple of months ago we released the 2.0 Preview, which included some of these capabilities; 2.0 now includes even more, continuing to build on our data orchestration approach for the cloud.

Slides from our latest talks
Accelerate Spark Workloads on S3

This webinar highlights a simple solution: running Spark on Alluxio as a distributed cache for S3. Alluxio stores data in memory close … Continued