In this presentation, Vitaliy Baklikov of DBS Bank and Dipti Borkar of Alluxio share how DBS Bank built a modern big data analytics stack that uses an object store as persistent storage, even for data-intensive workloads, and how the bank uses Alluxio to orchestrate data locality and data access for Spark workloads. Deploying Alluxio for data access also addresses many of the challenges that arise when cloud deployments separate compute from storage.
Bring your data to compute with open source
Data orchestration for analytics and machine learning in the cloud
Alluxio enables compute
Bring your data close to compute.
Make your data local to compute workloads with caching for Spark, Presto, Hive, and more.
Make your data accessible.
Whether it sits on-prem or in the cloud, in HDFS or S3, make your files and objects accessible in many different ways.
Make your data as elastic as compute.
Effortlessly orchestrate your data for compute in any cloud, even if data is spread across multiple clouds.
alluxio for data engineers
Are your Presto/Spark queries slow on S3?
Do your Presto/Spark queries have inconsistent performance?
Are your metadata operations slow on S3?
Are your egress costs too high?
alluxio for data architects
Can you share data across your application frameworks?
Do you have problems working with remote or multiple storage systems?
Is running HDFS in the cloud for temporary storage expensive?
Do you have the directive to use cloud for analytics?
Interact with Alluxio in any stack
Pick a compute. Pick a storage. Alluxio just works.
// Using Alluxio as input and output for RDDs
scala> val rdd = sc.textFile("alluxio://master:19998/Input")
scala> rdd.saveAsTextFile("alluxio://master:19998/Output")

// Using Alluxio as input and output for DataFrames
scala> val df = sqlContext.read.parquet("alluxio://master:19998/Input.parquet")
scala> df.write.parquet("alluxio://master:19998/Output.parquet")
-- Pointing a table location to Alluxio
hive> CREATE TABLE u_user (userid INT, age INT)
    > ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
    > LOCATION 'alluxio://master:port/table_data';
# Create and list an HBase table stored in Alluxio
hbase(main):001:0> create 'test', 'cf'
hbase(main):002:0> list 'test'
# Accessing Alluxio after mounting the Alluxio service to the local file system
$ ls /mnt/alluxio_mount
$ cat /mnt/alluxio_mount/mydata.txt
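The storage side of "pick a compute, pick a storage" works by mounting under stores into a single Alluxio namespace. A minimal sketch, assuming a running Alluxio cluster and placeholder bucket/namenode addresses (the paths and credentials here are illustrative, not from the source):

```shell
# Mount an S3 bucket into the Alluxio namespace (bucket name is hypothetical)
$ ./bin/alluxio fs mount \
    --option aws.accessKeyId=<ACCESS_KEY> \
    --option aws.secretKey=<SECRET_KEY> \
    /s3-data s3://my-bucket/analytics

# Mount an on-prem HDFS directory alongside it (namenode address is hypothetical)
$ ./bin/alluxio fs mount /hdfs-data hdfs://namenode:9000/warehouse

# Both stores now appear under one namespace; any compute framework
# addressing alluxio:// sees them side by side
$ ./bin/alluxio fs ls /
```

After mounting, the Spark, Hive, HBase, and POSIX examples above can address either store through the same `alluxio://` paths, which is what lets compute and storage be chosen independently.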
Featured Use Cases and Deployments
Get in-memory access caching Spark and Presto data on AWS S3, Google Cloud Platform, or Microsoft Azure.
Simplify Hadoop for the hybrid cloud by making on-prem HDFS accessible to any compute in the cloud.
Accelerate your Spark, Presto, and TensorFlow workloads for object stores on-premises or in the cloud.
powered by alluxio
Hear about Bazaarvoice’s use case leveraging Apache Spark, Hive, and Alluxio on S3. And learn how to set up Hive with Alluxio so that Hive jobs can seamlessly read/write to S3.
Named to “Top 10 Big Data Startups of 2019 List,” “Top 10 Coolest Enterprise Cloud Services of 2019 List” and “IMPACT 50 List for Q3 2019”
Here in New York, at the AWS Summit, we are excited to announce that Alluxio 2.0 is here, our biggest release since Alluxio launched. A couple of months ago, we released the 2.0 Preview, which included some of these capabilities, but 2.0 now includes even more, continuing to build on our data orchestration approach for the cloud.
Cloud has changed the dynamics of data engineering in many ways, from changing expectations of on-demand platform services to the popularity of the object …