Reducing large S3 API costs using Alluxio at Datasapiens

Alluxio Global Online Meetup

In this talk, we will describe how we solved the problem of large S3 API costs incurred by Presto at several levels of query concurrency by deploying Alluxio as a data orchestration layer between S3 and Presto. We will also present the results of an experiment estimating per-query S3 API costs using the TPC-DS dataset.
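To make the cost problem concrete, here is a minimal back-of-the-envelope sketch of how per-query S3 API costs can be estimated and how a cache layer reduces them. The per-request prices are assumed S3 Standard list prices, and all request counts and the cache hit rate are hypothetical illustrations, not figures from the talk.

```python
# Assumed S3 Standard request pricing (USD per 1,000 requests);
# check current AWS pricing before relying on these numbers.
S3_GET_PRICE_PER_1000 = 0.0004
S3_LIST_PRICE_PER_1000 = 0.005

def query_api_cost(get_requests: int, list_requests: int) -> float:
    """Estimated S3 API cost of a single query, in USD."""
    return (get_requests / 1000 * S3_GET_PRICE_PER_1000
            + list_requests / 1000 * S3_LIST_PRICE_PER_1000)

# Hypothetical Presto query reading many Parquet footers and row groups
# directly from S3:
direct = query_api_cost(get_requests=200_000, list_requests=5_000)

# With a cache layer such as Alluxio serving 95% of reads, only the
# remaining 5% of GETs reach S3, and listings come from cached metadata:
cached = query_api_cost(get_requests=10_000, list_requests=0)

print(f"direct to S3:  ${direct:.4f} per query")
print(f"with caching:  ${cached:.4f} per query")
```

Multiplied by thousands of queries per day across concurrent users, even fractions of a cent per query add up, which is the effect the talk measures with TPC-DS.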

Tech Talk: Build a hybrid data lake and burst processing to Google Cloud Dataproc with Alluxio

Join us for this tech talk, where we will show you how Alluxio can help burst your private computing environment to Google Cloud while minimizing costs and I/O overhead. Alluxio, coupled with Google Cloud Dataproc, Google's managed service for open-source data and analytics processing engines, enables zero-copy bursting for faster query performance in the cloud, so you can take advantage of compute resources that are not local to your data without having to manage copying or syncing that data.


Tech Talk: From limited Hadoop compute capacity to increased data scientist efficiency

Want to leverage your existing investments in Hadoop, keep your data on-premises, and still benefit from the elasticity of the cloud?

Like other Hadoop users, you are most likely running large, busy Hadoop clusters, particularly when it comes to compute capacity. Bursting HDFS data to the cloud brings challenges: network latency hurts performance, copying data via DistCp means maintaining duplicate datasets, and you may have to change applications to accommodate S3.

“Zero-copy” hybrid bursting with Alluxio keeps your data on-premises and syncs it to compute in the cloud, so you can expand compute capacity, particularly for ephemeral Spark jobs.


What do I do if Hadoop is slow?

So you have a Hadoop cluster that’s running fine, and then people start saying their jobs are running slowly. This answer covers common causes of slowness and looks at some solutions to the problem.