Hybrid Environments for Data Analytics Are a Possibility

As the data ecosystem grows more complex and increasingly disaggregated, data analysts and end users struggle to adapt to and work within hybrid environments. The proliferation of compute frameworks alongside diverse storage media produces a hybrid model that most of us are simply not accustomed to.
In such a disaggregated system, data engineers face a multitude of problems they must overcome before they can extract meaningful insights.

RocksDB Meetup at Twitter

Bay Area Meetup *

Twitter SF is hosting 2019’s semi-annual RocksDB Meetup on July 11th, with speakers from Twitter, Facebook, and the community.

Alluxio on EMR: Fast Storage Access and Sharing for Spark Jobs

Traditionally, if you want to run a single Spark job on EMR, you might follow these steps: launch a cluster, run the job, which reads data from a storage layer such as S3, perform transformations on an RDD/DataFrame/Dataset, and finally write the result back to S3. You end up with an architecture like this.
If we add more Spark jobs across multiple clusters, the picture looks more like this.
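As a rough illustration, here is a minimal PySpark sketch of that traditional flow. The bucket names, paths, and columns are hypothetical placeholders, not taken from the article; with Alluxio deployed on the cluster, the same job would simply point at alluxio:// paths instead of s3://.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Minimal sketch of the traditional EMR flow: read from S3, transform, write back to S3.
# Bucket names, paths, and columns below are hypothetical placeholders.
spark = SparkSession.builder.appName("emr-etl-sketch").getOrCreate()

# 1. Read input data directly from the storage layer (S3).
events = spark.read.parquet("s3://example-input-bucket/events/")

# 2. Perform transformations on the DataFrame.
daily_counts = (
    events
    .withColumn("day", F.to_date("event_time"))
    .groupBy("day", "event_type")
    .count()
)

# 3. Write the result back to S3.
daily_counts.write.mode("overwrite").parquet("s3://example-output-bucket/daily_counts/")

# With Alluxio mounted in front of S3, the same reads and writes would instead target
# e.g. "alluxio://<master-host>:19998/events/", so repeated jobs can hit cached data.
spark.stop()
```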

Running Spark & Alluxio in Kubernetes

Alluxio Community Office Hour *

The latest advances in container orchestration with Kubernetes bring cost savings and flexibility to compute workloads in public or hybrid cloud environments. On the other hand, they introduce new challenges, such as how to move data to compute efficiently, how to unify data across multiple or remote clouds, and how to co-locate data with compute. Alluxio approaches these problems in a new way: it helps elastic compute workloads realize the true benefits of the cloud while bringing data locality and data accessibility to workloads orchestrated by Kubernetes.
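To make that concrete, here is a hedged sketch of a Spark job reading through Alluxio inside a Kubernetes cluster. The service name alluxio-master, the data paths, and the assumption that the Alluxio client jar is already on Spark's classpath are all illustrative and depend on how Alluxio was deployed (for example, via its Helm chart).

```python
from pyspark.sql import SparkSession

# Sketch of a Spark job (e.g. running as a pod in the same Kubernetes cluster)
# reading through Alluxio instead of reaching out to remote object storage directly.
# "alluxio-master" is a hypothetical Kubernetes service name; the Alluxio client
# jar is assumed to be on Spark's classpath.
spark = (
    SparkSession.builder
    .appName("spark-on-k8s-with-alluxio")
    # Point the Alluxio client at the master service inside the cluster.
    .config("spark.driver.extraJavaOptions", "-Dalluxio.master.hostname=alluxio-master")
    .config("spark.executor.extraJavaOptions", "-Dalluxio.master.hostname=alluxio-master")
    .getOrCreate()
)

# Data mounted into the Alluxio namespace (e.g. an S3 bucket) is read with the
# alluxio:// scheme, so hot data can be served from workers co-located with compute.
df = spark.read.parquet("alluxio://alluxio-master:19998/data/events/")
df.groupBy("event_type").count().show()

spark.stop()
```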

Evolution of big data stacks under a compute-storage separation architecture

Shanghai *

A new generation of open source big data systems, represented by Alluxio and born at the University of California, Berkeley, takes a fresh look at this issue. Unlike systems such as HDFS, which achieve low-cost, reliable storage by tightly coupling storage with compute, Alluxio provides a software-defined virtual data storage layer for data applications. It abstracts and unifies the underlying files and objects across multi-cloud, hybrid cloud, multi-data-center, and other environments, and, through intelligent workload analysis and data management, keeps data close to compute and provides data locality, so that big data and machine learning applications can achieve the same performance at lower cost.
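As a rough sketch of what such a unified namespace buys an application, the PySpark snippet below reads from two physically separate stores (an S3 bucket and an HDFS directory) through one alluxio:// namespace. The mount points and service name are hypothetical and assumed to have been set up by an administrator beforehand; they are not taken from the talk.

```python
from pyspark.sql import SparkSession

# Sketch of a unified, software-defined namespace: two different storage systems
# appear as ordinary directories under one alluxio:// namespace. The mount points
# "/mnt/s3_logs" and "/mnt/hdfs_warehouse" are hypothetical and assumed to have
# been created ahead of time.
spark = SparkSession.builder.appName("unified-namespace-sketch").getOrCreate()

# The application only ever sees Alluxio paths; it does not care whether the
# bytes ultimately live in S3, HDFS, or another data center.
logs = spark.read.json("alluxio://alluxio-master:19998/mnt/s3_logs/2019-07/")
warehouse = spark.read.parquet("alluxio://alluxio-master:19998/mnt/hdfs_warehouse/users/")

# Join data from two physically separate stores as if they were local siblings.
logs.join(warehouse, "user_id").groupBy("country").count().show()

spark.stop()
```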