Strange Loop

St. Louis, MO

Flare: Optimizing Apache Spark for Scale-Up Architectures and Medium-Size Data

In recent years, Apache Spark has become the de facto standard for big data processing. Spark has enabled a wide audience of users to process petabyte-scale workloads due to its flexibility and ease of use: users are able to mix SQL-style relational queries with Scala or Python code, and have the resultant programs distributed across an entire cluster, all without having to work with low-level parallelization or network primitives.
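To make this programming model concrete, the following minimal Spark job in Scala (the file path, schema, and column names are hypothetical) mixes a SQL-style relational query with ordinary Scala code; whether it runs on one machine or an entire cluster is just a matter of the master URL:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object RegionTotals {
      def main(args: Array[String]): Unit = {
        // "local[*]" uses all cores of one machine; a cluster URL
        // would distribute the identical program with no code changes.
        val spark = SparkSession.builder()
          .appName("region-totals")
          .master("local[*]")
          .getOrCreate()
        import spark.implicits._

        // SQL-style relational query over a (hypothetical) CSV file.
        val sales = spark.read.option("header", "true").csv("/data/sales.csv")
        val totals = sales
          .groupBy($"region")
          .agg(sum($"amount".cast("double")).as("total"))

        // ...mixed freely with plain Scala code on the result.
        totals.as[(String, Double)].collect()
          .sortBy(-_._2)
          .foreach { case (region, total) => println(f"$region%-12s $total%.2f") }

        spark.stop()
      }
    }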

However, many workloads of practical importance are not large enough to justify distributed, scale-out execution, as the data may reside entirely in main memory of a single powerful server. Still, users want to use Spark for its familiar interface and tooling. In such scale-up scenarios, Spark's performance is suboptimal, as Spark prioritizes handling data size over optimizing the computations on that data. For such medium-size workloads, performance may still be of critical importance if jobs are computationally heavy, need to be run frequently on changing data, or interface with external libraries and systems (e.g., TensorFlow for machine learning).

We present Flare, an accelerator module for Spark that delivers order-of-magnitude speedups on scale-up architectures for a large class of applications. Inspired by query compilation techniques from main-memory database systems, Flare incorporates a code generation strategy designed to match the unique aspects of Spark and the characteristics of scale-up architectures, in particular processing data directly from optimized file formats and combining SQL execution with external frameworks such as TensorFlow.
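To make the idea of query compilation concrete: instead of interpreting a physical plan operator by operator, a compiling engine in the spirit of Flare emits one specialized loop for an entire query pipeline. The hand-written Scala sketch below (the class, layout, and ids are illustrative; Flare itself emits native code rather than Scala) shows what such generated code amounts to for SELECT SUM(amount) FROM sales WHERE region = 'EMEA' over columnar data:

    // Columnar layout: one array per attribute; region is
    // dictionary-encoded as integer ids (illustrative, not Flare's format).
    final class SalesColumns(val region: Array[Int],
                             val amount: Array[Double])

    def compiledQuery(data: SalesColumns, emeaId: Int): Double = {
      var sum = 0.0
      var i   = 0
      val n   = data.amount.length
      while (i < n) {                   // one fused loop for the whole pipeline
        if (data.region(i) == emeaId)   // Filter, inlined: no per-tuple virtual call
          sum += data.amount(i)         // Aggregate, inlined: no boxing
        i += 1
      }
      sum
    }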

Gregory Essertel

Purdue University

Gregory is a senior PhD student at Purdue University, working with Tiark Rompf. His research centers on compiler techniques for big data and AI systems, with a particular focus on accelerating Spark and TensorFlow. His work received a Distinguished Artifact Award at OOPSLA '16.