Apache Spark: A Unified Engine for Big Data Processing

The growth of data volumes in industry and research poses tremendous opportunities, as well as tremendous computational challenges. As data sizes have outpaced the capabilities of single machines, users have needed new systems to scale out computations to multiple nodes. As a result, there has been an explosion of new cluster programming models targeting diverse computing workloads.[1,4,7,10] At first, these models were relatively specialized, with new models developed for new workloads; for example, MapReduce[4] supported batch processing, but Google also developed Dremel[13] for interactive SQL queries and Pregel[11] for iterative graph algorithms. In the open source Apache Hadoop stack, systems like Storm[1] and Impala[9] are also specialized. Even in the relational database world, the trend has been to move away from "one-size-fits-all" systems.[18]

Unfortunately, most big data applications need to combine many different processing types. The very nature of "big data" is that it is diverse and messy; a typical pipeline will need MapReduce-like code for data loading, SQL-like queries, and iterative machine learning. Specialized engines can thus create both complexity and inefficiency; users must stitch together disparate systems, and some applications simply cannot be expressed efficiently in any engine.

In 2009, our group at the University of California, Berkeley, started the Apache Spark project to design a unified engine for distributed data processing. Spark has a programming model similar to MapReduce but extends it with a data-sharing abstraction called "Resilient Distributed Datasets," or RDDs.[25] Using this simple extension, Spark can capture a wide range of processing workloads that previously needed separate engines, including SQL, streaming, machine learning, and graph processing[2,6,26] (see Figure 1). These implementations use the same optimizations as specialized engines (such as column-oriented processing and incremental updates) and achieve similar performance but run as libraries over a common engine, making them easy and efficient to compose (a brief code sketch of such a pipeline follows the key-insights list). Rather than being specific

Key insights
- A simple programming model can capture streaming, batch, and interactive workloads and enable new applications that combine them.
- Apache Spark applications range from finance to scientific data processing and combine libraries for SQL, machine learning, and graphs.
- In six years, Apache Spark has grown to 1,000 contributors and thousands of deployments.
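The composition described above is easiest to see in code. The following Scala sketch is not from the article; the object name, the events.txt input path, and the local[*] master are illustrative placeholders. It uses only Spark's core RDD API to combine a MapReduce-style load, a query-like aggregation, and a small iterative loop over one shared dataset.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// A minimal sketch of one Spark job mixing workload types that
// previously required separate engines.
object UnifiedPipelineSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("unified-pipeline").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // MapReduce-like loading: split raw text into (word, 1) pairs.
    val words = sc.textFile("events.txt") // hypothetical input path
      .flatMap(_.split("\\s+"))
      .map(w => (w, 1L))

    // Cache the aggregated RDD so the later stages share it in memory
    // rather than re-reading the input -- the data sharing RDDs provide.
    val counts = words.reduceByKey(_ + _).cache()

    // SQL-like query over the shared data: the ten most frequent words.
    val top10 = counts.sortBy(_._2, ascending = false).take(10)

    // Iterative step over the same cached data: progressively filter.
    var frequent = counts
    for (threshold <- Seq(10L, 100L, 1000L))
      frequent = frequent.filter { case (_, n) => n >= threshold }

    println(top10.mkString(", "))
    println(s"Words above final threshold: ${frequent.count()}")
    sc.stop()
  }
}
```

The cache() call is where the data-sharing abstraction does its work: each subsequent computation reuses the in-memory RDD instead of re-running the load, which is what lets libraries for different workloads compose efficiently on one engine.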