Comparison between Apache Spark RDD and DataFrame

Apache Spark is evolving at a rapid pace, through both changes and additions to its core APIs. One of the most disruptive areas of change has been the representation of datasets.

In this blog, we will compare two of these dataset abstractions, Spark RDD and DataFrame, and learn the detailed feature-wise differences between RDD and DataFrame in Spark.

We will also give a brief introduction to the two Spark APIs, DataFrame and RDD. They differ on various features, such as data representation, immutability, interoperability, and many more.

To understand them better, we will also illustrate where to use Spark RDD vs DataFrame.

Introduction to the Spark APIs: DataFrame and RDD

To follow the comparison well, it is important to introduce each API first, so let's study them one by one:

1. Spark RDD

Apache Spark revolves around the idea of the RDD, which stands for Resilient Distributed Dataset. An RDD is a fault-tolerant collection of elements that can be operated on in parallel; we can also say that the RDD is the fundamental data structure of Spark.

Basically, it is a read-only, partitioned collection of records. Moreover, it supports in-memory computation on large clusters in a fault-tolerant manner.

This set of data is spread across multiple machines in the cluster, with an API that lets us act on it. An RDD can come from any data source, e.g. text files, a database via JDBC, etc., and it can easily handle data with no predefined structure.
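
As a minimal sketch (the app name, master setting, and file path below are illustrative assumptions for a local run), an RDD can be created from an in-memory collection or from an external source such as a text file:

```scala
import org.apache.spark.sql.SparkSession

// Illustrative setup; app name and master are assumptions for a local run.
val spark = SparkSession.builder()
  .appName("rdd-intro")
  .master("local[*]")
  .getOrCreate()
val sc = spark.sparkContext

// From an in-memory collection, partitioned across the cluster.
val numbers = sc.parallelize(Seq(1, 2, 3, 4, 5))

// From an external source, e.g. a text file (hypothetical path).
val lines = sc.textFile("data.txt")

// Transformations run on the partitions in parallel.
val lineLengths = lines.map(_.length)
```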

2. DataFrame

A DataFrame is a distributed collection of data organized into named columns. Conceptually, it is the same as a table in a relational database or an R/Python dataframe. Along with the DataFrame, Spark also introduced the Catalyst optimizer.

Catalyst leverages advanced programming-language features to build an extensible query optimizer. In Spark, the DataFrame allows developers to impose a structure onto distributed data and provides a higher-level abstraction.
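
For instance, here is a minimal sketch of building and querying a DataFrame (the column names and values are made up for illustration; the session setup mirrors the one above):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("dataframe-intro")
  .master("local[*]")
  .getOrCreate()
import spark.implicits._

// A distributed collection organized into named columns,
// much like a relational table.
val people = Seq(("Alice", 34), ("Bob", 29)).toDF("name", "age")

// Queries over named columns are planned by the Catalyst optimizer.
people.filter($"age" > 30).select("name").show()
```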

Comparison between Spark RDD and DataFrame

To understand Apache Spark RDD vs DataFrame in depth, we will compare them on the basis of different features. Let's discuss them one by one:

1. Release of the APIs

RDD- The Spark 1.0 release introduced the RDD API.

DataFrame- The Spark 1.3 release introduced a preview of the new DataFrame API.

2. Data Formats

RDD- Through RDDs, we can process structured as well as unstructured data. However, the user needs to handle the schema of the ingested data manually; an RDD cannot infer it on its own.

DataFrame- In a DataFrame, data is organized into named columns, so we can process structured and semi-structured data efficiently. It also allows Spark to manage the schema.
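
A hedged sketch of the difference (the file paths are hypothetical, and `spark` is the session from the sketches above):

```scala
// RDD: we parse each record ourselves and keep the schema in our heads.
val parsed = spark.sparkContext.textFile("people.csv")
  .map(_.split(","))
  .map(fields => (fields(0), fields(1).trim.toInt))

// DataFrame: Spark reads the source and manages the schema for us.
val df = spark.read.json("people.json")
df.printSchema()  // prints the inferred column names and types
```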

3. Data Representations

RDD- An RDD is a distributed collection of data elements spread across many machines in the cluster. At the API level, it is a set of Scala or Java objects representing the data.

DataFrame- As discussed above, in a DataFrame data is organized into named columns. Basically, it is the same as a table in a relational database.

4. Compile-Time Type Safety

RDD- RDDs support an object-oriented programming style with compile-time type safety.

DataFrame- If we try to access a column that is not present in the table, an analysis error occurs only at runtime. The DataFrame does not offer compile-time type safety in such cases.
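
A small sketch of the contrast, using a hypothetical Person record (the commented-out lines mark where each error would surface):

```scala
import spark.implicits._

// Hypothetical domain type for illustration.
case class Person(name: String, age: Int)

val peopleRdd = spark.sparkContext.parallelize(Seq(Person("Alice", 34)))
// peopleRdd.map(_.salary)      // does not compile: Person has no 'salary' field

val peopleDf = peopleRdd.toDF()
// peopleDf.select("salary")    // compiles fine, but fails with an
//                              // AnalysisException only at runtime
```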

5. Immutability and Interoperability

RDD- RDDs are immutable in nature, which means we cannot change an existing RDD; we can only create a new one through a transformation on it. Due to this immutability, all computations performed are consistent. If an RDD holds data in tabular format, we can move from RDD to DataFrame with the toDF() method, and do the reverse with the .rdd method.

DataFrame- One cannot regenerate a domain object after transforming it into a DataFrame. For example, if we generate a DataFrame testDF from an RDD of a Test class, we cannot recover the original RDD of the Test class again.
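
A minimal sketch of both directions, reusing the session, implicits, and Person class from the sketches above (note the loss of the domain type on the way back):

```scala
// RDD -> DataFrame: works when the RDD holds structured records.
val personRdd = spark.sparkContext
  .parallelize(Seq(Person("Alice", 34), Person("Bob", 29)))
val personDf = personRdd.toDF()

// DataFrame -> RDD: .rdd returns an RDD[Row], not the original
// RDD[Person]; the domain object cannot be regenerated directly.
val rowsBack: org.apache.spark.rdd.RDD[org.apache.spark.sql.Row] = personDf.rdd
```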

6. Data Sources API

RDD- An RDD can come from any data source, e.g. text files, a database via JDBC, etc., and it can easily handle data with no predefined structure.

DataFrame- The data source API allows processing data in different formats, such as Avro, CSV, and JSON, and from storage systems such as HDFS, Hive tables, and MySQL.
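
For example (the paths and table name are assumptions; Avro needs the external spark-avro package, and Hive access requires Hive support to be enabled on the session):

```scala
val csvDf  = spark.read.option("header", "true").csv("users.csv")
val jsonDf = spark.read.json("events.json")
val avroDf = spark.read.format("avro").load("logs.avro")
val hiveDf = spark.sql("SELECT * FROM my_hive_table")
```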

7. Optimization

RDD- There is no built-in optimization engine for RDDs. Developers optimize each RDD themselves, on the basis of its attributes.

DataFrame- In DataFrames, optimization takes place through the Catalyst optimizer. DataFrames use the Catalyst tree transformation framework in four phases (a sketch of how to inspect the resulting plans follows the list):

  • Analysis
  • Logical plan optimization
  • Physical planning
  • Code generation, to compile parts of the query to Java bytecode
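
To see these phases at work, a query's plans can be inspected with explain; a minimal sketch (the data is made up, and `spark` is the session from above):

```scala
import spark.implicits._

val sales = Seq(("a", 1), ("b", 2), ("a", 3)).toDF("key", "value")

// extended = true prints the parsed, analyzed, and optimized logical
// plans plus the physical plan that Catalyst produced for the query.
sales.groupBy("key").sum("value").explain(true)
```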

8. Serialization

RDD- Spark uses Java serialization whenever it needs to distribute data over the cluster. Serializing individual Scala and Java objects is expensive, and it requires sending both the data and its structure between nodes.

DataFrame- With DataFrames, Spark can serialize data into off-heap storage in a binary format and then perform transformations directly on this off-heap memory, since it understands the schema. There is no need to use Java serialization to encode the data.
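
As a related, hedged configuration sketch: RDD jobs often swap the default Java serializer for Kryo to reduce this cost, whereas DataFrames get Spark's internal binary (Tungsten) format without any configuration:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("serialization-demo")   // illustrative app name
  .master("local[*]")
  // Kryo replaces Java serialization for RDD data sent across the cluster.
  .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .getOrCreate()
```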

9. Efficiency/Memory use

RDD- Efficiency decreases when serialization is performed individually on each Java or Scala object, and it takes a lot of time.

DataFrame- The use of off-heap memory for serialization reduces this overhead. Spark also generates bytecode dynamically, so that many operations can be performed on the serialized data directly. Basically, there is no need to deserialize the data for small operations.

10. Lazy Evaluation

RDD- Spark evaluates RDDs lazily; it does not compute their results right away. Instead, Spark memorizes the transformations applied to the base dataset. Only when an action needs a result does the computation run and the result get sent to the driver program.

DataFrame- Similarly, Spark evaluates DataFrames lazily, so computation happens only when an action appears.
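
A small sketch of lazy evaluation in action (the data is illustrative, and `spark` is the session from above):

```scala
// Transformations only record lineage; nothing is computed here.
val words   = spark.sparkContext.parallelize(Seq("spark", "rdd", "dataframe"))
val lengths = words.map(_.length)

// The action triggers the computation and returns a result to the driver.
val total = lengths.reduce(_ + _)  // 17
```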

11. Language Support

RDD- The RDD API is available in four languages: Java, Scala, Python, and R. As a result, this feature provides flexibility to developers.

DataFrame- Like the RDD, the DataFrame API is available in the same four languages: Java, Scala, Python, and R.

12. Schema Projection

RDD- The RDD API uses schema projection explicitly. Therefore, a user needs to define the schema manually.

DataFrame- With a DataFrame, there is no need to specify a schema explicitly. Generally, Spark discovers the schema automatically.
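
A sketch of both routes (the column names, values, and JSON path are assumptions):

```scala
import org.apache.spark.sql.Row
import org.apache.spark.sql.types._

// RDD route: the schema is spelled out by hand.
val schema = StructType(Seq(
  StructField("name", StringType, nullable = true),
  StructField("age",  IntegerType, nullable = true)
))
val rowRdd = spark.sparkContext.parallelize(Seq(Row("Alice", 34)))
val manual = spark.createDataFrame(rowRdd, schema)

// DataFrame route: the reader discovers the schema automatically.
val inferred = spark.read.json("people.json")
```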

13. Aggregation

RDD- The RDD API is slower when performing simple grouping and aggregation operations.

DataFrame- DataFrames are faster for performing exploratory analysis and creating aggregated statistics on data.
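
A brief sketch of the same aggregation in both APIs (the data is illustrative):

```scala
import org.apache.spark.sql.functions._
import spark.implicits._

val pairs = spark.sparkContext.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))

// RDD: keying and combining by hand, with no optimizer involved.
val rddSums = pairs.reduceByKey(_ + _)

// DataFrame: declarative aggregation, planned by Catalyst.
val dfSums = pairs.toDF("key", "value")
  .groupBy("key")
  .agg(sum("value").as("total"))
```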

14. Usage

RDD- We use RDDs when we want low-level transformations and actions, or when the data is unstructured, such as media streams or streams of text.

DataFrame- We use DataFrames when we need a high level of abstraction over structured or semi-structured data.

Conclusion

As a result, we have seen that Apache Spark RDDs offer low-level functionality and control, whereas DataFrames offer a higher level of functionality: high-level domain-specific operations, space savings, and high-speed execution.

Therefore, the DataFrame increases the efficiency of the system. With this, we have discussed the comparison between Spark RDD and DataFrame in detail.