Apache Spark has introduced two newer data abstractions: DataFrames and Datasets. It can be hard to understand what each one is for, and not easy to decide which one to use and when.
A DataFrame is an abstraction that gives a schema view of data: the data is organized into columns, each with a column name and type. In that sense, the data in a DataFrame is much like a table in a relational database.
Like RDDs, DataFrames are evaluated lazily. To allow efficient processing, a DataFrame is structured as a distributed collection of data, and Spark applies the Catalyst optimizer to DataFrame queries.
In Spark, Datasets are an extension of DataFrames. The Dataset API has two flavors, a strongly typed one and an untyped one; unlike DataFrames, Datasets are by default a collection of strongly typed JVM objects. Datasets also use Spark's Catalyst optimizer, exposing expressions and data fields to the query planner.
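As a minimal sketch of the two abstractions, the Scala snippet below (the Person case class and the sample data are hypothetical, and it assumes Spark 2.x or later with the Spark SQL module on the classpath) builds the same data once as a DataFrame and once as a Dataset. The later sketches in this article reuse this spark session, the implicits import, and the Person class, spark-shell style.

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical domain type reused in the examples below.
case class Person(name: String, age: Long)

object DataFrameVsDataset {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("dataframe-vs-dataset")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // DataFrame: rows organized into named, typed columns (a schema view of the data).
    val df = Seq(("Alice", 29L), ("Bob", 35L)).toDF("name", "age")
    df.printSchema()

    // Dataset: the same data as a collection of strongly typed JVM objects.
    val ds = Seq(Person("Alice", 29L), Person("Bob", 35L)).toDS()
    ds.filter(_.age > 30).show()

    spark.stop()
  }
}
```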
Now let us look at the differences between the two, feature by feature:
Spark Release
DataFrame- DataFrames were introduced in the Spark 1.3 release.
DataSets- Datasets were introduced in the Spark 1.6 release.
Data Formats
DataFrame- A DataFrame organizes data into named columns. It can efficiently process structured and semi-structured data and lets Spark manage the schema.
DataSets- Like DataFrames, Datasets also efficiently process structured and semi-structured data. A Dataset represents data as a collection of row objects or JVM objects of a given type, which encoders map to a tabular form.
Data Representation
DataFrame- In a DataFrame, data is organized into named columns, much as in a table in a relational database.
DataSets- As noted above, a Dataset is an extension of the DataFrame API that provides the type-safe, object-oriented programming interface of the RDD API together with the performance benefits of the Catalyst query optimizer.
Compile-time type safety
DataFrame- If we try to access a column that is not in the table, the DataFrame API reports the error only at runtime; there is no compile-time check.
DataSets- Datasets offer compile-time type safety.
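The difference is easy to see in the shell. In this sketch (assuming the spark session, implicits import, and Person case class from the first example), the misspelled DataFrame column only fails when the query runs, while the misspelled Dataset field does not compile at all:

```scala
val df = Seq(("Alice", 29L), ("Bob", 35L)).toDF("name", "age")
val ds = df.as[Person]

// DataFrame: this compiles, but throws an AnalysisException at runtime
// because there is no column named "agee".
// df.select("agee").show()

// Dataset: the equivalent mistake is rejected by the Scala compiler,
// since Person has no field named agee.
// ds.map(_.agee)

// The type-checked version works as expected.
ds.map(_.age + 1).show()
```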
Data Sources API
DataFrame- It allows processing data in different formats, for example Avro, CSV, and JSON, and from storage systems such as HDFS, Hive tables, and MySQL.
DataSets- Datasets also support data from these different sources.
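Both APIs share the same DataFrameReader, so the supported sources are identical; a Dataset is obtained by attaching a type to the result. A hedged sketch follows (the file paths, Hive table name, and JDBC settings are placeholders):

```scala
// JSON and CSV files read directly into DataFrames.
val jsonDf = spark.read.json("hdfs:///data/people.json")
val csvDf  = spark.read.option("header", "true").csv("hdfs:///data/people.csv")

// Avro needs the external spark-avro package on the classpath.
val avroDf = spark.read.format("avro").load("hdfs:///data/people.avro")

// Hive tables and JDBC sources (e.g. MySQL) use the same reader.
val hiveDf  = spark.table("default.people")
val mysqlDf = spark.read.format("jdbc")
  .option("url", "jdbc:mysql://dbhost:3306/test")
  .option("dbtable", "people")
  .option("user", "reader")
  .option("password", "secret")
  .load()

// The same read becomes a Dataset by attaching the domain type.
val peopleDs = spark.read.json("hdfs:///data/people.json").as[Person]
```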
Immutability and Interoperability
DataFrame- Once a domain object has been transformed into a DataFrame, the domain object cannot be regenerated from it.
DataSets- Datasets overcome this drawback of DataFrames: the RDD of domain objects can be regenerated from a Dataset. Datasets also allow us to convert existing RDDs and DataFrames into Datasets, as the sketch below shows.
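A minimal sketch of that round trip, again assuming the spark session, implicits, and Person case class from the first example:

```scala
// RDD[Person] -> Dataset[Person]: the domain type is preserved.
val rdd       = spark.sparkContext.parallelize(Seq(Person("Alice", 29L), Person("Bob", 35L)))
val dsFromRdd = rdd.toDS()

// Dataset[Person] -> RDD[Person]: the original domain objects can be regenerated.
val backToRdd = dsFromRdd.rdd

// DataFrame <-> Dataset: drop to untyped rows with toDF, reattach the type with as[...].
val df      = dsFromRdd.toDF()
val dsAgain = df.as[Person]
```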
Efficiency/Memory Use
DataFrame- By using off-heap memory for serialization, the overhead is reduced.
DataSets- Datasets allow operations to be performed directly on serialized data, which improves memory usage.
Serialization
DataFrame- A DataFrame can serialize data into off-heap storage in a binary format and then perform many transformations directly on that off-heap memory.
DataSets- The Dataset API has the concept of an encoder, which handles the conversion between JVM objects and a tabular representation. That tabular representation is stored using Spark's internal Tungsten binary format, which allows operations on serialized data and improves memory usage.
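A small sketch of an encoder at work (Person is the hypothetical case class from the first example): Encoders.product derives an encoder from the case class, and its schema shows the tabular representation Spark stores in Tungsten's binary format.

```scala
import org.apache.spark.sql.{Encoder, Encoders}

// Derive an encoder for the case class: it converts Person objects
// to and from Spark's internal Tungsten binary row format.
val personEncoder: Encoder[Person] = Encoders.product[Person]

// The tabular representation Spark derives from the JVM type.
println(personEncoder.schema)   // StructType(StructField(name,StringType,...), StructField(age,LongType,...))

// With the implicit encoder in scope, typed operations run against the encoded data.
val ds = Seq(Person("Alice", 29L), Person("Bob", 35L)).toDS()
ds.filter(_.age > 30).show()
```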
Lazy Evaluation
DataFrame- Like RDDs, Spark evaluates DataFrames lazily.
DataSets- Like RDDs and DataFrames, Datasets are also evaluated lazily.
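For example, in this sketch using the same hypothetical Person data, a transformation such as filter only builds the query plan; the work happens when an action is called:

```scala
val ds = Seq(Person("Alice", 29L), Person("Bob", 35L)).toDS()

// Transformation: only records the operation in the logical plan, no job runs yet.
val adults = ds.filter(_.age >= 30)

// Action: this is the point where Spark actually schedules and executes the job.
println(adults.count())
```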
Optimization
DataFrame- Optimization of DataFrame queries happens through Spark's Catalyst optimizer.
DataSets- Datasets use the same Catalyst optimizer to optimize their query plans.
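The plans Catalyst produces can be inspected with explain(). In this sketch (hypothetical file path, and relying on the implicits import for the $ column syntax), explain(true) prints the parsed, analyzed, optimized, and physical plans for a simple filter-and-project query:

```scala
val df = spark.read.json("hdfs:///data/people.json")

// Shows how Catalyst rewrites the query (e.g. column pruning) before execution.
df.filter($"age" > 30).select("name").explain(true)
```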
Schema Projection
DataFrame- The schema can be auto-discovered, for example through the Hive metastore, so we do not need to specify it manually.
DataSets- Because Datasets use the Spark SQL engine, the schema of the underlying files is also auto-discovered.
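A quick sketch of schema inference (hypothetical file path): the schema is read from the data itself, and the same inferred schema carries over when the DataFrame is turned into a typed Dataset.

```scala
// No StructType is specified: Spark infers the schema from the JSON records.
val people = spark.read.json("hdfs:///data/people.json")
people.printSchema()

// The inferred schema is then mapped onto the Person case class.
val peopleDs = people.as[Person]
peopleDs.printSchema()
```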
Programming Language Support
DataFrame- DataFrames are available in four languages: Java, Python, Scala, and R.
DataSets- Datasets are available only in Scala and Java.