Published on Feb 22, 2017

The bigger the data, the tougher it is to manage. Billions of people around the world contribute to the growth of data every day. The journey from megabytes to terabytes, and now to petabytes and exabytes, also reveals the growing challenge of storing, processing, and analyzing large and complex data sets.

What is Big Data and what Challenges are Associated with it?

Big Data is a broad term for a collection of large and complex data sets. This rapid growth of data has given rise to the challenge of Big Data management. One challenge associated with Big Data is storage: data sets are now so big that they can no longer fit on a single machine. That problem can certainly be solved by storing data across multiple machines, but the difficulty is not limited to storage; crunching the data also causes major problems. Apart from volume, the velocity of data is a major issue. Jet Airlines collects 1 TB of data every 30 minutes, which adds up to huge amounts of data over a month or a year. Moreover, the variety of data is an equally challenging aspect of Big Data: data can be structured, semi-structured, or completely unstructured, such as pre-formatted text, audio files, video files, sequence files, etc.

In brief, there are 3 Vs associated with Big Data:

  1. Volume
  2. Velocity
  3. Variety

The growing need to manage Big Data, including capturing, storing, searching, sharing, transferring, analyzing, and visualizing it, has made it even harder to process with on-hand database management tools and traditional data processing applications.

What is Big Data Analytics?

The purpose of collecting huge amounts of data is to perform analytics. There are two types of analytics:

1. Batch Analytics

Batch analytics involves reports that run at a specific frequency, which could be once a month, every week, every day, or every hour. Hadoop is a strong fit for batch analytics requirements.
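To make the batch model concrete, here is a minimal stdlib Python sketch (not Hadoop itself) of the MapReduce-style pattern a batch job follows: the whole accumulated data set is processed in one pass, mapping each record to a key and reducing (summing) per key. The record data is hypothetical.

```python
from collections import defaultdict

# Hypothetical batch of records accumulated over a day:
# (customer_id, amount) pairs, processed all at once when the job runs.
records = [("alice", 120.0), ("bob", 75.5), ("alice", 30.0), ("carol", 200.0)]

def batch_report(records):
    """Aggregate the whole data set in one pass, MapReduce style:
    map each record to a (key, value) pair, then reduce (sum) by key."""
    totals = defaultdict(float)
    for customer, amount in records:   # "map" + group by key
        totals[customer] += amount     # "reduce": sum per key
    return dict(totals)

print(batch_report(records))
# → {'alice': 150.0, 'bob': 75.5, 'carol': 200.0}
```

A real Hadoop job distributes the map and reduce phases across a cluster, but the per-key aggregation logic is the same shape.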

2. Real-Time Analytics

Real-time analytics is quite challenging, but at the same time it is a major revenue driver for the organizations working on it. For example, a bank needs to keep track of transactions taking place every second and reflect them in the respective customers' accounts immediately.
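The contrast with batch processing can be sketched in a few lines of stdlib Python (a toy illustration, not real banking code): instead of waiting for a nightly batch run, each transaction event updates the account balance the moment it arrives. All account names and amounts are made up.

```python
# Hypothetical starting balances; updated per event, not per nightly batch.
balances = {"acct-1": 1000.0, "acct-2": 500.0}

def on_transaction(event):
    """Handle one event as it arrives: 'deposit' adds, 'withdraw' subtracts."""
    account, kind, amount = event
    balances[account] += amount if kind == "deposit" else -amount
    return balances[account]

# Events streaming in one at a time:
for event in [("acct-1", "withdraw", 200.0), ("acct-2", "deposit", 50.0)]:
    on_transaction(event)

print(balances)  # → {'acct-1': 800.0, 'acct-2': 550.0}
```

A real-time engine such as Spark Streaming applies the same idea at scale, processing events in small time slices across a cluster.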

What is Spark and what difference can it make?

Apache Spark is an open-source Big Data processing and advanced analytics engine. It is a general-purpose, in-memory cluster computing system. The following key features make it stand out among Hadoop-ecosystem frameworks.

Hadoop Swiss Army Knife: Often called the Swiss Army knife of Hadoop, Apache Spark is a one-of-a-kind cluster computing framework when it comes to speed. Spark is a polyglot framework that allows developers to write applications in Scala, Python, and Java. Scala is the preferred language for Spark, as it is concise and integrates easily with Java. Spark also ships with more than 80 built-in high-level operators.
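The flavor of those high-level operators can be shown with a stdlib Python sketch. This is not Spark itself: in real Spark (Scala or PySpark) the data would be partitioned across a cluster, e.g. `sc.parallelize(data).filter(...).map(...)`, but the chained functional style is the same.

```python
from functools import reduce

data = [1, 2, 3, 4, 5, 6]

# Keep even numbers, square them, then sum the results --
# the filter/map/reduce trio mirrors three of Spark's RDD operators.
squares_of_evens = list(map(lambda x: x * x,
                            filter(lambda x: x % 2 == 0, data)))
total = reduce(lambda a, b: a + b, squares_of_evens)

print(squares_of_evens, total)  # → [4, 16, 36] 56
```

In Spark, `filter` and `map` are lazy transformations and `reduce` is an action that triggers the distributed computation.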

High-performance Data Analytics: According to Michael Greene, Vice President and General Manager of System Technologies and Optimization at Intel, Apache Spark delivers high-end, real-time big data analytics solutions to the IT industry, meeting the rising customer demand.

Incredible Features: Spark has separate libraries designed for different functions: MLlib for machine learning, Spark Streaming for streaming data processing, and GraphX for graph computations. It also includes Spark SQL, which handles SQL queries. The Spark framework can be deployed through Apache Mesos, through Apache Hadoop via YARN, or with its own standalone cluster manager, and it can read data from HDFS, HBase, Cassandra, and other sources.
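The idea behind Spark SQL, running plain SQL over a data set, can be illustrated without a Spark cluster using Python's stdlib `sqlite3` module. This is only an analogy: a Spark SQL user would register a DataFrame as a view and call `spark.sql("SELECT ...")`, but the query itself looks the same. The flight-delay table here is invented for illustration.

```python
import sqlite3

# Build a tiny in-memory table of (carrier, delay) rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE flights (carrier TEXT, delay_min INTEGER)")
conn.executemany("INSERT INTO flights VALUES (?, ?)",
                 [("AA", 10), ("AA", 30), ("BB", 5)])

# The same GROUP BY query a Spark SQL user would submit via spark.sql(...).
rows = conn.execute(
    "SELECT carrier, AVG(delay_min) FROM flights "
    "GROUP BY carrier ORDER BY carrier"
).fetchall()
print(rows)  # → [('AA', 20.0), ('BB', 5.0)]
```

Spark SQL runs this kind of query over distributed data sets instead of a single local database file.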

Other Advantages:

  • In Spark SQL, all Hive queries can run without any modification.
  • Spark is enabled with Shark, a combination of Hive and Spark: a fully Apache Hive-compatible data warehousing system that can run up to 100x faster than Hive.
  • Programs developed on Spark can run up to 100 times faster than equivalent Hadoop MapReduce programs.
  • Powerful caching and disk persistence capabilities
  • Interactive data analysis with the REPL (Read-Evaluate-Print Loop)
  • Faster batch analysis
  • Iterative algorithms
  • Real-time stream processing
  • Faster decision-making
  • Great flexibility
  • Its own cluster manager (the Spark standalone cluster manager)
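The caching point above is what makes iterative algorithms fast: Spark's `cache()`/`persist()` keeps an intermediate data set in memory so later passes do not recompute it. A minimal stdlib Python sketch of the same idea (compute once, reuse across iterations; the transform and data are hypothetical):

```python
compute_calls = 0  # counts how often the expensive step really runs

def expensive_transform(data):
    """Stand-in for a costly distributed transformation."""
    global compute_calls
    compute_calls += 1
    return [x * 2 for x in data]

data = [1, 2, 3]
cached = expensive_transform(data)  # analogous to rdd.cache()

# Five "iterations" of an algorithm all reuse the cached result.
results = [sum(cached) for _ in range(5)]

print(compute_calls, results)  # → 1 [12, 12, 12, 12, 12]
```

Without caching, each iteration would redo the transformation; with it, the cost is paid once, which is exactly why Spark shines on iterative workloads such as machine learning.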

Got a question for us? Mention it in the comments section and we will get back to you.

Related Posts:

Big Data Processing with Scala and Spark

Apache Spark and Scala Training

About Author
edureka
