30 Sep 2015

Is Hadoop A Necessity For Data Science?


With Hadoop serving as both a scalable data platform and computational engine, Data Science is now re-emerging as a centerpiece of enterprise innovation, with applied data solutions such as online product recommendation, automated fraud detection and customer sentiment analysis.

This video discusses the necessity of Hadoop for Data Science. It also covers the following topics:

  • What is Big Data & Hadoop?

  • What is a Data Product?

  • What is Data Science?

  • Why Hadoop for Data Science?

  • Is Hadoop a necessity for Data Science?

What is Big Data & Hadoop?

Big Data is a popular term used to describe the exponential growth of data. Big Data can be structured data, unstructured data, or a combination of both. It is an assortment of data so huge and complex that it becomes very tedious to capture, store, process, retrieve and analyze it using on-hand database management tools or traditional data processing techniques.

Hadoop:

Hadoop is a programming framework that supports the processing of large data sets in a distributed computing environment. It was among the first tools built to handle Big Data and remains one of the most widely used. Technically speaking, Hadoop is an open-source software framework that supports data-intensive distributed applications, licensed under the Apache v2 license. Hadoop was developed based on a paper originally published by Google on its MapReduce system, and it applies concepts of functional programming.
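
The functional-programming idea Hadoop borrows can be shown in a few lines of plain Python: a pure map step transforms each record independently (which is what makes it parallelizable across a cluster), and a reduce step folds the mapped values into one result. This is only an illustrative sketch of the concept, not Hadoop code; the page-size numbers are made up.

```python
from functools import reduce

# Hypothetical input: sizes of crawled pages in KB
page_sizes_kb = [120, 85, 300, 45]

# Map: transform each record independently (KB -> bytes).
# Because each element is handled in isolation, this step could
# run on many machines at once.
mapped = list(map(lambda kb: kb * 1024, page_sizes_kb))

# Reduce: fold the mapped values into a single result (total bytes).
# Addition is associative, so partial sums can be combined in any order.
total_bytes = reduce(lambda a, b: a + b, mapped)
```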

HDFS (Hadoop Distributed File System):

Apache HDFS is derived from GFS (Google File System) and forms the 'infrastructural' layer of Hadoop. Though HDFS is at present a subproject of Apache Hadoop, it was originally developed as infrastructure for the Apache Nutch web search engine project.

HDFS is a distributed and scalable file system designed for storing very large files with streaming data access patterns, running on clusters of commodity hardware.

The following are some of the assumptions and Goals/Objectives behind HDFS:

  • Large data sets

  • Write once, read many model

  • Streaming data access

  • Commodity hardware

  • Data replication and fault tolerance

  • High-throughput

  • Moving computation is better than moving data

  • File system namespace

HDFS works on these assumptions and goals in order to help the user access or process large data sets within an incredibly short period of time.
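
Two of these goals, fixed-size blocks and data replication, can be sketched in a few lines of Python. This is a toy simulation of the idea only, not the HDFS API: the block size, replication factor and DataNode names are all hypothetical (real HDFS defaults to 128 MB blocks and a replication factor of 3).

```python
BLOCK_SIZE = 8    # bytes, for illustration; HDFS defaults to 128 MB
REPLICATION = 3   # HDFS default replication factor

def split_into_blocks(data: bytes, block_size: int = BLOCK_SIZE):
    """Split a file's bytes into fixed-size blocks, as HDFS does on write."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

def place_replicas(blocks, datanodes, replication: int = REPLICATION):
    """Assign each block to `replication` distinct DataNodes (simple round-robin).
    A real NameNode also considers rack topology and free space."""
    return {
        i: [datanodes[(i + r) % len(datanodes)] for r in range(replication)]
        for i in range(len(blocks))
    }

blocks = split_into_blocks(b"write once, read many times")
placement = place_replicas(blocks, ["dn1", "dn2", "dn3", "dn4"])
```

If any one DataNode fails, every block still has two surviving copies, which is the source of HDFS's fault tolerance.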

MapReduce:

It all started with Google applying the concept of functional programming to solve the problem of managing large amounts of data on the internet. Google published its MapReduce paper in 2004, and Yahoo later stepped in to develop Hadoop in order to implement the MapReduce technique. The key components of MapReduce are the JobTracker, the TaskTrackers and the JobHistoryServer.
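
The classic MapReduce example is a word count, written here in the style of a Hadoop Streaming job, where the mapper and reducer are plain programs that read lines and emit tab-separated key/value records. This is a hedged, self-contained sketch: in a real cluster the JobTracker schedules these as map and reduce tasks, and Hadoop itself sorts the mapper output by key (the "shuffle") before the reducer sees it, which we imitate with `sorted()`.

```python
def mapper(lines):
    """Map task: emit one 'word<TAB>1' record per word."""
    for line in lines:
        for word in line.split():
            yield f"{word}\t1"

def reducer(sorted_records):
    """Reduce task: sum counts per word.
    Input must be sorted by key, as Hadoop guarantees after the shuffle."""
    current, total = None, 0
    for record in sorted_records:
        word, count = record.split("\t")
        if word != current:
            if current is not None:
                yield f"{current}\t{total}"
            current, total = word, 0
        total += int(count)
    if current is not None:
        yield f"{current}\t{total}"

# Hypothetical input split; sorted() stands in for Hadoop's shuffle phase.
sample = ["hadoop stores data", "data science needs data"]
output = list(reducer(sorted(mapper(sample))))
```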

Key to Hadoop’s power:

  • Reducing Time and Cost – Hadoop helps in dramatically reducing the Time and Cost of building large scale data products.

  • Computation is co-located with data – the storage and computation systems are co-designed to work together, so processing moves to the nodes where the data already resides.

  • Affordable at scale – Hadoop runs on 'commodity' hardware nodes, is self-healing, and excels at batch processing of large datasets.

  • Designed for one write and multiple reads – there are no random writes, and access is optimized for minimum seeking on hard drives.

What is a Data product?

A data product is a software system whose core functionality depends on the application of statistical analysis and machine learning to data – for example, an online product recommendation engine or an automated fraud-detection system.

What is Data Science?

The term 'data', needless to say, refers to information or knowledge, and the term 'science' holds the key role here: data science is the study of extracting knowledge from data. Signal processing, statistical learning, machine learning and computer programming are among the many fields that fall under data science.

In other words, Data Science does the following:

  • Extracting deep meaning from data
  • Building Data Products

Why Hadoop for Data Science?

Reason #1: Explore full datasets

Reason #2: Mining of larger datasets

Reason #3: Large-scale data preparation

Reason #4: Accelerate data-driven innovation

80% of data science work is data preparation

Watch the video for the demo.

Webinar presentation:

Questions asked during the webinar:

1. How do market researchers use Hadoop?

Hadoop can store purchase patterns and other behavioral patterns and help identify them. It plays a significant role in storing data at low cost while enabling extensive analysis.

2. What are the various analysis tools in Hadoop?

Hive, Pig and Impala are a few of the analysis tools in Hadoop.

3. What is a Schema on read in Hadoop?

Schema on read refers to a data-analysis strategy used in newer data-handling tools like Hadoop. With schema on read, data is matched against a plan or schema as it is pulled out of a stored location, rather than as it goes in (schema on write, the traditional database approach).
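
The contrast is easy to sketch: raw text is stored untyped, and field names and types are applied only at read time, much as Hive projects a table over raw files in HDFS. The product data and schema below are hypothetical, for illustration only.

```python
import csv
import io

# Raw data is stored as-is, with no schema enforced on write.
raw = "1001,hadoop shirt,19.99\n1002,data mug,7.50\n"

# The schema (names and types) is supplied only when reading.
schema = [("product_id", int), ("name", str), ("price", float)]

def read_with_schema(raw_text, schema):
    """Apply the schema while reading: each raw row becomes a typed record."""
    for row in csv.reader(io.StringIO(raw_text)):
        yield {field: cast(value) for (field, cast), value in zip(schema, row)}

records = list(read_with_schema(raw, schema))
```

The same raw file could later be read with a different schema, which is exactly the flexibility schema on read buys.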

4. Can Hadoop be used in Linux?

Yes, it can be used. Hadoop is designed to run on Linux by default.

Got a question for us? Please mention them in the comments section and we will get back to you.

Related posts:

Get Started With Data Science

Big Data & Hadoop Training

Applying Hadoop With Data Science

Introduction to HDFS & MapReduce
