The official definition of Apache Hadoop, as given by the Apache Software Foundation (http://hadoop.apache.org/#What+Is+Apache+Hadoop%3F):
> The Apache™ Hadoop® project develops open-source software for reliable, scalable, distributed computing.
>
> The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high-availability, the library itself is designed to detect and handle failures at the application layer, so delivering a highly-available service on top of a cluster of computers, each of which may be prone to failures.
I would say that the Hadoop ecosystem is neither a programming language nor a single service; it is a platform, or framework, for solving big data problems. You can think of it as a suite that encompasses a number of services for ingesting, storing, analyzing, and maintaining data:
- HDFS -> Hadoop Distributed File System (see the HDFS sketch after this list)
- YARN -> Yet Another Resource Negotiator
- MapReduce -> Data processing using a programming model (see the word-count sketch after this list)
- Spark -> In-memory data processing
- Pig, Hive -> Data processing services using SQL-like query languages
- HBase -> NoSQL database
- Mahout, Spark MLlib -> Machine learning
- Apache Drill -> SQL on Hadoop
- ZooKeeper -> Coordinating the cluster
- Oozie -> Job scheduling
- Flume, Sqoop -> Data ingestion services
- Solr & Lucene -> Searching and indexing
- Ambari -> Provisioning, monitoring, and maintaining the cluster
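
To make the HDFS entry concrete, here is a minimal sketch of writing and reading a file through Hadoop's standard `FileSystem` Java API. It assumes a configured cluster (`fs.defaultFS` set in core-site.xml on the classpath), and the path `/tmp/hello.txt` is made up for the example:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsExample {
    public static void main(String[] args) throws Exception {
        // Picks up fs.defaultFS from the configuration on the classpath,
        // so the same code runs against a local FS or a real cluster.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Write a small file; on HDFS it is split into blocks and
        // replicated across DataNodes. (The path is hypothetical.)
        Path file = new Path("/tmp/hello.txt");
        try (FSDataOutputStream out = fs.create(file, true)) {
            out.write("Hello, HDFS!\n".getBytes(StandardCharsets.UTF_8));
        }

        // Read the file back through the same API.
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(fs.open(file), StandardCharsets.UTF_8))) {
            System.out.println(reader.readLine());
        }
    }
}
```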
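
And to illustrate "data processing using a programming model", below is the classic word-count job, a lightly commented version of the standard example from the Hadoop MapReduce tutorial. The input and output paths are passed on the command line:

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map phase: emit (word, 1) for every word in the input split.
    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce phase: sum the counts emitted for each word.
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class); // local pre-aggregation on each mapper
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

On a running cluster, YARN is the layer that schedules the map and reduce tasks of a job like this across the worker nodes.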