All you need to know about Apache Hadoop & Big Data!

Aditya Jagtap
10 min read · Sep 17, 2020

In this post you will get clarity on the following aspects of Hadoop: its history, what Hadoop is, what its architecture looks like, how Hadoop works, what its core components are, and what its applications are.

Later on we'll explore Big Data, covering topics such as: what Big Data is, the 6 V's of Big Data, what comes under Big Data, its benefits, the technologies used in it, the challenges that come with Big Data, and the Big Data tools associated with Hadoop.

After covering the points stated above, we will look at the top 15 MNCs that use Hadoop & Big Data to store and manage their data.

Hadoop

History

Hadoop began with Doug Cutting & Mike Cafarella in 2002, when they started working on the Apache Nutch project, an effort to build a search engine system that could index 1 billion pages.

In 2003, they came across a paper published by Google describing the architecture of its distributed file system, GFS (Google File System), used for storing large datasets.

In 2004, Google published another paper, on the Map-Reduce technique, which was its solution for processing those large datasets. At that point GFS & Map-Reduce existed only as white papers, so Doug Cutting, together with Mike Cafarella, started implementing Google's techniques (GFS & Map-Reduce) as open source within the Apache Nutch project.

What is Apache Hadoop?

Hadoop is an Apache open-source framework written in Java that allows distributed processing of large datasets across clusters of computers using simple programming models.

The Hadoop framework works in an environment that provides distributed storage and computation across clusters of computers. Hadoop is designed to scale up from a single server to thousands of machines, each offering local computation and storage.

Architecture of Hadoop

The following is the architecture of Hadoop:

Hadoop has two major layers:

  • Map-Reduce: the processing/computation layer
  • Hadoop Distributed File System (HDFS): the storage layer

Map-Reduce

Map-Reduce is a parallel programming model, devised at Google, for writing distributed applications that efficiently process large amounts of data (multi-terabyte datasets) on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner. Map-Reduce programs run on top of the Hadoop framework.

Fig.: Map-Reduce
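
To make the model concrete, here is the classic word-count job written against Hadoop's Java MapReduce API. This is a minimal sketch rather than production code; the input and output paths are taken from the command line and are placeholders.

```java
// WordCount: the canonical Hadoop MapReduce example.
// Counts how many times each word appears in the input files.
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: emit (word, 1) for every token in the input split.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reduce phase: after the shuffle/sort, sum the counts for each word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // local pre-aggregation on the map side
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. an HDFS input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // must not already exist
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

The framework handles splitting the input, scheduling map and reduce tasks across the cluster, and re-running failed tasks; the application only supplies the map and reduce functions.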

Hadoop Distributed File System

The Hadoop Distributed File System (HDFS) is based on the Google File System (GFS) and provides a distributed file system that is designed to run on commodity hardware. It has many similarities with existing distributed file systems. However, the differences from other distributed file systems are significant. It is highly fault-tolerant and is designed to be deployed on low-cost hardware. It provides high throughput access to application data and is suitable for applications having large datasets.
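
As a small illustration of how an application talks to HDFS, the sketch below writes and reads a file through Hadoop's Java FileSystem API. It is a hedged example: the NameNode address and file path are placeholders, and a real deployment would normally pick up the cluster address from its configuration files.

```java
// Minimal HDFS client sketch using the org.apache.hadoop.fs API.
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class HdfsExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://namenode:9000"); // placeholder NameNode address

    try (FileSystem fs = FileSystem.get(conf)) {
      Path file = new Path("/user/demo/hello.txt");   // placeholder path

      // Write: the client streams bytes; HDFS splits them into blocks
      // and replicates each block across DataNodes.
      try (FSDataOutputStream out = fs.create(file, true)) {
        out.write("Hello, HDFS!\n".getBytes(StandardCharsets.UTF_8));
      }

      // Read: blocks are fetched from whichever DataNodes hold replicas.
      try (FSDataInputStream in = fs.open(file)) {
        IOUtils.copyBytes(in, System.out, 4096, false);
      }
    }
  }
}
```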

Apart from the two core components mentioned above, the Hadoop framework also includes the following two modules:

  • Hadoop Common: the Java libraries and utilities required by other Hadoop modules.
  • Hadoop YARN: a framework for job scheduling and cluster resource management.
Fig.: Hadoop Distributed File System (HDFS)

How does Hadoop work, and what are its core concepts?

It is quite expensive to build bigger servers with heavy configurations to handle large-scale processing. As an alternative, you can tie together many commodity single-CPU computers into a single functional distributed system; in practice, the clustered machines can read the dataset in parallel and provide much higher throughput, and the whole setup is cheaper than one high-end server. This is the first motivation behind Hadoop: it runs across clusters of low-cost machines.

Hadoop runs code across a cluster of computers. In Hadoop clusters, YARN sits between HDFS and the processing engines deployed by users. The resource manager uses a combination of containers, application coordinators and node-level monitoring agents to dynamically allocate cluster resources to applications and oversee the execution of processing jobs in a decentralized process. YARN supports multiple job scheduling approaches, including a first-in-first-out queue and several methods that schedule jobs based on assigned cluster resources.

The Hadoop 2.0 series of releases also added high availability and federation features for HDFS, support for running Hadoop clusters on Microsoft Windows servers and other capabilities designed to expand the distributed processing framework’s versatility for big data management and analytics.

Hadoop 3.0.0 was the next major version of Hadoop. Released by Apache in December 2017, it added a YARN Federation feature designed to enable YARN to support tens of thousands of nodes or more in a single cluster, up from the previous 10,000-node limit. The new version also included support for GPUs and erasure coding, an alternative to data replication that requires significantly less storage space.

This process includes the following core tasks that Hadoop performs:

  • Data is initially divided into directories and files. Files are divided into uniformly sized blocks of 128 MB or 64 MB (128 MB is the default in Hadoop 2.x and later; see the configuration sketch after this list).
  • These files are then distributed across various cluster nodes for further processing.
  • HDFS, which sits on top of each node's local file system, supervises this distributed storage.
  • Blocks are replicated for handling hardware failure.
  • Checking that the code was executed successfully.
  • Performing the sort that takes place between the map and reduce stages.
  • Sending the sorted data to a certain computer.
  • Writing the debugging logs for each job.
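
The block size and replication factor mentioned in the list above are ordinary configuration properties. The snippet below is a small sketch of setting them on the client side through Hadoop's Configuration API; the values shown are common defaults, not requirements, and in practice they are usually set cluster-wide in hdfs-site.xml.

```java
// Sketch: overriding HDFS block size and replication for a client/job.
import org.apache.hadoop.conf.Configuration;

public class BlockSettings {
  public static void main(String[] args) {
    Configuration conf = new Configuration();

    // 128 MB block size in bytes (the HDFS default in Hadoop 2.x and later).
    conf.setLong("dfs.blocksize", 128L * 1024 * 1024);

    // Store each block on three DataNodes so hardware failures can be tolerated.
    conf.setInt("dfs.replication", 3);

    System.out.println("Block size:  " + conf.getLong("dfs.blocksize", 0));
    System.out.println("Replication: " + conf.getInt("dfs.replication", 0));
  }
}
```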

Core components of Hadoop

Fig.: Hadoop core components

Advantages of Hadoop

  • The Hadoop framework allows the user to quickly write and test distributed systems. It is efficient, automatically distributing the data and work across the machines and, in turn, utilizing the underlying parallelism of the CPU cores.
  • Hadoop does not rely on hardware to provide fault-tolerance and high availability (FTHA), rather Hadoop library itself has been designed to detect and handle failures at the application layer.
  • Servers can be added or removed from the cluster dynamically and Hadoop continues to operate without interruption.
  • Another big advantage of Hadoop is that, apart from being open source, it is compatible with all platforms since it is Java-based.

Big Data

Due to the advent of new technologies, devices, and communication means like social networking sites, the amount of data produced by mankind is growing rapidly every year. The amount of data produced from the beginning of time till 2003 was 5 billion gigabytes; piled up in the form of disks, it could fill an entire football field. The same amount was created every two days in 2011, and every ten minutes in 2013, and this rate is still growing enormously. Though all this information is meaningful and can be useful when processed, much of it is neglected.

Around 90% of the world's data was generated in the last few years, which is far more than traditional storage devices can handle.

What is Big Data ?

Big data is a collection of large datasets that cannot be processed using traditional computing techniques. It is not a single technique or a tool, rather it has become a complete subject, which involves various tools, techniques and frameworks.

6 V's of Big Data

Big Data is commonly characterized by six V's: Volume, Velocity, Variety, Veracity, Value, and Variability.

Fig.: 6 V's of Big Data

What comes under Big Data?

Big data involves the data produced by different devices and applications. Given below are some of the fields that come under the umbrella of Big Data.

  • Black Box Data: a component of helicopters, airplanes, jets, etc. It captures the voices of the flight crew, recordings from microphones and earphones, and the performance information of the aircraft.
  • Social Media Data: social media such as Facebook and Twitter hold information and views posted by millions of people across the globe.
  • Stock Exchange Data: holds information about the 'buy' and 'sell' decisions made by customers on the shares of different companies.
  • Power Grid Data: holds information about the power consumed by a particular node with respect to a base station.
  • Transport Data: includes the model, capacity, distance and availability of a vehicle.
  • Search Engine Data: search engines retrieve lots of data from different databases.
Fig.: Big Data

Thus Big Data involves huge volume, high velocity, and an extensible variety of data. The data involved is of three types:

  • Structured data: relational data.
  • Semi-structured data: XML data.
  • Unstructured data: Word documents, PDFs, text, media logs.

Benefits of Big Data

  • Using the information kept in social networks like Facebook, marketing agencies learn about the response to their campaigns, promotions, and other advertising media.
  • Using information from social media, such as the preferences and product perceptions of their consumers, product companies and retail organizations plan their production.
  • Using data from the previous medical history of patients, hospitals provide better and quicker service.

Big Data Technologies

Big data technologies are important in providing more accurate analysis, which may lead to more concrete decision-making resulting in greater operational efficiencies, cost reductions, and reduced risks for the business.

To harness the power of big data, you would require an infrastructure that can manage and process huge volumes of structured and unstructured data in real-time and can protect data privacy and security.

There are various technologies in the market from different vendors, including Amazon, IBM, Microsoft, etc., to handle big data. When looking into them, we examine the following two classes of technology:

1. Operational Big Data

This includes systems, like MongoDB, that provide operational capabilities for real-time, interactive workloads where data is primarily captured and stored. NoSQL Big Data systems are designed to take advantage of new cloud computing architectures that have emerged over the past decade to allow massive computations to be run inexpensively and efficiently. This makes operational big data workloads much easier to manage, and cheaper and faster to implement. Some NoSQL systems can provide insights into patterns and trends based on real-time data with minimal coding and without the need for data scientists and additional infrastructure.

2. Analytical Big Data

This includes systems such as Massively Parallel Processing (MPP) databases and Map-Reduce, which provide analytical capabilities for retrospective, complex analysis that may touch most or all of the data. Map-Reduce offers a method of analyzing data that is complementary to the capabilities provided by SQL, and a system based on Map-Reduce can be scaled up from a single server to thousands of high- and low-end machines. These two classes of technology are complementary and frequently deployed together.

Big Data Challenges

The major challenges associated with big data are as follows:

  • Capturing data
  • Curation
  • Storage
  • Searching
  • Sharing
  • Transfer
  • Analysis
  • Presentation

Big data tools associated with Hadoop

The ecosystem that has been built up around Hadoop includes a range of other open source technologies that can complement and extend its basic capabilities. The list of related big data tools includes these examples:

  • Apache Flume, a tool used to collect, aggregate and move large amounts of streaming data into HDFS
  • Apache HBase, a distributed database that’s often paired with Hadoop
  • Apache Hive, a SQL-on-Hadoop tool that provides data summarization, query and analysis (see the query example after this list)
  • Apache Oozie, a server-based workflow scheduling system to manage Hadoop jobs
  • Apache Phoenix, a SQL-based massively parallel processing database engine that uses HBase as its data store
  • Apache Pig, a high-level platform for creating programs that run on Hadoop clusters
  • Apache Sqoop, a tool to help transfer bulk data between Hadoop and structured data stores, such as relational databases
  • Apache ZooKeeper, a configuration, synchronization and naming registry service for large distributed systems.
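
As one example of how these tools are used from application code, the sketch below queries Hive from Java over JDBC. It assumes a running HiveServer2 instance and the hive-jdbc driver on the classpath; the host, port, and table name are placeholders for illustration only.

```java
// Hedged sketch: running a HiveQL query from Java via the HiveServer2 JDBC driver.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveQueryExample {
  public static void main(String[] args) throws Exception {
    // Placeholder HiveServer2 address and default database.
    String url = "jdbc:hive2://hiveserver:10000/default";

    try (Connection conn = DriverManager.getConnection(url, "", "");
         Statement stmt = conn.createStatement();
         // Hive compiles this SQL into jobs that run on the Hadoop cluster.
         ResultSet rs = stmt.executeQuery(
             "SELECT word, COUNT(*) AS cnt FROM words GROUP BY word")) {
      while (rs.next()) {
        System.out.println(rs.getString("word") + "\t" + rs.getLong("cnt"));
      }
    }
  }
}
```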

Top 15 MNCs that use Hadoop & Big Data

  • Prolifics
  • Clairvoyant
  • ScienceSoft
  • Xplenty
  • IBM
  • HP Enterprise
  • Teradata
  • Oracle
  • SAP
  • EMC
  • Amazon
  • Microsoft
  • Google
  • VMware
  • Splunk

Conclusion

As we all know, the need to store data never ends. Even if you store data in the traditional way, i.e., on Hard Disk Drives, Solid State Drives, etc., you will never be able to keep up with the data generated in today's day-to-day life, and this same issue is faced by all the big MNCs.

This post covers the basics & challenges of today's trending technologies, Hadoop & Big Data, along with 15 top companies that use them.

