What is Hadoop MapReduce and How Does it Work?

Introduction

Hadoop MapReduce is a software framework for distributed processing of large data sets on computer clusters. It is an open-source implementation of the MapReduce programming model, which was originally developed by Google for processing and generating large data sets with a parallel, distributed algorithm on a cluster. The MapReduce framework takes care of scheduling tasks, monitoring them, and re-executing failed tasks. It also manages the data transfer between the different nodes in the cluster.

MapReduce works by breaking down a large data set into smaller chunks, which are then processed in parallel by multiple nodes in the cluster. Each node processes a subset of the data and produces a partial result. The partial results are then combined to produce the final result. The intermediate step that transfers and groups the map output before it is combined is known as “shuffling”.

The MapReduce framework is designed to scale up from single servers to thousands of machines, each offering local computation and storage. It is highly fault-tolerant and can detect and handle failures at the application layer. The framework also provides a way to manage all the computing resources in the cluster.

MapReduce is a processing module in the Apache Hadoop project. Hadoop is a platform built to tackle big data using a network of computers to store and process data.

What is so attractive about Hadoop is that affordable dedicated servers are enough to run a cluster. You can use low-cost consumer hardware to handle your data.

Hadoop is highly scalable. You can start with a single machine and later expand your cluster to thousands of servers as your needs grow. The two major default components of this software library are:

  • MapReduce
  • HDFS – the Hadoop Distributed File System

In this article, we will talk about the first of the two modules. You will learn what MapReduce is, how it works, and the basic Hadoop MapReduce terminology.

What is Hadoop MapReduce?

Hadoop MapReduce’s programming model facilitates the processing of big data stored on HDFS.

By using the resources of multiple interconnected machines, MapReduce effectively handles a large amount of structured and unstructured data.

[Diagram: MapReduce and HDFS with input and output.]

Before Spark and other modern frameworks, this platform was the only player in the field of distributed big data processing.

MapReduce assigns fragments of data across the nodes in a Hadoop cluster. The goal is to split a dataset into chunks and use an algorithm to process those chunks at the same time. The parallel processing on multiple machines greatly increases the speed of handling even petabytes of data.

Distributed Data Processing Apps

The framework allows you to write applications for distributed data processing. Most programmers use Java, since Hadoop itself is written in Java.

However, you can write MapReduce apps in other languages, such as Ruby or Python, using Hadoop Streaming. No matter what language a developer uses, there is no need to worry about the hardware that the Hadoop cluster runs on.

Scalability

Hadoop infrastructure can employ enterprise-grade servers, as well as commodity hardware. The creators of MapReduce had scalability in mind. There is no need to rewrite an application if you add more machines. Simply change the cluster setup, and MapReduce continues working with no disruptions.

What makes MapReduce so efficient is that it runs on the same nodes as HDFS. The scheduler assigns tasks to nodes where the data already resides. Operating in this manner increases available throughput in a cluster.

How MapReduce Works

At a high level, MapReduce breaks input data into fragments and distributes them across different machines.

The input fragments consist of key-value pairs. Parallel map tasks process the chunked data on machines in a cluster. The mapping output then serves as input for the reduce stage. The reduce task combines the result into a particular key-value pair output and writes the data to HDFS.

The Hadoop Distributed File System usually runs on the same set of machines as the MapReduce software. When the framework executes a job on the nodes that also store the data, the time to complete the tasks is reduced significantly.

Basic Terminology of Hadoop MapReduce

As we mentioned above, MapReduce is a processing layer in a Hadoop environment. MapReduce works on tasks related to a job. The idea is to tackle one large request by slicing it into smaller units.

JobTracker and TaskTracker

In the early days of Hadoop (version 1), JobTracker and TaskTracker daemons ran operations in MapReduce. At the time, a Hadoop cluster could only support MapReduce applications.

A JobTracker controlled the distribution of application requests to the compute resources in a cluster. Since it monitored the execution and the status of MapReduce, it resided on a master node.

A TaskTracker processed the requests that came from the JobTracker. All task trackers were distributed across the slave nodes in a Hadoop cluster.

[Diagram: JobTracker and TaskTracker in Hadoop 1 MapReduce.]

YARN

Later, in Hadoop version 2 and above, YARN became the main resource and scheduling manager, hence the name Yet Another Resource Negotiator. YARN also works with frameworks other than MapReduce for distributed processing in a Hadoop cluster.

MapReduce Job

A MapReduce job is the top unit of work in the MapReduce process. It is an assignment that Map and Reduce processes need to complete. A job is divided into smaller tasks over a cluster of machines for faster execution.

The tasks should be big enough to justify the task handling time. If you divide a job into unusually small segments, the total time to prepare the splits and create tasks may outweigh the time needed to produce the actual job output.
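If the default splits turn out too small for your data, the split size can be nudged upward. The sketch below is illustrative only: it assumes the standard FileInputFormat of the org.apache.hadoop.mapreduce API, and the 128 MB figure and the SplitSizeExample class name are placeholders, not recommendations.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class SplitSizeExample {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "split-size-demo");

        // Ask FileInputFormat for splits of at least 128 MB so that each map
        // task does enough work to outweigh its startup and scheduling cost.
        // The 128 MB value is a placeholder; tune it for your own data.
        FileInputFormat.setMinInputSplitSize(job, 128L * 1024 * 1024);
    }
}
```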

MapReduce Task

MapReduce jobs have two types of tasks.

A Map Task is a single instance of the map function, applied to one split of the input data. These tasks determine which records to process from a data block. The input data is split and analyzed, in parallel, on the assigned compute resources in a Hadoop cluster. This step of a MapReduce job prepares the <key, value> pair output for the reduce step.

A Reduce Task processes an output of a map task. Similar to the map stage, all reduce tasks occur at the same time, and they work independently. The data is aggregated and combined to deliver the desired output. The final result is a reduced set of <key, value> pairs which MapReduce, by default, stores in HDFS.

[Diagram: A MapReduce job with map tasks and reduce tasks.]

Note: The Reduce stage is not always necessary. Some MapReduce jobs do not require the combining of data from the map task outputs. These MapReduce Applications are called map-only jobs.
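A map-only job is declared by setting the number of reduce tasks to zero. Below is a minimal sketch using the standard Job API; the class and job names are placeholders.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class MapOnlyJobExample {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "map-only-demo");

        // With zero reducers the shuffle and reduce steps are skipped and
        // the map output is written straight to the job output directory.
        job.setNumReduceTasks(0);
    }
}
```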

The Map and Reduce stages have two parts each.

The Map part first deals with the splitting of the input data that gets assigned to individual map tasks. Then, the mapping function creates the output in the form of intermediate key-value pairs.

The Reduce stage has a shuffle and a reduce step. Shuffling takes the map output and creates a list of related key-value-list pairs. Then, reducing aggregates the results of the shuffling to produce the final output that the MapReduce application requested.

How Hadoop Map and Reduce Work Together

As the name suggests, MapReduce works by processing input data in two stages – Map and Reduce. To demonstrate this, we will use a simple example with counting the number of occurrences of words in each document.

The final output we are looking for is: How many times the words Apache, Hadoop, Class, and Track appear in total in all documents.

For illustration purposes, the example environment consists of three nodes. The input contains six documents distributed across the cluster. We will keep it simple here, but in real circumstances, there is no limit. You can have thousands of servers and billions of documents.

[Diagram: MapReduce example when processing data.]

1. First, in the map stage, the input data (the six documents) is split and distributed across the cluster (the three servers). In this case, each map task works on a split containing two documents. During mapping, there is no communication between the nodes. They perform independently.

2. Then, map tasks create a <key, value> pair for every word. These pairs show how many times a word occurs. A word is a key, and a value is its count. For example, one document contains three of the four words we are looking for: Apache 7 times, Class 8 times, and Track 6 times. The key-value pairs in one map task output look like this:

  • <apache, 7>
  • <class, 8>
  • <track, 6>

This process runs as parallel tasks on all nodes for all documents, and each map task produces its own output.
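To make the map stage concrete, here is a minimal mapper sketch in Java that follows the classic Hadoop WordCount pattern. It emits <word, 1> for every occurrence; per-document totals such as <apache, 7> above are what you would see once an optional combiner pre-sums this output on each node. The WordCountMapper class name is our own.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Map task: reads one line of a document at a time and emits <word, 1>
// for every token. The framework later groups these pairs by key.
public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        StringTokenizer tokens = new StringTokenizer(line.toString().toLowerCase());
        while (tokens.hasMoreTokens()) {
            word.set(tokens.nextToken());
            context.write(word, ONE);   // intermediate <key, value> pair
        }
    }
}
```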

3. After input splitting and mapping complete, the outputs of every map task are shuffled. This is the first step of the Reduce stage. Since we are looking for the frequency of occurrence for four words, there are four parallel Reduce tasks. The reduce tasks can run on the same nodes as the map tasks, or they can run on any other node.

The shuffle step ensures the keys Apache, Hadoop, Class, and Track are sorted for the reduce step. This process groups the values by keys in the form of <key, value-list> pairs.

4. In the reduce step of the Reduce stage, each of the four tasks processes a <key, value-list> to produce a final key-value pair. The reduce tasks also happen at the same time and work independently.
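A matching reducer for this example is sketched below. It receives one word together with the list of counts the shuffle collected for that word and sums them into a single <word, total> pair; the WordCountReducer class name is again our own.

```java
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Reduce task: receives <word, [count, count, ...]> after the shuffle and
// writes one <word, total> pair per key to the job output.
public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable total = new IntWritable();

    @Override
    protected void reduce(Text word, Iterable<IntWritable> counts, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable count : counts) {
            sum += count.get();
        }
        total.set(sum);
        context.write(word, total);   // e.g. <apache, 22>
    }
}
```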

In our example from the diagram, the reduce tasks get the following individual results:

  • <apache, 22>
  • <hadoop, 20>
  • <class, 18>
  • <track, 22>

Note: The MapReduce process is not strictly sequential. The Reduce stage does not have to wait for all map tasks to complete. Once a map output is available, a reduce task can begin copying it.

5. Finally, the data in the Reduce stage is grouped into one output. MapReduce now shows us how many times the words Apache, Hadoop, Class, and Track appeared in all documents. The aggregate data is, by default, stored in HDFS.
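A driver ties the two task types together and submits the job. The sketch below assumes the WordCountMapper and WordCountReducer classes shown earlier and takes the HDFS input and output paths as command-line arguments.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCountDriver.class);

        job.setMapperClass(WordCountMapper.class);
        job.setCombinerClass(WordCountReducer.class); // optional local pre-aggregation
        job.setReducerClass(WordCountReducer.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));   // documents in HDFS
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // results written to HDFS

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Packaged into a JAR, a driver like this is typically launched with the hadoop jar command.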

The example we used here is a basic one. MapReduce performs much more complicated tasks.

Some of the use cases include:

  • Turning Apache logs into tab-separated values (TSV).
  • Determining the number of unique IP addresses in weblog data.
  • Performing complex statistical modeling and analysis.
  • Running machine-learning algorithms using different frameworks, such as Mahout.

How Hadoop Partitions Map Input Data

The partitioner decides which reduce task receives each piece of map output. Once MapReduce splits the data into chunks and assigns them to map tasks, the framework partitions the key-value data. This process takes place as the map output is produced, before it is handed to the reducers.

MapReduce partitions and sorts the map output based on the key. All values for an individual key end up in the same partition, so each reducer receives a list of the values associated with each of its keys. By sending all values of a single key to the same reducer, the partitioner keeps related data together; the default hash partitioner also aims to spread keys roughly evenly across the reducers.

Note: The map output is split into as many partitions as there are reducers, so the number of map output partitions depends on the configured number of reducers. That number is defined in the job configuration, for example through Job.setNumReduceTasks() or the mapreduce.job.reduces property.

The default partitioner is well suited to many use cases, but you can reconfigure how MapReduce partitions data.

If you happen to use a custom partitioner, make sure that the size of the data prepared for every reducer is roughly the same. When you partition data unevenly, one reduce task can take much longer to complete. This would slow down the whole MapReduce job.
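For illustration, here is a sketch of what a custom partitioner could look like. The FirstLetterPartitioner class is our own invention: it routes words to reducers by their first letter, whereas Hadoop's default hash partitioner uses the key's hash code modulo the number of reducers. A rule like this can easily skew the load, which is exactly the pitfall described above.

```java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Custom partitioner: all words that start with the same letter go to the
// same reducer. Register it with job.setPartitionerClass(FirstLetterPartitioner.class).
public class FirstLetterPartitioner extends Partitioner<Text, IntWritable> {

    @Override
    public int getPartition(Text key, IntWritable value, int numPartitions) {
        if (key.getLength() == 0) {
            return 0;
        }
        char first = Character.toLowerCase(key.toString().charAt(0));
        // char values are non-negative, so the modulo stays within [0, numPartitions).
        return first % numPartitions;
    }
}
```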

Conclusion

The challenge with handling big data was that traditional tools were not ready to deal with the volume and complexity of the input data. That is where Hadoop MapReduce came into play.

The benefits of using MapReduce include parallel computing, error handling, fault tolerance, logging, and reporting. This article provided a starting point for understanding how MapReduce works and introduced its basic concepts.
