MapReduce in Hadoop

How does MapReduce relate to Hadoop?

The MapReduce programming model, originally designed at Google, provides a clean abstraction between large-scale data analysis tasks and the underlying systems challenges of running reliable computation at that scale.
By adhering to the MapReduce model, your data processing job can be easily parallelized, and you as the programmer do not have to think about system-level details such as synchronization, concurrency, or hardware failure. The scalability of any MapReduce implementation is instead limited by the underlying system that stores your data and feeds it to MapReduce.
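As a minimal sketch of what this looks like in practice (assuming Hadoop's standard org.apache.hadoop.mapreduce API and the classic word-count example), the programmer writes only the map and reduce functions; splitting the input, scheduling tasks across the cluster, and retrying failed tasks are left to the framework. The driver code that configures and submits the job is omitted here for brevity.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    public class WordCount {

        // map: (byte offset, line of text) -> (word, 1) for every word in the line
        public static class TokenizerMapper
                extends Mapper<LongWritable, Text, Text, IntWritable> {
            private final static IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    context.write(word, ONE);
                }
            }
        }

        // reduce: (word, [1, 1, ...]) -> (word, total count)
        public static class IntSumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {
            private final IntWritable result = new IntWritable();

            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable val : values) {
                    sum += val.get();
                }
                result.set(sum);
                context.write(key, result);
            }
        }
    }
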

Functional techniques can help you write more declarative code that is easier to understand at a glance, refactor, and test; it is often much simpler to express a computation with map, filter, or reduce operations than with explicit loops. A MapReduce job applies the same idea in two phases. The map job takes a set of data and converts it into another set of data, in which individual elements are broken down into key/value pairs (tuples). The reduce job takes the output of the map job as input and combines those tuples into a smaller set of tuples. As the order of the name MapReduce implies, the reduce job is always performed after the map job.
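
A toy illustration of the two phases (plain Java streams on an in-memory list, not Hadoop code; the word list is made up for illustration): the map step emits a (word, 1) pair for each element, and the reduce step combines the values of pairs that share a key into a single count.

    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    public class MapThenReduce {
        public static void main(String[] args) {
            List<String> words =
                    List.of("apple", "banana", "apple", "cherry", "banana", "apple");

            // "Map" phase: break each element into a (key, value) pair -> (word, 1).
            // "Reduce" phase: merge all pairs sharing a key into a smaller set -> (word, count).
            Map<String, Integer> counts = words.stream()
                    .collect(Collectors.toMap(
                            w -> w,          // key: the word itself
                            w -> 1,          // value: one occurrence
                            Integer::sum));  // reduce: sum the values for identical keys

            System.out.println(counts);      // e.g. {apple=3, banana=2, cherry=1} (order may vary)
        }
    }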