How can we delete intermediate data while chaining multiple MapReduce jobs in Hadoop?


You use JobClient.runJob(). The output path of the first job becomes the input path of the second job. These paths need to be passed to your jobs as arguments, along with code to parse them and set up the job parameters.

This is how the older mapred API does it, but it should still work. The newer mapreduce API has an equivalent mechanism: configure each org.apache.hadoop.mapreduce.Job and run them in turn with Job.waitForCompletion().
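
For illustration, a minimal sketch of such a driver; the argument layout and names here are my own, not from the original answer:

// Hypothetical driver: the three paths arrive as command-line arguments.
public static void main(String[] args) throws Exception {
    String input  = args[0];  // input directory of the first job
    String temp   = args[1];  // output of job 1, input of job 2
    String output = args[2];  // final output of the second job
    // ...build a JobConf for each job around these paths and run them
    // in order (see the "Cascading jobs" section below).
}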

As for removing intermediate data after a job has finished: you can do this in your own code. The way I've done it before is with something like:

FileSystem.delete(Path f, boolean recursive);

where the path is the location of the data on HDFS. Make sure that you only delete this data once no other job requires it.
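
As a sketch, assuming the intermediate output lives in a directory such as "temp" (the directory name and helper method are hypothetical):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Delete an intermediate directory once no remaining job needs it.
static void deleteIntermediate(Configuration conf, String dir) throws IOException {
    FileSystem fs = FileSystem.get(conf);
    Path p = new Path(dir);
    if (fs.exists(p)) {
        fs.delete(p, true);  // recursive = true removes the whole directory tree
    }
}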

Cascading jobs
1. Create a JobConf object "job1" for the first job and set all the parameters, with "input" as the input directory and "temp" as the output directory. Execute this job: JobClient.runJob(job1).
Immediately below it, create a JobConf object "job2" for the second job and set all the parameters, with "temp" as the input directory and "output" as the output directory. Execute this job: JobClient.runJob(job2). A sketch follows.
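
Put together, this first approach might look like the following (mapper/reducer setup elided; the driver class name is hypothetical):

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class CascadeDriver {
    public static void main(String[] args) throws Exception {
        JobConf job1 = new JobConf(CascadeDriver.class);
        // ...set mapper, reducer, key/value classes for job 1 here
        FileInputFormat.setInputPaths(job1, new Path("input"));
        FileOutputFormat.setOutputPath(job1, new Path("temp"));
        JobClient.runJob(job1);  // blocks until job 1 completes

        JobConf job2 = new JobConf(CascadeDriver.class);
        // ...set mapper, reducer, key/value classes for job 2 here
        FileInputFormat.setInputPaths(job2, new Path("temp"));
        FileOutputFormat.setOutputPath(job2, new Path("output"));
        JobClient.runJob(job2);  // blocks until job 2 completes

        // "temp" is no longer needed once job 2 has finished.
        FileSystem.get(job2).delete(new Path("temp"), true);
    }
}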

2. Create two JobConf objects and set all the parameters in them just as in (1), except that you don't call JobClient.runJob().
Then create two Job objects, with the JobConfs as constructor parameters: Job job1 = new Job(jobconf1); Job job2 = new Job(jobconf2);
Using a JobControl object, specify the job dependencies and then run the jobs: JobControl jbcntrl = new JobControl("jbcntrl"); jbcntrl.addJob(job1); jbcntrl.addJob(job2); job2.addDependingJob(job1); jbcntrl.run();
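
A sketch of this JobControl variant; note that JobControl.run() polls in a loop, so a common pattern is to start it in its own thread and stop it once everything has finished:

import org.apache.hadoop.mapred.jobcontrol.Job;
import org.apache.hadoop.mapred.jobcontrol.JobControl;

// jobconf1 and jobconf2 are the fully configured JobConf objects from (1).
Job job1 = new Job(jobconf1);
Job job2 = new Job(jobconf2);
job2.addDependingJob(job1);  // job2 will not start until job1 succeeds

JobControl jbcntrl = new JobControl("jbcntrl");
jbcntrl.addJob(job1);
jbcntrl.addJob(job2);

Thread runner = new Thread(jbcntrl);  // JobControl implements Runnable
runner.start();
while (!jbcntrl.allFinished()) {
    Thread.sleep(500);  // poll until both jobs are done
}
jbcntrl.stop();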

If you need a structure somewhat like Map+ | Reduce | Map*, you can use the ChainMapper and ChainReducer classes that ship with Hadoop 0.19 and later.
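
A sketch based on the pattern in the ChainMapper javadoc, with hypothetical mapper/reducer classes AMap, BMap, Reduce, and CMap (each stage gets its own private JobConf):

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.ChainMapper;
import org.apache.hadoop.mapred.lib.ChainReducer;

JobConf job = new JobConf(ChainDriver.class);
// Map+ : two chained mappers run before the reducer
ChainMapper.addMapper(job, AMap.class, LongWritable.class, Text.class,
                      Text.class, Text.class, true, new JobConf(false));
ChainMapper.addMapper(job, BMap.class, Text.class, Text.class,
                      LongWritable.class, Text.class, true, new JobConf(false));
// Reduce : the single reducer of the chain
ChainReducer.setReducer(job, Reduce.class, LongWritable.class, Text.class,
                        Text.class, Text.class, true, new JobConf(false));
// Map* : a mapper that post-processes the reducer output
ChainReducer.addMapper(job, CMap.class, Text.class, Text.class,
                       LongWritable.class, Text.class, true, new JobConf(false));
JobClient.runJob(job);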