Saturday, December 25, 2010

How to dynamically assign reducers to a Hadoop Job in runtime

When we were setting up our production jobs we reached a point where we needed a way to dynamically assign reducers to a Hadoop job at runtime. Following is what we have implemented as of today. If you are reading this blog and have a better idea to share, please leave a comment.
We have a lot of cases where the same production job works on data sets of very different sizes, i.e. in a given hour the job can process anywhere from 35GB to 400GB of data. We wanted to change the number of reducers at runtime depending on the data set, and we also did not want any one job to hog all the reducers, because the grid was shared between jobs from different teams and all jobs were equally important.
This is something that the Hadoop MR framework cannot do for you. We found that Hive came up with a solution by providing a property (hive.exec.reducers.bytes.per.reducer) that limits the maximum number of bytes processed by one reducer. If this property is set to 1GB and the data size is 1.2GB, then 2 reducers will be assigned to the job at runtime; if the data size is 10GB, then 10 reducers will be assigned. This works great for Hive because it is designed to work agnostic of the data set. However, a big job in Hive can still take all the reducers on the grid. Since we knew our data very well, and since we did not want a single job to take all the reducers, we decided to implement our own solution.
The solution is very simple. We asked all the job owners to run their jobs in the sandbox on a fixed set of data and provide us either of the following:

1.       Required: Input size of the data in MB
2.       Required: Output size of the data in MB
3.       Required: Whether the reducer workload is CPU bound or I/O bound
4.       Optional: A decimal number, the multiplier, to fine-tune the number of reducers (if not provided then 1 will be used)

Or, alternatively:

1.       Required: A fixed number of reducers (which has to be less than TotalReducersOnTheGrid/2)

For CPU bound jobs, the total number of reducers was calculated as:
Total Reducers = Minimum((InputSizeOfDataInMB/128MB) * multiplier, TotalReducersOnTheGrid/2)
i.e. the total number of reducers is either the input data size divided by the HDFS block size, multiplied by the multiplier, or half the total number of reducers available on the grid, whichever is smaller.
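The CPU-bound calculation can be sketched as plain Java. The class and method names below are illustrative, not from any Hadoop API, and the 128MB block size is hard-coded as in the formula; a real driver would pass the result to job.setNumReduceTasks().

```java
// Sketch of the CPU-bound reducer formula above. Names are illustrative.
public class CpuBoundReducerCalc {
    static final double BLOCK_SIZE_MB = 128.0; // HDFS block size assumed by the formula

    // Minimum((inputSizeMb / 128MB) * multiplier, totalReducersOnGrid / 2)
    public static int reducers(long inputSizeMb, double multiplier, int totalReducersOnGrid) {
        long byInput = Math.round((inputSizeMb / BLOCK_SIZE_MB) * multiplier);
        int cap = totalReducersOnGrid / 2; // no job may take more than half the grid
        // Never ask for fewer than one reducer
        return (int) Math.max(1, Math.min(byInput, cap));
    }

    public static void main(String[] args) {
        // A 35GB hour on a grid with 1000 reducer slots -> 280 reducers
        System.out.println(reducers(35 * 1024, 1.0, 1000));
        // A 400GB hour hits the cap of half the grid -> 500 reducers
        System.out.println(reducers(400 * 1024, 1.0, 1000));
    }
}
```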

For I/O bound jobs, the total number of reducers was calculated using the following formula:
Total Reducers = Minimum((OutputDataSize/InputDataSize) * (InputSizeOfDataInMB/128MB) * multiplier, TotalReducersOnTheGrid/2)
The concept of the multiplier was introduced to tune jobs for which the generic formula alone did not produce a good number of reducers. We also found that some jobs always required an exact number of reducers regardless of the size of the data set, so we gave job owners a way to specify that as well.
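The I/O-bound formula scales the CPU-bound estimate by the output-to-input ratio, so jobs that write much less than they read get proportionally fewer reducers. A minimal sketch, with illustrative names, written term-by-term to mirror the formula above:

```java
// Sketch of the I/O-bound reducer formula above. Names are illustrative.
public class IoBoundReducerCalc {
    static final double BLOCK_SIZE_MB = 128.0; // HDFS block size assumed by the formula

    // Minimum((outputMb/inputMb) * (inputMb/128MB) * multiplier, grid/2)
    public static int reducers(long inputSizeMb, long outputSizeMb,
                               double multiplier, int totalReducersOnGrid) {
        double outputRatio = outputSizeMb / (double) inputSizeMb;
        long n = Math.round(outputRatio * (inputSizeMb / BLOCK_SIZE_MB) * multiplier);
        int cap = totalReducersOnGrid / 2;
        return (int) Math.max(1, Math.min(n, cap));
    }

    public static void main(String[] args) {
        // 100GB in, 25GB out: the job writes a quarter of what it reads,
        // so it gets 200 reducers instead of 800 on a 1000-slot grid.
        System.out.println(reducers(100 * 1024, 25 * 1024, 1.0, 1000));
    }
}
```

Note that mathematically the input size cancels out of the first two terms, but keeping both makes the relationship to the CPU-bound formula explicit.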
This pretty much solved most of our problems.

