Saturday, December 25, 2010

How to dynamically assign reducers to a Hadoop Job in runtime

When we were working on setting up our production jobs, we reached a point where we needed a way to dynamically assign reducers to a Hadoop job at runtime. The following is what we have implemented as of today. If you are reading this blog and have a better idea to share, please leave a comment.
PROBLEM STATEMENT
We have many cases where the same production job works on data sets of very different sizes; in a given hour the job can process anywhere from 35GB to 400GB of data. We wanted to change the number of reducers at runtime depending on the data set, and we also did not want any one job to hog all the reducers, because the grid was shared between jobs from different teams and all jobs were equally important.
SOLUTION
This is something that the Hadoop MapReduce framework cannot do for you. We found that Hive solves it by providing a property that limits the maximum number of bytes processed by one reducer: if the property is set to 1GB and the data size is 1.2GB, then 2 reducers are assigned to the job at runtime; if the data size is 10GB, then 10 reducers are assigned. This works well for Hive because it is designed to be agnostic of the data set, but a big Hive job can still take all the reducers in the grid. Since we knew our data very well and did not want a single job to take all the reducers, we decided to implement our own solution.
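(For reference, Hive exposes this limit as the hive.exec.reducers.bytes.per.reducer property.) A minimal sketch of that idea in plain Java, with a helper name of our own choosing, would be:

    // Hive-style estimate: one reducer per bytesPerReducer of input, rounded up.
    // The method name is illustrative only; this is not a Hadoop or Hive API.
    static int reducersByBytes(long totalInputBytes, long bytesPerReducer) {
        return (int) Math.max(1, (totalInputBytes + bytesPerReducer - 1) / bytesPerReducer);
    }

With bytesPerReducer set to 1GB this reproduces the behavior above: 1.2GB of input yields 2 reducers and 10GB yields 10.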
The solution is very simple. We asked all the job owners to run their jobs in the sandbox on a fixed set of data and provide us with either of the following (a sketch of how these inputs can be captured follows the list):
1. Required: Input size of the data in MB
2. Required: Output size of the data in MB
3. Required: Whether the job is CPU bound or I/O bound
4. Optional: A decimal number, the multiplier, to fine-tune the number of reducers (if not provided, 1 will be used)

OR

1. Required: A fixed number of reducers (it has to be less than TotalReducersOnTheGrid/2)
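To make the bookkeeping concrete, the inputs above can be captured in a small per-job profile object. The class and field names below are purely illustrative and not part of Hadoop or of any framework:

    // Hypothetical per-job tuning profile collected from the job owner.
    class JobProfile {
        long inputSizeMB;         // required: sandbox input size in MB
        long outputSizeMB;        // required: sandbox output size in MB
        boolean cpuBound;         // required: CPU bound (true) or I/O bound (false)
        double multiplier = 1.0;  // optional: fine-tuning multiplier, defaults to 1
        Integer fixedReducers;    // alternative: fixed reducer count, if provided
    }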

For CPU bound jobs, the total number of reducers was calculated as
Total Reducers = Minimum((InputSizeOfDataInMB / 128MB) * multiplier, TotalReducersOnTheGrid / 2)
i.e., the total number of reducers is either the input data size divided by the HDFS block size (128MB) and multiplied by the multiplier, or half the total number of reducers available in the grid, whichever is smaller.
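A rough sketch of that calculation (the method name, the hard-coded 128MB block size, and the ceiling rounding are our assumptions for illustration):

    // CPU bound: reducers grow with input size, capped at half the grid's reducer slots.
    static int cpuBoundReducers(long inputSizeMB, double multiplier, int totalReducersOnTheGrid) {
        long byInput = (long) Math.ceil((inputSizeMB / 128.0) * multiplier);
        return (int) Math.max(1, Math.min(byInput, totalReducersOnTheGrid / 2));
    }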

For I/O bound jobs, the total number of reducers was calculated using the following formula
Total Reducers = Minimum((OutputSizeOfDataInMB / InputSizeOfDataInMB) * (InputSizeOfDataInMB / 128MB) * multiplier, TotalReducersOnTheGrid / 2)
The multiplier was introduced to fine-tune jobs for which the generic formula alone did not produce a good number of reducers. We also found that some jobs always required an exact number of reducers regardless of the size of the data set, hence the option to specify a fixed count.
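A matching sketch for the I/O bound case, again with illustrative names:

    // I/O bound: scale the input-based estimate by the output/input growth ratio.
    static int ioBoundReducers(long inputSizeMB, long outputSizeMB, double multiplier, int totalReducersOnTheGrid) {
        double growth = (double) outputSizeMB / inputSizeMB;
        long byData = (long) Math.ceil(growth * (inputSizeMB / 128.0) * multiplier);
        return (int) Math.max(1, Math.min(byData, totalReducersOnTheGrid / 2));
    }

Whichever number comes out (or the fixed count, if one was supplied), it is handed to the job before submission with the standard Hadoop call, e.g. job.setNumReduceTasks(reducers) on org.apache.hadoop.mapreduce.Job, or conf.setNumReduceTasks(reducers) on the older JobConf API.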
This pretty much solved most of our problems.

