Friday, November 26, 2010

Hadoop Interview questions - Part 1

Q1. Name the most common InputFormats defined in Hadoop. Which one is the default?
The following three are the most common InputFormats defined in Hadoop:
- TextInputFormat
- KeyValueInputFormat
- SequenceFileInputFormat

TextInputFormat is the Hadoop default.

Q2. What is the difference between the TextInputFormat and KeyValueInputFormat classes?
TextInputFormat: Reads lines of text files and provides the byte offset of the line as the key to the Mapper and the actual line as the value.
KeyValueInputFormat: Reads text files and parses each line into (key, value) pairs. Everything up to the first tab character is sent as the key to the Mapper, and the remainder of the line is sent as the value.
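
A minimal sketch of selecting the input format on a job, using the newer org.apache.hadoop.mapreduce API (in that API the tab-splitting class is named KeyValueTextInputFormat; if no input format is set, TextInputFormat is used by default):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;

public class InputFormatExample {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "kv input example");
        // Keys become the text before the first tab, values the remainder of the line.
        job.setInputFormatClass(KeyValueTextInputFormat.class);
    }
}
```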

Q3. What is an InputSplit in Hadoop?
When a Hadoop job runs, the framework splits the input files into chunks and assigns each chunk to a mapper to process. Each such chunk is called an InputSplit.

Q4. How is the splitting of files invoked in the Hadoop framework?
It is invoked by the Hadoop framework by calling the getSplits() method of the InputFormat class (such as FileInputFormat) configured by the user.
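
As an illustration, a hypothetical InputFormat subclass (not part of Hadoop) can override getSplits() to show where the framework hooks in; this sketch simply delegates to the parent's split logic and logs the count:

```java
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

// Delegates split computation to FileInputFormat's logic and reports how many splits were made.
public class LoggingTextInputFormat extends TextInputFormat {
    @Override
    public List<InputSplit> getSplits(JobContext job) throws IOException {
        List<InputSplit> splits = super.getSplits(job);
        System.out.println("Number of input splits: " + splits.size());
        return splits;
    }
}
```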

Q5. Consider this scenario: in an M/R system,
    - HDFS block size is 64 MB
    - Input format is FileInputFormat
    - We have 3 files of sizes 64 KB, 65 MB and 127 MB
then how many input splits will the Hadoop framework make?
Hadoop will make 5 splits, as follows (see the arithmetic sketch below):
- 1 split for the 64 KB file
- 2 splits for the 65 MB file
- 2 splits for the 127 MB file
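
A small, self-contained arithmetic sketch of the above, assuming the split size equals the 64 MB block size and ignoring FileInputFormat's small slop factor:

```java
// Rough split-count arithmetic: one split per started block of each file.
public class SplitCount {
    public static void main(String[] args) {
        long blockSize = 64L * 1024 * 1024;
        long[] fileSizes = {64L * 1024, 65L * 1024 * 1024, 127L * 1024 * 1024};
        int total = 0;
        for (long size : fileSizes) {
            int splits = (int) Math.ceil((double) size / blockSize);
            System.out.println(size + " bytes -> " + splits + " split(s)");
            total += splits;
        }
        System.out.println("Total splits: " + total); // prints 5
    }
}
```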

Q6. What is the purpose of the RecordReader in Hadoop?
The InputSplit defines a slice of work but does not describe how to access it. The RecordReader class actually loads the data from its source and converts it into (key, value) pairs suitable for reading by the Mapper. The RecordReader instance is defined by the InputFormat.
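
For example, with TextInputFormat the associated RecordReader (LineRecordReader) hands each line to the mapper as a (byte offset, line text) pair. A minimal mapper sketch showing those types; what it emits is purely illustrative:

```java
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Receives each record as (byte offset, line) and re-emits it as (line, offset).
public class LineMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        context.write(new Text(line), new LongWritable(offset.get()));
    }
}
```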

Q7. After the Map phase finishes, the Hadoop framework performs "Partitioning, Shuffle and Sort". Explain what happens in this phase.
- Partitioning
Partitioning is the process of determining which reducer instance will receive which intermediate keys and values. Each mapper must determine, for all of its output (key, value) pairs, which reducer will receive them. It is necessary that for any key, regardless of which mapper instance generated it, the destination partition is the same (a hypothetical custom Partitioner is sketched after this answer).

- Shuffle
After the first map tasks have completed, the nodes may still be performing several more map tasks each, but they also begin transferring the intermediate outputs of the map tasks to the nodes where they are required by the reducers. This process of moving map outputs to the reducers is known as shuffling.

- Sort
Each reduce task is responsible for reducing the values associated with several intermediate keys. The set of intermediate keys on a single node is automatically sorted by Hadoop before being presented to the Reducer.
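
To make the partitioning step concrete, here is a hypothetical custom Partitioner (illustrative only, not part of Hadoop) that routes keys beginning with the same character to the same reducer, so every mapper sends a given key to the same partition:

```java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Partitions by the first character of the key, so identical keys always land
// in the same partition regardless of which mapper produced them.
public class FirstLetterPartitioner extends Partitioner<Text, IntWritable> {
    @Override
    public int getPartition(Text key, IntWritable value, int numPartitions) {
        if (key.getLength() == 0) {
            return 0;
        }
        return key.charAt(0) % numPartitions;
    }
}
```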

Q9. If no custom partitioner is defined in Hadoop, then how is data partitioned before it is sent to the reducer?
The default partitioner (HashPartitioner) computes a hash value for the key and assigns the partition based on this result, as sketched below.
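
The built-in HashPartitioner is essentially the following: hash the key, clear the sign bit, and take the remainder modulo the number of reduce tasks:

```java
import org.apache.hadoop.mapreduce.Partitioner;

// Hash-based default partitioning: the same key always maps to the same reducer.
public class HashPartitioner<K, V> extends Partitioner<K, V> {
    @Override
    public int getPartition(K key, V value, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}
```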

Q10. What is a Combiner?
The Combiner is a "mini-reduce" process that operates only on data generated by a mapper. The Combiner will receive as input all data emitted by the Mapper instances on a given node. The output from the Combiner is then sent to the Reducers, instead of the output from the Mappers.
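
A word-count style sketch where the reducer class is reused as the combiner (safe here because summing counts is associative and commutative); the class and variable names are the usual example names, not anything mandated by Hadoop:

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountWithCombiner {

    // Emits (word, 1) for every token in every input line.
    public static class TokenizerMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            StringTokenizer it = new StringTokenizer(line.toString());
            while (it.hasMoreTokens()) {
                word.set(it.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Sums the counts for a key; used both as the combiner and the reducer.
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count with combiner");
        job.setJarByClass(WordCountWithCombiner.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);   // "mini-reduce" on each node's map output
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```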
