Hadoop Interview Questions and Answers Set 2

11. What is commodity hardware?

Commodity hardware refers to inexpensive, widely available systems that offer no special reliability or high-availability guarantees. Such machines should still have adequate RAM, because several Hadoop services run in memory. Hadoop can run on clusters of commodity hardware and does not require supercomputers or high-end hardware configurations to execute jobs.

12. Explain what a heartbeat is in HDFS.

A heartbeat is a signal sent periodically from a DataNode to the NameNode, and from a TaskTracker to the JobTracker. If the NameNode or JobTracker stops receiving these signals from a node, it concludes that the DataNode or TaskTracker has failed.
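For reference, the heartbeat interval is configurable. A minimal hdfs-site.xml sketch, assuming the standard dfs.heartbeat.interval property (value in seconds; check the default for your Hadoop version):

<configuration>
  <property>
    <name>dfs.heartbeat.interval</name>
    <!-- Seconds between DataNode heartbeats to the NameNode -->
    <value>3</value>
  </property>
</configuration>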

13. What happens when a DataNode fails?

When a DataNode fails:

The JobTracker and NameNode detect the failure.

All tasks that were running on the failed node are re-scheduled onto other nodes.

The NameNode replicates the user's data to other nodes so that the configured replication factor is restored.

14. Explain what happens in TextInputFormat.

In TextInputFormat, each line of the text file is a record. The value is the content of the line, while the key is the byte offset of the line within the file. Hence, the key type is LongWritable and the value type is Text.
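A minimal sketch of a mapper that consumes TextInputFormat records, assuming the org.apache.hadoop.mapreduce API (the class name EchoMapper is hypothetical):

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// With TextInputFormat, the framework hands the mapper one record per line:
// key = byte offset of the line in the file, value = the line's contents.
public class EchoMapper extends Mapper<LongWritable, Text, LongWritable, Text> {
    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        // Re-emit the record unchanged to show the (offset, line) pairing.
        context.write(offset, line);
    }
}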


15. Explain what Sqoop is in Hadoop.

Sqoop is a tool used to transfer data between relational database management systems (RDBMS) and Hadoop HDFS. Using Sqoop, data can be imported from an RDBMS such as MySQL or Oracle into HDFS, and exported from HDFS back into an RDBMS.
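As an illustrative sketch, a typical import and export invocation (the JDBC URL, credentials, table names, and HDFS paths are placeholders):

sqoop import \
  --connect jdbc:mysql://dbhost/sales \
  --username dbuser --password dbpass \
  --table customers \
  --target-dir /user/hadoop/customers

sqoop export \
  --connect jdbc:mysql://dbhost/sales \
  --username dbuser --password dbpass \
  --table customers_export \
  --export-dir /user/hadoop/customers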

16. Mention the data components used by Hadoop.

The data components used by Hadoop are:

Pig

Hive

17. What is rack awareness?

Rack awareness is the way the NameNode decides how to place block replicas based on rack definitions. Replicas are spread across racks so that data remains available even if an entire rack fails.
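Rack definitions are typically supplied through a topology script configured in core-site.xml. A minimal sketch, assuming the Hadoop 1.x property name (later versions use net.topology.script.file.name; the script path is a placeholder):

<configuration>
  <property>
    <name>topology.script.file.name</name>
    <!-- User-provided script that maps a DataNode IP or hostname
         to a rack ID such as /rack1 -->
    <value>/etc/hadoop/rack-topology.sh</value>
  </property>
</configuration>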

18. Explain how ‘map’ and ‘reduce’ work.

The framework takes the input, divides it into splits, and assigns them to map tasks running on the data nodes. Each map task processes its split and emits intermediate key-value pairs, which are passed to the reducers. A reducer collects the key-value pairs from all the mappers, combines the values for each key, and generates the final output.
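As a concrete sketch, the classic word-count example, assuming the org.apache.hadoop.mapreduce API:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Map: emit (word, 1) for every word in the input line.
public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        for (String token : line.toString().split("\\s+")) {
            if (!token.isEmpty()) {
                word.set(token);
                context.write(word, ONE);   // intermediate key-value pair
            }
        }
    }
}

// Reduce: sum the counts collected for each word across all mappers.
class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text word, Iterable<IntWritable> counts, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable count : counts) {
            sum += count.get();
        }
        context.write(word, new IntWritable(sum)); // final output
    }
}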

19. What is a Combiner?

The Combiner is a ‘mini-reduce’ process which operates only on data generated by a mapper. The Combiner will receive as input all data emitted by the Mapper instances on a given node. The output from the Combiner is then sent to the Reducers, instead of the output from the Mappers.
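Continuing the word-count sketch above, the reducer can be reused as a combiner because addition is associative and commutative. A minimal driver sketch, assuming the Hadoop 2.x Job API and the class names from the earlier sketch:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(WordCountMapper.class);
        // Reusing the reducer as a combiner is safe here because summing
        // partial counts on the map side gives the same final result.
        job.setCombinerClass(WordCountReducer.class);
        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}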

20. Consider a case scenario: in an M/R system, the HDFS block size is 64 MB.

– The input format is FileInputFormat.

– We have 3 files of size 64 KB, 65 MB and 127 MB.

How many input splits will be made by the Hadoop framework?

Hadoop will make 5 splits, as follows:

– 1 split for the 64 KB file

– 2 splits for the 65 MB file

– 2 splits for the 127 MB file
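The arithmetic behind this answer: each file is split independently, a split never spans file boundaries, and a non-empty file contributes roughly ceil(fileSize / blockSize) splits (the real FileInputFormat also applies a small slop factor, which this sketch ignores). A minimal Java sketch:

public class SplitCount {
    public static void main(String[] args) {
        long blockSize = 64L * 1024 * 1024;   // 64 MB HDFS block size
        long[] fileSizes = {
            64L * 1024,          // 64 KB  -> 1 split
            65L * 1024 * 1024,   // 65 MB  -> 2 splits (64 MB + 1 MB)
            127L * 1024 * 1024   // 127 MB -> 2 splits (64 MB + 63 MB)
        };
        long totalSplits = 0;
        for (long size : fileSizes) {
            // Every non-empty file contributes ceil(size / blockSize) splits.
            totalSplits += (size + blockSize - 1) / blockSize;
        }
        System.out.println("Total input splits: " + totalSplits);
    }
}

Running it prints "Total input splits: 5", matching the answer above.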
