All the metadata of the HDFS file system that Hadoop runs on is managed by a server running the NameNode daemon, and that server is called the NameNode. If this node fails, the data on your whole cluster is pretty much gone. Hadoop also ships something called the Secondary NameNode, which most people assume is a hot spare, but it is really more of a checkpoint node: it periodically takes a snapshot of the NameNode's metadata by pulling the fsimage and edits log, merging them, and handing the compacted image back. It does not take over if the NameNode dies.
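To give a feel for how the checkpointing is tuned, here is a minimal hdfs-site.xml sketch. The property names below follow the Hadoop 2.x naming (older 1.x releases used fs.checkpoint.* instead), and the values shown are just illustrative, not recommendations for your cluster.

<!-- Run a checkpoint at least every hour -->
<property>
  <name>dfs.namenode.checkpoint.period</name>
  <value>3600</value>
</property>

<!-- ...or sooner, once this many uncheckpointed transactions pile up in the edits log -->
<property>
  <name>dfs.namenode.checkpoint.txns</name>
  <value>1000000</value>
</property>

<!-- Local directory where the Secondary NameNode keeps its copy of the merged image -->
<property>
  <name>dfs.namenode.checkpoint.dir</name>
  <value>/var/hadoop/dfs/namesecondary</value>
</property>

The point of these settings is recovery time, not failover: the more often you checkpoint, the shorter the edits log the NameNode has to replay on restart, and the fresher the metadata copy you can fall back on if the NameNode's disks are lost.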
Watch the video below to understand this concept in detail if you are running a Hadoop cluster.
http://vimeo.com/15782414