|
Answer» Hadoop not only stores big data but also processes it, and it is one of the most reliable ways to handle large-scale data challenges. Some salient features of Hadoop are:
- Distributed Processing – Hadoop processes data in a distributed manner, which makes processing faster. In HDFS, data is stored across the nodes of the cluster, and MapReduce is responsible for processing that data in parallel.
- Open Source – Hadoop is free of cost because it is an open-source framework, and its source code can be modified to suit a user's requirements.
- Fault Tolerance – Hadoop is highly fault-tolerant. By default, it creates three replicas of every block on distinct nodes; this replication factor can be changed as required. If one node fails, the data can be retrieved from another node, and both the detection of node failure and the restoration of data happen automatically.
- Scalability – Hadoop runs on heterogeneous commodity hardware, and new nodes can be added to the cluster quickly to scale it out.
- Reliability – Data in Hadoop is stored safely on the cluster, independent of any single machine, so it is not affected by individual machine breakdowns.
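The distributed-processing point above can be illustrated with a minimal map/shuffle/reduce sketch in plain Python. This shows only the MapReduce programming model, not the Hadoop API; the `map_phase`, `shuffle`, and `reduce_phase` names are made up for this example:

```python
from collections import defaultdict

def map_phase(line):
    # Map step: emit a (word, 1) pair for every word in an input line.
    return [(word, 1) for word in line.split()]

def shuffle(pairs):
    # Shuffle step: group values by key, as the framework does
    # between the map and reduce phases.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    # Reduce step: sum the counts emitted for one word.
    return key, sum(values)

# In a real cluster, different lines (input splits) would be mapped
# on different nodes in parallel; here we just iterate.
lines = ["big data big cluster", "data node"]
pairs = [pair for line in lines for pair in map_phase(line)]
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(counts)  # {'big': 2, 'data': 2, 'cluster': 1, 'node': 1}
```

The key idea is that the map and reduce functions are pure and per-key, so the framework is free to run them on whichever nodes hold the data.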
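The fault-tolerance behaviour described above – three replicas of each block on distinct nodes, with data still readable when a node fails – can be sketched with a toy placement model. This is purely illustrative: real HDFS placement is rack-aware and handled by the NameNode, and the names below are invented for this sketch:

```python
import itertools

REPLICATION_FACTOR = 3  # HDFS default; configurable per file or cluster

def place_block(node_cycle):
    # Place one block on REPLICATION_FACTOR distinct nodes, round-robin.
    placement = []
    while len(placement) < REPLICATION_FACTOR:
        node = next(node_cycle)
        if node not in placement:
            placement.append(node)
    return placement

nodes = ["node1", "node2", "node3", "node4"]
node_cycle = itertools.cycle(nodes)
placements = {block: place_block(node_cycle) for block in ["blk_0", "blk_1"]}

# Simulate a node failure: every block remains readable from a surviving replica.
failed = "node1"
for block, replicas in placements.items():
    survivors = [n for n in replicas if n != failed]
    assert survivors, f"{block} lost all replicas"
```

With three replicas on distinct nodes, any single failure leaves at least two copies, which is why reads keep working and recovery can proceed automatically.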
|