InterviewSolution
This section collects common Hadoop interview questions with short answers to sharpen your knowledge and support interview preparation. Work through the questions below to get started.
1. Write down some common differences between Hadoop and Teradata.

Answer» Some common differences between Hadoop and Teradata:
(1) Teradata is a commercial, massively parallel (MPP) relational data warehouse; Hadoop is an open-source framework for distributed storage (HDFS) and processing (MapReduce) on commodity hardware.
(2) Teradata works with structured data through SQL; Hadoop also handles semi-structured and unstructured data.
(3) Teradata is schema-on-write; Hadoop is schema-on-read.
(4) Teradata runs on proprietary, comparatively expensive appliances; Hadoop scales out horizontally on low-cost commodity nodes.
(5) Teradata offers low-latency, interactive SQL queries; classic Hadoop MapReduce is batch-oriented with higher latency.

2. What are the different commands used in HDFS?

Answer» The most frequently used HDFS shell commands are:
(1) hdfs dfs -ls <path>: list files and directories.
(2) hdfs dfs -mkdir <path>: create a directory.
(3) hdfs dfs -put <local> <hdfs> (or -copyFromLocal): copy a file into HDFS.
(4) hdfs dfs -get <hdfs> <local> (or -copyToLocal): copy a file out of HDFS.
(5) hdfs dfs -cat <file>: print a file's contents.
(6) hdfs dfs -rm <file>: delete a file.
(7) hdfs dfs -du <path>: show the space used under a path.
On older releases the same commands are written as hadoop fs -ls, hadoop fs -put, and so on.
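The same operations are also available programmatically through Hadoop's Java FileSystem API. The sketch below is illustrative only: the class name HdfsOps and the paths /user/demo/input and data.txt are hypothetical, and it assumes a running cluster reachable through fs.defaultFS.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Programmatic equivalents of the common HDFS shell commands listed above.
    public class HdfsOps {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);               // connects to fs.defaultFS

            Path dir = new Path("/user/demo/input");
            fs.mkdirs(dir);                                     // like: hdfs dfs -mkdir -p
            fs.copyFromLocalFile(new Path("data.txt"),          // like: hdfs dfs -put
                                 new Path(dir, "data.txt"));

            for (FileStatus status : fs.listStatus(dir)) {      // like: hdfs dfs -ls
                System.out.println(status.getPath() + "  " + status.getLen() + " bytes");
            }

            fs.delete(new Path(dir, "data.txt"), false);        // like: hdfs dfs -rm
            fs.close();
        }
    }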
3. Define the port numbers for NameNode, TaskTracker, and JobTracker.

Answer» The default web UI ports are: NameNode 50070, TaskTracker 50060, and JobTracker 50030 (see also question 21).
4. What are the most important InputSplits made by the Hadoop framework?

Answer» An InputSplit is the logical chunk of input that a single mapper processes, so the number of splits determines the number of map tasks. For file-based input formats the most important implementations are FileSplit, the default (one split per block-sized slice of each input file), and CombineFileSplit, which packs many small files into a single split.
5. If we store too many small files in a cluster on HDFS, what happens?

Answer» Every file, directory, and block is held as an object in the NameNode's memory (roughly 150 bytes each), so millions of small files exhaust NameNode RAM long before the DataNodes run out of disk. Many small files also produce many short-lived map tasks, which hurts MapReduce performance. Small files are therefore usually consolidated with SequenceFiles, HAR archives, or CombineFileInputFormat.

6. What are the seven main differences between Hadoop and Teradata?

Answer» The points listed under question 1 apply; the seven most commonly cited differences are technology type (data warehouse vs. distributed framework), supported data formats, schema-on-write vs. schema-on-read, hardware requirements, licensing cost, query latency, and the way fault tolerance is achieved (Hadoop replicates blocks across commodity nodes).
7. Which command shows the status of blocks and FileSystem health?

Answer» hdfs fsck <path> (hadoop fsck on older releases). For example, hdfs fsck / -files -blocks reports on the whole namespace, including missing, corrupt, and under-replicated blocks.

8. Name the two types of metadata that a NameNode server holds.

Answer» (1) Metadata on disk: the FsImage and the edit log, which persist the file system namespace. (2) Metadata in RAM: the in-memory information about DataNodes, in particular the mapping of blocks to the DataNodes that hold them, rebuilt from block reports.
9. Why do we use the Context object in Hadoop?

Answer» The Context object is how a Mapper or Reducer communicates with the rest of the MapReduce framework: it is used to emit output key/value pairs (context.write), to read the job Configuration, to update Counters, and to report progress and status for the task.
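As a minimal sketch of those uses (the class name WordCountMapper and the wordcount.separator property are made up for illustration), a new-API mapper interacts with the framework only through its Context:

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Context gives access to the job configuration.
            String separator = context.getConfiguration().get("wordcount.separator", "\\s+");
            for (String token : value.toString().split(separator)) {
                if (token.isEmpty()) {
                    // Context also exposes counters for reporting statistics to the framework.
                    context.getCounter("wordcount", "empty_tokens").increment(1);
                    continue;
                }
                word.set(token);
                // Context is how intermediate key/value pairs are emitted.
                context.write(word, ONE);
            }
        }
    }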
10. Define the different data types in Pig Latin.

Answer» Pig Latin has simple (scalar) types, namely int, long, float, double, chararray, bytearray, boolean, and datetime, and three complex types: tuple (an ordered set of fields), bag (a collection of tuples), and map (a set of key/value pairs).

11. Name the core methods of Reducer and give their syntax.

Answer» The three core methods of org.apache.hadoop.mapreduce.Reducer are:
(1) setup(Context context): called once per task before any keys are processed.
(2) reduce(KEYIN key, Iterable<VALUEIN> values, Context context): called once per key with all of the values grouped under that key.
(3) cleanup(Context context): called once per task after the last key has been processed.
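A skeleton with all three methods in place (the class name SumReducer is hypothetical; it pairs with a word-count style mapper that emits Text keys and IntWritable values):

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    public class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

        private final IntWritable total = new IntWritable();

        @Override
        protected void setup(Context context) throws IOException, InterruptedException {
            // Runs once per task before any reduce() call, e.g. to read configuration.
        }

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            // Runs once per key with all the values grouped under that key.
            int sum = 0;
            for (IntWritable value : values) {
                sum += value.get();
            }
            total.set(sum);
            context.write(key, total);
        }

        @Override
        protected void cleanup(Context context) throws IOException, InterruptedException {
            // Runs once per task after the last reduce() call, e.g. to release resources.
        }
    }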
12. What do you mean by FSCK in Hadoop?

Answer» FSCK (File System Check) is the HDFS utility, run as hdfs fsck <path>, that reports on the health of the files under a path: total blocks, replication factors, and any missing, corrupt, or under-replicated blocks. Unlike a traditional Unix fsck it only reports problems; it does not repair them, and by default it ignores files that are still open for writing.

13. Which command will help us find the status of blocks and FileSystem health?

Answer» The same command as in question 7: hdfs fsck /, optionally with -files, -blocks, and -locations for more detail.

14. Which interfaces need to be implemented to create a Mapper and Reducer in Hadoop?

Answer» In the old API (org.apache.hadoop.mapred), Mapper and Reducer are interfaces that you implement, usually while also extending MapReduceBase. In the new API (org.apache.hadoop.mapreduce), Mapper and Reducer are classes that you extend, overriding their map() and reduce() methods.
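For the old API, a minimal sketch of a class implementing the Mapper interface could look like this (LineLengthMapper is a made-up name; the new-API equivalent simply extends org.apache.hadoop.mapreduce.Mapper instead, as in the example under question 9):

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    // Old-API mapper: implements the Mapper interface from org.apache.hadoop.mapred.
    public class LineLengthMapper extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, IntWritable> {

        @Override
        public void map(LongWritable key, Text value,
                        OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            // Emit each line together with its length in bytes.
            output.collect(value, new IntWritable(value.getLength()));
        }
    }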
15. In which language is Hadoop written, and which hardware is best for Hadoop?

Answer» Hadoop is written in Java (with some native libraries and shell scripts). It is designed to run on clusters of commodity hardware, i.e. standard servers with plenty of local disks and RAM, rather than on specialised high-end machines.

16. Give the syntax to restart the NameNode and all other daemons in Hadoop.

Answer» (1) To restart only the NameNode:
./sbin/hadoop-daemon.sh stop namenode
./sbin/hadoop-daemon.sh start namenode
(2) To restart all the daemons:
./sbin/stop-all.sh
./sbin/start-all.sh
17. Explain the different modes in which Hadoop runs.

Answer» (1) Standalone (local) mode: everything runs in a single JVM against the local file system; used mainly for debugging. (2) Pseudo-distributed mode: all daemons run on one machine, each in its own JVM, using HDFS; used for development and testing. (3) Fully distributed mode: the daemons are spread across a cluster of machines; used in production.

18. Name the top commercial Hadoop vendors.

Answer» Well-known commercial Hadoop distributions and services include Cloudera, Hortonworks (since merged with Cloudera), MapR, Amazon EMR, Microsoft Azure HDInsight, and IBM.

19. Define the different Hadoop configuration files.

Answer» The main configuration files are core-site.xml (common settings such as fs.defaultFS), hdfs-site.xml (HDFS settings such as dfs.replication and the NameNode/DataNode directories), mapred-site.xml (MapReduce settings), yarn-site.xml (YARN settings on Hadoop 2+), hadoop-env.sh (environment variables such as JAVA_HOME), and the masters and slaves (workers) files that list the cluster nodes.
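These files are read through the org.apache.hadoop.conf.Configuration class. A small sketch (ShowConfig is a made-up class name; the printed values are only fallbacks unless the corresponding XML files are on the classpath):

    import org.apache.hadoop.conf.Configuration;

    // A plain Configuration automatically loads core-default.xml and core-site.xml
    // from the classpath; other files can be merged in explicitly.
    public class ShowConfig {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            conf.addResource("hdfs-site.xml");   // also pick up the HDFS settings

            // fs.defaultFS (fs.default.name on Hadoop 1) points at the NameNode.
            System.out.println("fs.defaultFS    = " + conf.get("fs.defaultFS", "file:///"));
            // dfs.replication normally lives in hdfs-site.xml.
            System.out.println("dfs.replication = " + conf.get("dfs.replication", "3"));
        }
    }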
20. What is the difference between NFS and HDFS in Big Data?

Answer» NFS is a protocol for accessing a file system that lives on a single remote server; the data is neither partitioned nor replicated by the protocol, so capacity and fault tolerance are limited to that one machine. HDFS is a distributed file system: files are split into large blocks, spread across many DataNodes, and replicated (three copies by default), so it scales to very large data sets on commodity hardware and keeps working when individual nodes fail.

21. Give the port numbers for NameNode, TaskTracker, and JobTracker.

Answer» The default port numbers are:
(1) NameNode: 50070
(2) TaskTracker: 50060
(3) JobTracker: 50030
22. List all the daemons required to run a Hadoop cluster.

Answer» Four daemons are required to run a classic Hadoop 1.x cluster: the NameNode and the DataNodes for HDFS, plus the JobTracker and the TaskTrackers for MapReduce. A SecondaryNameNode is normally run as well, and on Hadoop 2+ the JobTracker/TaskTracker pair is replaced by the YARN ResourceManager and NodeManagers.

23. Name the important components of Hadoop.

Answer» The core components are HDFS (distributed storage, made up of the NameNode and DataNodes), MapReduce (the distributed processing model), and, from Hadoop 2 onwards, YARN (cluster resource management); Hadoop Common supplies the shared libraries and utilities.
24. What are the most common input formats in Hadoop?

Answer» (1) TextInputFormat, the default: each line becomes a value keyed by its byte offset in the file. (2) KeyValueTextInputFormat: each line is split into a key and a value at a separator (tab by default). (3) SequenceFileInputFormat: reads Hadoop's binary SequenceFile format.
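A driver sketch showing where the input format is chosen (InputFormatDemo is a made-up class name, and WordCountMapper and SumReducer are the hypothetical classes sketched under questions 9 and 11, assumed to be in the same package):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class InputFormatDemo {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "input format demo");
            job.setJarByClass(InputFormatDemo.class);

            // TextInputFormat is the default; it is set explicitly here for clarity.
            // Swap in KeyValueTextInputFormat or SequenceFileInputFormat as needed
            // (the mapper's input key/value types must match the chosen format).
            job.setInputFormatClass(TextInputFormat.class);

            job.setMapperClass(WordCountMapper.class);   // hypothetical mapper (question 9)
            job.setReducerClass(SumReducer.class);       // hypothetical reducer (question 11)
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);

            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }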
25. What is the typical block size of an HDFS block?

Answer» The default block size is 64 MB in Hadoop 1.x and 128 MB in Hadoop 2.x; it can be raised to a custom value such as 128 MB or 256 MB, per cluster or per file, through the dfs.blocksize property.
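To see the block size a cluster would actually use for new files, one option is the FileSystem API (a sketch assuming the Hadoop 2 API, a running cluster, and a hypothetical path):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BlockSizeCheck {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            // Default block size that would be used for a file created under this path.
            long blockSize = fs.getDefaultBlockSize(new Path("/user/demo"));
            System.out.println("Default block size: " + (blockSize / (1024 * 1024)) + " MB");
            fs.close();
        }
    }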
26. Do you have any idea about the real-time industry applications of Hadoop?

Answer» Typical applications include log and clickstream analysis, search indexing, recommendation engines, fraud and risk detection in finance, customer analytics in retail and telecom, and large-scale ETL or archiving of data before it is loaded into a warehouse.

27. What are the eight main differences between Hadoop and Spark?

Answer» (1) Processing model: MapReduce is disk-based and batch-only, while Spark processes data in memory. (2) Speed: Spark is typically much faster, especially for iterative workloads. (3) Latency: Spark supports near-real-time processing through Spark Streaming; MapReduce does not. (4) Ease of use: Spark offers concise APIs in Scala, Python, Java, and R. (5) Libraries: Spark ships with Spark SQL, MLlib, and GraphX. (6) Storage: Hadoop includes HDFS, whereas Spark has no storage layer of its own and often runs on top of HDFS. (7) Resource cost: Spark needs more memory, Hadoop more disk. (8) Fault tolerance: HDFS relies on block replication, while Spark recomputes lost partitions from RDD lineage.
28. What is the minimum Java version required to run Hadoop?

Answer» Hadoop 1.x and early 2.x releases require at least Java 6; Hadoop 2.7 and later require Java 7, and Hadoop 3.x requires Java 8 or later.

29. What happens when two clients access the same file in HDFS?

Answer» Any number of clients can read the same file at the same time. For writing, HDFS follows a single-writer model: the NameNode grants a lease on the file to the first client that opens it for writing, and a second client's attempt to open the same file for writing is rejected until that lease is released or expires.
30. Which of the following writables can be used to know the value from a mapper/reducer?

Answer» Keys and values exchanged between mappers and reducers must implement the Writable interface (keys additionally WritableComparable). The actual value is read back through the wrapper's accessor, for example IntWritable.get(), LongWritable.get(), or Text.toString(); NullWritable is used when no value needs to be carried at all.
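A tiny standalone sketch of unwrapping values from the common Writable types (no cluster required; WritableDemo is a made-up name):

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;

    public class WritableDemo {
        public static void main(String[] args) {
            IntWritable count = new IntWritable(42);
            Text name = new Text("hadoop");

            System.out.println(count.get());          // 42: unwrap the int value
            System.out.println(name.toString());      // "hadoop": unwrap the string
            System.out.println(NullWritable.get());   // singleton placeholder carrying no value
        }
    }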
31. Name the most famous companies that use Hadoop.

Answer» Well-known Hadoop users include Yahoo (one of the earliest and largest deployments), Facebook, Twitter, LinkedIn, eBay, Netflix, Spotify, and Amazon.

32. Who will initiate the mapper?

Answer» The MapReduce framework does. For each InputSplit the JobTracker (or the YARN ApplicationMaster on Hadoop 2) schedules a map task, and the TaskTracker (or NodeManager) launches a task JVM that instantiates the Mapper class and calls its map() method for every record in the split.
33. Different commands for starting and shutting down the Hadoop daemons.

Answer» (1) start-dfs.sh / stop-dfs.sh: start or stop the HDFS daemons (NameNode, DataNodes, SecondaryNameNode). (2) start-yarn.sh / stop-yarn.sh on Hadoop 2 (start-mapred.sh / stop-mapred.sh on Hadoop 1): start or stop the processing daemons. (3) start-all.sh / stop-all.sh: start or stop everything at once. (4) hadoop-daemon.sh start|stop <daemon>: control a single daemon on one node.

34. Which of the following are true for Hadoop pseudo-distributed mode?

Answer» In pseudo-distributed mode all of the daemons run on a single machine, but each runs in its own JVM and HDFS is used instead of the local file system; the replication factor is normally set to 1. It behaves like a one-node cluster and is used for development and testing.

35. How is HDFS fault tolerant?

Answer» Every file is split into blocks and each block is replicated across multiple DataNodes (three copies by default, placed on different nodes and racks). DataNodes send heartbeats and block reports to the NameNode; when a node is declared dead, the NameNode re-replicates its blocks from the surviving copies, so the data stays available despite hardware failure. The replication factor is controlled by dfs.replication and can be changed per file.
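The replication factor can be inspected or changed per file through the FileSystem API; a short sketch (ReplicationDemo and the file path are hypothetical, and a running cluster is assumed):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ReplicationDemo {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            Path file = new Path("/user/demo/input/data.txt");

            FileStatus status = fs.getFileStatus(file);
            System.out.println("Current replication: " + status.getReplication());

            // Ask the NameNode to keep 5 copies of this file's blocks instead of the default 3.
            fs.setReplication(file, (short) 5);
            fs.close();
        }
    }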
36. What is the use of the jps command in Hadoop?

Answer» jps (JVM Process Status) lists the Java processes running for the current user. On a Hadoop node it is the quickest way to verify which daemons, such as the NameNode, DataNode, SecondaryNameNode, ResourceManager, and NodeManager (or JobTracker and TaskTracker), are actually up.