Explore topic-wise interview solutions.

This section includes curated multiple-choice questions on Oozie, orchestration, Hadoop libraries and utilities, and miscellaneous Hadoop applications, with answers and brief explanations to sharpen your knowledge and support exam preparation. Choose a topic below to get started.

101. Kafka is comparable to traditional messaging systems such as _____________
(a) Impala (b) ActiveMQ (c) BigTop (d) ZooKeeper

Answer: (b) ActiveMQ

Explanation: Kafka works well as a replacement for a more traditional message broker.

102. How many types of nodes are present in a Storm cluster?
(a) 1 (b) 2 (c) 3 (d) 4

Answer: (c) 3

Explanation: A Storm cluster has three sets of nodes: Nimbus (master), Supervisor (worker), and ZooKeeper (coordination) nodes.

103. Apache Storm added open source stream data processing to the _________ Data Platform.
(a) Cloudera (b) Hortonworks (c) Local Cloudera (d) MapR

Answer: (b) Hortonworks

Explanation: The Storm community is working to improve capabilities related to three important themes: business continuity, operations, and developer productivity.

104. Storm is benchmarked as processing one million _______ byte messages per second per node.
(a) 10 (b) 50 (c) 100 (d) 200

Answer: (c) 100

Explanation: Storm is a distributed real-time computation system.

105. Point out the wrong statement.
(a) Storm is difficult and can be used only with Java
(b) Storm is fast: a benchmark clocked it at over a million tuples processed per second per node
(c) Storm is scalable, fault-tolerant, and guarantees your data will be processed
(d) All of the mentioned

Answer: (a) Storm is difficult and can be used only with Java

Explanation: Storm is simple and can be used with any programming language.

106. For Apache __________ users, Storm utilizes the same ODBC interface.
(a) cTakes (b) Hive (c) Pig (d) Oozie

Answer: (b) Hive

Explanation: You don't have to worry about re-inventing the implementation wheel.

107. Storm integrates with __________ via Apache Slider.
(a) Scheduler (b) YARN (c) Compaction (d) All of the mentioned

Answer: (b) YARN

Explanation: Apache Slider deploys Storm as a long-running application on YARN, so Storm topologies can share cluster resources managed by YARN.

108. Point out the correct statement.
(a) A Storm topology consumes streams of data and processes those streams in arbitrarily complex ways
(b) Apache Storm is a free and open source distributed real-time computation system
(c) Storm integrates with the queueing and database technologies you already use
(d) All of the mentioned

Answer: (d) All of the mentioned

Explanation: Storm has many use cases: real-time analytics, online machine learning, continuous computation, distributed RPC, ETL, and more.

109. ____________ is a distributed real-time computation system for processing large volumes of high-velocity data.
(a) Kafka (b) Storm (c) Lucene (d) BigTop

Answer: (b) Storm

Explanation: Storm on YARN is powerful for scenarios requiring real-time analytics, machine learning, and continuous monitoring of operations.

110. Which of the following features is not provided by Impala?
(a) SQL functionality (b) ACID (c) Flexibility (d) None of the mentioned

Answer: (b) ACID

Explanation: Impala combines the flexibility, scalability, and cost-effectiveness of other Hadoop frameworks with the performance, usability, and SQL functionality necessary for an enterprise-grade analytic database.

111. ____________ analytics is a work in progress with Impala.
(a) Reproductive (b) Exploratory (c) Predictive (d) All of the mentioned

Answer: (c) Predictive

Explanation: Impala is the de facto standard for open source interactive business intelligence and data discovery; predictive analytics support is still a work in progress.

112. Which of the following companies shipped Impala?
(a) Amazon (b) Oracle (c) MapR (d) All of the mentioned

Answer: (d) All of the mentioned

Explanation: Impala is shipped by Cloudera, MapR, Oracle, and Amazon.

113. Impala is integrated with native Hadoop security and Kerberos for authentication via the __________ module.
(a) Sentinue (b) Sentry (c) Sentinar (d) All of the mentioned

Answer: (b) Sentry

Explanation: Via the Sentry module, you can ensure that the right users and applications are authorized for the right data.

114. Point out the wrong statement.
(a) For Apache Hive users, Impala utilizes the same metadata, ODBC driver, SQL syntax, and user interface as Hive
(b) Impala provides high latency and low concurrency
(c) Impala also scales linearly, even in multi-tenant environments
(d) All of the mentioned

Answer: (b) Impala provides high latency and low concurrency

Explanation: Impala provides low latency and high concurrency.

115. For Apache __________ users, Impala utilizes the same metadata.
(a) cTakes (b) Hive (c) Pig (d) Oozie

Answer: (b) Hive

Explanation: You don't have to worry about re-inventing the implementation wheel.

116. Impala is an integrated part of a ____________ enterprise data hub.
(a) Microsoft (b) IBM (c) Cloudera (d) All of the mentioned

Answer: (c) Cloudera

Explanation: Impala is open source (Apache License), so you can self-support in perpetuity if you wish.

117. Point out the correct statement.
(a) With Impala, more users, whether using SQL queries or BI applications, can interact with more data
(b) Technical support for Impala is not available via a Cloudera Enterprise subscription
(c) Impala is a proprietary tool for Hadoop
(d) None of the mentioned

Answer: (a) With Impala, more users, whether using SQL queries or BI applications, can interact with more data

Explanation: This is possible through a single repository and metadata store from source through analysis.

118. __________ is a fully integrated, state-of-the-art analytic database architected specifically to leverage the strengths of Hadoop.
(a) Oozie (b) Impala (c) Lucene (d) BigTop

Answer: (b) Impala

Explanation: Impala provides scalability and flexibility to Hadoop.

119. Which of the following builds an APT or YUM package repository?
(a) Bigtop-trunk-packagetest (b) Bigtop-trunk-repository (c) Bigtop-VM-matrix (d) None of the mentioned

Answer: (b) Bigtop-trunk-repository

Explanation: Bigtop-trunk-repository builds the package repository; Bigtop-trunk-packagetest runs the package tests.

120. The Apache Jenkins server runs the ______________ job whenever code is committed to the trunk branch.
(a) "Bigtop-trunk" (b) "Bigtop" (c) "Big-trunk" (d) None of the mentioned

Answer: (a) "Bigtop-trunk"

Explanation: The Jenkins server in turn runs several test jobs.

121. The Bigtop Jenkins server runs daily jobs for the _______ and trunk branches.
(a) 0.1 (b) 0.2 (c) 0.3 (d) 0.4

Answer: (c) 0.3

Explanation: Each job has a configuration for each supported operating system, and in each branch there is a job to build each component.

122. Apache Bigtop uses ___________ for continuous integration testing.
(a) Jenkinstop (b) Jerry (c) Jenkins (d) None of the mentioned

Answer: (c) Jenkins

Explanation: There are two Jenkins servers running for the project.

123. Which of the following operating systems is not supported by Bigtop?
(a) Fedora (b) Solaris (c) Ubuntu (d) SUSE

Answer: (b) Solaris

Explanation: Bigtop components power the leading Hadoop distros and support many operating systems, including Debian/Ubuntu, CentOS, Fedora, SUSE, and many others.

124. Point out the wrong statement.
(a) Bigtop-0.5.0 builds the 0.5.0 release
(b) Bigtop-trunk-HBase builds the HCatalog packages only
(c) There are also jobs for building virtual machine images
(d) All of the mentioned

Answer: (b) Bigtop-trunk-HBase builds the HCatalog packages only

Explanation: Bigtop-trunk-HBase builds the HBase packages only; Bigtop provides vagrant recipes, raw images, and (work-in-progress) Docker recipes for deploying Hadoop from zero.

125. Which of the following work is done by Bigtop in the Hadoop framework?
(a) Packaging (b) Smoke Testing (c) Virtualization (d) All of the mentioned

Answer: (d) All of the mentioned

Explanation: Bigtop aims at comprehensive packaging, testing, and configuration of the leading open source big data components.

126. Point out the correct statement.
(a) Bigtop provides an integrated smoke testing framework, alongside a suite of over 10 test files
(b) Bigtop includes tools and a framework for testing at various levels
(c) Bigtop components support only one operating system
(d) All of the mentioned

Answer: (b) Bigtop includes tools and a framework for testing at various levels

Explanation: Bigtop is used for both initial deployments and upgrade scenarios for the entire data platform, not just the individual components.

127. Which of the following is a project for Infrastructure Engineers and Data Scientists?
(a) Impala (b) BigTop (c) Oozie (d) Flume

Answer: (b) BigTop

Explanation: Bigtop supports a wide range of components/projects, including, but not limited to, Hadoop, HBase, and Spark.

128. Falcon provides the key services data processing applications need, so sophisticated ________ can easily be added to Hadoop applications.
(a) DAM (b) DLM (c) DCM (d) All of the mentioned

Answer: (b) DLM

Explanation: Complex data processing logic, such as data lifecycle management (DLM), is handled by Falcon instead of being hard-coded in apps.

129. A recurring workflow is used for purging expired data on the __________ cluster.
(a) Primary (b) Secondary (c) BCP (d) None of the mentioned

Answer: (a) Primary

Explanation: Falcon provides a retention workflow for each cluster, based on the defined policy.
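As a sketch of such a retention policy, a Falcon feed entity can declare how long data instances are kept on a cluster before the recurring retention workflow purges them. The feed name, cluster name, and paths below are hypothetical, and the layout follows the general shape of Falcon's feed schema rather than a verified definition:

```xml
<!-- Hypothetical Falcon feed: "rawLogsFeed", "primaryCluster", and the
     HDFS path are illustrative placeholders, not from the source. -->
<feed name="rawLogsFeed" description="raw click logs" xmlns="uri:falcon:feed:0.1">
  <frequency>hours(1)</frequency>
  <clusters>
    <cluster name="primaryCluster" type="source">
      <validity start="2024-01-01T00:00Z" end="2099-12-31T00:00Z"/>
      <!-- instances older than 90 days are purged by the retention workflow -->
      <retention limit="days(90)" action="delete"/>
    </cluster>
  </clusters>
  <locations>
    <location type="data" path="/data/logs/${YEAR}-${MONTH}-${DAY}-${HOUR}"/>
  </locations>
  <ACL owner="etl" group="users" permission="0755"/>
  <schema location="/none" provider="none"/>
</feed>
```

The point of the sketch is the `retention` element: Falcon, not the application, evaluates the limit on each cluster and schedules the purge.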

130. Falcon promotes decoupling of data set location from ___________ definition.
(a) Oozie (b) Impala (c) Kafka (d) Thrift

Answer: (a) Oozie

Explanation: Falcon uses declarative processing with simple directives, enabling rapid prototyping.

131. Falcon provides a ___________ workflow for copying data from source to target.
(a) recurring (b) investment (c) data (d) none of the mentioned

Answer: (a) recurring

Explanation: Falcon instruments workflows for dependencies, retry logic, table/partition registration, notifications, etc.

132. Point out the wrong statement.
(a) Falcon promotes JavaScript programming
(b) Falcon does not do any heavy lifting but delegates to tools within the Hadoop ecosystem
(c) Falcon handles retry logic and late data processing, and records audit, lineage, and metrics
(d) All of the mentioned

Answer: (a) Falcon promotes JavaScript programming

Explanation: Falcon promotes polyglot programming.

133. The ability of Hadoop to efficiently process large volumes of data in parallel is called __________ processing.
(a) batch (b) stream (c) time (d) all of the mentioned

Answer: (a) batch

Explanation: Batch processing is Hadoop's classic strength; there are also a number of use cases that require more "real-time" processing of data as it arrives, rather than through batch processing.

134. __________ is used for simplified data management in Hadoop.
(a) Falcon (b) Flume (c) Impala (d) None of the mentioned

Answer: (a) Falcon

Explanation: Apache Falcon handles process orchestration and scheduling.

135. Point out the correct statement.
(a) Large datasets are incentives for users to come to Hadoop
(b) Data management is a common concern to be offered as a service
(c) Understanding the life-time of a feed will allow for implicit validation of the processing rules
(d) All of the mentioned

Answer: (d) All of the mentioned

Explanation: Falcon decouples a data location and its properties from workflows.

136. A collection of various actions in a control dependency DAG is referred to as a ________________
(a) workflow (b) dataflow (c) clientflow (d) none of the mentioned

Answer: (a) workflow

Explanation: Falcon provides the key services for data processing apps.

137. A _______________ action can be configured to perform file system cleanup and directory creation before starting the MapReduce job.
(a) map (b) reduce (c) map-reduce (d) none of the mentioned

Answer: (c) map-reduce

Explanation: The map-reduce action starts a Hadoop map/reduce job from a workflow.

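The cleanup and directory creation mentioned above live in the action's `prepare` block, which Oozie runs before launching the job. The following sketch uses hypothetical paths, mapper/reducer class names, and parameter values; only the element structure reflects Oozie's workflow syntax:

```xml
<!-- Sketch of an Oozie map-reduce action; jobTracker/nameNode parameters,
     HDFS paths, and org.example.* classes are illustrative placeholders. -->
<action name="wordcount-mr">
  <map-reduce>
    <job-tracker>${jobTracker}</job-tracker>
    <name-node>${nameNode}</name-node>
    <!-- runs before the MR job starts: delete stale output, create temp dir -->
    <prepare>
      <delete path="${nameNode}/user/etl/output"/>
      <mkdir path="${nameNode}/user/etl/tmp"/>
    </prepare>
    <configuration>
      <property>
        <name>mapred.mapper.class</name>
        <value>org.example.WordCountMapper</value>
      </property>
      <property>
        <name>mapred.reducer.class</name>
        <value>org.example.WordCountReducer</value>
      </property>
    </configuration>
  </map-reduce>
  <ok to="end"/>
  <error to="fail"/>
</action>
```

The `ok` and `error` elements are the transitions discussed in the surrounding questions: success follows `ok`, failure follows `error`.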
138. If the failure is of ___________ nature, Oozie will suspend the workflow job.
(a) transient (b) non-transient (c) permanent (d) all of the mentioned

Answer: (b) non-transient

Explanation: If the failure is an error and a retry will not resolve the problem, Oozie will perform the error transition for the action.

139. If a computation/processing task triggered by a workflow fails to complete successfully, it transitions to _____________
(a) error (b) ok (c) true (d) false

Answer: (a) error

Explanation: If a computation/processing task triggered by a workflow completes successfully, it transitions to ok.

140. Point out the wrong statement.
(a) The fork and join nodes must be used in pairs
(b) The fork node assumes concurrent execution paths are children of the same fork node
(c) A join node waits until every concurrent execution path of a previous fork node arrives to it
(d) A fork node splits one path of execution into multiple concurrent paths of execution

Answer: (b) The fork node assumes concurrent execution paths are children of the same fork node

Explanation: The join node assumes concurrent execution paths are children of the same fork node.

141. The ___________ attribute in the join node is the name of the workflow join node.
(a) name (b) to (c) down (d) none of the mentioned

Answer: (a) name

Explanation: The to attribute in the join node indicates the name of the workflow node that will be executed after all concurrent execution paths of the corresponding fork arrive at the join node.
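A minimal sketch of a fork/join pair shows both attributes in context; the node names and the filesystem actions are illustrative, not from the source:

```xml
<!-- Fork splits into two concurrent paths; both must reach the join. -->
<fork name="parallel-load">
  <path start="load-a"/>
  <path start="load-b"/>
</fork>
<action name="load-a">
  <fs><mkdir path="${nameNode}/tmp/a"/></fs>
  <ok to="joining"/>
  <error to="fail"/>
</action>
<action name="load-b">
  <fs><mkdir path="${nameNode}/tmp/b"/></fs>
  <ok to="joining"/>
  <error to="fail"/>
</action>
<!-- name identifies the join node itself; to names the node executed
     once every concurrent path from the fork has arrived -->
<join name="joining" to="end"/>
```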

142. Which of the following can be seen as a switch-case statement?
(a) fork (b) decision (c) start (d) none of the mentioned

Answer: (b) decision

Explanation: A decision node consists of a list of predicate-transition pairs plus a default transition.

143. All decision nodes must have a _____________ element to avoid bringing the workflow into an error state if none of the predicates evaluates to true.
(a) name (b) default (c) server (d) client

Answer: (b) default

Explanation: The default element indicates the transition to take if none of the predicates evaluates to true.

144. Point out the correct statement.
(a) Predicates are JSP Expression Language (EL) expressions
(b) Predicates are evaluated in order of appearance until one of them evaluates to true and the corresponding transition is taken
(c) The name attribute in the decision node is the name of the decision node
(d) All of the mentioned

Answer: (d) All of the mentioned

Explanation: The predicate ELs are evaluated in order until one returns true and the corresponding transition is taken.

145. A ___________ node enables a workflow to make a selection on the execution path to follow.
(a) fork (b) decision (c) start (d) none of the mentioned

Answer: (b) decision

Explanation: All decision nodes must have a default element to avoid bringing the workflow into an error state if none of the predicates evaluates to true.
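A decision node can be sketched as a switch-case; the node names, the `inputDir` parameter, and the size threshold below are illustrative assumptions:

```xml
<!-- Hypothetical decision node: routes to a "big" or "small" job based
     on input size; predicates are EL expressions evaluated in order. -->
<decision name="size-check">
  <switch>
    <case to="big-job">${fs:fileSize(inputDir) gt 1073741824}</case>
    <case to="small-job">${fs:fileSize(inputDir) le 1073741824}</case>
    <!-- mandatory default keeps the workflow out of an error state
         when no predicate evaluates to true -->
    <default to="small-job"/>
  </switch>
</decision>
```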

146. A workflow definition must have one ________ node.
(a) start (b) resume (c) finish (d) none of the mentioned

Answer: (a) start

Explanation: The start node is the entry point for a workflow job; it indicates the first workflow node the workflow job must transition to.
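A minimal workflow definition makes the role of the start node concrete; the app name and node names are hypothetical:

```xml
<!-- Minimal Oozie workflow skeleton: one mandatory start node as the
     entry point, plus an action, a kill node, and an end node. -->
<workflow-app name="demo-wf" xmlns="uri:oozie:workflow:0.5">
  <!-- entry point: the first node the workflow job transitions to -->
  <start to="first-action"/>
  <action name="first-action">
    <fs><mkdir path="${nameNode}/tmp/demo"/></fs>
    <ok to="end"/>
    <error to="fail"/>
  </action>
  <kill name="fail">
    <message>Workflow failed</message>
  </kill>
  <end name="end"/>
</workflow-app>
```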

147. Node names and transitions must conform to the pattern [a-zA-Z][\-_a-zA-Z0-9]*, of up to __________ characters long.
(a) 10 (b) 15 (c) 20 (d) 25

Answer: (c) 20

Explanation: Action nodes trigger the execution of a computation/processing task.

148. ________ nodes control the start and end of the workflow and the workflow job execution path.
(a) Action (b) Control (c) Data (d) SubDomain

Answer: (b) Control

Explanation: Workflow nodes are classified into control flow nodes and action nodes.

149. A workflow with id __________ should be in the SUCCEEDED/KILLED/FAILED state.
(a) wfId (b) iUD (c) iFD (d) all of the mentioned

Answer: (a) wfId

Explanation: A workflow with id wfId should exist.

150. _____________ will skip the nodes given in the config with the same exit transition as before.
(a) ActionMega handler (b) Action handler (c) Data handler (d) None of the mentioned

Answer: (b) Action handler

Explanation: Currently there is no way to remove an existing configuration; it can only be overridden by passing a different value in the input configuration.