This section offers curated multiple-choice questions on ZooKeeper, Sqoop, and Cassandra in Hadoop, each designed to sharpen your knowledge and support exam preparation.

1.

_________ can be configured per table for non-QUORUM consistency levels.
(a) Read repair
(b) Read damage
(c) Write repair
(d) None of the mentioned

Answer»

The correct answer is (a) Read repair

Explanation: If the replicas are inconsistent, the coordinator issues writes to the out-of-date replicas to update the row to the most recent values. This process is known as read repair.

2.

There are _________ types of read requests that a coordinator can send to a replica.
(a) two
(b) three
(c) four
(d) all of the mentioned

Answer»

Right choice is (b) three

Explanation: A coordinator can send three types of read requests to a replica: a direct read request, a digest request, and a background read repair request.

3.

The type of __________ strategy Cassandra performs on your data is configurable and can significantly affect read performance.
(a) compression
(b) collection
(c) compaction
(d) decompression

Answer»

Correct answer is (c) compaction

Explanation: Using the SizeTieredCompactionStrategy or DateTieredCompactionStrategy tends to cause data fragmentation when rows are frequently updated.
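The compaction strategy is set per table in CQL. A minimal sketch (the keyspace and table names here are hypothetical):

```sql
-- Hypothetical table; pick the strategy that matches the workload.
ALTER TABLE mykeyspace.users
  WITH compaction = { 'class' : 'SizeTieredCompactionStrategy' };
```

For read-heavy tables with frequent updates, LeveledCompactionStrategy is often a better fit than the size-tiered default.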

4.

The compression offset map grows to ____ GB per terabyte compressed.
(a) 1-3
(b) 10-16
(c) 20-22
(d) 0-1

Answer»

Right choice is (a) 1-3

Explanation: The more you compress data, the greater the number of compressed blocks you have and the larger the compression offset table.

5.

Point out the wrong statement.
(a) A hint indicates that a write needs to be replayed to one or more unavailable nodes
(b) When the cluster cannot meet the consistency level specified by the client, Cassandra does store a hint
(c) By default, hints are saved for three hours after a replica fails because if the replica is down longer than that, it is likely permanently dead
(d) All of the mentioned

Answer» Correct answer is (b) When the cluster cannot meet the consistency level specified by the client, Cassandra does store a hint

Explanation: When the cluster cannot meet the consistency level specified by the client, Cassandra does not store a hint.
6.

You configure sample frequency by changing the ________ property in the table definition.
(a) index_time
(b) index_interval
(c) index_secs
(d) none of the mentioned

Answer»

The correct choice is (b) index_interval

Explanation: By default, the partition summary is a sample of the partition index; the index_interval property controls how frequently the index is sampled.
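In older Cassandra releases the sample frequency could be set per table, roughly as follows (the table name is hypothetical):

```sql
-- Sample one partition-index entry out of every 128 (pre-2.1 syntax).
ALTER TABLE mykeyspace.users WITH index_interval = 128;
```

Note that later Cassandra versions replace this single property with min_index_interval and max_index_interval.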

7.

Point out the correct statement.
(a) Cassandra does not immediately remove data marked for deletion from disk
(b) A deleted column can reappear if you do not run node repair routinely
(c) The deletion of marked data occurs during compaction
(d) All of the mentioned

Answer»

The correct choice is (d) All of the mentioned

Explanation: Marking data with a tombstone signals Cassandra to retry sending a delete request to a replica that was down at the time of the delete.

8.

Cassandra searches the __________ to determine the approximate location on disk of the index entry.
(a) partition record
(b) partition summary
(c) partition search
(d) all of the mentioned

Answer»

Correct option is (b) partition summary

Explanation: If the Bloom filter does not rule out the SSTable, Cassandra checks the partition key cache; on a cache miss, it consults the partition summary to find the approximate location of the index entry on disk.

9.

_________ is a Cassandra feature that optimizes the cluster consistency process.
(a) Hinted handon
(b) Hinted handoff
(c) Tombstone
(d) Hinted tomb

Answer»

The correct choice is (b) Hinted handoff

Explanation: You can enable or disable hinted handoff in the cassandra.yaml file.
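A minimal cassandra.yaml sketch of the hinted handoff settings discussed above:

```yaml
# Enable or disable hinted handoff cluster-wide.
hinted_handoff_enabled: true
# Stop storing hints for a dead node after this window (3 hours).
max_hint_window_in_ms: 10800000
```

The three-hour window matches the default mentioned in question 5: a replica down longer than that is treated as likely permanently dead.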

10.

Cassandra marks data to be deleted using _________
(a) tombstone
(b) combstone
(c) tenstone
(d) none of the mentioned

Answer»

The correct option is (a) tombstone

Explanation: Cassandra also does not delete in place because the SSTable is immutable.

11.

For each SSTable, Cassandra creates _________ index.
(a) memory
(b) partition
(c) in memory
(d) all of the mentioned

Answer» Correct choice is (b) partition

Explanation: The partition index is a list of partition keys and the start position of rows in the data file (on disk).
12.

Data in the commit log is purged after its corresponding data in the memtable is flushed to an _________
(a) SSHables
(b) SSTable
(c) Memtables
(d) None of the mentioned

Answer»

Correct choice is (b) SSTable

Explanation: SSTables are immutable; they are not written to again after the memtable is flushed.

13.

When ___________ contents exceed a configurable threshold, the memtable data, which includes indexes, is put in a queue to be flushed to disk.
(a) subtable
(b) memtable
(c) intable
(d) memorytable

Answer» Right option is (b) memtable

Explanation: You can configure the length of the queue by changing memtable_flush_queue_size in cassandra.yaml.
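A cassandra.yaml sketch of the queue setting referenced above (the value shown is the historical default in older releases):

```yaml
# Number of full memtables allowed to wait for a flush writer
# before writes block.
memtable_flush_queue_size: 4
```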
14.

Cassandra creates a ___________ for each table, which allows you to symlink a table to a chosen physical drive or data volume.
(a) directory
(b) subdirectory
(c) domain
(d) path

Answer»

Correct option is (b) subdirectory

Explanation: The new file name format includes the keyspace name to distinguish which keyspace and table the file contains when streaming or bulk loading data.

15.

Point out the wrong statement.
(a) Cassandra provides fine-grained control of table storage on disk, writing tables to disk using separate table directories within each keyspace directory
(b) The hinted handoff feature and Cassandra conformance and conformance to the ACID
(c) Client utilities and application programming interfaces (APIs) for developing applications for data storage and retrieval are available
(d) None of the mentioned

Answer»

The correct option is (b) The hinted handoff feature and Cassandra conformance and conformance to the ACID

Explanation: Cassandra does not conform to the ACID model; the hinted handoff feature is part of its eventual-consistency design, so statement (b) is wrong.

16.

__________ is one of many possible IAuthorizer implementations and the one that stores permissions in the system_auth.permissions table to support all authorization-related CQL statements.
(a) CassandraAuth
(b) CassandraAuthorizer
(c) CassAuthorizer
(d) All of the mentioned

Answer»

The correct choice is (b) CassandraAuthorizer

Explanation: Configuration consists mainly of changing the authorizer option in cassandra.yaml to use the CassandraAuthorizer.
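A cassandra.yaml sketch enabling the authorizer discussed above; it is usually paired with internal password authentication:

```yaml
# Store permissions in system_auth tables and enforce GRANT/REVOKE.
authorizer: CassandraAuthorizer
# Require username/password login backed by system_auth.
authenticator: PasswordAuthenticator
```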

17.

Point out the correct statement.
(a) Cassandra accommodates expensive, consumer SSDs extremely well
(b) Cassandra re-writes or re-reads existing data, and never overwrites the rows in place
(c) Cassandra uses a storage structure similar to a Log-Structured Merge Tree
(d) None of the mentioned

Answer»

The correct option is (c) Cassandra uses a storage structure similar to a Log-Structured Merge Tree

Explanation: A log-structured engine that avoids overwrites and uses sequential IO to update data is essential for writing to hard disks (HDD) and solid-state disks (SSD).

18.

A _________ grants initial permissions, and subsequently a user may or may not be given the permission to grant/revoke permissions.
(a) keyspace
(b) superuser
(c) sudouser
(d) none of the mentioned

Answer»

Correct answer is (b) superuser

Explanation: Object permission management is based on internal authorization; a superuser grants the initial permissions.

19.

Using a ___________ file means you don't have to override the SSL_CERTFILE environmental variables every time.
(a) qlshrc
(b) cqshrc
(c) cqlshrc
(d) none of the mentioned

Answer»

The correct answer is (c) cqlshrc

Explanation: When cqlsh is used with SSL encryption, it reads its SSL settings from the cqlshrc file, so you do not have to override the SSL_CERTFILE environment variable each time.
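A minimal cqlshrc sketch (the file paths are hypothetical) so cqlsh picks up the certificate without environment variables:

```ini
[connection]
factory = cqlshlib.ssl.ssl_transport_factory

[ssl]
certfile = ~/keys/cassandra.cert
validate = true
```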

20.

Client-to-node encryption protects data in flight from client machines to a database cluster using ___________
(a) SSL
(b) SSH
(c) SSN
(d) All of the mentioned

Answer»

Correct answer is (a) SSL

Explanation: Client-to-node encryption establishes a secure channel between the client and the coordinator node.

21.

Authorization capabilities for Cassandra use the familiar _________ security paradigm to manage object permissions.
(a) COMMIT
(b) GRANT
(c) ROLLBACK
(d) None of the mentioned

Answer»

Right option is (b) GRANT

Explanation: Once a client is authenticated into a database cluster using internal authentication, the next security issue to be tackled is permission management.
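A sketch of the GRANT paradigm in CQL (the keyspace and user names here are hypothetical):

```sql
-- Allow the user to read, but not modify, data in one keyspace.
GRANT SELECT ON KEYSPACE sales TO analyst;
-- Permissions can be withdrawn the same way.
REVOKE SELECT ON KEYSPACE sales FROM analyst;
```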

22.

User accounts may be altered and dropped using the __________ Query Language.
(a) Hive
(b) Cassandra
(c) Sqoop
(d) None of the mentioned

Answer» Correct option is (b) Cassandra

Explanation: Cassandra manages user accounts and access to the database cluster using passwords.
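A CQL sketch of the account-management statements (the user name and passwords are hypothetical):

```sql
CREATE USER alice WITH PASSWORD 'example_password' NOSUPERUSER;
ALTER USER alice WITH PASSWORD 'new_example_password';
DROP USER alice;
```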
23.

Point out the wrong statement.
(a) Cassandra supplies linear scalability, meaning that capacity may be easily added simply by adding new nodes online
(b) Cassandra 2.0 included major enhancements to CQL, security, and performance
(c) CQL for Cassandra 2.0.6 adds several important features including batching of conditional updates, static columns, and increased control over slicing of clustering columns
(d) None of the mentioned

Answer»

The correct answer is (d) None of the mentioned

Explanation: Cassandra is a highly scalable, eventually consistent, distributed, structured key-value store.

24.

A __________ determines which data centers and racks nodes belong to.
(a) Client requests
(b) Snitch
(c) Partitioner
(d) None of the mentioned

Answer»

Right option is (b) Snitch

Explanation: Client read or write requests can be sent to any node in the cluster because all nodes in Cassandra are peers.

25.

Cassandra uses a protocol called _______ to discover location and state information.
(a) gossip
(b) intergos
(c) goss
(d) all of the mentioned

Answer»

Correct option is (a) gossip

Explanation: Gossip is used for internode communication.

26.

Point out the correct statement.
(a) Cassandra delivers continuous availability, linear scalability, and operational simplicity across many commodity servers
(b) Cassandra has a “masterless” architecture, meaning all nodes are the same
(c) Cassandra also provides customizable replication, storing redundant copies of data across nodes that participate in a Cassandra ring
(d) All of the mentioned

Answer»

Correct choice is (d) All of the mentioned

Explanation: Cassandra provides automatic data distribution across all nodes that participate in a “ring” or database cluster.

27.

The __________ tool imports a set of tables from an RDBMS to HDFS.
(a) export-all-tables
(b) import-all-tables
(c) import-tables
(d) none of the mentioned

Answer» Correct option is (b) import-all-tables

Explanation: The import-all-tables tool imports each table from the database, and data from each table is stored in a separate directory in HDFS.
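A command-line sketch, assuming a reachable MySQL database (the connection string, user, and paths here are hypothetical):

```shell
sqoop import-all-tables \
  --connect jdbc:mysql://db.example.com/corp \
  --username someuser -P \
  --warehouse-dir /user/hadoop/corp
```

Each table lands in its own subdirectory under the warehouse directory.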
28.

Sqoop can also import the data into Hive by generating and executing a ____________ statement to define the data’s layout in Hive.
(a) SET TABLE
(b) CREATE TABLE
(c) INSERT TABLE
(d) All of the mentioned

Answer»

The correct choice is (b) CREATE TABLE

Explanation: Importing data into Hive is as simple as adding the --hive-import option to your Sqoop command line.
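A sketch of a Hive import (the connection details and table name are hypothetical); Sqoop generates and runs the CREATE TABLE statement for you:

```shell
sqoop import \
  --connect jdbc:mysql://db.example.com/corp \
  --username someuser -P \
  --table employees \
  --hive-import
```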

29.

Apache Cassandra is a massively scalable open source _______ database.
(a) SQL
(b) NoSQL
(c) NewSQL
(d) All of the mentioned

Answer» Correct answer is (b) NoSQL

Explanation: Cassandra is perfect for managing large amounts of data across multiple data centers and the cloud.
30.

________ does not support the notion of enclosing characters that may include field delimiters in the enclosed string.
(a) Impala
(b) Oozie
(c) Sqoop
(d) Hive

Answer»

The correct choice is (d) Hive

Explanation: Even though Hive supports escaping characters, it does not handle escaping of the new-line character, nor does it support enclosing characters that may include field delimiters.

31.

If you set the inline LOB limit to ________ all large objects will be placed in external storage.
(a) 0
(b) 1
(c) 2
(d) 3

Answer»

Correct option is (a) 0

Explanation: The size at which LOBs spill into separate files is controlled by the --inline-lob-limit argument, which takes a parameter specifying the largest LOB size to keep inline, in bytes; setting it to 0 places all large objects in external storage.
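A sketch forcing all large objects into external storage (the connection details and table name are hypothetical):

```shell
sqoop import \
  --connect jdbc:mysql://db.example.com/corp \
  --username someuser -P \
  --table documents \
  --inline-lob-limit 0
```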

32.

Point out the wrong statement.
(a) Avro data files are a compact, efficient binary format that provides interoperability with applications written in other programming languages
(b) By default, data is compressed while importing
(c) Delimited text also readily supports further manipulation by other tools, such as Hive
(d) None of the mentioned

Answer»

The correct answer is (b) By default, data is compressed while importing

Explanation: Data is not compressed by default; you can compress it using the deflate (gzip) algorithm with the -z or --compress argument, or specify any Hadoop compression codec using the --compression-codec argument.
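A sketch of both compression options (the connection details and table name are hypothetical):

```shell
# Deflate (gzip) compression:
sqoop import --connect jdbc:mysql://db.example.com/corp \
  --username someuser -P --table logs --compress

# Or any Hadoop codec, e.g. Snappy:
sqoop import --connect jdbc:mysql://db.example.com/corp \
  --username someuser -P --table logs \
  --compression-codec org.apache.hadoop.io.compress.SnappyCodec
```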

33.

________ text is appropriate for most non-binary data types.
(a) Character
(b) Binary
(c) Delimited
(d) None of the mentioned

Answer»

Correct choice is (c) Delimited

Explanation: Delimited text is the default import format.

34.

Data can be imported in maximum ______ file formats.
(a) 1
(b) 2
(c) 3
(d) All of the mentioned

Answer»

The correct option is (b) 2

Explanation: You can import data in one of two file formats: delimited text or SequenceFiles.

35.

Point out the correct statement.
(a) The sqoop command-line program is a wrapper which runs the bin/hadoop script shipped with Hadoop
(b) If $HADOOP_HOME is set, Sqoop will use the default installation location for Cloudera’s Distribution for Hadoop
(c) The active Hadoop configuration is loaded from $HADOOP_HOME/conf/, unless the $HADOOP_CONF_DIR environment variable is unset
(d) None of the mentioned

Answer»

Correct option is (a) The sqoop command-line program is a wrapper which runs the bin/hadoop script shipped with Hadoop

Explanation: If you have multiple installations of Hadoop present on your machine, you can select the Hadoop installation by setting the $HADOOP_HOME environment variable.

36.

Sqoop direct mode does not support imports of ______ columns.
(a) BLOB
(b) LONGVARBINARY
(c) CLOB
(d) All of the mentioned

Answer»

Right answer is (d) All of the mentioned

Explanation: Use JDBC-based imports for these columns; do not supply the --direct argument to the import tool.

37.

The _________ tool can list all the available database schemas.
(a) sqoop-list-tables
(b) sqoop-list-databases
(c) sqoop-list-schema
(d) sqoop-list-columns

Answer»

The correct option is (b) sqoop-list-databases

Explanation: The sqoop-list-databases tool lists the database schemas available on a server; Sqoop also includes a primitive SQL execution shell (the sqoop-eval tool).
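A sketch of the tool's invocation (the connection details are hypothetical):

```shell
sqoop list-databases \
  --connect jdbc:mysql://db.example.com/ \
  --username someuser -P
```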

38.

__________ provides a Couchbase Server-Hadoop connector by means of Sqoop.
(a) MemCache
(b) Couchbase
(c) Hbase
(d) All of the mentioned

Answer»

The correct option is (b) Couchbase

Explanation: Couchbase provides a Couchbase Server-Hadoop connector by means of Sqoop; exports can be used to put data from Hadoop back into the data store.

39.

Microsoft uses a Sqoop-based connector to help transfer data from _________ databases to Hadoop.
(a) PostgreSQL
(b) SQL Server
(c) Oracle
(d) MySQL

Answer»

The correct answer is (b) SQL Server

Explanation: Sqoop is a command-line interface application for transferring data between relational databases and Hadoop.

40.

_________ allows users to specify the target location inside of Hadoop.
(a) Impala
(b) Oozie
(c) Sqoop
(d) Hive

Answer»

Right option is (c) Sqoop

Explanation: Sqoop is a connectivity tool for moving data from non-Hadoop data stores, such as relational databases and data warehouses, into Hadoop.

41.

Point out the wrong statement.
(a) Sqoop is used to import complete database
(b) Sqoop is used to import selected columns from a particular table
(c) Sqoop is used to import selected tables
(d) All of the mentioned

Answer»

Correct answer is (d) All of the mentioned

Explanation: Apache Sqoop is a tool which allows users to import data from relational databases to HDFS and export data from HDFS to relational databases.

42.

Sqoop uses _________ to fetch data from RDBMS and stores that on HDFS.
(a) Hive
(b) Map reduce
(c) Impala
(d) BigTop

Answer»

Right option is (b) Map reduce

Explanation: While fetching, Sqoop throttles the number of mappers accessing data on the RDBMS so that the import does not overload the database.

43.

Sqoop is an open source tool written at ________
(a) Cloudera
(b) IBM
(c) Microsoft
(d) All of the mentioned

Answer»

Right answer is (a) Cloudera

Explanation: Sqoop was originally developed at Cloudera; it allows users to import data from their relational databases into HDFS and vice versa.

44.

Which of the following interfaces is implemented by Sqoop for record handling?
(a) SqoopWrite
(b) SqoopRecord
(c) SqoopRead
(d) None of the mentioned

Answer»

Right option is (b) SqoopRecord

Explanation: SqoopRecord is an interface implemented by the classes generated by Sqoop's orm.ClassWriter.

45.

Point out the correct statement.
(a) Interface FieldMapping is used for mapping of field
(b) Interface FieldMappable is used for mapping of field
(c) Sqoop is nothing but NoSQL to Hadoop
(d) Sqoop internally uses ODBC interface so it should work with any JDBC compatible database

Answer»

Right option is (b) Interface FieldMappable is used for mapping of field

Explanation: The FieldMappable interface describes a class capable of returning a map of the fields of the object to their values.

46.

Records are terminated by a __________ character.
(a) RECORD_DELIMITER
(b) FIELD_DELIMITER
(c) FIELD_LIMITER
(d) None of the mentioned

Answer»

Correct option is (a) RECORD_DELIMITER

Explanation: The RecordParser class parses a record containing one or more fields, with records terminated by a RECORD_DELIMITER character.

47.

Which of the following classes is used for general processing of errors?
(a) LargeObjectLoader
(b) ProcessingException
(c) DelimiterSet
(d) LobSerializer

Answer» Correct option is (b) ProcessingException

Explanation: ProcessingException signals a general error that occurs during the processing of a SqoopRecord.
48.

ClobRef is a wrapper that holds a CLOB either directly or a reference to a file that holds the ______ data.
(a) CLOB
(b) BLOB
(c) MLOB
(d) All of the mentioned

Answer»

Right option is (a) CLOB

Explanation: A ClobRef can be created based on parsed data from a line of text; it holds the CLOB data either inline or as a reference to an external file.

49.

Which of the following is a singleton instance class?
(a) LargeObjectLoader
(b) FieldMapProcessor
(c) DelimiterSet
(d) LobSerializer

Answer»

The correct choice is (a) LargeObjectLoader

Explanation: Its lifetime is limited to the life of the current TaskInputOutputContext.

50.

_________ supports null values for all types.
(a) SmallObjectLoader
(b) FieldMapProcessor
(c) DelimiterSet
(d) JdbcWritableBridge

Answer»

Correct option is (d) JdbcWritableBridge

Explanation: The JdbcWritableBridge class contains a set of methods which can read DB columns from a ResultSet into Java types, and it supports null values for all types.