Interview Solutions
This section offers curated Kafka interview questions and answers to sharpen your knowledge and support interview preparation.
| 1. |
How do we design consumer groups in Kafka for high throughput? |
|
Answer» Consider a scenario where we need to read data from a Kafka topic and, only after some custom validation, write it into a data storage system. To achieve this we develop a consumer application that subscribes to the topic, so the application starts receiving messages on which the validation and storage logic eventually runs. Now suppose the rate at which messages are published to the topic exceeds the rate at which our consumer application consumes them. With a single consumer we may fall behind and never catch up with the incoming messages. The solution to this problem is adding more consumers to scale up consumption of the topic. This is easily achieved by creating a consumer group: a consortium of consumers with similar behaviour that read messages from the same topic by splitting the workload. Each consumer in the group gets its own subset of the topic's partitions, which scales up message consumption and throughput. If we have a single consumer for a topic with 4 partitions, that one consumer reads messages from all 4 partitions. The ideal architecture for the scenario above is four consumers, each reading messages from an individual partition. Having more consumers than partitions, on the other hand, leaves some consumers sitting idle, which is also poor architecture design. There is another scenario as well, where more than one consumer group subscribes to the same topic: |
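As a minimal sketch (broker address, group id and topic name are placeholders, not from the original answer), every instance of this application started with the same group.id joins one consumer group, and Kafka splits the topic's partitions among the live instances:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ValidatingConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // All instances sharing this group.id divide the topic's partitions among themselves.
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "validation-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("events"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                for (ConsumerRecord<String, String> record : records) {
                    // Custom validation before writing to the storage system would go here.
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}
```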
|
| 2. |
What is the poll loop in Kafka? |
|
Answer» The consumer subscribes to topics in Kafka, but it is the poll loop that informs the consumer whether any new data has arrived. The poll loop is responsible for handling coordination, partition rebalances, heartbeats and data fetching. It is the core function of the consumer API, which keeps polling the server for new data. A typical poll loop in Kafka looks like this:

```java
try {
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(100);
        for (ConsumerRecord<String, String> record : records) {
            log.debug("topic = %s, partition = %d, offset = %d, customer = %s, country = %s\n",
                    record.topic(), record.partition(), record.offset(),
                    record.key(), record.value());
            int updatedCount = 1;
            if (custCountryMap.containsKey(record.value())) {
                updatedCount = custCountryMap.get(record.value()) + 1;
            }
            custCountryMap.put(record.value(), updatedCount);
            JSONObject json = new JSONObject(custCountryMap);
            System.out.println(json.toString(4));
        }
    }
} finally {
    // Closing the consumer commits offsets (if auto-commit is on) and triggers a rebalance.
    consumer.close();
}
```
|
|
| 3. |
In a consumer group, what is the process of assigning a partition to a particular consumer? |
|
Answer» The first step for any consumer to join a consumer group is raising a request to the group coordinator. There is a group leader in every consumer group, which is usually the first member of the group. The group leader gets the list of all members from the coordinator: consumers that have recently sent heartbeats to the group are considered alive, while other members are dropped from the group. It is the responsibility of the group leader to assign partitions to individual consumers; it uses an implementation of PartitionAssignor to do so, and there are built-in partition assignment policies. Once the assignment is done, the group leader sends it to the group coordinator, which in turn informs each consumer of its assignment. Individual consumers only know their own assignments, while the group leader keeps track of all assignments. This whole process is called partition rebalancing, and it happens whenever a consumer joins or exits the group. This step is critical to performance and high message throughput. |
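The assignment policy is configurable on the consumer. As a small sketch (the group id is a placeholder), one of the built-in assignors can be selected like this:

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.RoundRobinAssignor;

public class AssignorConfig {
    public static Properties consumerProps() {
        Properties props = new Properties();
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "validation-group");
        // RangeAssignor is the default; RoundRobinAssignor spreads the partitions
        // of the subscribed topics more evenly across group members.
        props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
                RoundRobinAssignor.class.getName());
        return props;
    }
}
```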
|
| 4. |
How is multi-tenancy achieved in Kafka? |
|
Answer» A multi-tenant system serves multiple clients at the same time. Kafka has inbuilt support for multi-tenancy if we are not concerned with isolation and security: it is already multi-tenant in the sense that everyone can read/write data to a Kafka broker. But a real multi-tenant system should also provide isolation and security to serve multiple clients. Security and isolation can be achieved with the following setup:
Two-way SSL can be used for authentication/authorization, or a token-based identity provider can be used for the same purpose. We can also set up role-based access to topics using ACLs. |
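As an illustrative sketch (principal, host and topic name are made-up values, and the brokers are assumed to have an authorizer configured), a topic-level ACL can be created through the AdminClient API:

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;

public class TopicAclExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Allow the tenant "alice" to read only from her own topic.
            AclBinding binding = new AclBinding(
                    new ResourcePattern(ResourceType.TOPIC, "tenant-alice-events", PatternType.LITERAL),
                    new AccessControlEntry("User:alice", "*", AclOperation.READ, AclPermissionType.ALLOW));
            admin.createAcls(Collections.singleton(binding)).all().get();
        }
    }
}
```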
|
| 5. |
Explain the anatomy of a Kafka topic. |
|
Answer» The topic is a very important feature of Kafka architecture. Messages are grouped into topics: the producer system sends messages to a specific topic, while the consumer system reads messages from a specific topic only. Messages within a topic are further distributed into several partitions, and each partition can in turn be replicated across multiple brokers. Individual partitions can reside on individual machines, which allows messages from the same topic to be read in parallel. Multiple subscriber systems can process data from multiple partitions, which results in high messaging throughput. Each message within a partition is tagged with a unique identifier called the offset. The offset is sequentially incremented to ensure the ordering of messages. A subscriber system can read data from a specified offset, but it is also allowed to read data from any other offset point. |
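As an illustrative sketch (topic name, partition count and replication factor are made-up values), such a topic can be created programmatically with the AdminClient:

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // 4 partitions allow up to 4 consumers in one group to read in parallel;
            // replication factor 2 keeps a copy of each partition on a second broker.
            NewTopic topic = new NewTopic("orders", 4, (short) 2);
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}
```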
|
| 6. |
What are the main features of Kafka that make it suitable for data integration and data processing in real-time? |
|
Answer» Kafka has established itself as a market leader among stream processing platforms and is one of the most popular message brokers. It works on the publisher-subscriber model of messaging and provides decoupling between producer and consumer systems: they are unaware of each other and work independently, and the consumer system has no information about the source system that pushed the messages into Kafka. Producer systems publish messages on a topic (a named grouping of messages), and the messages are broadcast to the consumer systems subscribed to those topics. It is an event-driven architecture and solves most of the problems faced by traditional messaging platforms. Key features like data partitioning, scalability, low latency and high throughput are the reasons why it has become a top choice for real-time data integration and data processing needs. |
|
| 7. |
How does Kafka fit in a microservices architecture? |
|
Answer» A typical microservices deployment has many microservices collaborating, and that is a huge problem if not handled properly. It is not practical for each service to have a direct connection to every service it needs to talk to, for two reasons: first, the number of such connections grows quickly; second, the services being called may be down or may have moved to another server. With 2 services there are up to 2 direct connections; with 3 services, 6; with 4 services, 12, and so on. Such connections can be seen as coupling between the services, just as objects in an OO program must cooperate, yet the less coupling between their classes, the more manageable the program is. Message brokers are a way of decoupling the sending and receiving services through the idea of publish and subscribe. The sending service (producer) posts its message/payload on the message queue and the receiving service (consumer), which is listening for messages, picks it up. Message brokering is one of the key use cases for Kafka. Another thing message brokers do is queue or hold a message until the consumer picks it up: if the consumer service is down or busy when the sender sends the message, it can always take it up later. The result of this is that producer services do not need to worry about checking whether the message got through, retrying on failure, and so on. Kafka is powerful because it provides both pub-sub and queuing capabilities (traditionally a broker supported only one or the other). It also ensures that the order of messages is maintained and not subject to network latency or other factors, and it allows us to broadcast messages to multiple consumers if necessary. Kafka's importance lies in building reliable, scalable microservices solutions with minimum configuration. |
|
| 8. |
Explain the producer API in Kafka. |
|
Answer» The core part of the Kafka producer API is the “KafkaProducer” class. When we instantiate this class, we pass the connection details of the Kafka broker through its constructor configuration. It has the method “send”, which allows the producer system to send messages to a topic asynchronously.
The Kafka producer also has a flush method, which is used to ensure previously sent messages have actually been cleared from the buffer.
Producers are broadly classified into two types: sync and async. A message is sent directly to the broker by a sync producer, while it passes through in the background in the case of an async producer; an async producer is used when we need high throughput. The producer API exposes a number of configuration settings to control this behaviour.
One ProducerRecord class constructor is used to create a record with a key and value but without a partition.
Another ProducerRecord constructor creates a record without a partition or key.
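A minimal sketch of the above (broker address, topic and message contents are placeholders): instantiate KafkaProducer, send asynchronously, then flush:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SimpleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // send() is asynchronous: it appends to a buffer and returns a Future immediately.
            producer.send(new ProducerRecord<>("orders", "order-42", "created"));
            // flush() blocks until all buffered records have been transmitted.
            producer.flush();
        }
    }
}
```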
|
| 9. |
What is a consumer group in Kafka? |
|
Answer» Let's first understand the concept of the consumer in Kafka architecture. Consumers are the systems or processes that subscribe to topics created on the Kafka broker. Producer systems send messages to topics, and only once messages are committed successfully are subscriber systems allowed to read them. A consumer group is a tagging of consumer systems in such a way as to make consumption a multi-threaded or multi-machine activity. For example, two consumers 1 & 2 can be tagged into the same group, with each consumer reading data from a different partition of the topic. Some common characteristics of consumer groups are as follows:
The recommendation for a consumer group is to have the number of consumer instances in line with the number of partitions. If we go with a greater number of consumers, the excess consumers sit idle and waste resources. If the number of partitions is greater, the same consumer reads from more than one partition. This is not an issue as long as the ordering of messages is not important for the use case, because Kafka does not have inbuilt support for ordering messages across different partitions. This is why Kafka recommends having the number of consumers in line with the number of partitions, to maintain the ordering of messages. |
|
| 10. |
What is the difference between a shared message queue and the traditional publisher-subscriber model? |
|
Answer» Shared message queue: a shared message queue allows a stream of messages from a producer to serve a single consumer. Each message pushed to the system is read only once, and by only one consumer; consumers pull messages from the end of the queue, and the queuing system removes a message once it has been pulled successfully. Downsides: once a message is consumed it is deleted, so it cannot be replayed or shared with multiple applications, and adding consumers only adds competing readers on the same stream rather than broadcasting it.
Traditional publisher-subscriber systems: the publisher-subscriber model allows multiple publishers to publish messages to topics hosted by message brokers, which can be subscribed to by multiple subscribers; a message is thereby broadcast to all subscribers of a topic. Downsides: because every subscriber receives every message, subscribers cannot divide the workload of a single stream among themselves, which limits how far message processing can scale. |
|
|
| 11. |
Explain the steps for Kafka installation. |
|
Answer» One can follow the steps below to install Kafka.

Step 1: Ensure Java is installed on the machine by running the following command in a terminal:

$ java -version

You will see the Java version if it is installed. If Java is not installed, the following steps install it:

1. Download the latest JDK from the JDK link.
2. Extract the executables and move them to the /opt directory.
3. Set the path for the JAVA_HOME variable by adding the appropriate export command to the ~/.bashrc file.
4. Ensure the above changes take effect in the running system, along with updating the Java alternatives.

Step 2: The next step is ZooKeeper framework installation from the ZooKeeper link.

1. Once the files have been extracted, modify the config file “conf/zoo.cfg” before starting the ZooKeeper server. After making the changes, save the config file and execute the following command to start the server:

$ bin/zkServer.sh start

Once you execute the above command, a response like the one below is shown:

$ JMX enabled by default
$ Using config: /Users/../zookeeper-3.4.6/bin/../conf/zoo.cfg
$ Starting zookeeper ... STARTED

2. Next, start the CLI:

$ bin/zkCli.sh

The above command connects to ZooKeeper, and a response like the one below appears:

Connecting to localhost:2181
……………………
Welcome to ZooKeeper!
……………………
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0]

We can also stop the ZooKeeper server after doing the basic validations.

Step 3: Now we can move to the Apache Kafka installation by downloading it from the Kafka link. Once Kafka is downloaded locally, extract the files to complete the installation, and then start the Kafka server.
|
|
| 12. |
What is the main difference between Kafka and Flume? |
|
Answer» Kafka and Flume are both Apache offerings, but there are some key differences. Below is an overview of both to understand the differences between Kafka and Flume:
1. Flume: suited to non-relational data sources, for example log files that are to be streamed into Hadoop. Kafka: suited when a very dependable and scalable enterprise-level framework is needed to connect numerous different systems (including Hadoop). 2. Kafka for Hadoop: Kafka resembles a pipeline that gathers data in real time and pushes it to Hadoop. Hadoop processes it internally and then, as per the requirement, either serves it to other consumers (dashboards, BI, and so on) or stores it for further processing.
|
| 13. |
How do we send large messages with Kafka? |
|
Answer» The Kafka system by default does not handle large messages. The default maximum message size is 1 MB, but there are ways to increase that size. We should also ensure the network buffers for our consumer and producer systems are increased accordingly. We need to adjust a few properties to achieve this:
So if we would like to send large Kafka messages, it can be achieved by tweaking the few properties explained above. The broker-related config can be found at $KAFKA_HOME/config/server.properties, while the consumer-related config is found at $KAFKA_HOME/config/consumer.properties. |
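The property names below are not from the original answer but are the standard size-related settings: the broker's message.max.bytes and replica.fetch.max.bytes (in server.properties), the producer's max.request.size, and the consumer's max.partition.fetch.bytes. A hedged sketch of the client side:

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.producer.ProducerConfig;

public class LargeMessageConfig {
    static final int TEN_MB = 10 * 1024 * 1024; // illustrative 10 MB limit

    public static Properties producerProps() {
        Properties props = new Properties();
        // Allow the producer to send requests up to 10 MB (the broker must allow
        // this too, via message.max.bytes in server.properties).
        props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, TEN_MB);
        return props;
    }

    public static Properties consumerProps() {
        Properties props = new Properties();
        // Allow the consumer to fetch batches up to 10 MB per partition.
        props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, TEN_MB);
        return props;
    }
}
```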
|
| 14. |
How do we achieve FIFO behaviour in Kafka? |
|
Answer» Kafka stores messages in topics, which in turn are stored in different partitions. A partition is an immutable sequence of ordered messages that is continuously appended to. A message is uniquely identified in a partition by a sequential number called the offset. FIFO behaviour can only be guaranteed inside a partition, and we can achieve it by following the steps below: 1. First set the auto-commit property to false: set enable.auto.commit=false. 2. Once messages are processed, do not make a call to consumer.commitSync(). 3. Then call “subscribe” to register the consumer system to the topic. 4. Implement a ConsumerRebalanceListener, and within the listener call consumer.seek(topicPartition, offset). 5. Once a message is processed, store the offset associated with the message along with the processed message. 6. Also ensure processing is idempotent as a safety measure. |
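A minimal sketch of steps 1–4 (broker address, group id and topic are placeholders, and loadStoredOffset is a hypothetical helper standing in for the external offset store of step 5):

```java
import java.time.Duration;
import java.util.Collection;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class FifoConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "fifo-group");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false"); // step 1
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        // Step 3: subscribe, with a rebalance listener (step 4) that rewinds each
        // newly assigned partition to the offset we stored ourselves.
        consumer.subscribe(Collections.singletonList("orders"), new ConsumerRebalanceListener() {
            @Override public void onPartitionsRevoked(Collection<TopicPartition> partitions) { }
            @Override public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                for (TopicPartition tp : partitions) {
                    consumer.seek(tp, loadStoredOffset(tp));
                }
            }
        });
        while (true) {
            consumer.poll(Duration.ofMillis(100)).forEach(record -> {
                // Step 5: process the record, then persist record.offset() + 1
                // together with the result, instead of committing to Kafka.
            });
        }
    }

    // Hypothetical helper: in a real system this reads the offset persisted in step 5.
    static long loadStoredOffset(TopicPartition tp) {
        return 0L;
    }
}
```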
|
| 15. |
What is Serde in Kafka? |
|
Answer» Serde stands for the serialization and deserialization concept. A SerDe is a combination of a Serializer and a Deserializer (hence, Ser-De). Every application must offer support for serialization and deserialization of record keys and values, so that materialization can be achieved easily when needed. Serialization involves converting messages into a stream of bytes for transmission over networks; the array of bytes is then stored on the Kafka queue. Deserialization is just the reverse of serialization, ensuring the array of bytes gets converted back into meaningful data. When the producer system sends meaningful data to the broker, serialization ensures transmission and storage of the data as a byte array. The consumer system reads data from the topic in the form of a byte array, and at the consumer end that byte array must be deserialized successfully to convert it into meaningful data. Configuring SerDes:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsConfig;

Properties settings = new Properties();
// Default serde for keys of data records (here: built-in serde for String type)
settings.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
// Default serde for values of data records (here: built-in serde for Long type)
settings.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.Long().getClass().getName());
StreamsConfig config = new StreamsConfig(settings);
```
|
|
| 16. |
What is a partition key in Kafka? |
|
Answer» As we know, producers publish messages to the different partitions of a topic. A message consists of a chunk of data, and along with the data the producer system can also send a key. This key is called the partition key. Data that comes with the same key always gets stored in the same partition. Consider a real-world system where we have to track a user's activity while using the application: we can store the user's data in the same partition by using a partition key, i.e. tagging the user's data with a key achieves this objective. Say we have to store user u0's data in partition p0: we can tag u0's data with a unique key which ensures that this user's data always gets stored in partition p0. That does not mean partition p0 cannot store other users' data. To summarize, the partition key is used to determine the destination partition where a message will be stored. |
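A small sketch (topic name and user id are placeholders): records created with the same key are hashed to the same partition by the default partitioner:

```java
import org.apache.kafka.clients.producer.ProducerRecord;

public class KeyedRecordExample {
    public static ProducerRecord<String, String> userEvent(String userId, String event) {
        // Records sharing the same key (e.g. "u0") are hashed to the same partition,
        // so one user's events stay ordered relative to each other.
        return new ProducerRecord<>("user-activity", userId, event);
    }
}
```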
|
| 17. |
How do we start the Kafka server? |
|
Answer» Kafka is dependent on ZooKeeper, so we must start ZooKeeper before starting the Kafka server. Please find below the step-by-step process to start the Kafka server: 1. Start ZooKeeper from a terminal. 2. Once ZooKeeper is running, start the Kafka server. 3. The next step is checking the services running in the backend. 4. Once the Kafka server is running, we can create a topic. 5. Finally, we can check the available topics by listing them from the terminal. |
|
| 18. |
What is geo-replication in Kafka? |
|
Answer» Geo-replication enables replication across different data centers or different clusters. The Kafka MirrorMaker enables geo-replication; this process is called mirroring. Mirroring is different from replication across different nodes in the same cluster. Kafka's MirrorMaker ensures that messages from topics belonging to one or more source Kafka clusters are replicated to a destination cluster with the same topic names. We should use at least one MirrorMaker to replicate one source cluster, and we can have multiple MirrorMaker processes mirroring topics within the same consumer group. This enables high throughput and makes the system resilient: if one of the MirrorMaker processes goes down, the others can take over the additional load. One thing is very important here: the source and destination clusters are independent of each other, having different partitions and offsets. |
|
| 19. |
What are the core APIs in Kafka? |
|
Answer» There are four core APIs available in Kafka. An overview of each: the Producer API allows an application to publish a stream of records to one or more Kafka topics; the Consumer API allows an application to subscribe to one or more topics and process the stream of records delivered to it; the Streams API allows an application to act as a stream processor, consuming an input stream from one or more topics and producing an output stream to one or more output topics; and the Connect API allows building and running reusable connectors that link Kafka topics to existing applications or data systems.
|
|
| 20. |
How is Apache Kafka different from RabbitMQ? |
|
Answer» Both belong to the same league of message streaming platforms. RabbitMQ belongs to the traditional league of messaging platforms and supports several protocols. It is an open-source message broker with a reasonable number of features, and it supports the AMQP messaging protocol with routing features. Kafka was written in Scala and first introduced at LinkedIn to facilitate intra-system communication; it is now developed under the umbrella of the Apache Software Foundation and is more suitable in an event-driven ecosystem. Comparing the two platforms: Kafka is a more distributed, scalable and higher-throughput system than RabbitMQ. In terms of performance Kafka also scores much better: RabbitMQ can process about 20,000 messages per second, while Kafka can process around 5 times more. |
|
| 21. |
What is Kafka Mirror Maker? |
|
Answer» Kafka supports data replication within a cluster to ensure high availability. But enterprises often need data availability guarantees that span entire clusters and even withstand site failures. The solution to this is MirrorMaker, a utility that helps replicate data between two Kafka clusters within the same or different data centers. MirrorMaker is essentially a Kafka consumer and producer hooked together. The origin and destination clusters are completely different entities and can have different numbers of partitions and offsets; however, the topic names should be the same between the source and destination cluster. The MirrorMaker process also retains and uses the partition key so that ordering is maintained within a partition. |
|
| 22. |
What is a producer in Kafka? What are the different types of Kafka producer APIs? How does Kafka producer write data to a topic containing multiple partitions? |
|
Answer» A producer publishes messages to one or more Kafka topics. The message contains information about which topic and partition the message should be published to. There are three different ways for a producer to send messages: fire-and-forget, where the producer sends a message and does not wait to find out whether it arrived; synchronous send, where the producer sends a message and waits on the returned Future to know whether the send succeeded; and asynchronous send, where the producer sends a message with a callback function that is invoked when the broker responds.
Kafka messages are key-value pairs. The key is used for partitioning messages sent to the topic. When writing a message to a topic, the producer has the option to provide a message key, and this key determines which partition of the topic the message goes to. If the key is not specified, then messages are sent to the partitions of the topic in round-robin fashion. Note that Kafka orders messages only inside a partition, hence choosing the right partition key is an important factor in application design. |
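A minimal sketch of the asynchronous style (broker address, topic and values are placeholders): send() is given a callback that fires once the broker acknowledges or rejects the record:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class AsyncSendExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", "order-42", "created"),
                    (metadata, exception) -> {
                        if (exception != null) {
                            exception.printStackTrace(); // send failed after retries
                        } else {
                            System.out.printf("stored at partition %d, offset %d%n",
                                    metadata.partition(), metadata.offset());
                        }
                    });
        }
    }
}
```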
|
| 23. |
Suggest some use cases or scenarios where Kafka is a good fit? What are the use cases in which you would prefer to use a messaging system other than Kafka? |
|
Answer» Kafka is a durable, distributed and scalable messaging system designed to support high-volume transactions. Use cases that require a publish-subscribe mechanism at high throughput are a good fit for Kafka. If you need point-to-point or request/reply communication, then other messaging queues like RabbitMQ can be considered. Kafka is a good fit for real-time stream processing. It uses a dumb broker/smart consumer model, with the broker merely acting as a message store, so a scenario in which the consumer cannot be smart and requires the broker to be smart instead is not a good fit for Kafka. In such a case RabbitMQ can be used, which uses a smart broker model with the broker responsible for the consistent delivery of messages at a roughly similar pace. Also, in cases where protocols like AMQP or MQTT, or features like message routing, are needed, RabbitMQ is a better alternative than Kafka. |
|
| 24. |
How can Kafka producer maintain exactly once semantics? |
|
Answer» With the Kafka messaging system, three different types of delivery semantics can be achieved: at most once (messages may be lost but are never redelivered), at least once (messages are never lost but may be redelivered) and exactly once (each message is delivered once and only once).
Kafka transactions help achieve exactly-once semantics between Kafka brokers and clients. In order to achieve this we need to set the following properties at the producer end: enable.idempotence=true and transactional.id=<some unique id>. We also need to call initTransactions to prepare the producer to use transactions. With these properties set, if the producer (characterized by its producer id) accidentally sends the same message to Kafka more than once, the Kafka broker detects it and de-duplicates it. |
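A minimal sketch of a transactional producer under these settings (broker address, transactional id, topic and values are placeholders):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ExactlyOnceProducer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "order-service-tx-1");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions(); // registers the transactional id with the broker
            try {
                producer.beginTransaction();
                producer.send(new ProducerRecord<>("orders", "order-42", "created"));
                producer.commitTransaction(); // all or nothing
            } catch (Exception e) {
                producer.abortTransaction(); // roll back the uncommitted sends
                throw e;
            }
        }
    }
}
```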
|
| 25. |
What is meant by Consumer Lag? How can you monitor it? |
|
Answer» Kafka follows a pub-sub mechanism wherein a producer writes to a topic and one or more consumers read from that topic. However, reads in Kafka always lag behind writes, as there is always some delay between the moment a message is written and the moment it is consumed. This delta between the latest offset and the consumer offset is called consumer lag. There are various open-source tools available to measure consumer lag, e.g. LinkedIn's Burrow. Confluent Kafka comes with out-of-the-box tools to measure lag. |
|
| 26. |
Which packages need to be imported in Java/Scala? |
Answer»
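As a hedged sketch (the exact set depends on which client APIs the application uses), a plain Java client typically imports from these packages:

```java
// Producer side
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

// Consumer side
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Built-in serializers/deserializers
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
```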
|
|
| 27. |
Maven dependencies needed for Kafka? The Maven dependencies below are enough to configure the Kafka ecosystem in an application. |
|
Answer»
groupId = org.apache.spark
artifactId = spark-streaming-kafka-0-10_2.11
version = 2.2.0

groupId = org.apache.zookeeper
artifactId = zookeeper
version = 3.4.5

These dependencies come with child dependencies which are downloaded and added to the application as part of the parent dependency. |
|
| 28. |
What is the role of ZooKeeper in Kafka? |
|
Answer» ZooKeeper is a separate component that runs alongside Kafka; when we need to run a Kafka cluster, we have to set it up as a coordination server.
ZooKeeper plays a significant role when it comes to cluster management, such as fault tolerance: it detects when one node goes down so that the messages it held can be served from replicas on other nodes. |
|
| 29. |
What is the architecture of ZooKeeper? |
|
Answer» ZooKeeper is a distributed, open-source configuration and synchronization service, along with a naming registry, for distributed applications. |
|
| 30. |
What are the use cases where Kafka doesn’t fit? |
|
Answer» Notwithstanding its advantages, setting up and configuring the Kafka ecosystem is a bit difficult, and one needs good knowledge to implement it. Apart from that, there are some further use cases where Kafka is not a good fit.
|
|
| 31. |
What are the key advantages of using Kafka? |
|
Answer» Apart from other benefits, the key advantages of using the Kafka messaging framework are scalability, high throughput, low latency, fault tolerance and durability of messages.
Considering all the above advantages, Kafka is one of the most popular frameworks used in microservice architecture, Big Data architecture, enterprise integration architecture and publish-subscribe architecture. |
|
| 32. |
What is the working principle of Kafka? |
|
Answer» The working principle of Kafka follows the order below: producer systems publish messages to a topic; the broker stores the messages in the topic's partitions, assigning each message a sequentially increasing offset; consumer systems subscribe to the topic and poll the broker for new messages; and each consumer keeps track of the offsets of the messages it has already read.
|
|
| 33. |
What is the role of a consumer in Kafka? |
|
Answer» A consumer is a subscriber which consumes the messages that are predominantly stored in a partition. A consumer is a separate process and can be a separate application altogether, running on an individual machine.
If all the consumers fall into the same consumer group, then the messages are load-balanced over the consumer instances; if the consumer instances fall into different groups, then each message is broadcast to all consumer groups. |
|
| 34. |
How does a producer work in Kafka? |
|
Answer» A producer is a client that sends or publishes records. Producer applications write data to topics and consumer applications read from topics.
Messages sent by a producer to a particular topic partition are appended in the order they are sent. That is, if a record M1 is sent by the same producer as a record M2, and M1 is sent first, then M1 will have a lower offset than M2 and appear earlier in the log. |
|
| 35. |
What is a Kafka cluster, and what are the key benefits of creating a Kafka cluster? |
Answer» A Kafka cluster is a group of more than one broker (Kafka server) working together. The key benefits of creating a Kafka cluster are fault tolerance (partitions are replicated across brokers, so data survives the failure of a node), scalability (topics are partitioned across brokers, so load is spread over several machines) and high availability (if the leader for a partition goes down, a follower takes over as leader).
|
|
| 36. |
What ensures load balancing in Kafka? |
|
Answer» The leader and follower nodes serve the purpose of load balancing in Kafka. As we know, the leader does the actual writing/reading of data for a given partition while follower systems do the same in passive mode. This ensures data gets replicated across different nodes, so that in case of any failure, system error or software upgrade, the data remains available. If the leader system goes down for any reason, a follower system which was working in passive mode becomes the leader and ensures data remains available to external systems irrespective of any internal outage. A load balancer distributes load across multiple systems when load increases; in the same way, Kafka balances load by replicating messages on different systems, and when the leader system goes down, another follower system becomes the leader and keeps data available to subscriber systems. |
|
| 37. |
What are the main advantages of using Kafka? |
|
Answer» Although there is a score of benefits of using Kafka, we can list down the key benefits which make the tool popular among stream messaging platforms: high throughput, scalability, low latency, fault tolerance and durable storage of messages.
|
|
| 38. |
Within the producer, when will you experience a QueueFullException? |
|
Answer» As the name suggests, this exception occurs when producer systems send more messages than the broker system has the capacity to handle. The queue gets full at the broker end, so no more incoming requests can be handled. Producer systems do not have any information about the capacity of the broker system, which results in such exceptions: the messages overflow at the broker end. To avoid this scenario we should have multiple systems working as brokers, so messages can be evenly distributed across them. A cluster environment, where multiple nodes service message processing, avoids such exceptions; clustering and partitioning help in avoiding any such exceptions. |
|
| 39. |
Why is replication critical in a Kafka environment? |
|
Answer» Replication is one of the key concepts in Kafka: it ensures data is safe and secure even in case of system failure. Kafka stores the published messages which are broadcast to subscribed systems, but what if the Kafka server goes down: will published messages still be available? The answer is yes, and it is all possible due to replication, where messages are replicated across multiple servers or nodes. There could be multiple reasons for system failure, like program failure, system error or frequent software upgrades; replication safeguards published data in case of any such failure. This fault-tolerant behaviour is one of the key reasons why Kafka has become a market leader in a very short period of time. ISR, which stands for In-Sync Replicas, ensures sync between the leader and follower systems: if replicas are not in the ISR, it means the follower systems are lagging behind the leader and not catching up with the leader's activity. |
|
| 40. |
What are the leader and follower in a Kafka environment? |
|
Answer» In a cluster, partitions are distributed across nodes, with each server sharing the responsibility of request processing for individual partitions. Partitions are also replicated across several nodes configured in the Kafka environment; this is one reason why Kafka is a fault-tolerant system. The system or node which acts as the primary server for a partition is termed the leader, while the other systems to which its data gets replicated are called followers. It is the leader's responsibility to read and write data on a given partition, while follower systems passively replicate the leader's data. In case of failure of the leader system, one of the follower systems becomes the leader. In a cluster environment, each system plays the role of leader for some of the partitions and follower for others, making Kafka popular as a fault-tolerant system. This is why Kafka has taken centre stage among stream messaging platforms. |
|
| 41. |
What is an offset in Kafka? |
|
Answer» The offset is a critical piece of information required by any producer/consumer system to write/read messages in the different partitions. It is a unique identifier associated with each message in a partition: a sequence number which keeps increasing as messages arrive in the partition. In older Kafka versions, ZooKeeper kept track of the offsets associated with a specific topic and partition. The class ConsumerRecord has an offset method which helps consumers get the offset of a specific record in a given partition of a topic; once consumer systems know the offset, it helps them identify messages in the topic. Kafka stores published messages for some time depending upon configuration: if the retention period is defined as 4 days, then messages are stored in Kafka for 4 days irrespective of whether consumer systems have read them, and the space is freed up only after 4 days. The offset is very important for a record and is maintained by every consumer reading records: once a consumer reads messages, it increases its offset linearly, but a consumer can also read messages in any order, moving back to an old offset or forward to a new offset as required.
|
|
| 42. |
What is ZooKeeper in Kafka? Can we use Kafka without ZooKeeper? |
|
Answer» ZooKeeper is an open-source system which helps in managing the cluster environment of Kafka. As we know, Kafka brokers work in a cluster environment where several servers process the incoming messages before broadcasting them to subscribed consumers. ZooKeeper is an integral part of classic Kafka architecture: one cannot use Kafka without ZooKeeper, and client servicing becomes unavailable once ZooKeeper is down. It facilitates communication between the different nodes of the cluster. In a cluster environment only one node works as the leader while the others are followers, and it is ZooKeeper's responsibility to choose the leader among the cluster nodes. Whenever any server is added to or removed from the cluster, or a topic is added or removed, ZooKeeper sends that information to each node, which helps coordination between the nodes. Choosing the leader node, synchronization between the different nodes and configuration management are the key roles of ZooKeeper. |
|
| 43. |
What are the different components of Kafka? |
|
Answer» There are some important components in any Kafka architecture. Below is an overview of the components: the producer, which publishes messages to topics; the consumer and consumer group, which subscribe to topics and read messages; the broker, the Kafka server that stores the messages; the topic with its partitions, into which messages are grouped and stored; and ZooKeeper, which coordinates the brokers in the cluster. |
|
| 44. |
What is Kafka and what are other alternatives to Kafka? |
|
Answer» Kafka is an open-source message broker system developed at LinkedIn and now supported by Apache. The underlying technologies used in Kafka are Java and Scala. It supports the publisher-subscriber model of communication, where a publisher publishes messages and subscribers get notified when a new message is published. It is categorized as distributed streaming software. Earlier, messaging queues and enterprise messaging systems like RabbitMQ and many others were used for the same purpose, but Kafka has become an industry leader in a short time. It is used to build streaming pipelines in high-volume applications to reliably transfer/transform data between different systems. It has an inbuilt fault-tolerant system to store messages, it is distributed and supports partitioning, and it runs in a clustered environment, which makes it highly available. It is gaining popularity due to its high message throughput in microservice architectures. The software component which publishes messages is called the producer, while consumers are the ones to which messages are broadcast. |
|
| 45. |
What is meant by Kafka producer Acknowledgement? What are the different types of acknowledgment settings provided by Kafka? |
|
Answer» An ack or acknowledgment is sent by a broker to the producer to acknowledge receipt of a message. The ack level can be set as a configuration parameter in the producer, and it defines the number of acknowledgments the producer requires the leader to have received before considering a request complete. The following settings are allowed:
acks=0: the producer doesn't wait for any acknowledgment from the broker. There is no guarantee that the broker has received the record.
acks=1: the leader writes the record to its local log file and responds without waiting for acknowledgment from all its followers. In this case the message can get lost only if the leader fails just after acknowledging the record but before the followers have replicated it; then the record would be lost.
acks=all: the leader waits for the entire set of in-sync replicas to acknowledge the record. This ensures that the record does not get lost as long as one replica is alive and provides the strongest possible guarantee; however, it also considerably lessens the throughput, as the leader must wait for all followers to acknowledge before responding. acks=1 is usually the preferred way of sending records, as it ensures receipt of the record by the leader, thereby ensuring durability while keeping throughput high. For the highest throughput set acks=0, and for the highest durability set acks=all. |
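As a small illustrative sketch (the helper class is made up), the ack level is set through producer configuration:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class AckConfig {
    public static Properties withAcks(String level) {
        Properties props = new Properties();
        // "0" = fire and forget, "1" = leader ack only, "all" = full ISR ack.
        props.put(ProducerConfig.ACKS_CONFIG, level);
        return props;
    }
}
```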
|
| 46. |
What is an offset in Kafka? What are the different ways to commit an offset? Where does Kafka maintain offset? |
|
Answer» As we already know, a Kafka topic is divided into partitions. The data inside each partition is ordered and can be accessed using an offset. An offset is the position within a partition of the next message to be read by a consumer. There are two types of offsets maintained by Kafka:
Current Offset: the position of the next record that poll() will return to the consumer; Kafka tracks it so the same records are not handed out twice.
Committed Offset: the last offset a consumer has confirmed as successfully processed; it is used to resume consumption after a rebalance or a restart.
There are two ways to commit an offset: automatic commit, where enable.auto.commit=true and the consumer commits the latest polled offsets on a schedule, and manual commit, where the application itself calls commitSync() or commitAsync(). Prior to Kafka v0.9, ZooKeeper was used to store topic offsets; however, from v0.9 onwards, the information regarding the offsets on a topic's partitions is stored in a topic called __consumer_offsets. |
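A minimal sketch of manual committing (broker address, group id and topic are placeholders): auto-commit is disabled and commitSync() is called after each processed batch:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ManualCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "manual-commit-group");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("orders"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                records.forEach(r -> System.out.println(r.value())); // process the batch
                // Commit only after the batch is processed, so a crash replays the
                // batch (at-least-once) instead of losing records.
                consumer.commitSync();
            }
        }
    }
}
```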
|
| 47. |
What is meant by fault tolerance? How does Kafka provide fault tolerance? |
|
Answer» Kafka is a distributed system wherein data is stored across multiple nodes in the cluster. There is a high probability that one or more nodes in the cluster might fail. Fault tolerance means that the data in the system is protected and available even when some of the nodes in the cluster fail. One of the ways in which Kafka provides fault tolerance is by making copies of the partitions. The default replication factor is 3, which means that for every partition in a topic, two further copies are maintained. In case one of the brokers fails, data can be fetched from a replica; this way Kafka can withstand N-1 failures, N being the replication factor. Kafka also follows the leader-follower model: for every partition, one broker is elected as the leader while the others are designated followers. The leader is responsible for interacting with the producer/consumer; if the leader node goes down, one of the remaining followers is elected as leader. Kafka also maintains a list of in-sync replicas (ISR). Say the replication factor is 3: there will be a leader partition and two follower partitions, but the followers may not be in sync with the leader. The ISR is the list of replicas that are in sync with the leader. |
|
| 48. |
What is Dumb Broker/Smart Consumer vs Smart Broker/Dumb Consumer? What model does Apache Kafka follow? |
|
Answer» Dumb broker/smart consumer implies that the broker does not attempt to track which messages have been read by each consumer and retain only unread messages; rather, the broker retains all messages for a set amount of time, and consumers are responsible for tracking what they have read. Apache Kafka employs exactly this model: the broker does the work of storing messages for a time (7 days by default), while consumers keep track of what they have read using offsets. The opposite of this is the smart broker/dumb consumer model, wherein the broker focuses on the consistent delivery of messages to consumers. In that case, consumers are dumb and consume at a roughly similar pace, as the broker keeps track of consumer state. This model is followed by RabbitMQ. |
|
| 49. |
Let’s say that a producer is writing records to a Kafka topic at 10000 messages/sec while the consumer is only able to read 2500 messages per second. What are the different ways in which you can scale up your consumer? |
|
Answer» The answer to this question encompasses two main aspects: partitions in a topic and consumer groups. A Kafka topic is divided into partitions. The messages sent by the producer are distributed among the topic's partitions based on the message key; here we can assume that the key is such that messages get equally distributed among the partitions. A consumer group is a way to bunch consumers together so as to increase the throughput of the consumer application. Each consumer in a group latches onto a partition in the topic, i.e. if there are 4 partitions in the topic and 4 consumers in the group, then each consumer reads from a single partition. However, if there are 6 partitions and 4 consumers, then the data is read in parallel from only 4 partitions. Hence it is ideal to maintain a 1-to-1 mapping of partitions to consumers in the group. Now in order to scale up processing at the consumer end, two things can be done: ensure the topic has at least 4 partitions, and run 4 consumers in the same consumer group so that each consumer reads its own partition in parallel.
Doing this would help read data from the topic in parallel and hence scale up the consumer side from 2500 messages/sec to 10000 messages/sec. |
|
| 50. |
What is a broker, and how does Kafka utilize brokers for communication? |
Answer» A broker is a Kafka server that receives messages from producers, stores them on disk in topic partitions, and serves them to consumers on request. A Kafka cluster consists of one or more brokers. Producers and consumers communicate with the cluster by connecting to a broker: each partition has one broker acting as its leader handling reads and writes, while other brokers hold follower replicas of that partition.
|
|