
1.

What do you understand about the DFS (Depth First Search) algorithm?

Answer»

Depth First Search or DFS is a technique for traversing or exploring data structures such as trees and graphs. The algorithm starts at the root node (in the case of a graph, any arbitrary node can be used as the root node) and examines each branch as far as feasible before backtracking. The basic idea is to start at the root or any arbitrary node, mark it, then advance to an adjacent unmarked node and repeat until there are no more unmarked nodes. After that, backtrack and check for any other unmarked nodes to traverse. Finally, print the nodes in the path. The DFS algorithm is given below:

  • Step 1: Create a recursive function that takes the node's index and a visited array as input.
  • Step 2: Mark the current node as visited and print it.
  • Step 3: Traverse all adjacent, unmarked nodes, calling the recursive function with the index of each adjacent node.
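The steps above can be sketched in C++ as follows (the adjacency-list representation and the `order` output vector, used in place of printing, are illustrative assumptions):

```cpp
#include <vector>

// Recursive DFS: takes the current node's index, the adjacency list,
// and the visited array (Step 1); records the visit order instead of
// printing it, so the traversal result can be inspected.
void dfs(int node, const std::vector<std::vector<int>>& adj,
         std::vector<bool>& visited, std::vector<int>& order) {
    visited[node] = true;        // Step 2: mark the current node as visited
    order.push_back(node);       // Step 2: output the node
    for (int next : adj[node])   // Step 3: recurse on adjacent, unmarked nodes
        if (!visited[next])
            dfs(next, adj, visited, order);
}
```

For example, on a graph where node 0 connects to 1 and 2, and node 1 connects to 3, the traversal visits 0, 1, 3, 2, going as deep as possible before backtracking.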
2.

What do you understand about the BFS (Breadth First Search) algorithm?

Answer»

BFS or Breadth-First Search is a graph traversal technique. It begins traversing the graph from the root node and first explores all of the nodes in that node's immediate vicinity. It then chooses the closest node and visits all of its nodes that have yet to be visited. The algorithm repeats the same method for each of the closest nodes until it reaches the goal node.

The BFS Algorithm is given below:

  • Step 1: Set status = 1 (ready state) for all the nodes.
  • Step 2: Enqueue the initial node A and set its status to 2, that is, the waiting state.
  • Step 3: Repeat steps 4 and 5 until the queue is empty.
  • Step 4: Dequeue and process node N from the queue, setting its status to 3, that is, the processed state.
  • Step 5: Enqueue all of N's neighbours that are in the ready state (status = 1) and set their status to 2 (waiting state).
  • Step 6: Exit.
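A minimal C++ sketch of the steps above, mirroring the three status values (the adjacency-list representation and the returned processing order are illustrative assumptions):

```cpp
#include <queue>
#include <vector>

// BFS following the steps above: status 1 = ready, 2 = waiting (in queue),
// 3 = processed. Returns the order in which nodes are processed.
std::vector<int> bfs(int start, const std::vector<std::vector<int>>& adj) {
    std::vector<int> status(adj.size(), 1);  // Step 1: all nodes ready
    std::vector<int> order;
    std::queue<int> q;
    q.push(start);                           // Step 2: enqueue the initial node
    status[start] = 2;                       //         and mark it waiting
    while (!q.empty()) {                     // Step 3: until the queue is empty
        int n = q.front();                   // Step 4: dequeue node N
        q.pop();
        status[n] = 3;                       //         and mark it processed
        order.push_back(n);
        for (int next : adj[n])              // Step 5: enqueue ready neighbours
            if (status[next] == 1) {
                q.push(next);
                status[next] = 2;            //         and mark them waiting
            }
    }
    return order;                            // Step 6: exit
}
```

On a square graph (0-1, 0-2, 1-3, 2-3), starting from node 0, the nodes are processed level by level: 0, then 1 and 2, then 3.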
3.

Write down a string reversal algorithm. If the given string is "kitiR", for example, the output should be "Ritik".

Answer»

An algorithm for string reversal is as follows:

  • Step 1: Start.
  • Step 2: We take two variables l and r.
  • Step 3: We set the value of l to 0 and the value of r to (length of the string - 1).
  • Step 4: We interchange the values of the characters at positions l and r in the string.
  • Step 5: We increment the value of l by one.
  • Step 6: We decrement the value of r by one.
  • Step 7: If the value of r is greater than the value of l, we go to step 4.
  • Step 8: Stop.
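The steps above amount to a two-pointer swap loop, sketched here in C++ (the function name is an illustrative assumption):

```cpp
#include <string>
#include <utility>

// Reverses a string using the two-pointer method from the steps above.
std::string reverseString(std::string s) {
    int l = 0;                                  // Steps 2-3: l starts at 0
    int r = static_cast<int>(s.size()) - 1;     //            r at length - 1
    while (r > l) {                             // Step 7: continue while r > l
        std::swap(s[l], s[r]);                  // Step 4: interchange characters
        l++;                                    // Step 5: move l forward
        r--;                                    // Step 6: move r backward
    }
    return s;                                   // Step 8
}
```

Calling `reverseString("kitiR")` yields "Ritik", matching the example in the question.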
4.

What do you understand about the Dynamic Programming (DP) Algorithmic Paradigm? List a few problems which can be solved using the same.

Answer»

Dynamic Programming is primarily an optimization over plain recursion. We can use Dynamic Programming to optimise any recursive solution that involves repeated calls for the same inputs. The idea is simply to save the results of subproblems so that we do not have to recalculate them later. This simple optimization reduces the time complexity from exponential to polynomial. For example, a simple recursive solution for the Fibonacci Numbers has exponential time complexity, but if we optimise it by storing subproblem answers using Dynamic Programming, the time complexity becomes linear.

The following codes illustrate the same:

With Recursion (no DP): The time complexity of the given code will be exponential.

/* Sample C++ code for finding the nth Fibonacci number without DP */
int nFibonacci(int n) {
    if (n == 0 || n == 1) return n;
    else return nFibonacci(n - 1) + nFibonacci(n - 2);
}

With DP: The time complexity of the given code will be linear because of Dynamic Programming.

/* Sample C++ code for finding the nth Fibonacci number with DP */
int nFibonacci(int n) {
    if (n < 2) return n; // avoids an out-of-bounds write when n is 0 or 1
    vector<int> fib(n + 1);
    fib[0] = 0;
    fib[1] = 1;
    for (int i = 2; i <= n; i++) {
        fib[i] = fib[i - 1] + fib[i - 2];
    }
    return fib[n];
}

A few problems which can be solved using the Dynamic Programming (DP) Algorithmic Paradigm are as follows:

  • Finding the nth Fibonacci number.
  • Finding the Longest Common Subsequence between two strings.
  • Finding the Longest Palindromic Substring in a string.
  • The discrete (or 0-1) Knapsack Problem.
  • Shortest Path between any two nodes in a graph (Floyd-Warshall Algorithm).
5.

Write an algorithm for counting the number of leaf nodes in a binary tree.

Answer»

An algorithm for counting the number of leaf nodes in a binary tree is given below:

  • Step 1: If the current node is null, return a value of 0.
  • Step 2: If a leaf node is encountered, that is, if the current node's left and right children are both null, then return 1.
  • Step 3: Otherwise, calculate the number of leaf nodes recursively by adding the number of leaf nodes in the left subtree to the number of leaf nodes in the right subtree.
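A short C++ sketch of the steps above (the `Node` structure is a minimal illustrative assumption):

```cpp
// Minimal binary tree node used for illustration.
struct Node {
    int value;
    Node* left;
    Node* right;
    Node(int v) : value(v), left(nullptr), right(nullptr) {}
};

// Counts the leaf nodes of a binary tree, following the steps above.
int countLeaves(const Node* node) {
    if (node == nullptr) return 0;                        // Step 1: null node
    if (node->left == nullptr && node->right == nullptr)  // Step 2: leaf node
        return 1;
    return countLeaves(node->left) + countLeaves(node->right); // Step 3
}
```

For a root with two children, where only the left child has one further child, the leaves are that grandchild and the right child, so the count is 2.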
6.

Write down an algorithm for adding a node to a linked list sorted in ascending order (maintaining the sorting property).

Answer»

An algorithm for adding a node to a linked list sorted in ascending order (maintaining the sorting property) is given below:

  • Step 1: Check if the linked list is empty. If yes, set the new node as the head and return it.
  • Step 2: Check if the value of the node to be inserted is smaller than the value of the head node. If yes, place it at the beginning and make it the new head.
  • Step 3: Otherwise, find the suitable node after which the input node should be added in a loop. To discover the required node, begin at the head and work your way forward until you reach a node whose value exceeds the input node's value; the preceding node is the correct one.
  • Step 4: After the correct node is found in step 3, insert the input node after it.
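The steps above can be sketched in C++ as follows (the `ListNode` structure and function name are illustrative assumptions):

```cpp
// Minimal singly linked list node used for illustration.
struct ListNode {
    int value;
    ListNode* next;
    ListNode(int v) : value(v), next(nullptr) {}
};

// Inserts newNode into an ascending sorted list and returns the
// (possibly new) head, following the steps above.
ListNode* sortedInsert(ListNode* head, ListNode* newNode) {
    if (head == nullptr) return newNode;        // Step 1: empty list
    if (newNode->value < head->value) {         // Step 2: new node becomes head
        newNode->next = head;
        return newNode;
    }
    ListNode* cur = head;                       // Step 3: walk forward until the
    while (cur->next != nullptr &&              // next node's value exceeds the
           cur->next->value < newNode->value)   // input node's value
        cur = cur->next;
    newNode->next = cur->next;                  // Step 4: insert after cur
    cur->next = newNode;
    return head;
}
```

Inserting 4 into the list 1 → 3 → 5 yields 1 → 3 → 4 → 5, and inserting 0 makes it the new head.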
7.

Describe the Binary Search Algorithm.

Answer»

To apply binary search to a list of elements, the prerequisite is that the list must be sorted. The algorithm is based on the Divide and Conquer paradigm. In the Binary Search Algorithm, we repeatedly divide the search interval in half while searching the sorted list. We begin with an interval that spans the entire list. If the search key's value is less than the item at the interval's midpoint, we narrow the interval to the lower half; otherwise, we narrow it to the upper half. We keep checking until the value is found or the interval is empty. Given below is an algorithm describing Binary Search (let us assume that the element to be searched for is x and the array of elements is sorted in ascending order):

  • Step 1: First, compare x to the middle element.
  • Step 2: If x matches the middle element, return the middle element's index.
  • Step 3: Else, if x is greater than the middle element, x can only be found after the middle element, in the right half subarray, since the array is sorted in ascending order. As a result, we repeat the process for the right half.
  • Step 4: Otherwise (x is smaller), we repeat the process for the left half.
  • Step 5: If the interval becomes empty, we terminate the binary search.

The time complexity of the Binary Search Algorithm is O(log(n)) where n is the size of the list of elements and its space complexity is constant, that is, O(1). 
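An iterative C++ sketch of the steps above (returning -1 on failure is an illustrative convention):

```cpp
#include <vector>

// Binary search on an ascending sorted array; returns the index of x,
// or -1 if the search interval becomes empty (Step 5).
int binarySearch(const std::vector<int>& a, int x) {
    int lo = 0;
    int hi = static_cast<int>(a.size()) - 1;    // interval spans the whole list
    while (lo <= hi) {                          // interval is non-empty
        int mid = lo + (hi - lo) / 2;           // midpoint, written to avoid
                                                // overflow of (lo + hi)
        if (a[mid] == x) return mid;            // Step 2: match found
        else if (x > a[mid]) lo = mid + 1;      // Step 3: search right half
        else hi = mid - 1;                      // Step 4: search left half
    }
    return -1;                                  // Step 5: empty interval
}
```

Searching for 7 in {1, 3, 5, 7, 9} halves the interval twice before returning index 3; searching for a missing value such as 4 returns -1.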

8.

Describe the Linear Search Algorithm.

Answer»

Linear search can be used to find an element in a group of elements. It works by traversing the list of elements from the beginning to the end and inspecting every element encountered along the way. Let us consider the case of an array containing some integer elements, in which we want to find and print the positions of all the elements that match a particular value (also known as the "key" for the linear search). The linear search matches each element against the key from the beginning to the end of the list, and prints an element's position whenever the element at that position equals the key.

Given below is an algorithm describing Linear Search:

  • Step 1: Using a loop, traverse the given list of elements.
  • Step 2: In each iteration, compare the target value (or key value) to the list's current value.
  • Step 3: If the values match, print the current index of the array.
  • Step 4: If the values do not match, move on to the next array element.
  • Step 5: Repeat steps 2 to 4 until the end of the list is reached.

The time complexity of the Linear Search Algorithm is O(n) where n is the size of the list of elements and its space complexity is constant, that is, O(1).
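A C++ sketch of the steps above; it collects the matching positions into a vector rather than printing them, so the result can be inspected (the function name is an illustrative assumption):

```cpp
#include <vector>

// Returns every index at which key occurs in the array,
// following the steps above.
std::vector<int> linearSearch(const std::vector<int>& a, int key) {
    std::vector<int> positions;
    for (int i = 0; i < static_cast<int>(a.size()); i++) { // Step 1: traverse
        if (a[i] == key)                // Step 2: compare with the key
            positions.push_back(i);     // Step 3: record the matching index
    }                                   // Steps 4-5: continue to the end
    return positions;
}
```

For the array {4, 2, 4, 1} and key 4, the result is the index list {0, 2}.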

9.

What do you understand by a searching algorithm? List a few types of searching algorithms.

Answer»

Searching Algorithms are used to look for an element, or to retrieve it, from a data structure (usually a list of elements). These algorithms are divided into two categories based on the type of search operation:

  • Sequential Search: This method traverses the list of elements consecutively, checking each element and reporting if the element to be searched for is found. Linear Search is an example of a Sequential Search Algorithm.
  • Interval Search: These algorithms were created specifically for searching sorted data structures. Because they repeatedly target the centre of the search structure and divide the search space in half, these types of search algorithms are far more efficient than Sequential Search algorithms. Binary Search is an example of an Interval Search Algorithm.
10.

What do you understand about greedy algorithms? List a few examples of greedy algorithms.

Answer»

A greedy algorithm is an algorithmic method that makes the locally optimal choice at each sub-step, aiming to eventually reach a globally optimal solution. This means that the algorithm chooses the best answer available at the time, regardless of the consequences. In other words, when looking for an answer, the algorithm always selects the best immediate, or local, option. Greedy algorithms find the overall, ideal solution for some problems, but may identify less than perfect answers for instances of other problems.

The greedy approach is used in the following algorithms and problems to find their solutions:

  • Prim's Minimal Spanning Tree Algorithm
  • Kruskal's Minimal Spanning Tree Algorithm
  • Travelling Salesman Problem
  • Fractional Knapsack Problem
  • Dijkstra's Algorithm
  • Job Scheduling Problem
  • Graph Map Coloring
  • Graph Vertex Cover
11.

Explain the Divide and Conquer Algorithmic Paradigm. Also list a few algorithms which use this paradigm.

Answer»

Divide and Conquer is an algorithmic paradigm, not an algorithm itself. It is set up in such a way that it can handle a large amount of data, break it down into smaller chunks, and determine the solution to the problem for each of the smaller chunks. It then combines all of the piecewise solutions of the smaller chunks to form a single global solution. This is known as the divide and conquer technique. The Divide and Conquer algorithmic paradigm employs the steps given below:

  • Divide: The algorithm separates the original problem into a set of subproblems in this step.
  • Conquer: The algorithm solves each subproblem individually in this step.
  • Combine: In this step, the algorithm combines the solutions to the subproblems to obtain the overall solution.
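The three steps above can be illustrated with a small C++ example that finds the maximum of an array range by divide and conquer (the problem choice and function name are illustrative assumptions):

```cpp
#include <algorithm>
#include <vector>

// Finds the maximum of a[lo..hi] using divide and conquer.
int rangeMax(const std::vector<int>& a, int lo, int hi) {
    if (lo == hi) return a[lo];               // base case: a single element
    int mid = lo + (hi - lo) / 2;             // Divide: split the range in half
    int leftMax  = rangeMax(a, lo, mid);      // Conquer: solve the left half
    int rightMax = rangeMax(a, mid + 1, hi);  // Conquer: solve the right half
    return std::max(leftMax, rightMax);       // Combine: merge the two answers
}
```

For {3, 8, 1, 6}, the range {3, 8} yields 8, the range {1, 6} yields 6, and combining gives 8. Merge Sort follows exactly the same pattern, with the Combine step merging two sorted halves.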

Some of the algorithms which use the Divide and Conquer Algorithmic paradigm are as follows:

  • Binary Search
  • Merge Sort
  • Strassen's Matrix Multiplication
  • Quick Sort
  • Closest Pair of Points
12.

Write an algorithm to swap two given numbers in Java without using a temporary variable.

Answer»

It is a trick question that is frequently asked in the interviews of various companies. This problem can be solved in a variety of ways. However, an essential condition is that we must solve it without using a temporary variable. If we also consider the possibility of integer overflow while coming up with an approach, we can make a great impression on interviewers.

Let us say that we have two integers a and b, with a's value equal to 5 and b's value equal to 6, and we want to swap them without using a third variable. We will need to use Java programming constructs to solve this problem. Mathematical operations such as addition, subtraction, multiplication, and division can be used to swap numbers. However, it is possible that doing so will cause an integer overflow problem.

Let us take a look at two approaches to solve this problem:

Using Addition and subtraction:

a = a + b; // a now holds the sum (a + b)
b = a - b; // this acts like (a + b) - b, and now b equals the original a
a = a - b; // this acts like (a + b) - a, and now a equals the original b

It is a clever trick. However, if the addition exceeds the maximum value of the int primitive type as defined by Integer.MAX_VALUE in Java, or if the subtraction is less than the minimum value of the int primitive type as defined by Integer.MIN_VALUE in Java, there will be an integer overflow.

Using the XOR method:

Another way to swap two integers without using a third (temporary) variable is the XOR method. This is often regarded as the best approach because it avoids the integer overflow problem entirely, which matters in languages that do not guard against overflow, such as Java, C, and C++. Java has a number of bitwise operators; XOR (denoted by ^) is one of them.

x = x ^ y; // x now holds x XOR y
y = x ^ y; // this acts like (x ^ y) ^ y, and now y equals the original x
x = x ^ y; // this acts like (x ^ y) ^ x, and now x equals the original y
13.

What do you understand by the Asymptotic Notations?

Answer»

Asymptotic analysis is a technique for determining the efficiency of an algorithm in a way that does not rely on machine-specific constants and does not require the algorithm to be implemented and timed, which would be a time-consuming approach. Asymptotic notation is the mathematical notation used in asymptotic analysis to indicate the time complexity of algorithms.

The following are the three most common asymptotic notations.

  • Big Theta Notation (θ Notation):
    The theta (θ) Notation defines exact asymptotic behaviour. It bounds a function from both above and below to define its behaviour. A convenient way to get the Theta notation for an expression is to drop low-order terms and ignore leading constants.
  • Big O Notation:
    The Big O notation defines an upper bound for an algorithm by bounding a function from above. Consider the case of insertion sort: in the best-case scenario it takes linear time, and in the worst case it takes quadratic time, so its worst-case time complexity is O(n^2). Big O is useful when we only have an upper bound on an algorithm's time complexity.
  • Big Omega (Ω) Notation:
    The Ω Notation provides an asymptotic lower bound on a function, just as Big O notation provides an upper bound. It is useful when we have a lower bound on an algorithm's time complexity.
14.

What do you understand by the best case, worst case and average case scenario of an algorithm?

Answer»

The mathematical foundation/framing of an algorithm's run time performance is defined by asymptotic analysis. We can easily determine the best case, average case, and worst-case scenarios of an algorithm using asymptotic analysis.

  • Best Case Scenario of an Algorithm: The best-case scenario for an algorithm is defined as the data arrangement in which the algorithm performs the best. Take binary search, for example, where the best-case scenario is if the target value is at the very centre of the data we are searching. The best-case scenario for binary search would have a time complexity of O(1), or constant time complexity.
  • Worst Case Scenario of an Algorithm: The worst collection of input for a given algorithm is referred to as the worst-case scenario of an algorithm. For example, quicksort can perform poorly if the pivot value is set to the largest or smallest element of a sublist. Quicksort will then degenerate into an algorithm with a time complexity of O(n^2), where n is the size of the list to be sorted.
  • Average Case Scenario of an Algorithm: The average-case complexity of an algorithm is the amount of some computational resource (usually time) used by the process, averaged over all possible inputs, according to computational complexity theory. For example, the average-case complexity of the randomised quicksort algorithm is O(n*log(n)), where n is the size of the list to be sorted.
15.

How can we compare between two algorithms written for the same problem?

Answer»

The complexity of an algorithm is a technique that is used to categorise how efficient it is in comparison to other algorithms. It focuses on how the size of the data set to be processed affects execution time. In computing, the algorithm's computational complexity is critical. It is a good idea to categorise algorithms according to how much time or space they take up, and to describe how much time or space they take up as a function of input size.

  • Complexity of Time: The running time of a program as a function of the size of the input is known as time complexity.
  • Complexity of Space: Space complexity examines algorithms based on how much space they require to fulfil their tasks. In the early days of computers, space complexity analysis was crucial (when storage space on the computer was limited).

Note: Nowadays, a lack of space is rarely an issue because computer storage is plentiful. Therefore, it is mostly the Time Complexity that is given more importance while evaluating an Algorithm.