
1.

What is the space complexity of the selection sort algorithm?

Answer»

Selection sort is an in-place sorting method, which means it does not require any additional data storage beyond a few temporary variables. Therefore, the selection sort algorithm has a constant, or O(1), space complexity.
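As a minimal Python sketch (the function name is illustrative), the following selection sort uses only a couple of index variables and an in-place swap, which is why the extra space stays constant:

```python
def selection_sort(arr):
    """Sort arr in place; only index variables and a swap are used (O(1) extra space)."""
    n = len(arr)
    for i in range(n - 1):
        # Find the index of the smallest element in the unsorted suffix arr[i:].
        min_index = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_index]:
                min_index = j
        # Swap it into position i; no auxiliary array is allocated.
        arr[i], arr[min_index] = arr[min_index], arr[i]
    return arr

print(selection_sort([50, 10, 40, 20, 80]))  # [10, 20, 40, 50, 80]
```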

Conclusion

So, in conclusion, we would like to convey to our readers that algorithm interviews are usually among the most crucial and toughest interviews in the recruitment process of many software companies, and a sound understanding of algorithms usually implies that the candidate is good at logical thinking and can think out of the box. Algorithm interview questions can be solved easily if one has a sound understanding of algorithms and has gone through plenty of algorithm examples and algorithm MCQs (which we will be covering in the next section of this article). Therefore, we suggest that all budding coders develop a strong grasp of the various algorithms discovered to date so that they can ace their next technical interviews.

Useful Resources:

  • Data Structures and Algorithms
  • Data Structures Interview Questions
2.

What is the space complexity of the insertion sort algorithm?

Answer»

Insertion sort is an in-place sorting method, which means it does not require any additional data storage beyond a few temporary variables. In insertion sort, only a single list element needs to be stored outside of the original data, resulting in a constant, or O(1), space complexity.
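A minimal Python sketch (names are illustrative) makes the point concrete: the only value held outside the array is the element currently being inserted:

```python
def insertion_sort(arr):
    """Sort arr in place; only the element being inserted is held outside the list (O(1) extra space)."""
    for i in range(1, len(arr)):
        key = arr[i]          # the single element stored outside the list
        j = i - 1
        # Shift larger elements of the sorted prefix one slot to the right.
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key      # insert the element at its proper spot
    return arr

print(insertion_sort([50, 10, 40, 20, 80]))  # [10, 20, 40, 50, 80]
```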

3.

Describe the heap sort algorithm.

Answer»

Heap sort is a comparison-based sorting algorithm. Heapsort is similar to selection sort in that it separates its input into a sorted and an unsorted region, then successively decreases the unsorted part by taking the largest element from it and putting it into the sorted region. Unlike selection sort, heapsort does not waste time scanning the unsorted region in linear time; instead, heap sort keeps the unsorted region in a heap data structure to identify the largest element in each step more rapidly. Let us take a look at the heap sort algorithm:

The heapsort algorithm starts by converting the list to a max heap. The algorithm then swaps the first and last values in the list, reduces the range of values considered in the heap operation by one, and sifts the new first value down to its place in the heap. This process is repeated until the range of values considered is only one value long.

  • On the list, use the buildMaxHeap() function. This function, also known as heapify(), builds a heap from a list in O(n) operations.
  • Swap the first and last elements of the list. Reduce the list's considered range by one.
  • Use the siftDown() function on the list to sift the new first element down to its appropriate index in the heap.
  • Repeat the previous two steps (the swap and the sift down) until the considered range of the list is only one element.

Note: The buildMaxHeap() operation runs only once, with a linear, or O(n), time complexity. The siftDown() function works in O(log n) time and is called n times. Therefore, the overall time complexity of the heap sort algorithm is O(n + n log(n)) = O(n log n).
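To tie the steps together, here is a minimal heap sort sketch in Python; the heapify loop and the sift_down function play the roles of buildMaxHeap() and siftDown() named above (the implementation details are illustrative):

```python
def sift_down(arr, start, end):
    """Sift arr[start] down until the subtree rooted at start satisfies the max-heap property.
    Only indices below end are considered part of the heap."""
    root = start
    while 2 * root + 1 < end:
        child = 2 * root + 1                      # left child
        if child + 1 < end and arr[child] < arr[child + 1]:
            child += 1                            # pick the larger child
        if arr[root] < arr[child]:
            arr[root], arr[child] = arr[child], arr[root]
            root = child
        else:
            return

def heap_sort(arr):
    n = len(arr)
    # buildMaxHeap: heapify the whole list in O(n).
    for start in range(n // 2 - 1, -1, -1):
        sift_down(arr, start, n)
    # Repeatedly move the maximum to the end and shrink the heap by one.
    for end in range(n - 1, 0, -1):
        arr[0], arr[end] = arr[end], arr[0]
        sift_down(arr, 0, end)
    return arr

print(heap_sort([50, 10, 40, 20, 80]))  # [10, 20, 40, 50, 80]
```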

4.

Define tree traversal and list some of the algorithms to traverse a binary tree.

Answer»

The process of visiting all the nodes of a tree is known as tree traversal.

Some of the algorithms to traverse a binary tree are as follows:

  • Pre-order Traversal
  • In-order Traversal
  • Post-order Traversal
  • Breadth-First Search
  • ZigZag Traversal
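As an illustrative Python sketch (the Node class and function names are assumptions, not part of the original answer), the three depth-first orders differ only in where the node's value is recorded relative to the recursive calls, while breadth-first search uses a queue:

```python
from collections import deque

class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def preorder(node, out):            # node, then left subtree, then right subtree
    if node:
        out.append(node.value)
        preorder(node.left, out)
        preorder(node.right, out)

def inorder(node, out):             # left subtree, then node, then right subtree
    if node:
        inorder(node.left, out)
        out.append(node.value)
        inorder(node.right, out)

def postorder(node, out):           # left subtree, then right subtree, then node
    if node:
        postorder(node.left, out)
        postorder(node.right, out)
        out.append(node.value)

def breadth_first(root):            # visit nodes level by level using a queue
    order, queue = [], deque([root] if root else [])
    while queue:
        node = queue.popleft()
        order.append(node.value)
        if node.left:
            queue.append(node.left)
        if node.right:
            queue.append(node.right)
    return order

root = Node(2, Node(1), Node(3))
pre, ino, post = [], [], []
preorder(root, pre); inorder(root, ino); postorder(root, post)
print(pre, ino, post, breadth_first(root))  # [2, 1, 3] [1, 2, 3] [1, 3, 2] [2, 1, 3]
```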
5.

Define insertion sort and selection sort.

Answer»
  • Insertion sort: Insertion sort separates the list into sorted and unsorted sub-lists. It inserts one element at a time into the proper spot in the sorted sub-list, which therefore remains sorted after each insertion. It iteratively works through all the elements of the unsorted sub-list and inserts them into the sorted sub-list in order.
  • Selection sort: Selection sort is an in-place sorting technique. It separates the data collection into sorted and unsorted sub-lists. The minimum element from the unsorted sub-list is then selected and placed in the sorted list. This repeats until all of the elements in the unsorted sub-list have been consumed by the sorted sub-list.

Note: Both sorting strategies keep two sub-lists, sorted and unsorted, and place one element at a time into the sorted sub-list. Insertion sort takes the currently selected element and places it at the right point in the sorted sub-list, preserving its sorted order. Selection sort, on the other hand, looks for the smallest element in the unsorted sub-list and swaps it with the element at the current position.

6.

Devise an algorithm to insert a node in a Binary Search Tree.

Answer»

An algorithm to insert a node in a Binary Search Tree is given below:

  • Set the current node to the root node.
  • If the current node's value is greater than the value that has to be added:
    • If the current node has a left child, move to the left child and repeat.
    • If it does not have a left child, insert the new node here.
  • If the current node's value is less than the value that has to be added:
    • If the current node has a right child, move to the right child and repeat.
    • If it does not have a right child, insert the new node here.
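A minimal recursive Python sketch of the same idea (TreeNode and insert are illustrative names, not part of the original answer):

```python
class TreeNode:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def insert(current, value):
    """Insert value into the BST rooted at current and return the (possibly new) root."""
    if current is None:                  # empty spot found: place the new node here
        return TreeNode(value)
    if current.value > value:            # value belongs in the left subtree
        current.left = insert(current.left, value)
    else:                                # value belongs in the right subtree
        current.right = insert(current.right, value)
    return current

root = None
for v in [50, 30, 70, 20, 40]:
    root = insert(root, v)
print(root.value, root.left.value, root.right.value)  # 50 30 70
```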
7.

What are recursive algorithms? State the important rules which every recursive algorithm must follow.

Answer»

A recursive algorithm is a way of tackling a difficult problem by breaking it down into smaller and smaller subproblems until the problem is small enough to be solved directly. It usually involves a function that calls itself (a defining property of recursive functions).

The three laws which must be followed by all recursive algorithms are as follows:

  • There should be a base case.
  • A recursive algorithm must call itself.
  • The state of a recursive algorithm must change so that it moves toward the base case.
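A classic factorial function illustrates all three laws (a minimal Python sketch, not taken from the original answer):

```python
def factorial(n):
    if n <= 1:                       # law 1: a base case stops the recursion
        return 1
    return n * factorial(n - 1)      # law 2: the function calls itself
                                     # law 3: the state (n) shrinks toward the base case

print(factorial(5))  # 120
```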
8.

Can we use the binary search algorithm for linked lists? Justify your answer.

Answer»

No, we cannot use the binary search algorithm for linked lists.
Explanation: Because random access is not allowed in linked lists, reaching the middle element in constant, or O(1), time is impossible. As a result, using a binary search algorithm on a linked list is not feasible.
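A tiny sketch of why this is the case (illustrative names): reaching the middle of a singly linked list requires walking node by node, which takes O(n) time, whereas indexing an array takes O(1):

```python
class ListNode:
    def __init__(self, value, next=None):
        self.value, self.next = value, next

def middle(head):
    """Find the middle node with slow and fast pointers: O(n), not O(1)."""
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
    return slow

head = ListNode(1, ListNode(3, ListNode(5, ListNode(7, ListNode(9)))))
print(middle(head).value)   # 5 -- found only after traversing half the list
arr = [1, 3, 5, 7, 9]
print(arr[len(arr) // 2])   # 5 -- direct O(1) index access
```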

9.

Explain the Dijkstra's Algorithm to find the shortest path between a given node in a graph to any other node in the graph.

Answer»

Dijkstra's algorithm is a method for determining the shortest pathways between nodes in a graph, which might be used to depict road networks. Edsger W. Dijkstra, a computer scientist, conceived it in 1956 and published it three years later. There are numerous variations of the algorithm. The original Dijkstra algorithm discovered the shortest path between two nodes, but a more frequent form fixes a single node as the "source" node and finds the shortest pathways from the source to all other nodes in the network, resulting in a shortest-path tree. Let us take a look at Dijkstra's Algorithm to find the shortest path between a given node in a graph to any other node in the graph:

Let us call the node where we start the initial node, and let the distance of node Y be the distance from the initial node to Y. Dijkstra's algorithm begins with infinite distances and attempts to improve them incrementally.

  • Step 1: Mark all nodes as unvisited. The unvisited set is the collection of all nodes that have not been visited yet.
  • Step 2: Assign a tentative distance value to each node: set it to zero for our initial node and infinity for all others. The tentative distance of a node v is the length of the shortest path discovered so far between v and the initial node. Because no path to any vertex other than the source (which has a path of length zero) is known at the start, all other tentative distances are set to infinity. Set the current node to the initial node.
  • Step 3: Consider all of the current node's unvisited neighbours and determine their tentative distances through the current node. Compare the newly calculated tentative distance to the currently assigned value and keep the smaller one. For example, if the present node A has a distance of 5 and the edge linking it to a neighbour B has a length of 3, the distance to B through A will be 5 + 3 = 8. Change B's distance to 8 if it was previously marked with a distance greater than 8; otherwise, retain the current value.
  • Step 4: Mark the current node as visited and remove it from the unvisited set once we have considered all of its unvisited neighbours. A node that has been visited will never be checked again.
  • Step 5: Stop if the destination node has been marked visited (when planning a route between two specific nodes) or if the smallest tentative distance among the nodes in the unvisited set is infinity (when planning a complete traversal; this occurs when there is no connection between the initial node and the remaining unvisited nodes). The algorithm is now complete.
  • Step 6: Otherwise, return to step 3 and select the unvisited node marked with the smallest tentative distance as the new current node.

It is not necessary to wait until the destination node is "visited" as described above when constructing a route: the algorithm can stop once the destination node has the smallest tentative distance among all "unvisited" nodes (and thus could be selected as the next "current" node). For arbitrary directed graphs with unbounded non-negative weights, Dijkstra's algorithm is asymptotically the fastest known single-source shortest path algorithm, with a time complexity of O(|E| + |V| log |V|), where |V| is the number of nodes and |E| is the number of edges in the graph.
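A compact Python sketch of the algorithm using a binary heap as the priority queue (the graph representation and function name are assumptions for illustration; a Fibonacci heap would be needed to reach the O(|E| + |V| log |V|) bound exactly):

```python
import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbour, edge_weight), ...]}. Returns shortest distances from source."""
    dist = {node: float("inf") for node in graph}   # tentative distances start at infinity
    dist[source] = 0
    heap = [(0, source)]                            # (tentative distance, node)
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:                         # stale entry: node already finalised
            continue
        visited.add(node)
        for neighbour, weight in graph[node]:
            new_dist = d + weight
            if new_dist < dist[neighbour]:          # found a shorter path through `node`
                dist[neighbour] = new_dist
                heapq.heappush(heap, (new_dist, neighbour))
    return dist

graph = {"A": [("B", 3), ("C", 5)], "B": [("C", 1)], "C": []}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 3, 'C': 4}
```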

10.

Write an algorithm to find the maximum subarray sum for a given array. In other words, find the maximum sum that can be achieved by taking contiguous elements from a given array of integers.

Answer»

Kadane's algorithm can be used to find the maximum subarray sum for a given array. Kadane's algorithm scans the provided array from left to right. In the jth step, it computes the subarray with the largest sum ending at position j, and this sum is stored in the variable "currentSum". Furthermore, it computes the subarray with the largest sum anywhere in A[1...j], that is, from the first position to the jth position, and stores it in the variable "bestSum". This is done by taking the maximum value of "currentSum" seen so far and storing it in "bestSum". In the end, the value of "bestSum" is returned as the final answer to the problem.

Formally, Kadane's algorithm can be stated as follows:

  • Step 1: Initialize the following variables:
    • bestSum = INT_MIN
    • currentSum = 0 // for an empty subarray, it is initialized to 0
  • Step 2: Loop over each element A[i] of the array A:
    • (a) currentSum = currentSum + A[i]
    • (b) if (bestSum < currentSum), then bestSum = currentSum
    • (c) if (currentSum < 0), then currentSum = 0
  • Step 3: Return bestSum
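The same steps translate directly into a short Python function (an illustrative sketch; variable names mirror the pseudocode above):

```python
def max_subarray_sum(A):
    """Kadane's algorithm: maximum sum over all contiguous subarrays of A."""
    best_sum = float("-inf")        # plays the role of INT_MIN
    current_sum = 0                 # best sum of a subarray ending at the current index
    for value in A:
        current_sum += value
        if current_sum > best_sum:  # record the best answer seen so far
            best_sum = current_sum
        if current_sum < 0:         # a negative running sum can never help; restart
            current_sum = 0
    return best_sum

print(max_subarray_sum([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # 6 (subarray [4, -1, 2, 1])
```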
11.

Describe the bubble sort algorithm with the help of an example.

Answer»

Bubble sort, also known as sinking sort, is a basic sorting algorithm that iterates through a list, comparing neighbouring elements and swapping them if they are out of order. The list is passed through repeatedly until it is sorted. This comparison sort gets its name from the way smaller or larger elements "bubble" to the top of the list. This simple method performs badly in real-world situations and is mostly used as a teaching aid. Let us take an example to understand how bubble sort works:

Let us assume that the array to be sorted is (50 10 40 20 80). The various passes or rounds of bubble sort are given below:

  • First Pass:
    • (50 10 40 20 80) –> (10 50 40 20 80), Since 50 > 10, the algorithm compares the first two elements and swaps them.
    • (10 50 40 20 80) –> (10 40 50 20 80), Since 50 > 40, the algorithm swaps the values at the second and third positions.
    • (10 40 50 20 80) –> (10 40 20 50 80), Since 50 > 20, the algorithm swaps the third and fourth elements.
    • (10 40 20 50 80) –> (10 40 20 50 80), The algorithm does not swap the fourth and fifth elements because they are already in order (80 > 50).
  • Second Pass:
    • (10 40 20 50 80) –> (10 40 20 50 80), Elements at the first and second positions are in order, so no swap is needed.
    • (10 40 20 50 80) –> (10 20 40 50 80), Since 40 > 20, the algorithm swaps the values at the second and third positions.
    • (10 20 40 50 80) –> (10 20 40 50 80), Elements at the third and fourth positions are in order, so no swap is needed.
    • (10 20 40 50 80) –> (10 20 40 50 80), Elements at the fourth and fifth positions are in order, so no swap is needed.

The array is now sorted, but the algorithm does not yet know whether it is done. To confirm that the array is sorted, the algorithm must complete one whole pass without performing any swaps.

  • Third Pass:
    • (10 20 40 50 80) –> (10 20 40 50 80), Elements at the first and second positions are in order, so no swap is needed.
    • (10 20 40 50 80) –> (10 20 40 50 80), Elements at the second and third positions are in order, so no swap is needed.
    • (10 20 40 50 80) –> (10 20 40 50 80), Elements at the third and fourth positions are in order, so no swap is needed.
    • (10 20 40 50 80) –> (10 20 40 50 80), Elements at the fourth and fifth positions are in order, so no swap is needed.
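A short Python sketch of the same procedure, including the early exit once a full pass completes without swaps (the function name is illustrative):

```python
def bubble_sort(arr):
    n = len(arr)
    for _ in range(n - 1):
        swapped = False
        # Compare neighbouring elements and swap them if they are out of order.
        for j in range(n - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:   # a full pass with no swaps means the list is sorted
            break
    return arr

print(bubble_sort([50, 10, 40, 20, 80]))  # [10, 20, 40, 50, 80]
```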
12.

Describe the quick sort algorithm.

Answer»

Quicksort is an in-place sorting algorithm (an in-place algorithm is one that transforms its input using no auxiliary data structure). It was created by the British computer scientist Tony Hoare in 1959 and published in 1961, and it is still a popular sorting algorithm. When properly implemented, it can be somewhat faster than merge sort and two or three times faster than heapsort.

Quicksort is based on the divide-and-conquer algorithmic paradigm. It operates by picking a 'pivot' element from the array and separating the other elements into two subarrays based on whether they are greater or less than the pivot. As a result, it is also known as partition-exchange sort. The subarrays are then sorted recursively. This can be done in place, with only a small amount of additional memory required for sorting.

Quicksort is a comparison sort, which means it can sort items of any type for which a "less-than" relation (technically, a total order) is defined. Quicksort is not a stable sort, which means that the relative order of equal sort items is not preserved in efficient implementations. Due to its recursive nature, quicksort (like the partition routine) must be written so that it can be called for a range within a bigger array, even if the ultimate goal is to sort the entire array.

The following are the steps for in-place quicksort:

  • If there are fewer than two elements in the range, return immediately because there is nothing to do. For other very short ranges, a special-purpose sorting algorithm may be applied and the remaining steps skipped.
  • Otherwise, choose a pivot value, which is a value that occurs in the range (the precise manner of choice depends on the partition routine, and can involve randomness).
  • Partition the range: reorder its elements while determining a point of division so that all elements with values less than the pivot come before the division and all elements with values greater than the pivot come after it; elements equal to the pivot can go on either side. Because at least one instance of the pivot is present, most partition routines ensure that the value that ends up at the point of division is equal to the pivot and is now in its final location (but termination of quicksort does not depend on this, as long as sub-ranges strictly smaller than the original are produced).
  • Apply the quicksort recursively to the sub-range up to the point of division and to the sub-range after it, optionally excluding the element equal to the pivot at the point of division from both ranges. (If the partition produces a possibly larger sub-range near the boundary in which all elements are known to be equal to the pivot, these can also be excluded.)

Quicksort's mathematical analysis reveals that, on average, it takes O(n log n) time to sort n items. In the worst case, it runs in O(n^2) time.

Note: The algorithm's performance can be influenced by the partition routine (including the pivot selection) and other details not fully defined above, possibly to a large extent for specific input arrays. It is therefore crucial to define these alternatives before discussing quicksort's efficiency. 
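A minimal in-place quicksort sketch in Python following the steps above (a Lomuto-style partition with the last element as pivot is chosen for brevity; the names are illustrative):

```python
def quicksort(arr, low=0, high=None):
    """Sort arr[low..high] in place using the quicksort scheme described above."""
    if high is None:
        high = len(arr) - 1
    if low >= high:                        # fewer than two elements: nothing to do
        return arr
    pivot = arr[high]                      # choose the last element as the pivot
    i = low                                # boundary of the "less than pivot" region
    for j in range(low, high):
        if arr[j] < pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[high] = arr[high], arr[i]  # place the pivot at its final position
    quicksort(arr, low, i - 1)             # recurse on the left sub-range
    quicksort(arr, i + 1, high)            # recurse on the right sub-range
    return arr

print(quicksort([50, 10, 40, 20, 80]))  # [10, 20, 40, 50, 80]
```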

13.

Describe the merge sort algorithm.

Answer»

Merge sort (also written as mergesort) is a general-purpose, comparison-based sorting algorithm. Most of its implementations produce a stable sort, which means that the order of equal elements is the same in the input and the output. John von Neumann devised the merge sort method, a divide-and-conquer algorithm, in 1945. Conceptually, a merge sort works as follows:

  • Divide the unsorted list into n sublists, each containing one element (a list of one element is considered sorted).
  • Repeatedly merge sublists to create new sorted sublists until only one sublist remains. This remaining sublist is the sorted list.

The time complexity of the merge sort algorithm is O(n log n), where n is the number of elements to be sorted, while its space complexity is O(n), that is, linear.
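A straightforward top-down merge sort sketch in Python (illustrative; it implements the divide-and-merge idea above recursively rather than merging sublists bottom-up):

```python
def merge_sort(arr):
    """Return a new sorted list; O(n log n) time, O(n) extra space for the merged output."""
    if len(arr) <= 1:                      # a list of one element is already sorted
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])           # sort each half recursively
    right = merge_sort(arr[mid:])
    # Merge the two sorted halves into one sorted list.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:            # <= keeps the sort stable
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([50, 10, 40, 20, 80]))  # [10, 20, 40, 50, 80]
```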

14.

What are few of the most widely used cryptographic algorithms?

Answer»

A few of the most widely used cryptographic algorithms are as follows:

  • IDEA
  • CAST
  • CMEA
  • 3-way
  • Blowfish
  • GOST
  • LOKI
  • DES and Triple DES
15.

How do the encryption algorithms work?

Answer»

The process of transforming plaintext into a secret coded format known as "ciphertext" is called encryption. This technique uses a string of bits known as a "key" to perform the calculations that transform the text. The larger the key, the greater the number of potential patterns for producing ciphertext. Most encryption algorithms use fixed blocks of input with lengths ranging from 64 to 128 bits, while others use the stream technique.
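As a small illustration of key-based symmetric encryption in practice (this assumes the third-party Python cryptography package is installed and is a generic example, not tied to any specific algorithm listed earlier):

```python
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # the secret key: a string of bits shared by both parties
cipher = Fernet(key)                 # a symmetric cipher object built from the key

token = cipher.encrypt(b"attack at dawn")   # plaintext -> ciphertext
print(token)                                 # unreadable without the key
print(cipher.decrypt(token))                 # b'attack at dawn'
```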