
This section contains multiple-choice questions on the Coin Change Problem and Dynamic Programming, with an answer and explanation for each.

101.

You are given infinite coins of denominations 3, 5, and 7. Which of the following sums CANNOT be achieved using these coins?
(a) 15
(b) 16
(c) 17
(d) 4

Answer»

Right answer is (d) 4

The explanation is: the sums can be achieved as follows:

15 = {5,5,5}

16 = {3,3,5,5}

17 = {3,7,7}

A sum of 4 cannot be achieved: no single denomination equals 4, and any combination of two or more coins sums to at least 6, which is greater than 4.

102.

You are given infinite coins of denominations 5, 7, and 9. Which of the following sums CANNOT be achieved using these coins?
(a) 50
(b) 21
(c) 13
(d) 23

Answer»

Right choice is (c) 13

Easiest explanation: one way to achieve a sum of 50 is to use ten coins of 5. A sum of 21 can be achieved by using three coins of 7. One way to achieve a sum of 23 is to use two coins of 7 and one coin of 9. A sum of 13 cannot be achieved: 13 is not a denomination, and the remainders 13 − 5 = 8, 13 − 7 = 6, and 13 − 9 = 4 are all themselves unreachable.

103.

You are given infinite coins of denominations 1, 3, and 4. What is the minimum number of coins required to achieve a sum of 7?
(a) 1
(b) 2
(c) 3
(d) 4

Answer»

Correct option is (b) 2

The best explanation: a sum of 7 can be achieved using a minimum of two coins, {3, 4}.

104.

You are given infinite coins of denominations 1, 3, and 4. What is the total number of ways in which a sum of 7 can be achieved using these coins, if the order of the coins is not important?
(a) 4
(b) 3
(c) 5
(d) 6

Answer»

Correct answer is (c) 5

Explanation: a sum of 7 can be achieved in the following ways:

{1,1,1,1,1,1,1}, {1,1,1,1,3}, {1,3,3}, {1,1,1,4}, {3,4}.

Therefore, the sum can be achieved in 5 ways.
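As an illustrative sketch (not part of the original answer; the function name is my own), this order-insensitive count can be computed with a standard dynamic-programming table. Iterating over the coins in the outer loop ensures each combination is counted exactly once:

```python
def count_ways(coins, target):
    """Number of coin combinations summing to target (order ignored)."""
    ways = [0] * (target + 1)
    ways[0] = 1  # one way to make a sum of 0: take no coins
    # Coins in the outer loop -> each combination is counted exactly once
    for c in coins:
        for s in range(c, target + 1):
            ways[s] += ways[s - c]
    return ways[target]

print(count_ways([1, 3, 4], 7))  # 5, matching the enumeration above
```

Putting the coin loop inside the sum loop would instead count ordered sequences, giving a larger answer.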

105.

Suppose you are given infinite coins of N denominations v1, v2, v3, …, vn and a sum S. The coin change problem is to find the minimum number of coins required to get the sum S. What is the space complexity of a dynamic programming implementation used to solve the coin change problem?
(a) O(N)
(b) O(S)
(c) O(N^2)
(d) O(S*N)

Answer»

The correct choice is (b) O(S)

Explanation: to get the optimal solution for a sum S, the optimal solution is found and stored for each sum less than or equal to S. So, the space complexity is O(S).

106.

You are given infinite coins of N denominations v1, v2, v3, …, vn and a sum S. The coin change problem is to find the minimum number of coins required to get the sum S. What is the time complexity of a dynamic programming implementation used to solve the coin change problem?
(a) O(N)
(b) O(S)
(c) O(N^2)
(d) O(S*N)

Answer»

Correct answer is (d) O(S*N)

Explanation: for each of the S sums, the table entry is computed by trying all N denominations, so the time complexity is O(S*N).
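A minimal sketch of such an implementation (an illustration with names of my own choosing, not from the original text) makes both bounds visible: the table has S+1 entries (space O(S)), and each entry scans all N denominations (time O(S*N)):

```python
def min_coins(coins, target):
    """Minimum number of coins summing to target; -1 if unreachable.

    Space O(S): one table entry per sum 0..target.
    Time O(S*N): each entry is filled by trying all N denominations.
    """
    INF = float('inf')
    dp = [0] + [INF] * target  # dp[s] = fewest coins making sum s
    for s in range(1, target + 1):
        for c in coins:
            if c <= s and dp[s - c] + 1 < dp[s]:
                dp[s] = dp[s - c] + 1
    return dp[target] if dp[target] != INF else -1

print(min_coins([1, 3, 4], 7))  # 2, i.e. {3, 4}
```

The same table also answers the earlier reachability questions: an entry left at infinity means that sum cannot be formed.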

108.

Suppose you have coins of denominations 1, 3, and 4. You use a greedy algorithm, in which you choose the largest denomination coin which is not greater than the remaining sum. For which of the following sums will the algorithm produce an optimal answer?
(a) 14
(b) 10
(c) 6
(d) 100

Answer»

Correct choice is (d) 100

The best I can explain: using the greedy algorithm, three coins {4,1,1} will be selected to make a sum of 6, but the optimal answer is two coins {3,3}. Similarly, four coins {4,4,1,1} will be selected to make a sum of 10, but the optimal answer is three coins {4,3,3}. Also, five coins {4,4,4,1,1} will be selected to make a sum of 14, but the optimal answer is four coins {4,4,3,3}. For a sum of 100, twenty-five coins (all 4’s) will be selected, and the optimal answer is also twenty-five coins (all 4’s).

109.

Suppose you have coins of denominations 1, 3, and 4. You use a greedy algorithm, in which you choose the largest denomination coin which is not greater than the remaining sum. For which of the following sums will the algorithm NOT produce an optimal answer?
(a) 20
(b) 12
(c) 6
(d) 5

Answer»

The correct option is (c) 6

Explanation: using the greedy algorithm, three coins {4,1,1} will be selected to make a sum of 6, but the optimal answer is two coins {3,3}.
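A small sketch of this greedy rule (illustrative only; the function name is my own) reproduces the failure on 6 and the lucky success on 100:

```python
def greedy_coins(coins, target):
    """Repeatedly take the largest coin not exceeding the remaining sum.

    Returns the chosen coins, or None if the rule gets stuck
    (possible when there is no 1-denomination coin).
    """
    chosen = []
    for c in sorted(coins, reverse=True):
        while target >= c:
            target -= c
            chosen.append(c)
    return chosen if target == 0 else None

print(greedy_coins([1, 3, 4], 6))    # [4, 1, 1] -- three coins,
                                     # but the optimum is two: {3, 3}
print(greedy_coins([1, 3, 4], 100))  # twenty-five 4's, which is optimal
```

For denominations like {1, 3, 4} the greedy rule is correct only for some targets, which is exactly why dynamic programming is needed in general.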

110.

You are given infinite coins of denominations v1, v2, v3, …, vn and a sum S. The coin change problem is to find the minimum number of coins required to get the sum S. This problem can be solved using ____________
(a) Greedy algorithm
(b) Dynamic programming
(c) Divide and conquer
(d) Backtracking

Answer»

The correct option is (b) Dynamic programming

Best explanation: the coin change problem has overlapping subproblems (the same subproblems are solved multiple times) and optimal substructure (the solution to the problem can be built from optimal solutions to its subproblems). So, dynamic programming can be used to solve the coin change problem.

111.

Which of the following problems should be solved using dynamic programming?
(a) Mergesort
(b) Binary search
(c) Longest common subsequence
(d) Quicksort

Answer»

The correct answer is (c) Longest common subsequence

Explanation: the longest common subsequence problem has both optimal substructure and overlapping subproblems. Hence, dynamic programming should be used to solve this problem.
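For illustration (a sketch, not from the original text), the standard bottom-up LCS table fills each cell from its neighbors, which is where both properties show up:

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of strings a and b."""
    m, n = len(a), len(b)
    # dp[i][j] = LCS length of the prefixes a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1  # extend a common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])  # drop one character
    return dp[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))  # 4 (e.g. "BCBA")
```

Each cell depends on previously computed cells (overlapping subproblems), and the answer for the full strings is assembled from answers for prefixes (optimal substructure).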

112.

Which of the following problems is NOT solved using dynamic programming?
(a) 0/1 knapsack problem
(b) Matrix chain multiplication problem
(c) Edit distance problem
(d) Fractional knapsack problem

Answer»

The correct option is (d) Fractional knapsack problem

Best explanation: the fractional knapsack problem is solved using a greedy algorithm; because items can be taken in fractions, repeatedly choosing the item with the highest value-to-weight ratio yields an optimal solution.

113.

When a top-down approach of dynamic programming is applied to a problem, it usually _____________
(a) Decreases both the time complexity and the space complexity
(b) Decreases the time complexity and increases the space complexity
(c) Increases the time complexity and decreases the space complexity
(d) Increases both the time complexity and the space complexity

Answer»

Right answer is (b) Decreases the time complexity and increases the space complexity

The explanation is: the top-down approach uses the memoization technique, which stores the previously calculated values. Due to this, the time complexity is decreased but the space complexity is increased.

114.

In dynamic programming, the technique of storing the previously calculated values is called ___________
(a) Saving value property
(b) Storing value property
(c) Memoization
(d) Mapping

Answer»

Correct choice is (c) Memoization

Best explanation: memoization is the technique in which previously calculated values are stored, so that these values can be reused to solve other subproblems.
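A small illustrative sketch using Python's built-in lru_cache shows the effect; the call counter is added here purely for demonstration and is not part of any standard memoization API:

```python
from functools import lru_cache

calls = 0  # counts how many times the function body actually runs

@lru_cache(maxsize=None)  # memoization: cache every computed result
def fib(n):
    global calls
    calls += 1
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040
print(calls)    # 31: each n in 0..30 is computed exactly once
```

Without the cache, the same recursion would execute the body roughly 2.7 million times for n = 30, recomputing the same subproblems over and over.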

115.

A greedy algorithm can be used to solve all the dynamic programming problems.
(a) True
(b) False

Answer»

Correct option is (b) False

The explanation is: a greedy algorithm gives a locally optimal choice at each step, but when these locally optimal choices are combined, the result may not be a globally optimal solution. Hence, a greedy algorithm cannot be used to solve all dynamic programming problems.

116.

When dynamic programming is applied to a problem, it takes far less time as compared to other methods that don’t take advantage of overlapping subproblems.
(a) True
(b) False

Answer»

Right choice is (a) True

The best explanation: dynamic programming calculates the value of a subproblem only once, while other methods that don’t take advantage of the overlapping subproblems property may calculate the value of the same subproblem several times. So, dynamic programming saves the time of recalculation and takes far less time than those methods.

117.

If a problem can be solved by combining optimal solutions to non-overlapping subproblems, the strategy is called _____________
(a) Dynamic programming
(b) Greedy
(c) Divide and conquer
(d) Recursion

Answer»

Right choice is (c) Divide and conquer

Best explanation: in divide and conquer, the problem is divided into smaller non-overlapping subproblems, and an optimal solution is found for each of them. The optimal solutions are then combined to get a globally optimal solution. For example, mergesort uses the divide and conquer strategy.
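As a brief illustration (a sketch, not from the original text), mergesort's divide-and-conquer structure looks like this: split into two non-overlapping halves, solve each independently, then combine:

```python
def merge_sort(arr):
    """Divide and conquer: sort two non-overlapping halves, then merge."""
    if len(arr) <= 1:
        return arr  # base case: already sorted
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # solve the left half independently
    right = merge_sort(arr[mid:])   # solve the right half independently
    # Combine: merge the two sorted halves
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```

Unlike the coin change subproblems, the two halves here never share work, which is why no memoization table is needed.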

118.

If a problem can be broken into subproblems which are reused several times, the problem possesses the ____________ property.
(a) Overlapping subproblems
(b) Optimal substructure
(c) Memoization
(d) Greedy

Answer»

Correct answer is (a) Overlapping subproblems

The best I can explain: overlapping subproblems is the property in which the value of a subproblem is used several times.

119.

If an optimal solution can be created for a problem by constructing optimal solutions for its subproblems, the problem possesses the ____________ property.
(a) Overlapping subproblems
(b) Optimal substructure
(c) Memoization
(d) Greedy

Answer»

The correct option is (b) Optimal substructure

Easiest explanation: optimal substructure is the property in which an optimal solution to the problem is found by constructing optimal solutions for its subproblems.

120.

Which of the following is/are property/properties of a dynamic programming problem?
(a) Optimal substructure
(b) Overlapping subproblems
(c) Greedy approach
(d) Both optimal substructure and overlapping subproblems

Answer»

Right answer is (d) Both optimal substructure and overlapping subproblems

Explanation: a problem that can be solved using dynamic programming possesses both the overlapping subproblems and the optimal substructure properties.