
1.

Which two parameters are considered for Evaluation of a model?

Answer»

Prediction and Reality are the two parameters considered for evaluation of a model. Prediction is the output given by the machine, and Reality is the real scenario at the time the prediction is made.

2.

What is Recall? Mention its formula.

Answer»

Recall is defined as the fraction of positive cases that are correctly identified.

Recall = \(\cfrac{True\,Positive}{True\,Positive\,+\,False\,Negative}\)

Recall = \(\cfrac{TP}{TP\,+\,FN}\)
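
For illustration, here is a minimal Python sketch (the labels below are made up) that counts the True Positives and False Negatives and computes recall:

```python
# Minimal sketch: computing recall from example labels (values are illustrative).
actual    = [1, 1, 1, 0, 0, 1, 0, 1]
predicted = [1, 0, 1, 0, 1, 1, 0, 0]

tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)  # True Positives
fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)  # False Negatives

recall = tp / (tp + fn)
print(f"Recall = {tp}/{tp + fn} = {recall:.2f}")  # Recall = 3/5 = 0.60
```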

3.

What is Precision? Mention its formula.

Answer»

Precision is defined as the percentage of true positive cases out of all the cases where the prediction is positive.

Precision = \(\cfrac{True\,Positives}{All\,Predicted\,Positives}\) × 100%

That is, it takes into account the True Positives and the False Positives.

Precision = \(\cfrac{TP}{TP + FP}\) × 100%
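
A matching sketch for precision (again with made-up labels), reported as a percentage to mirror the formula above:

```python
# Minimal sketch: computing precision from example labels (values are illustrative).
actual    = [1, 1, 1, 0, 0, 1, 0, 1]
predicted = [1, 0, 1, 0, 1, 1, 0, 0]

tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)  # True Positives
fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)  # False Positives

precision = tp / (tp + fp)
print(f"Precision = {tp}/{tp + fp} = {precision:.0%}")  # Precision = 3/4 = 75%
```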

4.

What is True Positive?

Answer»
  • The predicted value matches the actual value 
  • The actual value was positive and the model predicted a positive value
5.

What is False Negative?

Answer»
  • The predicted value was falsely predicted 
  • The actual value was positive but the model predicted a negative value 
  • Also known as the Type 2 error
6.

What is False Positive?

Answer»
  • The predicted value was falsely predicted 
  • The actual value was negative but the model predicted a positive value 
  • Also known as the Type 1 error
7.

What is True Negative?

Answer»
  • The predicted value matches the actual value 
  • The actual value was negative and the model predicted a negative value
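
Taken together, the four outcomes in Questions 4–7 form the confusion matrix. A minimal Python sketch (the labels are illustrative) that tallies all four counts:

```python
# Minimal sketch: tallying the four confusion-matrix cells (labels are illustrative).
actual    = [1, 1, 0, 0, 1, 0, 1, 0]
predicted = [1, 0, 0, 1, 1, 0, 1, 0]

tp = fn = fp = tn = 0
for a, p in zip(actual, predicted):
    if a == 1 and p == 1:
        tp += 1   # True Positive: positive actual, positive prediction
    elif a == 1 and p == 0:
        fn += 1   # False Negative (Type 2 error): positive actual, negative prediction
    elif a == 0 and p == 1:
        fp += 1   # False Positive (Type 1 error): negative actual, positive prediction
    else:
        tn += 1   # True Negative: negative actual, negative prediction

print(f"TP={tp}, FN={fn}, FP={fp}, TN={tn}")  # TP=3, FN=1, FP=1, TN=3
```
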
8.

Define Evaluation.

Answer»

Before deploying the model in the real world, we test it in as many ways as possible. This stage of testing the model is known as Evaluation.

OR 

Evaluation is the process of understanding the reliability of an AI model by feeding the test dataset into the model and comparing its outputs with the actual answers.

OR

Evaluation is a process that critically examines a program. It involves collecting and analyzing information about a program’s activities, characteristics, and outcomes. Its purpose is to make judgments about a program, to improve its effectiveness, and/or to inform programming decisions.
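
As a concrete illustration of the second definition, here is a minimal sketch, assuming scikit-learn is available (the dataset and classifier are only placeholders), that feeds a test dataset into a model and compares its outputs with the actual answers:

```python
# Minimal sketch: evaluating a model on a held-out test dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

model = DecisionTreeClassifier(random_state=42).fit(X_train, y_train)

predictions = model.predict(X_test)                      # "Prediction": the model's output
print("Accuracy:", accuracy_score(y_test, predictions))  # compared against "Reality"
```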

9.

What is meant by Overfitting of Data?

Answer»

Overfitting is "the production of an analysis that corresponds too closely or exactly to a particular set of data, and may therefore fail to fit additional data or predict future observations reliably".

(OR) 

An overfitted model is a statistical model that contains more parameters than can be justified by the data. This is why the data used to build the model should not be used to evaluate it: since the AI model remembers the whole training dataset, it will always predict the correct label for any point in the training dataset. This is known as Overfitting.

(OR) 

A model that is tested on the same dataset it was trained on will always produce the correct output. This is known as overfitting.
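
The symptom is easy to reproduce. A minimal sketch, assuming scikit-learn is available (the synthetic dataset is illustrative): an unconstrained decision tree scores perfectly on its own training data but noticeably worse on unseen data:

```python
# Minimal sketch: training accuracy vs. test accuracy for an overfitted model.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, flip_y=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An unconstrained tree has enough parameters to memorise the training set.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

print("Train accuracy:", model.score(X_train, y_train))  # 1.0 -- the memorised data
print("Test accuracy: ", model.score(X_test, y_test))    # lower -- unseen data
```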

10.

Give an example where High Accuracy is not usable.

Answer»

SCENARIO: An expensive robotic chicken crosses a very busy road a thousand times per day. An ML model evaluates traffic patterns and predicts when this chicken can safely cross the street with an accuracy of 99.99%.

Explanation: A 99.99% accuracy value on a very busy road strongly suggests that the ML model is far better than chance. In some settings, however, the cost of making even a small number of mistakes is still too high. 99.99% accuracy means that the expensive chicken will need to be replaced, on average, every 10 days. (The chicken might also cause extensive damage to cars that it hits.)
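
The arithmetic behind the 10-day figure, as a quick sketch:

```python
# Quick check of the replacement interval implied by 99.99% accuracy.
crossings_per_day = 1000
error_rate = 1 - 0.9999                            # 0.01% of crossings go wrong

mistakes_per_day = crossings_per_day * error_rate  # 0.1 mistakes per day
days_per_mistake = 1 / mistakes_per_day            # one fatal mistake every 10 days
print(f"One mistake every {days_per_mistake:.0f} days")
```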

11.

What are the possible reasons for an AI model not being efficient? Explain.

Answer»

Reasons for an AI model not being efficient:

a. Lack of Training Data: If the data is not sufficient for developing the AI model, or if some data is missing during training, the model will not be efficient.

b. Unauthenticated Data / Wrong Data: If the data is not authenticated and correct, the model will not give good results.

c. Inefficient Coding / Wrong Algorithms: If the written algorithms are not correct and relevant, the model will not give the desired output.

d. Not Tested: If the model is not tested properly, it will not be efficient.

e. Not Easy: If the model is not easy to implement in production, or is not scalable, it is not efficient.

f. Less Accuracy: A model is not efficient if it gives low accuracy scores on production or test data, or if it is not able to generalize well on unseen data.

(Any three of the above can be selected.)

12.

Which evaluation metric do you suggest is more important for any case?

Answer»

In most cases, the F1 score is the most important evaluation metric, because it maintains a balance between precision and recall for the classifier: if precision is low, F1 is low, and if recall is low, F1 is again low.

The F1 score is a number between 0 and 1 and is the harmonic mean of precision and recall.

F1 Score = \(2\times\cfrac{Precision\,\times\,Recall}{Precision\,+\,Recall}\)

When both Precision and Recall are 1 (that is, 100%), the F1 score is also an ideal 1 (100%), known as the perfect value for the F1 score. As the values of both Precision and Recall range from 0 to 1, the F1 score also ranges from 0 to 1.
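
A minimal sketch of the harmonic-mean behaviour (the precision and recall values below are made up):

```python
# Minimal sketch: F1 as the harmonic mean of precision and recall.
def f1(precision: float, recall: float) -> float:
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(f1(1.0, 1.0))  # 1.0  -> the perfect value
print(f1(0.9, 0.1))  # 0.18 -> low recall drags F1 down despite high precision
print(f1(0.5, 0.5))  # 0.5
```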

13.

Why is evaluation important? Explain.

Answer»

Importance of Evaluation: Evaluation is a process that critically examines a program. It involves collecting and analyzing information about a program's activities, characteristics, and outcomes. Its purpose is to make judgments about a program, to improve its effectiveness, and/or to inform programming decisions.

  • Evaluation is important to ensure that the model is operating correctly and optimally. 
  • Evaluation is an initiative to understand how well the model achieves its goals. 
  • Evaluations help to determine what works well and what could be improved in a program.
14.

Give an example where High Precision is not usable.

Answer»

Example: “Predicting a mail as Spam or Not Spam”

False Positive: Mail is predicted as “spam” but it is “not spam”.

False Negative: Mail is predicted as “not spam” but it is “spam”.

Of course, too many False Negatives will make the spam filter ineffective, while False Positives may cause important mails to be missed. A filter can achieve high Precision (few False Positives) and still let most of the spam through, so high Precision alone is not usable here; Recall must be considered as well.
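
A minimal numeric sketch (the counts are made up) of a filter that scores perfectly on Precision yet lets most spam through:

```python
# Minimal sketch: high precision, yet an ineffective spam filter (counts are made up).
tp = 10   # spam correctly flagged as spam
fp = 0    # important mail wrongly flagged as spam
fn = 90   # spam that slipped into the inbox

precision = tp / (tp + fp)   # 1.00 -> looks perfect
recall    = tp / (tp + fn)   # 0.10 -> 90% of the spam gets through

print(f"Precision = {precision:.2f}, Recall = {recall:.2f}")
```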