Explore topic-wise InterviewSolutions.

This section includes InterviewSolutions, each offering curated multiple-choice questions to sharpen your knowledge and support exam preparation. Choose a topic below to get started.

1.

How can a hard problem be solved?
(a) by providing additional units in a feedback network
(b) nothing can be done
(c) by removing units in hidden layer
(d) none of the mentioned

Topic: Analysis of Pattern Storage (Feedforward Neural Networks).

Answer»

Right choice is (a) by providing additional units in a feedback network

Explanation: A hard problem can be solved by providing additional units in a feedback network.

2.

Why is there an error in recall when the number of energy minima is more than the required number of patterns to be stored?
(a) due to noise
(b) due to additional false maxima
(c) due to additional false minima
(d) none of the mentioned

Topic: Analysis of Pattern Storage (Feedforward Neural Networks).

Answer»

The correct choice is (c) due to additional false minima

Explanation: Due to additional false minima, there is error in recall.

3.

What happens when the number of available energy minima is more than the number of patterns to be stored?
(a) no effect
(b) pattern storage is not possible in that case
(c) error in recall
(d) none of the mentioned

Topic: Analysis of Pattern Storage (Feedforward Neural Networks).

Answer»

Correct answer is (c) error in recall

Explanation: Due to additional false minima, there is error in recall.

4.

What happens when the number of available energy minima is less than the number of patterns to be stored?
(a) pattern storage is not possible in that case
(b) pattern storage can be easily done
(c) pattern storage problem becomes hard problem for the network
(d) none of the mentioned

Topic: Analysis of Pattern Storage (Feedforward Neural Networks).

Answer»

The correct option is (c) pattern storage problem becomes hard problem for the network

Explanation: The pattern storage problem becomes a hard problem when the number of energy minima, i.e. stable states, is less than the number of patterns to be stored.

5.

The number of patterns that can be stored in a given network depends on?
(a) number of units
(b) strength of connecting links
(c) both number of units and strength of connecting links
(d) none of the mentioned

Topic: Analysis of Pattern Storage (Feedforward Neural Networks).

Answer»

The correct option is (c) both number of units and strength of connecting links

Explanation: The number of patterns that can be stored in a given network depends on the number of units and the strength of connecting links.

6.

When are stable states reached in energy landscapes, that can be used to store input patterns?
(a) mean of peaks and valleys
(b) maxima
(c) minima
(d) none of the mentioned

Topic: Analysis of Pattern Storage (Feedforward Neural Networks).

Answer»

The correct choice is (c) minima

Explanation: Energy minima correspond to stable states that can be used to store input patterns.

7.

What can be done by using a non-linear output function for each processing unit in a feedback network?
(a) pattern classification
(b) recall
(c) pattern storage
(d) all of the mentioned

Topic: Analysis of Pattern Storage (Feedforward Neural Networks).

Answer»

The correct option is (c) pattern storage

Explanation: By using a non-linear output function for each processing unit, a feedback network can be used for pattern storage.

8.

Does a linear autoassociative network have any practical use?
(a) yes (b) no

Topic: Analysis of Pattern Storage (Feedforward Neural Networks).

Answer»

Correct answer is (b) no

Explanation: Since if the input is noisy then the output will also be noisy, it has no practical use.

9.

In a linear autoassociative network, if input is noisy then output will be noisy?
(a) yes (b) no

Topic: Analysis of Pattern Storage (Feedforward Neural Networks).

Answer»

The correct option is (a) yes

Explanation: A linear autoassociative network gives out what is given to it as input.

10.

Which is the simplest pattern recognition task in a feedback network?
(a) heteroassociation
(b) autoassociation
(c) can be hetero or autoassociation, depends on situation
(d) none of the mentioned

Topic: Analysis of Pattern Storage (Feedforward Neural Networks).

Answer»

The correct choice is (b) autoassociation

Explanation: Autoassociation is the simplest pattern recognition task.

11.

How can the learning process be stopped in the backpropagation rule?
(a) there is convergence involved
(b) no heuristic criteria exist
(c) on basis of average gradient value
(d) none of the mentioned

Topic: Backpropagation Algorithm (Feedforward Neural Networks).

Answer»

Right answer is (c) on basis of average gradient value

Explanation: If the average gradient value falls below a preset threshold value, the process may be stopped.

12.

Is backpropagation learning based on gradient descent along the error surface?
(a) yes
(b) no
(c) cannot be said
(d) it depends on gradient descent but not error surface

Topic: Backpropagation Algorithm (Feedforward Neural Networks).

Answer»

Right choice is (a) yes

Explanation: The weight adjustment is proportional to the negative gradient of the error with respect to the weight.
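As a concrete illustration (a minimal sketch with assumed names, not part of the original question set), here is the update for a single linear unit trained on squared error, where each weight adjustment is proportional to the negative error gradient:

```python
# Minimal gradient-descent sketch for one linear unit with squared error
# E = 0.5 * (b - s)^2, where s = w . a is the actual output.
# The names w (weights), a (input), b (desired output) and eta
# (learning rate) are illustrative assumptions.

def gradient_step(w, a, b, eta=0.1):
    """Adjust each weight proportional to the negative gradient of E."""
    s = sum(wi * ai for wi, ai in zip(w, a))   # actual output s = w . a
    # dE/dw_i = -(b - s) * a_i, so the update is w_i += eta * (b - s) * a_i
    return [wi + eta * (b - s) * ai for wi, ai in zip(w, a)]

w = [0.0, 0.0]
for _ in range(100):
    w = gradient_step(w, [1.0, 2.0], 5.0)   # repeatedly descend the error surface
```

After repeated steps the actual output w . a approaches the desired output, since each step moves the weights down the error surface.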

13.

What are the general tasks that are performed with the backpropagation algorithm?
(a) pattern mapping
(b) function approximation
(c) prediction
(d) all of the mentioned

Topic: Backpropagation Algorithm (Feedforward Neural Networks).

Answer»

The correct answer is (d) all of the mentioned

Explanation: These are all tasks that can be performed with the backpropagation algorithm in general.

14.

What are the general limitations of the backpropagation rule?
(a) local minima problem
(b) slow convergence
(c) scaling
(d) all of the mentioned

Topic: Backpropagation Algorithm (Feedforward Neural Networks).

Answer»

The correct option is (d) all of the mentioned

Explanation: These are all limitations of the backpropagation algorithm in general.

15.

What is meant by "generalized" in the statement "backpropagation is a generalized delta rule"?
(a) because delta rule can be extended to hidden layer units
(b) because delta is applied to only input and output layers, thus making it more simple and generalized
(c) it has no significance
(d) none of the mentioned

Topic: Backpropagation Algorithm (Feedforward Neural Networks).

Answer»

The correct answer is (a) because delta rule can be extended to hidden layer units

Explanation: The term generalized is used because the delta rule could be extended to hidden layer units.
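A minimal sketch of that extension (the 2-2-1 network shape, sigmoid units, and all names below are illustrative assumptions, not from the source): the output-layer delta is propagated back through the output weights to form deltas for the hidden units:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def backprop_step(a, b, W1, w2, eta=0.5):
    """One generalized-delta-rule update for a tiny 2-2-1 sigmoid network."""
    # Forward pass: outputs of the hidden layer, then of the output unit.
    h = [sigmoid(sum(w * x for w, x in zip(row, a))) for row in W1]
    s = sigmoid(sum(w * x for w, x in zip(w2, h)))
    # Plain delta rule at the output unit.
    d_out = (b - s) * s * (1 - s)
    # Generalization: hidden deltas use the error backpropagated via w2.
    d_hid = [h[j] * (1 - h[j]) * w2[j] * d_out for j in range(len(h))]
    # Weight updates proportional to (delta of the layer) * (its input).
    w2 = [w2[j] + eta * d_out * h[j] for j in range(len(w2))]
    W1 = [[W1[j][i] + eta * d_hid[j] * a[i] for i in range(len(a))]
          for j in range(len(W1))]
    return W1, w2, s

W1, w2 = [[0.5, -0.5], [0.3, 0.2]], [0.4, -0.6]
for _ in range(5000):
    W1, w2, s = backprop_step([1.0, 0.0], 1.0, W1, w2)
```

Without the backpropagated term w2[j] * d_out there would be no principled way to assign error to the hidden units, which is exactly what the plain delta rule lacks.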

16.

What is true regarding the backpropagation rule?
(a) it is a feedback neural network
(b) actual output is determined by computing the outputs of units for each hidden layer
(c) hidden layers' output is not all important, they are only meant for supporting input and output layers
(d) none of the mentioned

Topic: Backpropagation Algorithm (Feedforward Neural Networks).

Answer»

Correct answer is (b) actual output is determined by computing the outputs of units for each hidden layer

Explanation: In the backpropagation rule, the actual output is determined by computing the outputs of units for each hidden layer.

17.

Is there feedback in the final stage of the backpropagation algorithm?
(a) yes (b) no

Topic: Backpropagation Algorithm (Feedforward Neural Networks).

Answer»

The correct answer is (b) no

Explanation: No feedback is involved at any stage, as it is a feedforward neural network.

18.

What is true regarding the backpropagation rule?
(a) it is also called generalized delta rule
(b) error in output is propagated backwards only to determine weight updates
(c) there is no feedback of signal at any stage
(d) all of the mentioned

Topic: Backpropagation Algorithm (Feedforward Neural Networks).

Answer»

Right answer is (d) all of the mentioned

Explanation: All these statements define the backpropagation algorithm.

19.

The backpropagation law is also known as the generalized delta rule; is it true?
(a) yes (b) no

Topic: Backpropagation Algorithm (Feedforward Neural Networks).

Answer»

The correct answer is (a) yes

Explanation: Because it fulfils the basic condition of the delta rule.

20.

What is the objective of the backpropagation algorithm?
(a) to develop learning algorithm for multilayer feedforward neural network
(b) to develop learning algorithm for single layer feedforward neural network
(c) to develop learning algorithm for multilayer feedforward neural network, so that network can be trained to capture the mapping implicitly
(d) none of the mentioned

Topic: Backpropagation Algorithm (Feedforward Neural Networks).

Answer»

The correct option is (c) to develop learning algorithm for multilayer feedforward neural network, so that network can be trained to capture the mapping implicitly

Explanation: The objective of the backpropagation algorithm is to develop a learning algorithm for a multilayer feedforward neural network, so that the network can be trained to capture the mapping implicitly.

21.

Can a system be both interpolative & accretive at the same time?
(a) yes (b) no

Topic: Pattern Recognition (Feedforward Neural Networks).

Answer»

Correct choice is (b) no

Explanation: A system can't exhibit both behaviours at the same time, since these are based on different approaches & algorithms.

22.

Are feedforward networks also used for autoassociation & pattern storage?
(a) yes (b) no

Topic: Pattern Recognition (Feedforward Neural Networks).

Answer»

The correct choice is (b) no

Explanation: Feedforward networks are used for pattern mapping.

23.

Are feedback networks used for autoassociation & pattern storage?
(a) yes (b) no

Topic: Pattern Recognition (Feedforward Neural Networks).

Answer»

Correct option is (a) yes

Explanation: Feedback networks are typically used for autoassociation & pattern storage.

24.

If a(l) gives output b(l) & a' = a(l) + m, where m is a small quantity, and a' gives output b(l) + n, then?
(a) network exhibits accretive behaviour
(b) network exhibits interpolative behaviour
(c) exhibits both accretive & interpolative behaviour
(d) none of the mentioned

Topic: Pattern Recognition (Feedforward Neural Networks).

Answer»

The correct option is (b) network exhibits interpolative behaviour

Explanation: This follows from the basic definition of interpolative behaviour in neural networks.

25.

If a(l) gives output b(l) & a' = a(l) + m, where m is a small quantity, and a' gives output b(l), then?
(a) network exhibits accretive behaviour
(b) network exhibits interpolative behaviour
(c) exhibits both accretive & interpolative behaviour
(d) none of the mentioned

Topic: Pattern Recognition (Feedforward Neural Networks).

Answer»

Correct option is (a) network exhibits accretive behaviour

Explanation: This follows from the basic definition of accretive behaviour in neural networks.

26.

Let a(l), b(l) represent input-output pairs, where "l" varies over the natural numbers; then if a(l) ≠ b(l)?
(a) problem is heteroassociation
(b) problem is autoassociation
(c) can be either auto or heteroassociation
(d) none of the mentioned

Topic: Pattern Recognition (Feedforward Neural Networks).

Answer»

Right choice is (a) problem is heteroassociation

Explanation: When a(l) & b(l) are distinct, the problem is classified as heteroassociation.

27.

Let a(l), b(l) represent input-output pairs, where "l" varies over the natural numbers; then if a(l) = b(l)?
(a) problem is heteroassociation
(b) problem is autoassociation
(c) can be either auto or heteroassociation
(d) none of the mentioned

Topic: Pattern Recognition (Feedforward Neural Networks).

Answer»

Right choice is (b) problem is autoassociation

Explanation: When a(l) = b(l), the problem is classified as autoassociation.

28.

The recalled output in a pattern association problem depends on?
(a) nature of input-output
(b) design of network
(c) both input & design
(d) none of the mentioned

Topic: Pattern Recognition (Feedforward Neural Networks).

Answer»

The correct choice is (c) both input & design

Explanation: The recalled output in a pattern association problem depends on both the input & the design of the network.

29.

From given input-output pairs, a pattern recognition model should capture the characteristics of the system?
(a) true (b) false

Topic: Pattern Recognition (Feedforward Neural Networks).

Answer»

Right option is (a) true

Explanation: From given input-output pairs, a pattern recognition model should be able to capture the characteristics of the system & hence should be designed in that manner.

30.

The number of units in hidden layers depends on?
(a) the number of inputs
(b) the number of outputs
(c) both the number of inputs and outputs
(d) the overall characteristics of the mapping problem

Topic: Pattern Mapping (Feedforward Neural Networks).

Answer»

The correct option is (d) the overall characteristics of the mapping problem

Explanation: The number of units in hidden layers depends on the overall characteristics of the mapping problem.

31.

How is the hard learning problem solved?
(a) using nonlinear differentiable output function for output layers
(b) using nonlinear differentiable output function for hidden layers
(c) using nonlinear differentiable output function for output and hidden layers
(d) it cannot be solved

Topic: Pattern Mapping (Feedforward Neural Networks).

Answer»

Correct option is (c) using nonlinear differentiable output function for output and hidden layers

Explanation: The hard learning problem is solved by using a nonlinear differentiable output function for the output and hidden layers.

32.

Does an approximate system produce strictly an interpolated output?
(a) yes (b) no

Topic: Pattern Mapping (Feedforward Neural Networks).

Answer»

Right choice is (b) no

Explanation: An approximate system doesn't produce strictly an interpolated output.

33.

The nature of the mapping problem decides?
(a) number of units in second layer
(b) number of units in third layer
(c) overall number of units in hidden layers
(d) none of the mentioned

Topic: Pattern Mapping (Feedforward Neural Networks).

Answer»

Correct answer is (c) overall number of units in hidden layers

Explanation: The nature of the mapping problem decides the overall number of units in hidden layers.

34.

What is the objective of the pattern mapping problem?
(a) to capture implied function
(b) to capture system characteristics from observed data
(c) both implied function and system characteristics
(d) none of the mentioned

Topic: Pattern Mapping (Feedforward Neural Networks).

Answer»

Correct answer is (d) none of the mentioned

Explanation: The implied function is all about system characteristics.

35.

Can the mapping problem be a more general case of the pattern classification problem?
(a) yes (b) no

Topic: Pattern Mapping (Feedforward Neural Networks).

Answer»

The correct choice is (a) yes

Explanation: Since no restrictions such as linear separability are placed on the set of input-output pattern pairs, the mapping problem becomes a more general case of the pattern classification problem.

36.

To provide generalization capability to a network, what should be done?
(a) all units should be linear
(b) all units should be non-linear
(c) except input layer, all units in other layers should be non-linear
(d) none of the mentioned

Topic: Pattern Mapping (Feedforward Neural Networks).

Answer»

The correct answer is (c) except input layer, all units in other layers should be non-linear

Explanation: To provide generalization capability to a network, all units except those in the input layer should be non-linear.

37.

What is the objective of the pattern mapping problem?
(a) to capture weights for a link
(b) to capture inputs
(c) to capture feedbacks
(d) to capture implied function

Topic: Pattern Mapping (Feedforward Neural Networks).

Answer»

The correct answer is (d) to capture implied function

Explanation: The objective of the pattern mapping problem is to capture the implied function.

38.

Can all hard problems be handled by a multilayer feedforward neural network with nonlinear units?
(a) yes (b) no

Topic: Pattern Mapping (Feedforward Neural Networks).

Answer»

Correct answer is (a) yes

Explanation: Multilayer perceptrons can deal with all hard problems.

39.

What is a mapping problem?
(a) when no restrictions such as linear separability are placed on the set of input-output pattern pairs
(b) when there may be restrictions such as linear separability placed on input-output patterns
(c) when there are restrictions but other than linear separability
(d) none of the mentioned

Topic: Pattern Mapping (Feedforward Neural Networks).

Answer»

Right answer is (a) when no restrictions such as linear separability are placed on the set of input-output pattern pairs

Explanation: It's a more general case of the classification problem.

40.

If the output produces nonconvex regions, then how many layers is a neural network required to have at minimum?
(a) 2 (b) 3 (c) 4 (d) 5

Topic: Pattern Classification (Feedforward Neural Networks).

Answer»

Correct choice is (c) 4

Explanation: Adding one more layer of units to a three-layer network can yield surfaces which can separate even nonconvex regions.

41.

Intersection of convex regions in a three-layer network can only produce convex surfaces; is the statement true?
(a) yes (b) no

Topic: Pattern Classification (Feedforward Neural Networks).

Answer»

The correct option is (b) no

Explanation: Intersection of convex regions in a three-layer network can produce nonconvex regions.

42.

Intersection of linear hyperplanes in a three-layer network can only produce convex surfaces; is the statement true?
(a) yes (b) no

Topic: Pattern Classification (Feedforward Neural Networks).

Answer»

Right option is (a) yes

Explanation: Intersection of linear hyperplanes in a three-layer network can only produce convex surfaces.

43.

In a three-layer network, the number of classes is determined by?
(a) number of units in second layer
(b) number of units in third layer
(c) number of units in second and third layer
(d) none of the mentioned

Topic: Pattern Classification (Feedforward Neural Networks).

Answer»

Right option is (b) number of units in third layer

Explanation: Practically, the number of units in the third layer determines the number of classes.

44.

As the dimensionality of the input vector increases, what happens to linear separability?
(a) increases
(b) decreases
(c) no effect
(d) doesn't depend on dimensionality

Topic: Pattern Classification (Feedforward Neural Networks).

Answer»

Right choice is (b) decreases

Explanation: Linear separability decreases as dimensionality increases.

45.

In a three-layer network, the shape of the dividing surface is determined by?
(a) number of units in second layer
(b) number of units in third layer
(c) number of units in second and third layer
(d) none of the mentioned

Topic: Pattern Classification (Feedforward Neural Networks).

Answer»

Right option is (a) number of units in second layer

Explanation: Practically, the number of units in the second layer determines the shape of the dividing surface.

46.

If pattern classes are linearly separable, then hypersurfaces reduce to straight lines?
(a) yes (b) no

Topic: Pattern Classification (Feedforward Neural Networks).

Answer»

Right answer is (a) yes

Explanation: Hypersurfaces reduce to straight lines if pattern classes are linearly separable.

47.

When the line joining any two points in the set lies entirely in the region enclosed by the set in M-dimensional space, then the set is known as?
(a) convex set
(b) concave set
(c) may be concave or convex
(d) none of the mentioned

Topic: Pattern Classification (Feedforward Neural Networks).

Answer»

Right answer is (a) convex set

Explanation: A convex set is a set of points in M-dimensional space such that the line joining any two points in the set lies entirely in the region enclosed by the set.

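That condition can be checked numerically. In this sketch (the example sets and all names are illustrative assumptions, not from the source), the unit disc in 2-D passes the check while a ring-shaped set fails it:

```python
# Sampled check of the convex-set condition: every point on the line
# joining two members of the set must itself lie in the set.

def in_unit_disc(p):
    return p[0] ** 2 + p[1] ** 2 <= 1.0          # the unit disc is convex

def segment_stays_inside(p, q, contains, samples=100):
    """True if all sampled points on the line from p to q are in the set."""
    for k in range(samples + 1):
        t = k / samples
        point = (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))
        if not contains(point):
            return False
    return True
```

For the unit disc the check passes for any pair of member points; for a non-convex ring (say 0.5 ≤ radius ≤ 1) the segment between two opposite points crosses the central hole, so the check fails.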
48.

Is it true that the percentage of linearly separable functions will increase rapidly as the dimension of the input pattern space is increased?
(a) yes (b) no

Topic: Pattern Classification (Feedforward Neural Networks).

Answer»

Right choice is (b) no

Explanation: The fraction of linearly separable functions decreases as the dimension of the input pattern space is increased.

49.

w(m + 1) = w(m) + n(b(m) – s(m)) a(m), where b(m) is desired output, s(m) is actual output, a(m) is input vector and 'w' denotes weight; can this model be used for perceptron learning?
(a) yes (b) no

Topic: Pattern Classification (Feedforward Neural Networks).

Answer»

The correct option is (a) yes

Explanation: Gradient descent can be used as perceptron learning.

50.

If e(m) denotes the error for correction of weight, then what is the formula for the error in the perceptron learning model w(m + 1) = w(m) + n(b(m) – s(m)) a(m), where b(m) is desired output, s(m) is actual output, a(m) is input vector and 'w' denotes weight?
(a) e(m) = n(b(m) – s(m)) a(m)
(b) e(m) = n(b(m) – s(m))
(c) e(m) = (b(m) – s(m))
(d) none of the mentioned

Topic: Pattern Classification (Feedforward Neural Networks).

Answer»

Right answer is (c) e(m) = (b(m) – s(m))

Explanation: Error is the difference between desired and actual output.
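Putting the two answers together, the quoted rule can be run directly. The AND data set, the threshold output unit, and the step counts below are illustrative assumptions, not from the source:

```python
# Perceptron learning with the quoted update
#   w(m + 1) = w(m) + n * e(m) * a(m),  where e(m) = b(m) - s(m),
# for a simple threshold unit.

def perceptron_output(w, a):
    """Actual output s(m): threshold on the weighted sum."""
    return 1 if sum(wi * ai for wi, ai in zip(w, a)) >= 0 else 0

def perceptron_train(patterns, n=0.5, epochs=20):
    w = [0.0] * len(patterns[0][0])
    for _ in range(epochs):
        for a, b in patterns:
            e = b - perceptron_output(w, a)        # e(m) = b(m) - s(m)
            w = [wi + n * e * ai for wi, ai in zip(w, a)]
    return w

# AND function; the constant 1 appended to each input acts as a bias.
data = [([0, 0, 1], 0), ([0, 1, 1], 0), ([1, 0, 1], 0), ([1, 1, 1], 1)]
w = perceptron_train(data)
```

Since AND is linearly separable, the weights stop changing once every pattern is classified correctly, as guaranteed by the perceptron convergence theorem.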