Explore topic-wise InterviewSolutions.

This section includes InterviewSolutions, each offering curated multiple-choice questions to sharpen your knowledge and support exam preparation. Choose a topic below to get started.

51.

Convergence in perceptron learning takes place if and only if:
(a) a minimal error condition is satisfied
(b) actual output is close to desired output
(c) classes are linearly separable
(d) all of the mentioned

Answer»

Right answer is (c) classes are linearly separable

The best I can explain: Linear separability of the classes is the condition for convergence of the weights in perceptron learning.
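To make this concrete, here is a minimal sketch of perceptron learning (bipolar inputs and targets, fixed learning rate; all names are illustrative, not from any particular library). On a linearly separable problem such as the bipolar AND function, the updates stop once every sample is classified correctly:

```python
# Perceptron learning sketch: bipolar inputs/targets, bias folded in as a
# fixed +1 input component. All names here are illustrative.

def perceptron_train(samples, epochs=20, eta=1.0):
    w = [0.0, 0.0, 0.0]  # weights for [bias, x1, x2]
    for _ in range(epochs):
        errors = 0
        for x1, x2, target in samples:
            x = [1.0, x1, x2]                        # augmented input
            s = sum(wi * xi for wi, xi in zip(w, x))  # weighted sum
            y = 1 if s >= 0 else -1                   # threshold output
            if y != target:                           # update only on error
                w = [wi + eta * target * xi for wi, xi in zip(w, x)]
                errors += 1
        if errors == 0:   # converged: every sample correctly classified
            break
    return w

# Bipolar AND: output +1 only when both inputs are +1 (linearly separable)
and_data = [(-1, -1, -1), (-1, 1, -1), (1, -1, -1), (1, 1, 1)]
w = perceptron_train(and_data)
```

On linearly inseparable data (e.g. XOR) the same loop would never reach the `errors == 0` exit, which is the content of the theorem above.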

52.

Is it necessary to set the initial weights in the perceptron convergence theorem to zero?
(a) yes
(b) no

Answer»

Right choice is (b) no

Easy explanation: The initial setting of weights doesn't affect the perceptron convergence theorem.

53.

The perceptron convergence theorem is applicable for what kind of data?
(a) binary
(b) bipolar
(c) both binary and bipolar
(d) none of the mentioned

Answer»

Correct choice is (c) both binary and bipolar

Easiest explanation: The perceptron convergence theorem is applicable for both binary and bipolar input and output data.

54.

When are two classes said to be inseparable?
(a) there may exist straight lines that don't touch each other
(b) there may exist straight lines that can touch each other
(c) there is only one straight line that separates them
(d) all of the mentioned

Answer»

Correct answer is (c) there is only one straight line that separates them

For explanation I would say: Linearly separable classes and functions can be separated by a line.

55.

If two classes are linearly inseparable, can the perceptron convergence theorem be applied?
(a) yes
(b) no

Answer»

Correct choice is (b) no

Easy explanation: The perceptron convergence theorem can be applied if and only if the two classes are linearly separable.

56.

When two classes can be separated by a straight line, they are known as?
(a) linearly separable
(b) linearly inseparable classes
(c) may be separable or inseparable, it depends on the system
(d) none of the mentioned

Answer» Right choice is (a) linearly separable

Easiest explanation: Linearly separable classes, functions can be separated by a line.
57.

On what factor does the number of outputs depend?
(a) distinct inputs
(b) distinct classes
(c) both distinct classes & inputs
(d) none of the mentioned

Answer» Right choice is (b) distinct classes

Explanation: The number of outputs depends on the number of distinct classes.
58.

In perceptron learning, what happens when an input vector is correctly classified?
(a) small adjustments in weights are made
(b) large adjustments in weights are made
(c) no adjustments in weights are made
(d) weight adjustments don't depend on the classification of the input vector

Answer»

Correct answer is (c) no adjustments in weights are made

Best explanation: No weight adjustments are made, since the input has been correctly classified, which is the objective of the system.

59.

What is the objective of perceptron learning?
(a) class identification
(b) weight adjustment
(c) adjust weights along with class identification
(d) none of the mentioned

Answer»

Correct answer is (c) adjust weights along with class identification

Easiest explanation: The objective of perceptron learning is to adjust the weights along with class identification.

60.

For noisy input vectors, can the Hebb methodology of learning be employed?
(a) yes
(b) no

Answer»

Right option is (b) no

The explanation: For noisy input vectors, no specific type of learning method exists.

61.

By using only linear processing units in the output layer, can an artificial neural network capture an association if the number of input patterns is greater than the dimensionality of the input vectors?
(a) yes
(b) no

Answer»

The correct option is (b) no

The best explanation: There is a need for nonlinear processing units.

62.

The number of output cases depends on what factor?
(a) number of inputs
(b) number of distinct classes
(c) total number of classes
(d) none of the mentioned

Answer»

Right choice is (b) number of distinct classes

To explain: The number of output cases depends on the number of distinct classes.

63.

Can an artificial neural network capture an association if the number of input patterns is greater than the dimensionality of the input vectors?
(a) yes
(b) no

Answer» Correct answer is (a) yes

Easy explanation: By using nonlinear processing units in the output layer.
64.

What are affine transformations?
(a) addition of a bias term (-1), which results in arbitrary rotation, scaling and translation of the input pattern
(b) addition of a bias term (+1), which results in arbitrary rotation, scaling and translation of the input pattern
(c) addition of a bias term (-1) or (+1), which results in arbitrary rotation, scaling and translation of the input pattern
(d) none of the mentioned

Answer»

Correct answer is (a) addition of a bias term (-1), which results in arbitrary rotation, scaling and translation of the input pattern

The best I can explain: It follows from the basic definition of an affine transformation.
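As an illustrative sketch (the bias value, weights and helper name below are assumptions for demonstration, not from the source), folding a fixed bias component into the input vector lets a purely linear weighted sum realize an affine decision function, i.e. a hyperplane translated away from the origin:

```python
import numpy as np

# Sketch: appending a fixed bias component (here -1) to the input lets a
# linear weighted sum realize the affine form w.x - theta, so the
# decision hyperplane no longer has to pass through the origin.

def augment(x, bias=-1.0):
    """Append the bias term to the input pattern (illustrative helper)."""
    return np.append(x, bias)

w = np.array([1.0, 1.0])      # weights on the two real inputs
theta = 0.5                   # desired threshold (translation)
w_aug = np.append(w, theta)   # threshold folded into the weight vector

x = np.array([0.3, 0.4])
# Linear sum on the augmented vector equals the affine form w.x - theta
assert np.isclose(w_aug @ augment(x), w @ x - theta)
```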

65.

What are the features that could not be accomplished earlier, without affine transformations?
(a) arbitrary rotation
(b) scaling
(c) translation
(d) all of the mentioned

Answer»

The correct option is (c) translation

For explanation: Affine transformations can be used to do arbitrary rotation, scaling and translation.

66.

What are the features that can be accomplished using affine transformations?
(a) arbitrary rotation
(b) scaling
(c) translation
(d) all of the mentioned

Answer»

Right choice is (d) all of the mentioned

Best explanation: Affine transformations can be used to do arbitrary rotation, scaling and translation.

67.

In determination of weights by learning, for noisy input vectors what kind of learning should be employed?
(a) hebb learning law
(b) widrow learning law
(c) hoff learning law
(d) no learning law

Answer»

The correct answer is (d) no learning law

Easiest explanation: For noisy input vectors, there is no learning law.

68.

In determination of weights by learning, for linear input vectors what kind of learning should be employed?
(a) hebb learning law
(b) widrow learning law
(c) hoff learning law
(d) no learning law

Answer» Right option is (b) widrow learning law

The explanation: For linear input vectors, the Widrow learning law is best suited.
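A rough sketch of the Widrow(-Hoff) LMS rule (scalar linear output, fixed step size; the target mapping d = 2*x1 - x2 below is made up purely for illustration): the weights move along the input in proportion to the output error, minimizing squared error for linear input-output relations:

```python
# Widrow-Hoff (LMS) sketch: scalar linear output, small fixed step size.
# The target mapping d = 2*x1 - x2 is an illustrative assumption.

def lms_step(w, x, d, eta=0.1):
    y = sum(wi * xi for wi, xi in zip(w, x))   # linear unit output
    e = d - y                                  # error signal
    return [wi + eta * e * xi for wi, xi in zip(w, x)]

data = [([1.0, 0.0], 2.0), ([0.0, 1.0], -1.0), ([1.0, 1.0], 1.0)]
w = [0.0, 0.0]
for _ in range(200):                           # repeated passes
    for x, d in data:
        w = lms_step(w, x, d)
# w approaches [2, -1], the least-squares solution for this data
```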
69.

In determination of weights by learning, for orthogonal input vectors what kind of learning should be employed?
(a) hebb learning law
(b) widrow learning law
(c) hoff learning law
(d) no learning law

Answer»

Right choice is (a) hebb learning law

For explanation: For orthogonal input vectors, the Hebb learning law is best suited.
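A small sketch of why Hebb's law suits orthogonal inputs (a linear associator is assumed; the patterns below are made up for illustration): with mutually orthogonal input vectors, the Hebbian weight matrix recalls each stored output exactly, because the cross-terms between different patterns vanish:

```python
import numpy as np

# Hebbian linear associator sketch: W accumulates outer(d, a) for each
# input/output pair. With mutually orthogonal unit inputs, W @ a_k
# reproduces d_k exactly, since cross-terms between patterns vanish.
a1 = np.array([1.0, 0.0, 0.0])   # orthogonal input patterns
a2 = np.array([0.0, 1.0, 0.0])
d1 = np.array([1.0, -1.0])       # associated target outputs
d2 = np.array([-1.0, 1.0])

W = np.outer(d1, a1) + np.outer(d2, a2)   # Hebb learning law

assert np.allclose(W @ a1, d1)   # exact recall of each association
assert np.allclose(W @ a2, d2)
```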

70.

What is the feature that doesn't belong to pattern mapping in feedforward neural networks?
(a) recall is direct
(b) delta rule learning
(c) nonlinear processing units
(d) two layers

Answer»

Correct choice is (d) two layers

Explanation: It involves multiple layers.

71.

What is the feature that doesn't belong to pattern classification in feedforward neural networks?
(a) recall is direct
(b) delta rule learning
(c) nonlinear processing units
(d) two layers

Answer»

The correct option is (b) delta rule learning

To explain: It involves perceptron learning.

72.

Does pattern association involve nonlinear units in a feedforward neural network?
(a) yes
(b) no

Answer»

The correct choice is (b) no

Best explanation: There are only two layers & a single set of weights in pattern association.

73.

What is interpolative behaviour?
(a) not a type of pattern clustering task
(b) for small noise variations the pattern lying closest to the desired pattern is recalled
(c) for small noise variations a noisy pattern having parameters adjusted according to the noise variation is recalled
(d) none of the mentioned

Answer»

Correct option is (c) for small noise variations a noisy pattern having parameters adjusted according to the noise variation is recalled

For explanation: In interpolative behaviour, a pattern having parameters adjusted according to the noise variation is recalled & not the ideal one.

74.

The generalization feature of a multilayer feedforward network depends on which factors?
(a) architectural details
(b) learning rate parameter
(c) training samples
(d) all of the mentioned

Answer»

The correct option is (d) all of the mentioned

For explanation I would say: The generalization feature of a multilayer feedforward network depends on all of the above mentioned factors.

75.

What is generalization?
(a) ability to store a pattern
(b) ability to recall a pattern
(c) ability to learn a mapping function
(d) none of the mentioned

Answer» Right choice is (c) ability to learn a mapping function

Explanation: Generalization is the ability to learn a mapping function.
76.

What is accretive behaviour?
(a) not a type of pattern clustering task
(b) for small noise variations the pattern lying closest to the desired pattern is recalled
(c) for small noise variations a noisy pattern having parameters adjusted according to the noise variation is recalled
(d) none of the mentioned

Answer»

The correct option is (b) for small noise variations the pattern lying closest to the desired pattern is recalled

The best I can explain: In accretive behaviour, the pattern lying closest to the desired pattern is recalled.

77.

Is the hard learning problem ultimately solved by Hoff's algorithm?
(a) yes
(b) no

Answer»

Right option is (b) no

To explain I would say: The hard learning problem is ultimately solved by the backpropagation algorithm.

78.

In case of autoassociation by feedback nets in a pattern recognition task, what is the expected behaviour?
(a) accretive
(b) interpolative
(c) can be either accretive or interpolative
(d) none of the mentioned

Answer»

The correct option is (b) interpolative

For explanation I would say: When a noisy pattern is given, the network retrieves a noisy pattern.

79.

In order to overcome the constraint of linear separability, is the concept of a multilayer feedforward net proposed?
(a) yes
(b) no

Answer»

Right choice is (a) yes

To explain: A multilayer feedforward net with nonlinear processing units in the intermediate hidden layer is proposed.
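As a sketch of the point (the weights below are hand-set for illustration, not learned): XOR is the classic linearly inseparable function, yet a feedforward net with one hidden layer of nonlinear threshold units computes it:

```python
# XOR sketch: one hidden layer of nonlinear threshold units. Weights are
# hand-set for illustration (not learned); no single straight line can
# separate the XOR classes, but two hidden units plus an output unit can.

def step(s):
    return 1 if s >= 0 else 0   # hard nonlinearity

def xor_net(x1, x2):
    h1 = step(x1 + x2 - 0.5)    # hidden unit ~ OR
    h2 = step(x1 + x2 - 1.5)    # hidden unit ~ AND
    return step(h1 - h2 - 0.5)  # output: OR and not AND  ->  XOR

for x1 in (0, 1):
    for x2 in (0, 1):
        assert xor_net(x1, x2) == (x1 ^ x2)
```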

80.

In case of pattern storage by feedback nets in a pattern recognition task, what is the expected behaviour?
(a) accretive
(b) interpolative
(c) can be either accretive or interpolative
(d) none of the mentioned

Answer» Right answer is (a) accretive

The explanation is: Accretive behaviour is exhibited in case of the pattern storage problem.
81.

What are hard problems?
(a) classification problems which are not clearly separable
(b) classification problems which are not associatively separable
(c) classification problems which are not functionally separable
(d) none of the mentioned

Answer»

Correct option is (d) none of the mentioned

Explanation: Classification problems which are not linearly separable are known as hard problems.

82.

The network for pattern mapping is expected to perform?
(a) pattern storage
(b) pattern classification
(c) generalization
(d) none of the mentioned

Answer»

Correct choice is (c) generalization

Explanation: The network for pattern mapping is expected to perform generalization.

83.

If some of the output patterns in a pattern association problem are identical, then the problem shifts to?
(a) pattern storage problem
(b) pattern classification problem
(c) pattern mapping problem
(d) none of the mentioned

Answer»

Right answer is (b) pattern classification problem

To explain: Because then the number of distinct outputs can be viewed as class labels.

84.

Are feedforward networks used for pattern storage?
(a) yes
(b) no

Answer» Right choice is (b) no

Explanation: Feedforward networks are used for pattern mapping, pattern association and pattern classification.
85.

A competitive learning net is used for?
(a) pattern grouping
(b) pattern storage
(c) pattern grouping or storage
(d) none of the mentioned

Answer» Right answer is (a) pattern grouping

To explain: A competitive learning net is used for pattern grouping.
86.

Feedback connection strengths are usually?
(a) fixed
(b) variable
(c) both fixed or variable type
(d) none of the mentioned

Answer» Correct option is (a) fixed

The explanation is: Feedback connection strengths are usually fixed & linear to reduce complexity.
87.

Is the simplest combination network called a competitive learning network?
(a) yes
(b) no

Answer»

Right option is (a) yes

Explanation: The most basic example of a combination of feedforward & feedback networks is the competitive learning net.

88.

Feedback networks are used for?
(a) autoassociation
(b) pattern storage
(c) both autoassociation & pattern storage
(d) none of the mentioned

Answer»

Correct answer is (c) both autoassociation & pattern storage

Explanation: Feedback networks are used for autoassociation and pattern storage.

89.

Feedforward networks are used for?
(a) pattern mapping
(b) pattern association
(c) pattern classification
(d) all of the mentioned

Answer»

The correct answer is (d) all of the mentioned

Easy explanation: Feedforward networks are used for pattern mapping, pattern association and pattern classification.