Explore topic-wise InterviewSolutions.

This section includes InterviewSolutions, each offering curated multiple-choice questions to sharpen your knowledge and support exam preparation. Choose a topic below to get started.

1. Can a neural network learn and recall at the same time?
(a) yes
(b) no
Topic: Recall (Activation and Synaptic Dynamics of Neural Networks)

Answer» (a) yes

Explanation: Kosko proved this in 1988.

2. In the nearest-neighbour case, the stored pattern closest to the input pattern is recalled. Where does this occur?
(a) feedback pattern classification
(b) feedforward pattern classification
(c) can be feedback or feedforward
(d) none of the mentioned
Topic: Recall (Activation and Synaptic Dynamics of Neural Networks)

Answer» (b) feedforward pattern classification

Explanation: Nearest-neighbour recall is a case of feedforward networks.

3. What happens during recall in neural networks?
(a) weight changes are suppressed
(b) the input to the network determines the output activation
(c) both processes have to happen
(d) none of the mentioned
Topic: Recall (Activation and Synaptic Dynamics of Neural Networks)

Answer» (c) both processes have to happen

Explanation: This follows from the basic definition of recall in a network.

4. What does the third theorem describing the stability of a set of nonlinear dynamical systems show?
(a) shows the stability of fixed-weight autoassociative networks
(b) shows the stability of adaptive autoassociative networks
(c) shows the stability of adaptive heteroassociative networks
(d) none of the mentioned
Topic: Recall (Activation and Synaptic Dynamics of Neural Networks)

Answer» (c) shows the stability of adaptive heteroassociative networks

Explanation: The third theorem for nonlinear dynamical systems shows the stability of adaptive heteroassociative networks.

5. What does the Cohen-Grossberg-Kosko theorem show?
(a) shows the stability of fixed-weight autoassociative networks
(b) shows the stability of adaptive autoassociative networks
(c) shows the stability of adaptive heteroassociative networks
(d) none of the mentioned
Topic: Recall (Activation and Synaptic Dynamics of Neural Networks)

Answer» (b) shows the stability of adaptive autoassociative networks

Explanation: The Cohen-Grossberg-Kosko theorem shows the stability of adaptive autoassociative networks.

6. What does the Cohen-Grossberg theorem show?
(a) shows the stability of fixed-weight autoassociative networks
(b) shows the stability of adaptive autoassociative networks
(c) shows the stability of adaptive heteroassociative networks
(d) none of the mentioned
Topic: Recall (Activation and Synaptic Dynamics of Neural Networks)

Answer» (a) shows the stability of fixed-weight autoassociative networks

Explanation: The Cohen-Grossberg theorem shows the stability of fixed-weight autoassociative networks.

7. V(x) is said to be a Lyapunov function if?
(a) v(x) >= 0
(b) v(x) <= 0
Topic: Recall (Activation and Synaptic Dynamics of Neural Networks)

Answer» (b) v(x) <= 0

Explanation: This is the condition for existence of a Lyapunov function.

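The Lyapunov condition can be checked numerically. Below is a minimal sketch (the system, step size, and function name are my own illustration, not from the question bank): for the dynamics dx/dt = -x, V(x) = x^2 gives dV/dt = 2x(-x) = -2x^2 <= 0, so V never increases along a trajectory.

```python
# Minimal sketch: V(x) = x^2 acts as a Lyapunov (energy) function
# for the stable system dx/dt = -x.
def simulate(x0, dt=0.01, steps=1000):
    """Euler-integrate dx/dt = -x and record V(x) = x^2 along the way."""
    x, energies = x0, []
    for _ in range(steps):
        energies.append(x * x)
        x += dt * (-x)
    return energies

energies = simulate(2.0)
# V is non-increasing along the trajectory, so the origin is stable.
assert all(a >= b for a, b in zip(energies, energies[1:]))
```

Because V is scalar and decreases monotonically toward the equilibrium, the same check works for any state dimension.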
8. Is the Lyapunov function vector in nature?
(a) yes
(b) no
Topic: Recall (Activation and Synaptic Dynamics of Neural Networks)

Answer» (b) no

Explanation: A Lyapunov function is scalar in nature.

9. Is the existence of a Lyapunov function necessary for stability?
(a) yes
(b) no
Topic: Recall (Activation and Synaptic Dynamics of Neural Networks)

Answer» (b) no

Explanation: It is a sufficient but not a necessary condition.

10. What is the role of a Lyapunov function?
(a) to determine stability
(b) to determine convergence
(c) both stability & convergence
(d) none of the mentioned
Topic: Recall (Activation and Synaptic Dynamics of Neural Networks)

Answer» (a) to determine stability

Explanation: The Lyapunov function is an energy function.

11. A network will be useful only if it leads to an equilibrium state at which there is no change of state?
(a) yes
(b) no
Topic: Stability & Convergence (Activation and Synaptic Dynamics of Neural Networks)

Answer» (a) yes

Explanation: This is the basic condition for stability.

12. What is an objective of a learning law?
(a) to capture the pattern information in the training set data
(b) to modify weights so as to achieve an output close to the desired output
(c) it should lead to convergence of the system or its weights
(d) all of the mentioned
Topic: Stability & Convergence (Activation and Synaptic Dynamics of Neural Networks)

Answer» (d) all of the mentioned

Explanation: All of these are objectives of learning laws.

13. If the states of a system experience basins of attraction, what kind of stability may the system achieve?
(a) fixed-point stability
(b) oscillatory stability
(c) chaotic stability
(d) none of the mentioned
Topic: Stability & Convergence (Activation and Synaptic Dynamics of Neural Networks)

Answer» (c) chaotic stability

Explanation: Basins of attraction are a property of chaotic stability.

14. Is pattern storage possible if a system has chaotic stability?
(a) yes
(b) no
Topic: Stability & Convergence (Activation and Synaptic Dynamics of Neural Networks)

Answer» (a) yes

Explanation: Pattern storage is possible if the network exhibits fixed-point, oscillatory, or chaotic stability.

15. If the weights are not symmetric, i.e. cik != cki, then what happens?
(a) the network may exhibit periodic oscillations of states
(b) no oscillations, as it does not depend on symmetry
(c) the system is stable
(d) the system is in practical equilibrium
Topic: Stability & Convergence (Activation and Synaptic Dynamics of Neural Networks)

Answer» (a) the network may exhibit periodic oscillations of states

Explanation: In this situation the system may exhibit unwanted oscillations.

16. How many trajectories may terminate at the same equilibrium state?
(a) 1
(b) 2
(c) many
(d) none
Topic: Stability & Convergence (Activation and Synaptic Dynamics of Neural Networks)

Answer» (c) many

Explanation: Several trajectories may settle to the same equilibrium state.

17. Is stability the minimization of error between the desired & actual outputs?
(a) yes
(b) no
Topic: Stability & Convergence (Activation and Synaptic Dynamics of Neural Networks)

Answer» (b) no

Explanation: Convergence, not stability, is the minimization of error between the desired and actual outputs.

18. What leads to minimization of error between the desired & actual outputs?
(a) stability
(b) convergence
(c) either stability or convergence
(d) none of the mentioned
Topic: Stability & Convergence (Activation and Synaptic Dynamics of Neural Networks)

Answer» (b) convergence

Explanation: Convergence is responsible for minimization of error between the desired and actual outputs.

19. Does convergence refer to the equilibrium behaviour of the activation state?
(a) yes
(b) no
Topic: Stability & Convergence (Activation and Synaptic Dynamics of Neural Networks)

Answer» (b) no

Explanation: Convergence refers to the adjustment of weights during learning.

20. Does stability refer to the adjustment of weights during learning?
(a) yes
(b) no
Topic: Stability & Convergence (Activation and Synaptic Dynamics of Neural Networks)

Answer» (b) no

Explanation: Stability refers to the equilibrium behaviour of the activation state.

21. What is true for principal component learning?
(a) logical AND & OR operations are used for input-output relations
(b) weights correspond to the minimum & maximum of the units connected
(c) weights are expressed as a linear combination of orthogonal basis vectors
(d) the change in weight uses a weighted sum of changes in past input values
Topic: Learning Laws (Activation and Synaptic Dynamics of Neural Networks)

Answer» (c) weights are expressed as a linear combination of orthogonal basis vectors

Explanation: In principal component learning, the weights are expressed as a linear combination of orthogonal basis vectors.
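One well-known principal component learning law is Oja's rule. The sketch below (the data, seed, and learning rate are my own illustrative choices, not from the question bank) drives the weight vector toward the leading eigenvector of the input correlations, i.e. the dominant direction in the orthogonal eigenbasis of the data.

```python
import numpy as np

# Illustrative sketch of Oja's rule, a classic principal-component
# learning law: w <- w + eta * y * (x - y * w), with y = w.x.
rng = np.random.default_rng(0)
# Inputs whose largest variance lies along the direction (1, 1)/sqrt(2).
data = rng.normal(size=(5000, 2)) * np.array([2.0, 0.3])
rot = np.array([[1, -1], [1, 1]]) / np.sqrt(2)
data = data @ rot.T

w = rng.normal(size=2)
eta = 0.01
for x in data:
    y = w @ x                    # output of the single linear unit
    w += eta * y * (x - y * w)   # Hebbian growth with a decay term

w /= np.linalg.norm(w)
# w aligns (up to sign) with the principal direction (1, 1)/sqrt(2).
principal = np.array([1.0, 1.0]) / np.sqrt(2)
assert abs(abs(w @ principal) - 1.0) < 0.1
```

The decay term `y * y * w` keeps the weight norm bounded, which plain Hebbian learning lacks.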

22. What is true for min-max learning?
(a) logical AND & OR operations are used for input-output relations
(b) weights correspond to the minimum & maximum of the units connected
(c) weights are expressed as a linear combination of orthogonal basis vectors
(d) the change in weight uses a weighted sum of changes in past input values
Topic: Learning Laws (Activation and Synaptic Dynamics of Neural Networks)

Answer» (b) weights correspond to the minimum & maximum of the units connected

Explanation: In min-max learning, the weights correspond to the minimum and maximum of the units connected.

23. What is true for drive reinforcement learning?
(a) logical AND & OR operations are used for input-output relations
(b) weights correspond to the minimum & maximum of the units connected
(c) weights are expressed as a linear combination of orthogonal basis vectors
(d) the change in weight uses a weighted sum of changes in past input values
Topic: Learning Laws (Activation and Synaptic Dynamics of Neural Networks)

Answer» (d) the change in weight uses a weighted sum of changes in past input values

Explanation: In drive reinforcement learning, the change in weight uses a weighted sum of changes in past input values.

24. What is true for sparse encoding learning?
(a) logical AND & OR operations are used for input-output relations
(b) weights correspond to the minimum & maximum of the units connected
(c) weights are expressed as a linear combination of orthogonal basis vectors
(d) the change in weight uses a weighted sum of changes in past input values
Topic: Learning Laws (Activation and Synaptic Dynamics of Neural Networks)

Answer» (a) logical AND & OR operations are used for input-output relations

Explanation: Sparse encoding learning employs logical AND & OR operations for input-output relations.

25. Boltzmann learning uses what kind of learning?
(a) deterministic
(b) stochastic
(c) either deterministic or stochastic
(d) none of the mentioned
Topic: Learning Laws (Activation and Synaptic Dynamics of Neural Networks)

Answer» (b) stochastic

Explanation: Boltzmann learning uses stochastic learning.

26. What is temporal credit assignment?
(a) the reinforcement signal given to an input-output pair does not change with time
(b) the input-output pair determines the probability of positive reinforcement
(c) the input pattern depends on past history
(d) none of the mentioned
Topic: Learning Laws (Activation and Synaptic Dynamics of Neural Networks)

Answer» (c) the input pattern depends on past history

Explanation: In temporal credit assignment, the input pattern depends on past history.

27. What is probabilistic credit assignment?
(a) the reinforcement signal given to an input-output pair does not change with time
(b) the input-output pair determines the probability of positive reinforcement
(c) the input pattern depends on past history
(d) none of the mentioned
Topic: Learning Laws (Activation and Synaptic Dynamics of Neural Networks)

Answer» (b) the input-output pair determines the probability of positive reinforcement

Explanation: In probabilistic credit assignment, the input-output pair determines the probability of positive reinforcement.

28. What is fixed credit assignment?
(a) the reinforcement signal given to an input-output pair does not change with time
(b) the input-output pair determines the probability of positive reinforcement
(c) the input pattern depends on past history
(d) none of the mentioned
Topic: Learning Laws (Activation and Synaptic Dynamics of Neural Networks)

Answer» (a) the reinforcement signal given to an input-output pair does not change with time

Explanation: In fixed credit assignment, the reinforcement signal given to an input-output pair does not change with time.

29. How many types of reinforcement learning exist?
(a) 2
(b) 3
(c) 4
(d) 5
Topic: Learning Laws (Activation and Synaptic Dynamics of Neural Networks)

Answer» (b) 3

Explanation: Fixed credit assignment, probabilistic credit assignment, and temporal credit assignment.

30. Is reinforcement learning also known as learning with a critic?
(a) yes
(b) no
Topic: Learning Laws (Activation and Synaptic Dynamics of Neural Networks)

Answer» (a) yes

Explanation: Because the feedback is evaluative, not instructive.

31. What is reinforcement learning?
(a) learning based on an evaluative signal
(b) learning based on the desired output for an input
(c) learning based on both the desired output & an evaluative signal
(d) none of the mentioned
Topic: Learning Laws (Activation and Synaptic Dynamics of Neural Networks)

Answer» (a) learning based on an evaluative signal

Explanation: Reinforcement learning is based on an evaluative signal.

32. Error correction learning is a type of?
(a) supervised learning
(b) unsupervised learning
(c) can be supervised or unsupervised
(d) none of the mentioned
Topic: Learning Laws (Activation and Synaptic Dynamics of Neural Networks)

Answer» (a) supervised learning

Explanation: Because the desired output for an input is known.

33. Is error correction learning like learning with a teacher?
(a) yes
(b) no
Topic: Learning Laws (Activation and Synaptic Dynamics of Neural Networks)

Answer» (a) yes

Explanation: Because the desired output for an input is known.

34. Is Widrow's LMS algorithm also based on error correction learning?
(a) yes
(b) no
Topic: Learning Laws (Activation and Synaptic Dynamics of Neural Networks)

Answer» (a) yes

Explanation: It uses the instantaneous squared error between the desired and actual output of a unit.

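The Widrow-Hoff LMS update can be sketched in a few lines. Everything below (target weights, learning rate, seed) is my own illustrative choice, not from the source; the point is that each step corrects the weights using the instantaneous error between desired and actual output.

```python
import numpy as np

# LMS (Widrow-Hoff) sketch: w <- w + eta * (d - y) * x,
# where d is the desired output and y = w.x the actual output.
rng = np.random.default_rng(1)
true_w = np.array([2.0, -3.0, 0.5])   # hypothetical target weights

w = np.zeros(3)
eta = 0.05
for _ in range(2000):
    x = rng.normal(size=3)
    d = true_w @ x          # desired output (noise-free for clarity)
    y = w @ x               # actual output of the linear unit
    w += eta * (d - y) * x  # error-correction update

assert np.allclose(w, true_w, atol=1e-2)
```

Since the targets are noise-free, the weight error shrinks geometrically toward zero; with noisy targets it would instead settle into a small neighbourhood of `true_w`.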
35. Is continuous perceptron learning also known as delta learning?
(a) yes
(b) no
Topic: Learning Laws (Activation and Synaptic Dynamics of Neural Networks)

Answer» (a) yes

Explanation: This follows from the basic definition of delta learning.

36. What is error correction learning?
(a) learning laws which modulate the difference between synaptic weight & output signal
(b) learning laws which modulate the difference between synaptic weight & activation value
(c) learning laws which modulate the difference between actual output & desired output
(d) none of the mentioned
Topic: Learning Laws (Activation and Synaptic Dynamics of Neural Networks)

Answer» (c) learning laws which modulate the difference between actual output & desired output

Explanation: Error correction learning is based on the difference between the actual output and the desired output.

37. What is differential competitive learning?
(a) synaptic strength is proportional to changes of post- & presynaptic neurons
(b) synaptic strength is proportional to changes of the postsynaptic neuron only
(c) synaptic strength is proportional to changes of the presynaptic neuron only
(d) none of the mentioned
Topic: Learning Laws (Activation and Synaptic Dynamics of Neural Networks)

Answer» (d) none of the mentioned

Explanation: In differential competitive learning the weight change depends on the change in the postsynaptic neuron's activity combined with the presynaptic signal, which none of the listed options states exactly.

38. What is Hebbian learning?
(a) synaptic strength is proportional to the correlation between firing of post- & presynaptic neurons
(b) synaptic strength is proportional to the correlation between firing of the postsynaptic neuron only
(c) synaptic strength is proportional to the correlation between firing of the presynaptic neuron only
(d) none of the mentioned
Topic: Learning Laws (Activation and Synaptic Dynamics of Neural Networks)

Answer» (a) synaptic strength is proportional to the correlation between firing of post- & presynaptic neurons

Explanation: This follows from the basic definition of Hebbian learning.

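A minimal sketch of the plain Hebbian rule (the signals, seed, and rate below are my own illustrative choices): the weight change is the product of presynaptic and postsynaptic activity, so over many patterns each weight grows in proportion to the correlation between the two.

```python
import numpy as np

# Hebbian sketch: dw_i = eta * y * x_i, where y is the postsynaptic
# activity and x_i the presynaptic activity on connection i.
rng = np.random.default_rng(2)
x1 = rng.normal(size=10000)
x2 = 0.9 * x1 + 0.1 * rng.normal(size=10000)   # strongly correlated with x1
x3 = rng.normal(size=10000)                    # uncorrelated with x1

eta = 1e-4
w = np.zeros(3)
for pre in zip(x1, x2, x3):
    y = pre[0]                    # take x1 as the postsynaptic activity
    w += eta * y * np.array(pre)  # Hebbian update for all three synapses

# Weights mirror the correlations: w[0] and w[1] grow, w[2] stays small.
assert w[0] > 5 * abs(w[2]) and w[1] > 5 * abs(w[2])
```

Note that plain Hebbian growth is unbounded; stabilised variants (e.g. Oja's rule) add a decay term.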
39. What is competitive learning?
(a) learning laws which modulate the difference between synaptic weight & output signal
(b) learning laws which modulate the difference between synaptic weight & activation value
(c) learning laws which modulate the difference between actual output & desired output
(d) none of the mentioned
Topic: Learning Laws (Activation and Synaptic Dynamics of Neural Networks)

Answer» (a) learning laws which modulate the difference between synaptic weight & output signal

Explanation: Competitive learning laws modulate the difference between the synaptic weight and the output signal.
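A simple winner-take-all sketch of competitive learning (the cluster centres, seed, and rate are my own illustrative choices): only the winning unit updates, and the update is driven by the difference between the input signal and the unit's weight vector.

```python
import numpy as np

# Competitive learning sketch: the closest unit wins and moves
# toward the input, w_winner <- w_winner + eta * (x - w_winner).
rng = np.random.default_rng(3)
centres = np.array([[0.0, 0.0], [5.0, 5.0]])
data = np.concatenate([c + 0.2 * rng.normal(size=(500, 2)) for c in centres])
rng.shuffle(data)

w = rng.normal(size=(2, 2))   # one weight vector per competing unit
eta = 0.1
for x in data:
    winner = np.argmin(np.linalg.norm(w - x, axis=1))  # closest unit wins
    w[winner] += eta * (x - w[winner])                 # weight-input difference

# Each unit settles near one of the two cluster centres.
dists = [min(np.linalg.norm(w - c, axis=1)) for c in centres]
assert max(dists) < 0.5
```

Because the losing units never move, competitive learning partitions the input space among the units, which is the basis of vector quantisation.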

40. What is differential Hebbian learning?
(a) synaptic strength is proportional to the correlation between firing of post- & presynaptic neurons
(b) synaptic strength is proportional to the correlation between firing of the postsynaptic neuron only
(c) synaptic strength is proportional to the correlation between firing of the presynaptic neuron only
(d) synaptic strength is proportional to changes in the correlation between firing of post- & presynaptic neurons
Topic: Learning Laws (Activation and Synaptic Dynamics of Neural Networks)

Answer» (d) synaptic strength is proportional to changes in the correlation between firing of post- & presynaptic neurons

Explanation: Differential Hebbian learning is proportional to changes in the correlation between the firing of the post- and presynaptic neurons.

41. What does the term wij(0) represent in the synaptic dynamics model?
(a) a priori knowledge
(b) just a constant
(c) no strong significance
(d) future adjustments
Topic: Learning Basics (Activation and Synaptic Dynamics of Neural Networks)

Answer» (a) a priori knowledge

Explanation: Refer to the weight equation of the synaptic dynamics model: wij(0) is the initial weight, representing a priori knowledge.

42. Are adjustments in activation slower than those of synaptic weights?
(a) yes
(b) no
Topic: Learning Basics (Activation and Synaptic Dynamics of Neural Networks)

Answer» (b) no

Explanation: Adjustments in activation are faster than those of synaptic weights.

43. What is the nature of the input in activation dynamics?
(a) static
(b) dynamic
(c) both static & dynamic
(d) none of the mentioned
Topic: Learning Basics (Activation and Synaptic Dynamics of Neural Networks)

Answer» (a) static

Explanation: The input is fixed throughout the dynamics.

44. Does online learning allow the network to adjust its weights incrementally and continuously?
(a) yes
(b) no
Topic: Learning Basics (Activation and Synaptic Dynamics of Neural Networks)

Answer» (a) yes

Explanation: This follows from the basic definition of online learning.

45. Can learning methods only be online?
(a) yes
(b) no
Topic: Learning Basics (Activation and Synaptic Dynamics of Neural Networks)

Answer» (b) no

Explanation: Learning can be offline too.

46. What is unsupervised learning?
(a) weight adjustment based on deviation of the desired output from the actual output
(b) weight adjustment based on the desired output only
(c) weight adjustment based on local information available to the weights
(d) none of the mentioned
Topic: Learning Basics (Activation and Synaptic Dynamics of Neural Networks)

Answer» (c) weight adjustment based on local information available to the weights

Explanation: Unsupervised learning is based purely on local information available to the weights.

47. What is structural learning?
(a) concerned with capturing the input-output relationship in patterns
(b) concerned with capturing weight relationships
(c) both weight & input-output relationships
(d) none of the mentioned
Topic: Learning Basics (Activation and Synaptic Dynamics of Neural Networks)

Answer» (a) concerned with capturing the input-output relationship in patterns

Explanation: Structural learning deals with learning the overall structure of the pattern mapping, viewed macroscopically.

48. What is temporal learning?
(a) concerned with capturing the input-output relationship in patterns
(b) concerned with capturing weight relationships
(c) both weight & input-output relationships
(d) none of the mentioned
Topic: Learning Basics (Activation and Synaptic Dynamics of Neural Networks)

Answer» (b) concerned with capturing weight relationships

Explanation: Temporal learning is concerned with capturing weight relationships.

49. Supervised learning may be used for?
(a) temporal learning
(b) structural learning
(c) both temporal & structural learning
(d) none of the mentioned
Topic: Learning Basics (Activation and Synaptic Dynamics of Neural Networks)

Answer» (c) both temporal & structural learning

Explanation: Supervised learning may be used for both temporal and structural learning.

50. What is supervised learning?
(a) weight adjustment based on deviation of the desired output from the actual output
(b) weight adjustment based on the desired output only
(c) weight adjustment based on the actual output only
(d) none of the mentioned
Topic: Learning Basics (Activation and Synaptic Dynamics of Neural Networks)

Answer» (a) weight adjustment based on deviation of the desired output from the actual output

Explanation: Supervised learning adjusts the weights based on the deviation of the desired output from the actual output.