This section presents curated multiple-choice questions on the basics of artificial neural networks (learning laws, network topology, and neuron models) to sharpen your knowledge and support exam preparation.

1.

In Hebbian learning, initial weights are set?
(a) random
(b) near to zero
(c) near to target value
(d) near to target value

Answer»

The correct option is (b) near to zero

Explanation: Hebb's law leads to a sum of correlations between input & output; to achieve this, the starting initial weight values must be small.
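To make the law concrete, here is a minimal sketch of one Hebbian update; the learning rate and vectors are illustrative values, not taken from the question, and the output function is assumed linear:

```python
import numpy as np

# Hebbian updates accumulate input-output correlations, so the initial
# weights are set near zero to let those correlations dominate.
mu = 0.1                              # learning rate (assumed value)
w0 = np.array([0.01, -0.02, 0.005])   # initial weights, near zero
a = np.array([1.0, 0.5, -0.5])        # input vector

s = float(np.dot(w0, a))              # output signal s = f(w.a), f linear here
w = w0 + mu * s * a                   # Hebb's law: delta_w = mu * s * a
```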

2.

Which of the following learning laws belong to the same category of learning?
(a) hebbian, perceptron
(b) perceptron, delta
(c) hebbian, widrow-hoff
(d) instar, outstar

Answer» The right option is (b) perceptron, delta

Explanation: Both belong to the supervised type of learning.
3.

The outstar learning law can be represented by which equation?
(a) ∆wjk = µ(bj – wjk), where the kth unit is the only active unit in the input layer
(b) ∆wij = µ(bi – si) aj
(c) ∆wij = µ(bi – si) f′(xi) aj, where f′(xi) is the derivative of the output function at xi
(d) ∆wij = µ(si) aj

Answer»

Correct answer is (a) ∆wjk = µ(bj – wjk), where the kth unit is the only active unit in the input layer

Explanation: Follows from the basic definition of the outstar learning law.

4.

Is outstar a case of supervised learning?
(a) yes
(b) no

Answer»

The right answer is (a) yes

Explanation: Since the weight adjustment depends on the target output, it is supervised learning.

5.

The correlation learning law can be represented by which equation?
(a) ∆wij = µ(si) aj
(b) ∆wij = µ(bi – si) aj
(c) ∆wij = µ(bi – si) f′(xi) aj, where f′(xi) is the derivative of the output function at xi
(d) ∆wij = µ bi aj

Answer» The right choice is (d) ∆wij = µ bi aj

Explanation: The correlation learning law depends on the target output bi.
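The law above is just an outer product of targets and inputs; here is a small sketch with illustrative values (µ, b, and a are assumptions, not from the question):

```python
import numpy as np

# Correlation law: the weight change pairs target output b_i with input a_j.
# It needs no actual output signal, only the targets (hence supervised).
mu = 0.05
b = np.array([1.0, -1.0])        # target outputs b_i
a = np.array([0.5, 1.0, -0.5])   # input activations a_j

delta_w = mu * np.outer(b, a)    # delta_w[i, j] = mu * b[i] * a[j]
```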
6.

The instar learning law can be represented by which equation?
(a) ∆wij = µ(si) aj
(b) ∆wij = µ(bi – si) aj
(c) ∆wij = µ(bi – si) f′(xi) aj, where f′(xi) is the derivative of the output function at xi
(d) ∆wk = µ(a – wk), where unit k with the maximum output is identified

Answer»

Correct answer is (d) ∆wk = µ(a – wk), where unit k with the maximum output is identified

Explanation: Follows from the basic definition of the instar learning law.

7.

What is the other name for the instar learning law?
(a) loser take it all
(b) winner take it all
(c) winner give it all
(d) loser give it all

Answer»

The correct answer is (b) winner take it all

Explanation: The weight is adjusted only for the unit that gives the maximum output.
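A minimal winner-take-all sketch of the instar law (the weight matrix, input, and learning rate are illustrative assumptions): only the unit with the maximum output moves its weight vector toward the input.

```python
import numpy as np

mu = 0.5
a = np.array([0.9, 0.1])          # input vector
W = np.array([[0.2, 0.8],         # row k = weight vector of unit k
              [0.7, 0.3]])

# Winner-take-all: identify the unit with maximum output s_k = w_k . a,
# then apply the instar law delta_w_k = mu * (a - w_k) to that unit only.
k = int(np.argmax(W @ a))
W[k] = W[k] + mu * (a - W[k])
```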

8.

Is instar a case of supervised learning?
(a) yes
(b) no

Answer» The right choice is (b) no

Explanation: Since the weight adjustment doesn't depend on a target output, it is unsupervised learning.
9.

The correlation learning law is what type of learning?
(a) supervised
(b) unsupervised
(c) either supervised or unsupervised
(d) both supervised and unsupervised

Answer» The correct choice is (a) supervised

Explanation: Supervised, since it depends on the target output.
10.

The correlation learning law is a special case of?
(a) Hebb learning law
(b) Perceptron learning law
(c) Delta learning law
(d) LMS learning law

Answer»

The correct choice is (a) Hebb learning law

Explanation: The output si in Hebb's law is replaced by bi (the target output) in the correlation law.

11.

Which of the following equations represents the perceptron learning law?
(a) ∆wij = µ(si) aj
(b) ∆wij = µ(bi – si) aj
(c) ∆wij = µ(bi – si) f′(xi) aj, where f′(xi) is the derivative of the output function at xi
(d) ∆wij = µ(bi – (wi a)) aj

Answer»

The right choice is (b) ∆wij = µ(bi – si) aj

Explanation: The perceptron learning law is a supervised, nonlinear type of learning.
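One update under this law can be sketched as follows; the weights, input, target, and learning rate are illustrative assumptions, with the nonlinearity taken as the sign function:

```python
import numpy as np

def sign(x):
    # hard-limiting nonlinearity used as the perceptron output function
    return 1.0 if x >= 0 else -1.0

mu = 0.2
w = np.array([0.0, 0.0])
a = np.array([1.0, -1.0])     # input vector
b = -1.0                      # desired (target) output -- supervised

s = sign(np.dot(w, a))        # actual output through the nonlinear unit
w = w + mu * (b - s) * a      # perceptron law: delta_w = mu * (b - s) * a
s_after = sign(np.dot(w, a))  # output after the single update
```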

12.

What is the other name of the Widrow & Hoff learning law?
(a) Hebb
(b) LMS
(c) MMS
(d) None of the mentioned

Answer»

The correct answer is (b) LMS

Explanation: LMS stands for least mean square. The change in weight is made proportional to the negative gradient of the error, and the output function is linear.

13.

The Widrow & Hoff learning law is a special case of?
(a) hebb learning law
(b) perceptron learning law
(c) delta learning law
(d) none of the mentioned

Answer»

The right choice is (c) delta learning law

Explanation: The output function in this law is assumed to be linear; all other things remain the same.
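The specialization above can be shown in one step; values here are illustrative assumptions. In the delta law ∆w = µ(b – s) f′(x) a, a linear output function f(x) = x gives f′(x) = 1, which leaves exactly the Widrow-Hoff / LMS rule:

```python
import numpy as np

mu = 0.1
w = np.array([0.5, -0.5])
a = np.array([1.0, 2.0])
b = 2.0                       # target output

s = float(np.dot(w, a))       # linear output: s = f(x) = x = w . a
w = w + mu * (b - s) * a      # LMS step, i.e. delta law with f'(x) = 1
```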

14.

Is delta learning of unsupervised type?
(a) yes
(b) no

Answer» The correct choice is (b) no

Explanation: The change in weight is based on the error between the desired & the actual output values for a given input, so a desired output is required.
15.

Hebb's law can be represented by which equation?
(a) ∆wij = µ f(wi a) aj
(b) ∆wij = µ(si) aj, where si is the output signal of the ith unit
(c) both ways
(d) none of the mentioned

Answer»

The correct option is (c) both ways

Explanation: si = f(wi a) in Hebb's law, so the two forms are equivalent.

16.

If the change in weight vector is represented by ∆wij, what does it mean?
(a) it describes the change in the weight vector for the ith processing unit, taking the jth input into account
(b) it describes the change in the weight vector for the jth processing unit, taking the ith input into account
(c) it describes the change in the weight vector for both the jth & ith processing units
(d) none of the mentioned

Answer»

The correct answer is (a) it describes the change in the weight vector for the ith processing unit, taking the jth input into account

Explanation: ∆wij = µ f(wi a) aj, where a is the input vector.

17.

Which of the following statements hold for the perceptron learning law?
(a) it is a supervised type of learning law
(b) it requires the desired output for each input
(c) ∆wij = µ(bi – si) aj
(d) all of the mentioned

Answer»

The correct option is (d) all of the mentioned

Explanation: All statements follow from ∆wij = µ(bi – si) aj, where bi is the target output; hence it is supervised learning.

18.

Is Hebb's law supervised learning or of unsupervised type?
(a) supervised
(b) unsupervised
(c) either supervised or unsupervised
(d) can be both supervised & unsupervised

Answer» The correct choice is (b) unsupervised

Explanation: No desired output is required for its implementation.
19.

What is the learning signal in the equation ∆wij = µ f(wi a) aj?
(a) µ
(b) wi a
(c) aj
(d) f(wi a)

Answer»

The right option is (d) f(wi a)

Explanation: This is the nonlinear representation of the output of the network.

20.

On what parameters can the change in weight vector depend?
(a) learning parameters
(b) input vector
(c) learning signal
(d) all of the mentioned

Answer»

The right answer is (d) all of the mentioned

Explanation: The change in the weight vector corresponding to the jth input at time (t+1) depends on all of these parameters.

21.

What does LTM correspond to?
(a) activation state of network
(b) encoded pattern information in synaptic weights
(c) either way
(d) both ways

Answer»

The correct choice is (b) encoded pattern information in synaptic weights

Explanation: Long-term memory (LTM) is the encoding and retention of an effectively unlimited amount of information for a much longer period of time; hence the option.

22.

What is STM in a neural network?
(a) short topology memory
(b) stimulated topology memory
(c) short term memory
(d) none of the mentioned

Answer»

The correct option is (c) short term memory

Explanation: Full form of STM.

23.

What does STM correspond to?
(a) activation state of network
(b) encoded pattern information in synaptic weights
(c) either way
(d) both ways

Answer»

The correct option is (a) activation state of network

Explanation: Short-term memory (STM) refers to the capacity-limited retention of information over a brief period of time; hence the option.

24.

Heteroassociative memory can be an example of which type of network?
(a) group of instars
(b) group of outstars
(c) either a group of instars or outstars
(d) both a group of instars and outstars

Answer»

The correct answer is (c) either a group of instars or outstars

Explanation: Depending upon the direction of flow, the memory can be of either type.

25.

If two layers coincide & the weights are symmetric (wij = wji), then what is that structure called?
(a) instar
(b) outstar
(c) autoassociative memory
(d) heteroassociative memory

Answer»

The right answer is (c) autoassociative memory

Explanation: In autoassociative memory each unit is connected to every other unit & to itself.
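A small sketch of how such symmetric weights can arise (a Hebbian outer-product storage step, with an illustrative pattern; this is an assumption of the sketch, not part of the question): storing a bipolar pattern over a single layer yields wij = wji automatically, with every unit connected to every unit including itself.

```python
import numpy as np

# Store one bipolar pattern by a Hebbian outer product; the resulting
# weight matrix is symmetric, which is the autoassociative structure above.
p = np.array([1.0, -1.0, 1.0])
W = np.outer(p, p)     # W[i, j] = p[i] * p[j], so W[i, j] == W[j, i]
```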

26.

The operation of an outstar can be viewed as?
(a) content addressing the memory
(b) memory addressing the content
(c) either content addressing or memory addressing
(d) both content & memory addressing

Answer» The correct answer is (b) memory addressing the content

Explanation: In an outstar, the weight vector for connections from the jth unit (say) in F2 approaches the activity pattern in F1 (which comprises the input vector).
27.

The operation of an instar can be viewed as?
(a) content addressing the memory
(b) memory addressing the content
(c) either content addressing or memory addressing
(d) both content & memory addressing

Answer» The right choice is (a) content addressing the memory

Explanation: In an instar, when input is given to layer F1, the jth (say) unit of the other layer F2 is activated to the maximum extent.
28.

What is an outstar topology?
(a) when input is given to layer F1, the jth (say) unit of the other layer F2 will be activated to the maximum extent
(b) when the weight vector for connections from the jth unit (say) in F2 approaches the activity pattern in F1 (which comprises the input vector)
(c) can be either way
(d) none of the mentioned

Answer» The correct option is (b) when the weight vector for connections from the jth unit (say) in F2 approaches the activity pattern in F1 (which comprises the input vector)

Explanation: Restatement of the basic definition of an outstar.
29.

What is an instar topology?
(a) when input is given to layer F1, the jth (say) unit of the other layer F2 will be activated to the maximum extent
(b) when the weight vector for connections from the jth unit (say) in F2 approaches the activity pattern in F1 (which comprises the input vector)
(c) can be either way
(d) none of the mentioned

Answer»

The correct option is (a) when input is given to layer F1, the jth (say) unit of the other layer F2 will be activated to the maximum extent

Explanation: Restatement of the basic definition of an instar.

30.

How can connections across the layers in standard topologies & among the units within a layer be organised?
(a) in a feedforward manner
(b) in a feedback manner
(c) both feedforward & feedback
(d) either feedforward or feedback

Answer»

The correct option is (d) either feedforward or feedback

Explanation: Connections across the layers in standard topologies can be in a feedforward manner or in a feedback manner, but not both.

31.

In neural networks, how can connections between different layers be achieved?
(a) interlayer
(b) intralayer
(c) both interlayer and intralayer
(d) either interlayer or intralayer

Answer»

The correct answer is (c) both interlayer and intralayer

Explanation: Connections can be made from one unit to another across layers (interlayer) and within the units of a layer (intralayer).

32.

What is another name for the weight update rule in the adaline model, based on its functionality?
(a) LMS error learning law
(b) gradient descent algorithm
(c) both LMS error & gradient descent learning law
(d) none of the mentioned

Answer»

The correct answer is (c) both LMS error & gradient descent learning law

Explanation: The weight update rule minimizes the mean squared error (delta squared), averaged over all inputs, and is derived using the negative gradient of the error surface in weight space; hence options (a) & (b) both apply.

33.

In the adaline model, what is the relation between the output & the activation value (x)?
(a) linear
(b) nonlinear
(c) can be either linear or non-linear
(d) none of the mentioned

Answer» The right option is (a) linear

Explanation: s, the output, = f(x) = x; hence it is a linear model.
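Putting the last few questions together, here is a minimal adaline training sketch; the data, targets, and learning rate are made-up illustrative values. The output equals the activation value (linear), and LMS gradient descent drives the weights toward the rule that generated the targets:

```python
import numpy as np

X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
t = X @ np.array([2.0, -1.0]) + 0.5   # targets from a known linear rule

w = np.zeros(2)
bias = 0.0
mu = 0.1
for _ in range(500):
    for a, b in zip(X, t):
        x = float(np.dot(w, a)) + bias  # activation value
        s = x                           # linear: output == activation value
        w = w + mu * (b - s) * a        # LMS step (negative error gradient)
        bias = bias + mu * (b - s)
```

After training, the weights recover the generating rule, which is the sense in which the update minimizes the mean squared error.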
34.

What was the main point of difference between the adaline & perceptron models?
(a) weights are compared with output
(b) the sensory units' result is compared with output
(c) the analog activation value is compared with output
(d) all of the mentioned

Answer» The correct option is (c) the analog activation value is compared with output

Explanation: Comparing the analog activation value with the target, instead of the thresholded output as in the perceptron model, was the main point of difference between the adaline & perceptron models.
35.

Who invented the adaline neural model?
(a) Rosenblatt
(b) Hopfield
(c) Werbos
(d) Widrow

Answer»

The right choice is (d) Widrow

Explanation: Widrow invented the adaline neural model.

36.

If a(i) is the input, ^ is the error, and n is the learning parameter, then how can the weight change in a perceptron model be represented?
(a) na(i)
(b) n^
(c) ^a(i)
(d) none of the mentioned

Answer»

The correct choice is (d) none of the mentioned

Explanation: The correct expression is n^a(i).

37.

What is adaline in neural networks?
(a) adaptive linear element
(b) automatic linear element
(c) adaptive line element
(d) none of the mentioned

Answer»

The correct option is (a) adaptive linear element

Explanation: Adaptive linear element is the full form of the adaline neural model.

38.

What is delta (error) in the perceptron model of a neuron?
(a) error due to environmental conditions
(b) difference between desired & target output
(c) can be due both to the difference in target output and to environmental conditions
(d) none of the mentioned

Answer»

Correct answer is (b) difference between desired & target output

Explanation: All other parameters are assumed to be null while calculating the error in the perceptron model; only the difference between the desired & target output is taken into account.

39.

What was the main deviation in the perceptron model from the MP model?
(a) more inputs can be incorporated
(b) learning enabled
(c) all of the mentioned
(d) none of the mentioned

Answer»

The right answer is (b) learning enabled

Explanation: The weights in the perceptron model are adjustable.

40.

What was the 2nd stage in the perceptron model called?
(a) sensory units
(b) summing unit
(c) association unit
(d) output unit

Answer»

The correct choice is (c) association unit

Explanation: This was the very speciality of the perceptron model: it performs association mapping on the outputs of the sensory units.

41.

Does the McCulloch-Pitts model have the ability to learn?
(a) yes
(b) no

Answer»

The correct answer is (b) no

Explanation: The weights are fixed.

42.

Who invented perceptron neural networks?
(a) McCulloch-Pitts
(b) Widrow
(c) Minsky & Papert
(d) Rosenblatt

Answer»

The correct choice is (d) Rosenblatt

Explanation: The perceptron is one of the earliest neural networks. Invented at the Cornell Aeronautical Laboratory in 1957 by Frank Rosenblatt, the perceptron was an attempt to understand human memory, learning, and cognitive processes.

43.

When both inputs are different, what will be the logical output of the figure of question 4?
(a) 0
(b) 1
(c) either 0 or 1
(d) z

Answer» The right option is (a) 0

Explanation: Check the truth table of a nor gate.
44.

When both inputs are 1, what will be the output of the Pitts model nand gate?
(a) 0
(b) 1
(c) either 0 or 1
(d) z

Answer» The right option is (a) 0

Explanation: Check the truth table of a nand gate.
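Since the figure itself is not reproduced here, this is a sketch of one common McCulloch-Pitts realization of a NAND unit (the particular weights and threshold are assumptions of the sketch): fixed weights, a fixed threshold, and a unit that fires (outputs 1) when the weighted sum reaches the threshold.

```python
def mp_neuron(inputs, weights, threshold):
    # McCulloch-Pitts unit: fixed weights, no learning; fires when the
    # weighted sum of the binary inputs reaches the threshold.
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

def nand(x1, x2):
    # inhibitory (negative) weights with threshold -1: the unit fires
    # unless both inputs are 1, which is exactly NAND.
    return mp_neuron((x1, x2), weights=(-1, -1), threshold=-1)
```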
45.

Which of the following models has the ability to learn?
(a) pitts model
(b) rosenblatt perceptron model
(c) both rosenblatt and pitts models
(d) neither rosenblatt nor pitts

Answer» The right choice is (b) rosenblatt perceptron model

Explanation: Weights are fixed in the Pitts model but adjustable in Rosenblatt's.
46.

When both inputs are different, what will be the output of the above figure?
(a) 0
(b) 1
(c) either 0 or 1
(d) z

Answer»

The correct answer is (a) 0

Explanation: Check the truth table of a nor gate.

47.

When both inputs are 1, what will be the output of the above figure?
(a) 0
(b) 1
(c) either 0 or 1
(d) z

Answer» The right choice is (a) 0

Explanation: Check the truth table of a nor gate.
48.

If ‘b’ in the figure below is the bias, then what logic circuit does it represent?
(a) or gate
(b) and gate
(c) nor gate
(d) nand gate

Answer»

The correct option is (c) nor gate

Explanation: Form the truth table of the above figure by taking the inputs as 0 or 1.

49.

What does the character ‘b’ represent in the above diagram?
(a) bias
(b) any constant value
(c) a variable value
(d) none of the mentioned

Answer» The correct option is (a) bias

Explanation: The more appropriate choice, since the bias is a constant fixed value for any circuit model.
50.

What is the nature of the function F(x) in the figure?
(a) linear
(b) non-linear
(c) can be either linear or non-linear
(d) none of the mentioned

Answer» The right answer is (b) non-linear

Explanation: In this function, the independent variable appears as an exponent in the equation, hence it is non-linear.
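Since the figure is not reproduced here, one common output function with exactly this shape, where x sits in an exponent, is the logistic sigmoid; the choice of sigmoid is an assumption of this sketch, not confirmed by the figure.

```python
import math

def f(x):
    # logistic sigmoid: x appears in the exponent, so f is non-linear
    # (doubling the input does not double the output)
    return 1.0 / (1.0 + math.exp(-x))
```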