Interview Solutions

This section collects curated multiple-choice questions on the basics of artificial neural networks — learning laws, network topology, and neuron models — to sharpen your knowledge and support exam preparation.
1. In Hebbian learning, how are the initial weights set?
(a) random  (b) near to zero  (c) near to target value  (d) near to target value

Answer» (b) near to zero

2. Which of the following learning laws belong to the same category of learning?
(a) hebbian, perceptron  (b) perceptron, delta  (c) hebbian, widrow-hoff  (d) instar, outstar

Answer» (b) perceptron, delta
Explanation: Both belong to the supervised type of learning.

3. The outstar learning law can be represented by which equation?
(a) ∆wjk = µ(bj – wjk), where the kth unit is the only active unit in the input layer
(b) ∆wij = µ(bi – si) aj
(c) ∆wij = µ(bi – si) f′(xi) aj, where f′(xi) is the derivative of the output function at xi
(d) ∆wij = µ(si) aj

Answer» (a) ∆wjk = µ(bj – wjk), where the kth unit is the only active unit in the input layer

4. Is outstar a case of supervised learning?
(a) yes  (b) no

Answer» (a) yes

5. The correlation learning law can be represented by which equation?
(a) ∆wij = µ(si) aj
(b) ∆wij = µ(bi – si) aj
(c) ∆wij = µ(bi – si) f′(xi) aj, where f′(xi) is the derivative of the output function at xi
(d) ∆wij = µ bi aj

Answer» (d) ∆wij = µ bi aj
Explanation: The correlation learning law depends on the target output (bi).

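As a minimal sketch (not from the source), the correlation law from question 5 can be written out in code. The function name `correlation_update` is my own; `mu` is the learning rate, `b` the target-output vector, and `a` the input vector. The form is identical to Hebb's law except that the target output bi replaces the actual output si, which is why later questions call it supervised and a special case of Hebb's law.

```python
def correlation_update(w, b, a, mu=0.1):
    """Correlation law sketch: delta_w_ij = mu * b_i * a_j.

    w is a weight matrix (rows = output units, cols = inputs),
    b the target outputs, a the inputs. Names are illustrative only.
    """
    return [[w[i][j] + mu * b[i] * a[j] for j in range(len(a))]
            for i in range(len(b))]
```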
6. The instar learning law can be represented by which equation?
(a) ∆wij = µ(si) aj
(b) ∆wij = µ(bi – si) aj
(c) ∆wij = µ(bi – si) f′(xi) aj, where f′(xi) is the derivative of the output function at xi
(d) ∆wk = µ(a – wk), where the unit k with maximum output is identified

Answer» (d) ∆wk = µ(a – wk), where the unit k with maximum output is identified

7. What is the other name for the instar learning law?
(a) loser take it all  (b) winner take it all  (c) winner give it all  (d) loser give it all

Answer» (b) winner take it all

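A minimal sketch (my own illustration, with hypothetical names) of the instar / winner-take-all rule from questions 6 and 7: the unit k with the largest output wk · a is identified, and only its weight vector moves toward the input, ∆wk = µ(a – wk).

```python
def instar_update(W, a, mu=0.2):
    """Winner-take-all (instar) sketch.

    W is a list of weight vectors (one per unit), a the input vector.
    Only the winning unit's weights move toward the input.
    """
    outputs = [sum(wj * aj for wj, aj in zip(wk, a)) for wk in W]
    k = outputs.index(max(outputs))                       # winning unit
    W[k] = [wj + mu * (aj - wj) for wj, aj in zip(W[k], a)]
    return k, W
```

Note that no target output appears anywhere, which is exactly why question 8 classifies instar as unsupervised.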
8. Is instar a case of supervised learning?
(a) yes  (b) no

Answer» (b) no
Explanation: Since the weight adjustment does not depend on the target output, it is unsupervised learning.

9. The correlation learning law is what type of learning?
(a) supervised  (b) unsupervised  (c) either supervised or unsupervised  (d) both supervised and unsupervised

Answer» (a) supervised
Explanation: Supervised, since it depends on the target output.

10. The correlation learning law is a special case of?
(a) Hebb learning law  (b) Perceptron learning law  (c) Delta learning law  (d) LMS learning law

Answer» (a) Hebb learning law

11. Which of the following equations represents the perceptron learning law?
(a) ∆wij = µ(si) aj
(b) ∆wij = µ(bi – si) aj
(c) ∆wij = µ(bi – si) f′(xi) aj, where f′(xi) is the derivative of the output function at xi
(d) ∆wij = µ(bi – (wi a)) aj

Answer» (b) ∆wij = µ(bi – si) aj

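A minimal sketch of the perceptron law from question 11, for a single output unit (function names are my own, not from the source): the actual output s is the hard-thresholded sum of the weighted inputs, and each weight is corrected by µ(b – s)aj toward the desired output b.

```python
def step(x):
    """Hard-limiting output function of the perceptron unit."""
    return 1 if x >= 0 else 0

def perceptron_update(w, a, b, mu=0.5):
    """One perceptron step: delta_w_j = mu * (b - s) * a_j.

    w: weight vector, a: input vector, b: desired (binary) output.
    """
    s = step(sum(wj * aj for wj, aj in zip(w, a)))            # actual output
    return [wj + mu * (b - s) * aj for wj, aj in zip(w, a)]   # correct toward b
```

When the output already matches the target, b – s is zero and the weights are left unchanged, which is the defining behaviour of this error-correction rule.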
12. What is the other name of the Widrow & Hoff learning law?
(a) Hebb  (b) LMS  (c) MMS  (d) none of the mentioned

Answer» (b) LMS

13. The Widrow & Hoff learning law is a special case of?
(a) hebb learning law  (b) perceptron learning law  (c) delta learning law  (d) none of the mentioned

Answer» (c) delta learning law

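A minimal sketch (names are my own) of the Widrow-Hoff (LMS) law from questions 12 and 13. It has the same error-correction form as the delta law, but the output is linear, s = w · a, so the derivative term f′(x) is simply 1 and drops out:

```python
def lms_update(w, a, b, mu=0.1):
    """Widrow-Hoff (LMS) step: s = w . a, delta_w_j = mu * (b - s) * a_j."""
    s = sum(wj * aj for wj, aj in zip(w, a))                 # linear output
    return [wj + mu * (b - s) * aj for wj, aj in zip(w, a)]
```

Comparing this with the perceptron rule above it is the output that differs: linear here, hard-thresholded there.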
14. Is delta learning of the unsupervised type?
(a) yes  (b) no

Answer» (b) no
Explanation: The change in weight is based on the error between the desired and the actual output values for a given input.

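A minimal sketch of the general delta law, ∆wj = µ(b – s) f′(x) aj, that questions 13 and 14 refer to. A sigmoid output function is assumed here purely for illustration, so that f′(x) = f(x)(1 – f(x)); the function names are mine, not from the source.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def delta_update(w, a, b, mu=0.5):
    """Delta law sketch with a sigmoid output function."""
    x = sum(wj * aj for wj, aj in zip(w, a))   # activation value
    s = sigmoid(x)                             # actual output f(x)
    grad = (b - s) * s * (1 - s)               # error times f'(x)
    return [wj + mu * grad * aj for wj, aj in zip(w, a)]
```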
15. Hebb's law can be represented by which equation?
(a) ∆wij = µ f(wi a) aj
(b) ∆wij = µ(si) aj, where (si) is the output signal of the ith unit
(c) both ways
(d) none of the mentioned

Answer» (c) both ways

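A minimal sketch (my own illustration; `hebb_update` is a hypothetical name) of Hebb's law in the form of question 15, option (b): each weight grows in proportion to the product of the unit's output si and the input aj, with no target output anywhere, matching the "unsupervised" classification in question 18.

```python
def hebb_update(w, s, a, mu=0.1):
    """Hebb's law sketch: delta_w_ij = mu * s_i * a_j.

    w: weight matrix (rows = units, cols = inputs),
    s: actual outputs, a: inputs, mu: learning rate.
    """
    return [[w[i][j] + mu * s[i] * a[j] for j in range(len(a))]
            for i in range(len(s))]

w = hebb_update([[0.0, 0.0], [0.0, 0.0]], s=[1.0, 0.5], a=[1.0, -1.0])
```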
16. If the change in the weight vector is represented by ∆wij, what does it mean?
(a) the change in the weight vector for the ith processing unit, taking the jth input into account
(b) the change in the weight vector for the jth processing unit, taking the ith input into account
(c) the change in the weight vector for both the jth & ith processing units
(d) none of the mentioned

Answer» (a) the change in the weight vector for the ith processing unit, taking the jth input into account

17. Which of the following statements hold for the perceptron learning law?
(a) it is a supervised type of learning law
(b) it requires the desired output for each input
(c) ∆wij = µ(bi – si) aj
(d) all of the mentioned

Answer» (d) all of the mentioned

18. Is Hebb's law supervised or unsupervised learning?
(a) supervised  (b) unsupervised  (c) either supervised or unsupervised  (d) can be both supervised & unsupervised

Answer» (b) unsupervised
Explanation: No desired output is required for its implementation.

19. What is the learning signal in the equation ∆wij = µ f(wi a) aj?
(a) µ  (b) wi a  (c) aj  (d) f(wi a)

Answer» (d) f(wi a)

20. On what parameters can the change in the weight vector depend?
(a) learning parameters  (b) input vector  (c) learning signal  (d) all of the mentioned

Answer» (d) all of the mentioned

21. What does LTM correspond to?
(a) activation state of the network  (b) pattern information encoded in the synaptic weights  (c) either way  (d) both ways

Answer» (b) pattern information encoded in the synaptic weights

22. What is STM in a neural network?
(a) short topology memory  (b) stimulated topology memory  (c) short term memory  (d) none of the mentioned

Answer» (c) short term memory

23. What does STM correspond to?
(a) activation state of the network  (b) pattern information encoded in the synaptic weights  (c) either way  (d) both ways

Answer» (a) activation state of the network

24. Heteroassociative memory can be an example of which type of network?
(a) group of instars  (b) group of outstars  (c) either a group of instars or outstars  (d) both a group of instars and outstars

Answer» (c) either a group of instars or outstars

25. If the two layers coincide & the weights are symmetric (wij = wji), what is that structure called?
(a) instar  (b) outstar  (c) autoassociative memory  (d) heteroassociative memory

Answer» (c) autoassociative memory

26. The operation of outstar can be viewed as?
(a) content addressing the memory  (b) memory addressing the content  (c) either content addressing or memory addressing  (d) both content & memory addressing

Answer» (b) memory addressing the content
Explanation: In outstar, the weight vector for connections from the jth unit (say) in F2 approaches the activity pattern in F1 (which comprises the input vector).

27. The operation of instar can be viewed as?
(a) content addressing the memory  (b) memory addressing the content  (c) either content addressing or memory addressing  (d) both content & memory addressing

Answer» (a) content addressing the memory
Explanation: In instar, when an input is given to layer F1, the jth unit (say) of the other layer F2 is activated to the maximum extent.

28. What is an outstar topology?
(a) when an input is given to layer F1, the jth unit (say) of the other layer F2 is activated to the maximum extent
(b) when the weight vector for connections from the jth unit (say) in F2 approaches the activity pattern in F1 (which comprises the input vector)
(c) can be either way
(d) none of the mentioned

Answer» (b) when the weight vector for connections from the jth unit (say) in F2 approaches the activity pattern in F1 (which comprises the input vector)
Explanation: This is a restatement of the basic definition of outstar.

29. What is an instar topology?
(a) when an input is given to layer F1, the jth unit (say) of the other layer F2 is activated to the maximum extent
(b) when the weight vector for connections from the jth unit (say) in F2 approaches the activity pattern in F1 (which comprises the input vector)
(c) can be either way
(d) none of the mentioned

Answer» (a) when an input is given to layer F1, the jth unit (say) of the other layer F2 is activated to the maximum extent

30. How can connections across the layers in standard topologies, and among the units within a layer, be organized?
(a) in a feedforward manner  (b) in a feedback manner  (c) both feedforward & feedback  (d) either feedforward or feedback

Answer» (d) either feedforward or feedback

31. In a neural network, how can connections between layers be achieved?
(a) interlayer  (b) intralayer  (c) both interlayer and intralayer  (d) either interlayer or intralayer

Answer» (c) both interlayer and intralayer

32. What is another name for the weight update rule in the adaline model, based on its functionality?
(a) LMS error learning law  (b) gradient descent algorithm  (c) both LMS error & gradient descent learning law  (d) none of the mentioned

Answer» (c) both LMS error & gradient descent learning law

33. In the adaline model, what is the relation between the output & the activation value (x)?
(a) linear  (b) nonlinear  (c) can be either linear or non-linear  (d) none of the mentioned

Answer» (a) linear
Explanation: The output s = f(x) = x; hence it is a linear model.

34. What was the main point of difference between the adaline & perceptron models?
(a) weights are compared with the output
(b) the sensory units' result is compared with the output
(c) the analog activation value is compared with the output
(d) all of the mentioned

Answer» (c) the analog activation value is compared with the output
Explanation: In adaline the analog activation value is compared with the desired output, instead of the thresholded output as in the perceptron model; this was the main point of difference between the two.

35. Who invented the adaline neural model?
(a) Rosenblatt  (b) Hopfield  (c) Werbos  (d) Widrow

Answer» (d) Widrow

36. If a(i) is the input, ^ is the error, and n is the learning parameter, how can the weight change in a perceptron model be represented?
(a) na(i)  (b) n^  (c) ^a(i)  (d) none of the mentioned

Answer» (d) none of the mentioned
Explanation: The weight change is the product of all three: n^a(i).

37. What is adaline in neural networks?
(a) adaptive linear element  (b) automatic linear element  (c) adaptive line element  (d) none of the mentioned

Answer» (a) adaptive linear element

38. What is delta (error) in the perceptron model of a neuron?
(a) error due to environmental conditions
(b) difference between the desired & target output
(c) can be due to either a difference in target output or environmental conditions
(d) none of the mentioned

Answer» (a) error due to environmental conditions

39. What was the main deviation in the perceptron model from the MP model?
(a) more inputs can be incorporated  (b) learning enabled  (c) all of the mentioned  (d) none of the mentioned

Answer» (b) learning enabled

40. What was the 2nd stage in the perceptron model called?
(a) sensory units  (b) summing unit  (c) association unit  (d) output unit

Answer» (c) association unit

41. Does the McCulloch-Pitts model have the ability to learn?
(a) yes  (b) no

Answer» (b) no

42. Who invented perceptron neural networks?
(a) McCulloch-Pitts  (b) Widrow  (c) Minsky & Papert  (d) Rosenblatt

Answer» (d) Rosenblatt

43. When both inputs are different, what will be the logical output of the figure of question 4?
(a) 0  (b) 1  (c) either 0 or 1  (d) z

Answer» (a) 0
Explanation: Check the truth table of a NOR gate.

44. When both inputs are 1, what will be the output of the Pitts model NAND gate?
(a) 0  (b) 1  (c) either 0 or 1  (d) z

Answer» (a) 0
Explanation: Check the truth table of a NAND gate.

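The NAND behaviour in question 44 can be reproduced with a small McCulloch-Pitts unit: fixed weights, a hard threshold, and no learning (consistent with question 41). This is my own sketch; the particular weights and threshold below are one common choice, not the only one.

```python
def mp_neuron(inputs, weights, theta):
    """McCulloch-Pitts unit: fires (1) iff the weighted sum reaches theta."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= theta else 0

def nand(x1, x2):
    # NAND via an MP unit: both weights -1, threshold -1, so the unit
    # fails to fire only when both inputs are 1 (weighted sum -2 < -1).
    return mp_neuron([x1, x2], [-1, -1], -1)

print([nand(0, 0), nand(0, 1), nand(1, 0), nand(1, 1)])  # [1, 1, 1, 0]
```

Since the weights and threshold are hard-wired, changing the gate (e.g. to the NOR of questions 43 and 47) means choosing new constants, not training.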
45. Which of the following models has the ability to learn?
(a) pitts model  (b) rosenblatt perceptron model  (c) both rosenblatt and pitts models  (d) neither rosenblatt nor pitts

Answer» (b) rosenblatt perceptron model
Explanation: Weights are fixed in the Pitts model but adjustable in Rosenblatt's.

46. When both inputs are different, what will be the output of the above figure?
(a) 0  (b) 1  (c) either 0 or 1  (d) z

Answer» (a) 0

47. When both inputs are 1, what will be the output of the above figure?
(a) 0  (b) 1  (c) either 0 or 1  (d) z

Answer» (a) 0
Explanation: Check the truth table of a NOR gate.

48. If 'b' in the figure below is the bias, what logic circuit does it represent?
(a) or gate  (b) and gate  (c) nor gate  (d) nand gate

Answer» (c) nor gate

49. What does the character 'b' represent in the above diagram?
(a) bias  (b) any constant value  (c) a variable value  (d) none of the mentioned

Answer» (a) bias
Explanation: The more appropriate choice, since the bias is a constant fixed value for any circuit model.

50. What is the nature of the function F(x) in the figure?
(a) linear  (b) non-linear  (c) can be either linear or non-linear  (d) none of the mentioned

Answer» (b) non-linear
Explanation: In this function the independent variable appears as an exponent in the equation; hence it is non-linear.
