1. The correlation learning law is a special case of which learning law?
a) Hebb learning law
b) Perceptron learning law
c) Delta learning law
d) LMS learning law

Answer: a [Reason:] The correlation law is obtained from Hebb's law by replacing the actual output si with the target output bi.

2. What type of learning is the correlation learning law?
a) supervised
b) unsupervised
c) either supervised or unsupervised
d) both supervised and unsupervised

Answer: a [Reason:] Supervised, since it depends on the target output.

3. Which equation represents the correlation learning law?
a) ∆wij = µ si aj
b) ∆wij = µ (bi − si) aj
c) ∆wij = µ (bi − si) f′(xi) aj, where f′(xi) is the derivative of the activation function at xi
d) ∆wij = µ bi aj

Answer: d [Reason:] The correlation learning law uses the target output bi in place of the actual output.
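The update in option d can be sketched in a few lines of NumPy. This is an illustrative implementation, not from the quiz itself; the function name `correlation_update` and the example values are assumptions.

```python
import numpy as np

def correlation_update(w, a, b, mu=0.1):
    """Correlation learning law: delta_w[i][j] = mu * b[i] * a[j].

    Hebb's law uses the actual output s[i] here; the correlation law
    substitutes the target output b[i], which is what makes it supervised.
    """
    return w + mu * np.outer(b, a)  # outer product of target and input

w = np.zeros((2, 3))                 # 2 output units, 3 input units
a = np.array([1.0, 0.0, 1.0])        # input activations
b = np.array([1.0, -1.0])            # target outputs
w = correlation_update(w, a, b, mu=0.5)
```

Because the rule is a plain outer product of target and input, every weight update is independent of what the network actually produced.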

4. What is the other name for the instar learning law?
a) loser takes all
b) winner takes all
c) winner gives all
d) loser gives all

Answer: b [Reason:] Only the weights of the unit that produces the maximum output are adjusted.

5. Which equation represents the instar learning law?
a) ∆wij = µ si aj
b) ∆wij = µ (bi − si) aj
c) ∆wij = µ (bi − si) f′(xi) aj, where f′(xi) is the derivative of the activation function at xi
d) ∆wk = µ (a − wk), where unit k with the maximum output is identified

Answer: d [Reason:] This follows directly from the definition of the instar learning law.
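The winner-takes-all update in option d can be sketched as below. This is a minimal NumPy illustration under the stated rule; the function name `instar_update` and the example weights are assumptions, not part of the quiz.

```python
import numpy as np

def instar_update(W, a, mu=0.1):
    """Instar (winner-takes-all) sketch: delta_w_k = mu * (a - w_k),
    applied only to the winning unit k.

    The winner is the unit whose output W[k] . a is maximal; no target
    output appears anywhere, which is why the rule is unsupervised.
    """
    k = int(np.argmax(W @ a))      # identify the winning unit
    W = W.copy()
    W[k] += mu * (a - W[k])        # move its weights toward the input
    return W, k

# Two units, two inputs; the input lies closest to unit 0's weights.
W = np.array([[1.0, 0.0],
              [0.0, 1.0]])
a = np.array([0.9, 0.1])
W_new, winner = instar_update(W, a, mu=0.1)
```

Note that the losing unit's weight row is untouched; repeated updates pull each winner's weight vector toward the cluster of inputs it responds to.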

6. Is the instar law a case of supervised learning?
a) yes
b) no

Answer: b [Reason:] Since the weight adjustment doesn't depend on a target output, it is unsupervised learning.

7. Which equation represents the outstar learning law?
a) ∆wjk = µ (bj − wjk), where the kth unit is the only active unit in the input layer
b) ∆wij = µ (bi − si) aj
c) ∆wij = µ (bi − si) f′(xi) aj, where f′(xi) is the derivative of the activation function at xi
d) ∆wij = µ si aj

Answer: a [Reason:] This follows directly from the definition of the outstar learning law.
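The outstar update in option a can also be sketched in NumPy. Again this is an illustrative sketch; the function name `outstar_update` and the example values are assumptions.

```python
import numpy as np

def outstar_update(W, b, k, mu=0.1):
    """Outstar sketch: when input unit k is the only active unit, the
    weights fanning out from it are pulled toward the target pattern:
    delta_w[j][k] = mu * (b[j] - W[j][k]).

    The update uses the target output b, so the rule is supervised.
    """
    W = W.copy()
    W[:, k] += mu * (b - W[:, k])  # column k fans out from input unit k
    return W

W = np.zeros((2, 2))
b = np.array([1.0, 2.0])           # target output pattern
W_new = outstar_update(W, b, k=0, mu=0.5)
```

Instar and outstar are mirror images: instar adjusts the weights fanning *into* the winning unit toward the input, while outstar adjusts the weights fanning *out of* the active unit toward the target pattern.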

8. Is the outstar law a case of supervised learning?
a) yes
b) no

Answer: a [Reason:] Since the weight adjustment depends on the target output, it is supervised learning.

9. Which of the following pairs of learning laws belong to the same category of learning?
a) Hebbian, perceptron
b) perceptron, delta
c) Hebbian, Widrow-Hoff
d) instar, outstar

Answer: b [Reason:] Both the perceptron and delta laws are supervised learning laws.

10. In Hebbian learning, how are the initial weights set?
a) random
b) near to zero
c) near to target value
d) near to target value

Answer: b [Reason:] Hebb's law accumulates input-output correlations over training, so the weights are started near zero.