
1. On what parameters does the change in the weight vector depend?
a) learning parameters
b) input vector
c) learning signal
d) all of the mentioned

Answer: d [Reason:] The change in the weight vector for the jth input at time (t+1) depends on all of these parameters.

2. If the change in weight vector is represented by ∆wij, what does it mean?
a) describes the change in the weight vector for the ith processing unit, taking the jth input into account
b) describes the change in the weight vector for the jth processing unit, taking the ith input into account
c) describes the change in the weight vector for both the jth & ith processing units
d) none of the mentioned

Answer: a [Reason:] ∆wij= µf(wi a)aj, where a is the input vector; i indexes the processing unit and j the input component.

3. What is the learning signal in the equation ∆wij= µf(wi a)aj?
a) µ
b) wi a
c) aj
d) f(wi a)

Answer: d [Reason:] f(wi a) is the nonlinear output of the unit and acts as the learning signal.

4. Is Hebb’s law supervised or unsupervised learning?
a) supervised
b) unsupervised
c) either supervised or unsupervised
d) can be both supervised & unsupervised

Answer: b [Reason:] No desired output is required for its implementation.

5. Which equation can represent Hebb’s law?
a) ∆wij= µf(wi a)aj
b) ∆wij= µ(si) aj, where (si) is the output signal of the ith unit
c) both way
d) none of the mentioned

Answer: c [Reason:] In Hebb’s law, (si)= f(wi a), so both forms are equivalent.
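A single Hebbian update ∆wij= µ(si)aj with (si)= f(wi a) can be sketched as follows. This is a minimal illustration: the learning rate µ, the choice of a binary sigmoid for f, and the sample values are assumptions, not part of the question set.

```python
import numpy as np

mu = 0.1                        # learning rate (assumed value)
a = np.array([1.0, 0.5, -1.0])  # input vector a
w = np.array([0.2, -0.4, 0.1])  # weight vector of the ith unit

# Output signal s_i = f(w_i . a), here with a binary sigmoid as f
s = 1.0 / (1.0 + np.exp(-np.dot(w, a)))

# Hebb's law: weight change proportional to output times input
delta_w = mu * s * a
w = w + delta_w
```

Note that each weight moves in the direction of its own input component, scaled by the unit's output — no desired output appears anywhere, which is why Hebb's law is unsupervised.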

6. Which of the following statements hold for the perceptron learning law?
a) it is supervised type of learning law
b) it requires desired output for each input
c) ∆wij= µ(bi – si) aj
d) all of the mentioned

Answer: d [Reason:] All statements follow from ∆wij= µ(bi – si) aj, where bi is the target output; hence it is supervised learning.
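One step of the perceptron update ∆wij= µ(bi – si) aj can be sketched as below. The bipolar sign output, the learning rate, and the sample values are illustrative assumptions.

```python
import numpy as np

mu = 0.5                      # learning rate (assumed value)
a = np.array([1.0, -1.0])     # input vector a
b_i = 1.0                     # desired (target) output b_i
w = np.array([-0.2, 0.3])     # current weights of the ith unit

# Perceptron output: hard bipolar threshold on w_i . a
s_i = 1.0 if np.dot(w, a) >= 0 else -1.0

# Perceptron learning law: correct weights by the output error
delta_w = mu * (b_i - s_i) * a
w = w + delta_w
```

Here the unit initially outputs –1 while the target is +1, so the error (bi – si) = 2 drives the weights toward a correct classification; when the output already matches the target, the error is zero and the weights stay unchanged.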

7. Is delta learning of unsupervised type?
a) yes
b) no

Answer: b [Reason:] The change in weight is based on the error between the desired & the actual output values for a given input, so it is supervised.

8. The Widrow & Hoff learning law is a special case of?
a) hebb learning law
b) perceptron learning law
c) delta learning law
d) none of the mentioned

Answer: c [Reason:] The output function in this law is assumed to be linear; all other things remain the same.

9. What is the other name of the Widrow & Hoff learning law?
a) Hebb
b) LMS
c) MMS
d) None of the mentioned

Answer: b [Reason:] LMS stands for least mean square. The change in weight is made proportional to the negative gradient of the error, owing to the linearity of the output function.
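The LMS (Widrow & Hoff) update uses the linear output si = wi a directly, so repeating the update drives the squared error toward zero. A minimal sketch — the learning rate, the data, and the iteration count are assumptions for illustration:

```python
import numpy as np

mu = 0.1                    # learning rate (assumed value)
a = np.array([1.0, 2.0])    # input vector a
b_i = 1.0                   # desired output b_i
w = np.zeros(2)             # weights start at zero

for _ in range(100):
    s_i = np.dot(w, a)              # linear output: s_i = w_i . a
    w = w + mu * (b_i - s_i) * a    # LMS update, negative error gradient
```

Each iteration shrinks the error (bi – si) by a constant factor (here 1 – µ‖a‖² = 0.5), so the output converges to the target — the gradient-descent behaviour the answer above refers to.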

10. Which of the following equations represents the perceptron learning law?
a) ∆wij= µ(si) aj
b) ∆wij= µ(bi – si) aj
c) ∆wij= µ(bi – si) f′(xi) aj, where f′(xi) is the derivative of the output function at xi
d) ∆wij= µ(bi – (wi a)) aj

Answer: b [Reason:] As stated in Q6, the perceptron law is ∆wij= µ(bi – si) aj; (a) is Hebb’s law, (c) is the delta law and (d) is the Widrow & Hoff (LMS) law.
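The delta law ∆wij= µ(bi – si) f′(xi) aj differs from the perceptron law only in the extra derivative factor f′(xi). A one-step sketch with a sigmoid output function — the learning rate and sample values are illustrative assumptions:

```python
import numpy as np

mu = 0.5                      # learning rate (assumed value)
a = np.array([1.0, 0.5])      # input vector a
b_i = 1.0                     # desired output b_i
w = np.array([0.0, 0.0])      # current weights

x = np.dot(w, a)              # activation x_i = w_i . a
f = 1.0 / (1.0 + np.exp(-x))  # sigmoid output s_i = f(x_i)
fdot = f * (1.0 - f)          # its derivative f'(x_i)

# Delta learning law: error scaled by the output-function derivative
delta_w = mu * (b_i - f) * fdot * a
```

Because f is differentiable, this rule is exact gradient descent on the squared error; the perceptron law can be seen as the same rule with the hard threshold's "derivative" dropped.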