1. What is the objective of backpropagation algorithm?
a) to develop learning algorithm for multilayer feedforward neural network
b) to develop learning algorithm for single layer feedforward neural network
c) to develop learning algorithm for multilayer feedforward neural network, so that network can be trained to capture the mapping implicitly
d) none of the mentioned

View Answer

Answer: c [Reason:] The objective of the backpropagation algorithm is to develop a learning algorithm for a multilayer feedforward neural network, so that the network can be trained to capture the mapping implicitly.
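
The idea in answer (c) can be sketched in code. This is a minimal illustration, not a definitive implementation: the one-hidden-layer architecture, sigmoid activations, squared-error loss, learning rate, and the XOR mapping are all assumptions chosen for the example.

```python
# Minimal sketch: backpropagation training of a multilayer feedforward
# network, assumed to have one hidden layer with sigmoid activations.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR: a mapping a single-layer network cannot capture implicitly.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))   # input -> hidden weights (sizes assumed)
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
lr = 1.0                       # assumed learning rate

losses = []
for _ in range(5000):
    h = sigmoid(X @ W1)                       # hidden-layer outputs
    out = sigmoid(h @ W2)                     # network output
    losses.append(float(np.mean((out - y) ** 2)))
    d_out = (out - y) * out * (1.0 - out)     # error delta at output layer
    d_h = (d_out @ W2.T) * h * (1.0 - h)      # delta propagated backwards
    W2 -= lr * h.T @ d_out                    # gradient-descent updates
    W1 -= lr * X.T @ d_h

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The error computed at the output is propagated backwards through the hidden layer only to determine the weight updates; no signal feeds back during the forward computation.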

2. Is it true that the backpropagation law is also known as the generalized delta rule?
a) yes
b) no

View Answer

Answer: a [Reason:] Because it fulfils the basic condition of the delta rule.

3. What is true regarding backpropagation rule?
a) it is also called generalized delta rule
b) error in output is propagated backwards only to determine weight updates
c) there is no feedback of signal at any stage
d) all of the mentioned

View Answer

Answer: d [Reason:] All of these statements describe the backpropagation algorithm.

4. Is there feedback in the final stage of the backpropagation algorithm?
a) yes
b) no

View Answer

Answer: b [Reason:] No feedback is involved at any stage as it is a feedforward neural network.

5. What is true regarding backpropagation rule?
a) it is a feedback neural network
b) actual output is determined by computing the outputs of units for each hidden layer
c) hidden layer outputs are not important; they only support the input and output layers
d) none of the mentioned

View Answer

Answer: b [Reason:] In the backpropagation rule, the actual output is determined by computing the outputs of the units in each hidden layer.
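
The forward computation in answer (b) can be sketched as below; the layer sizes and the tanh activation are assumptions made for the example, not part of the question.

```python
# Sketch of the forward pass: the actual output is obtained by computing
# the outputs of the units in each layer in turn.
import numpy as np

def forward(x, weights):
    outputs = []
    for W in weights:
        x = np.tanh(x @ W)    # outputs of the current layer's units
        outputs.append(x)
    return outputs            # the last entry is the actual output

rng = np.random.default_rng(0)
layers = [rng.normal(size=(3, 5)),   # input -> hidden (sizes assumed)
          rng.normal(size=(5, 2))]   # hidden -> output
outs = forward(np.ones(3), layers)
```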

6. What is meant by "generalized" in the statement "backpropagation is a generalized delta rule"?
a) because delta rule can be extended to hidden layer units
b) because delta is applied only to the input and output layers, making it simpler and more generalized
c) it has no significance
d) none of the mentioned

View Answer

Answer: a [Reason:] The term "generalized" is used because the delta rule can be extended to hidden layer units.

7. What are the general limitations of the backpropagation rule?
a) local minima problem
b) slow convergence
c) scaling
d) all of the mentioned

View Answer

Answer: d [Reason:] All of these are general limitations of the backpropagation algorithm.

8. What are the general tasks that are performed with backpropagation algorithm?
a) pattern mapping
b) function approximation
c) prediction
d) all of the mentioned

View Answer

Answer: d [Reason:] All of these are tasks that can generally be performed with the backpropagation algorithm.

9. Is backpropagation learning based on gradient descent along the error surface?
a) yes
b) no
c) cannot be said
d) it depends on gradient descent but not the error surface

View Answer

Answer: a [Reason:] The weight adjustment is proportional to the negative gradient of the error with respect to the weight.
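
The update rule in this answer, w ← w − η ∂E/∂w, can be demonstrated on a simple assumed error surface E(w) = (w − 3)², whose minimum is at w = 3; the learning rate and the surface itself are illustrative choices.

```python
# Sketch of gradient descent on an error surface: the weight moves
# against the gradient of E(w) = (w - 3)^2 (an assumed example surface).
def grad(w):
    return 2.0 * (w - 3.0)    # dE/dw

w, lr = 0.0, 0.1
for _ in range(100):
    w -= lr * grad(w)         # step along the negative gradient
print(round(w, 4))            # prints 3.0
```

Because each step is proportional to the negative gradient, the weight converges to the minimum of the error surface.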

10. How can the learning process be stopped in the backpropagation rule?
a) there is convergence involved
b) no heuristic criteria exist
c) on the basis of the average gradient value
d) none of the mentioned

View Answer

Answer: c [Reason:] If the average gradient value falls below a preset threshold, the process may be stopped.
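
This stopping heuristic can be sketched as follows; the threshold, learning rate, and the single-weight error surface are assumptions made for illustration.

```python
# Sketch of the stopping criterion: training halts once the (average)
# gradient magnitude falls below a preset threshold (values assumed).
def train(w, grad, lr=0.1, threshold=1e-3, max_iters=10000):
    for i in range(max_iters):
        g = grad(w)
        if abs(g) < threshold:   # gradient below threshold: stop
            return w, i
        w -= lr * g              # otherwise keep descending
    return w, max_iters

# Assumed example surface E(w) = (w - 3)^2, so dE/dw = 2(w - 3).
w, steps = train(0.0, lambda w: 2.0 * (w - 3.0))
```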
