Overtraining, Regularization and Searching for Minimum in Neural Networks
1991 (English). Report (Other academic).
Neural network models for dynamical systems have lately been the subject of considerable interest. They are often characterized by the fact that they use a fairly large number of parameters. Here we address the question of why this can be done without the usual penalty in terms of a large variance error. We show that regularization is a key explanation, and that terminating a gradient search ("backpropagation") before the true criterion minimum is found is a way of achieving regularization. This, among other things, also explains the concept of "overtraining" in neural nets.
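The link between early stopping and regularization can be made concrete for a quadratic criterion. The following compilable LaTeX note sketches the standard argument; the symbols theta, H, mu, delta and the step count k are illustrative notation, not taken from the report itself:

\documentclass{article}
\usepackage{amsmath}
\begin{document}
Let $V(\theta)$ be quadratic around its minimizer $\hat\theta$, with Hessian
$H = U \Lambda U^{\mathsf T}$, $\Lambda = \operatorname{diag}(\lambda_1, \dots, \lambda_d)$.
Gradient descent with step size $\mu$, started at $\theta_0 = 0$, gives after $k$ steps
\begin{equation}
  \theta_k = U \bigl( I - (I - \mu \Lambda)^k \bigr) U^{\mathsf T} \hat\theta ,
\end{equation}
so eigendirection $i$ is shrunk by the filter factor $1 - (1 - \mu \lambda_i)^k$.
Minimizing the explicitly regularized criterion $V(\theta) + \tfrac{\delta}{2}\|\theta\|^2$
instead gives the factor $\lambda_i / (\lambda_i + \delta)$. Both factors are close to $1$
for large $\lambda_i$ and close to $0$ for small $\lambda_i$, and their cutoffs coincide
for $\delta \approx 1/(\mu k)$: stopping the search after fewer iterations acts like a
larger regularization parameter, which is why early termination curbs the variance error
of a heavily parameterized model.
\end{document}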
Place, publisher, year, edition, pages
Linköping: Linköping University, 1991.
Series: LiTH-ISY-I, ISSN 8765-4321; 1297
Keywords: Neural network models, Dynamical systems, Parameters, Variance error, Regularization
Identifiers: URN: urn:nbn:se:liu:diva-55483; OAI: oai:DiVA.org:liu-55483; DiVA: diva2:316120