Recursive Least Squares and Accelerated Convergence in Stochastic Approximation Schemes
2001 (English). In: International Journal of Adaptive Control and Signal Processing, ISSN 0890-6327, E-ISSN 1099-1115, Vol. 15, no. 2, pp. 169-178. Article in journal (Refereed). Published.
The so-called accelerated convergence is an ingenious idea for improving the asymptotic accuracy of stochastic approximation (gradient-based) algorithms. The estimates obtained from the basic algorithm are subjected to a second round of averaging, which leads to optimal accuracy for estimates of time-invariant parameters. In this contribution, some simple calculations are used to gain intuitive insight into these mechanisms. Of particular interest are the properties of accelerated convergence schemes in tracking situations. It is shown that a second round of averaging leads to the recursive least-squares algorithm with a forgetting factor. This also means that when the true parameters change as a random walk, accelerated convergence does not, typically, give optimal tracking properties.
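The two schemes discussed in the abstract can be illustrated with a minimal numerical sketch (not taken from the paper; the gain sequence, forgetting factor, and variable names are illustrative assumptions). For a scalar time-invariant parameter observed in noise, a basic stochastic-approximation recursion with a slowly decaying gain is run first; a second round of averaging over its iterates then typically yields a more accurate final estimate. A scalar recursive least-squares recursion with a forgetting factor is included for comparison:

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = 2.0                       # time-invariant parameter to estimate
n = 20000
y = theta_true + rng.normal(0.0, 1.0, n)   # noisy scalar observations

# Basic stochastic-approximation (gradient) recursion:
#   theta_k = theta_{k-1} + gamma_k * (y_k - theta_{k-1})
# with an illustrative slowly decaying gain gamma_k = k^(-0.6).
theta = 0.0
est = np.empty(n)
for k in range(n):
    gamma = 1.0 / (k + 1) ** 0.6
    theta += gamma * (y[k] - theta)
    est[k] = theta

# Second round of averaging over the iterates ("accelerated convergence"):
avg = np.cumsum(est) / np.arange(1, n + 1)

err_basic = abs(est[-1] - theta_true)
err_avg = abs(avg[-1] - theta_true)

# Scalar RLS with forgetting factor lam (regressor identically 1);
# lam = 0.999 is an illustrative choice, not a value from the paper.
lam = 0.999
P = 1000.0                             # large initial "covariance"
th_rls = 0.0
for k in range(n):
    K = P / (lam + P)                  # gain
    th_rls += K * (y[k] - th_rls)
    P = (1.0 - K) * P / lam            # covariance update

print(err_basic, err_avg, abs(th_rls - theta_true))
```

In this sketch the averaged estimate's variance shrinks like 1/n, while the basic recursion's final error is governed by its gain; the RLS recursion settles at an effective gain of roughly 1 - lam, i.e. exponential forgetting over a window of about 1/(1 - lam) samples, which is the connection to averaging that the paper examines.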
Place, publisher, year, edition, pages
2001. Vol. 15, no. 2, pp. 169-178.
Keywords: Stochastic approximation, Recursive estimation, Recursive least squares
Identifiers
URN: urn:nbn:se:liu:diva-49317
DOI: 10.1002/acs.649
OAI: oai:DiVA.org:liu-49317
DiVA: diva2:270213
Copyright (C) 2001 John Wiley & Sons, Ltd.