An Efficient Stochastic Approximation EM Algorithm using Conditional Particle Filters
Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
2013 (English). In: Proceedings of the 38th International Conference on Acoustics, Speech, and Signal Processing, IEEE conference proceedings, 2013, pp. 6274-6278. Conference paper (Refereed).
Abstract [en]

I present a novel method for maximum likelihood parameter estimation in nonlinear/non-Gaussian state-space models. It is an expectation maximization (EM) like method, which uses sequential Monte Carlo (SMC) for the intermediate state inference problem. Contrary to existing SMC-based EM algorithms, however, it makes efficient use of the simulated particles through the use of particle Markov chain Monte Carlo (PMCMC) theory. More precisely, the proposed method combines the efficient conditional particle filter with ancestor sampling (CPF-AS) with the stochastic approximation EM (SAEM) algorithm. This results in a procedure which does not rely on asymptotics in the number of particles for convergence, meaning that the method is very computationally competitive. Indeed, the method is evaluated in a simulation study, using a small number of particles with promising results.
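
To make the combination concrete, below is a minimal illustrative sketch, in Python, of a conditional particle filter with ancestor sampling (CPF-AS) driving a stochastic approximation EM (SAEM) update. It is not the paper's implementation: the scalar linear-Gaussian toy model x_t = theta*x_{t-1} + v_t, y_t = x_t + e_t (with known noise variances q and r), the step-size schedule, and all variable names are assumptions made for illustration only.

# Sketch only: CPF-AS inside an SAEM loop for a toy scalar model.
import numpy as np

rng = np.random.default_rng(0)
T, N = 100, 15                      # time steps, particles (deliberately few)
q, r, theta_true = 0.1, 0.5, 0.9    # assumed known noise variances, true theta

# Simulate data from the toy model.
x = np.zeros(T)
for t in range(1, T):
    x[t] = theta_true * x[t - 1] + rng.normal(0.0, np.sqrt(q))
y = x + rng.normal(0.0, np.sqrt(r), T)

def cpf_as(y, theta, x_ref):
    """Conditional particle filter with ancestor sampling; returns one
    state trajectory drawn from the particle approximation."""
    X = np.zeros((N, T))
    X[:, 0] = rng.normal(0.0, 1.0, N)
    X[-1, 0] = x_ref[0]                       # condition on the reference path
    w = np.exp(-0.5 * (y[0] - X[:, 0]) ** 2 / r)
    w /= w.sum()
    for t in range(1, T):
        a = rng.choice(N, size=N, p=w)        # multinomial resampling
        X[:, t] = theta * X[a, t - 1] + rng.normal(0.0, np.sqrt(q), N)
        X[-1, t] = x_ref[t]                   # keep the reference particle
        # Ancestor sampling: weight time t-1 particles by how well they
        # explain the reference state at time t.
        w_as = w * np.exp(-0.5 * (x_ref[t] - theta * X[:, t - 1]) ** 2 / q)
        a[-1] = rng.choice(N, p=w_as / w_as.sum())
        X[:, :t] = X[a, :t]                   # carry full particle histories
        w = np.exp(-0.5 * (y[t] - X[:, t]) ** 2 / r)
        w /= w.sum()
    return X[rng.choice(N, p=w), :]

# SAEM loop: stochastic approximation of the sufficient statistics for theta.
theta, S1, S2 = 0.1, 0.0, 1e-6
x_ref = np.zeros(T)
for k in range(1, 200):
    x_traj = cpf_as(y, theta, x_ref)
    gamma = 1.0 if k < 20 else (k - 19) ** (-0.7)   # decreasing step sizes
    S1 = (1 - gamma) * S1 + gamma * np.sum(x_traj[1:] * x_traj[:-1])
    S2 = (1 - gamma) * S2 + gamma * np.sum(x_traj[:-1] ** 2)
    theta = S1 / S2                                 # closed-form M-step
    x_ref = x_traj                                  # new reference trajectory
print("estimated theta:", theta)

With the reference trajectory carried between iterations, the sampler can use very few particles (here 15), which is the computational point stressed in the abstract.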

Place, publisher, year, edition, pages
IEEE conference proceedings, 2013, pp. 6274-6278.
Keyword [en]
Maximum likelihood, Stochastic approximation, Conditional particle filter
National Category
Control Engineering; Signal Processing
URN: urn:nbn:se:liu:diva-93459. DOI: 10.1109/ICASSP.2013.6638872. ISI: 000329611506087. OAI: diva2:624904.
Conference: 38th International Conference on Acoustics, Speech, and Signal Processing, Vancouver, Canada, 26-31 May, 2013
Funder: Swedish Research Council, 621-2010-5876
Available from: 2013-06-03 Created: 2013-06-03 Last updated: 2014-02-20
In thesis
1. Particle filters and Markov chains for learning of dynamical systems
2013 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

Sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC) methods provide computational tools for systematic inference and learning in complex dynamical systems, such as nonlinear and non-Gaussian state-space models. This thesis builds upon several methodological advances within these classes of Monte Carlo methods.

Particular emphasis is placed on the combination of SMC and MCMC in so called particle MCMC algorithms. These algorithms rely on SMC for generating samples from the often highly autocorrelated state-trajectory. A specific particle MCMC algorithm, referred to as particle Gibbs with ancestor sampling (PGAS), is suggested. By making use of backward sampling ideas, albeit implemented in a forward-only fashion, PGAS enjoys good mixing even when using seemingly few particles in the underlying SMC sampler. This results in a computationally competitive particle MCMC algorithm. As illustrated in this thesis, PGAS is a useful tool for both Bayesian and frequentistic parameter inference as well as for state smoothing. The PGAS sampler is successfully applied to the classical problem of Wiener system identification, and it is also used for inference in the challenging class of non-Markovian latent variable models.

Many nonlinear models encountered in practice contain some tractable substructure. As a second problem considered in this thesis, we develop Monte Carlo methods capable of exploiting such substructures to obtain more accurate estimators than what is provided otherwise. For the filtering problem, this can be done by using the well known Rao-Blackwellized particle filter (RBPF). The RBPF is analysed in terms of asymptotic variance, resulting in an expression for the performance gain offered by Rao-Blackwellization. Furthermore, a Rao-Blackwellized particle smoother is derived, capable of addressing the smoothing problem in so called mixed linear/nonlinear state-space models. The idea of Rao-Blackwellization is also used to develop an online algorithm for Bayesian parameter inference in nonlinear state-space models with affine parameter dependencies.
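
As a concrete illustration of the Rao-Blackwellization idea discussed above, the following Python sketch (not the thesis code) runs a particle filter on a conditionally linear-Gaussian toy model: the nonlinear substate z_t is sampled with particles while the linear substate l_t is marginalised exactly with one Kalman filter per particle. The model, parameter values, and variable names are illustrative assumptions.

# Sketch only: Rao-Blackwellized particle filter for a toy mixed model.
#   z_t = tanh(z_{t-1}) + v_t,  v_t ~ N(0, qz)   (nonlinear substate)
#   l_t = a * l_{t-1} + w_t,    w_t ~ N(0, ql)   (linear substate)
#   y_t = z_t + c * l_t + e_t,  e_t ~ N(0, r)
import numpy as np

rng = np.random.default_rng(1)
T, N = 100, 200
a, c, qz, ql, r = 0.8, 1.0, 0.1, 0.05, 0.1

# Simulate data from the toy model.
z = np.zeros(T)
l = np.zeros(T)
for t in range(1, T):
    z[t] = np.tanh(z[t - 1]) + rng.normal(0.0, np.sqrt(qz))
    l[t] = a * l[t - 1] + rng.normal(0.0, np.sqrt(ql))
y = z + c * l + rng.normal(0.0, np.sqrt(r), T)

# Particles for the nonlinear substate; Kalman mean/variance per particle
# for the marginalised linear substate.
zp = rng.normal(0.0, 1.0, N)
m = np.zeros(N)
P = np.ones(N)
z_filt = np.zeros(T)

for t in range(1, T):
    # Propagate the nonlinear substate with particles.
    zp = np.tanh(zp) + rng.normal(0.0, np.sqrt(qz), N)
    # Kalman prediction for the linear substate, conditionally on each particle.
    m_pred = a * m
    P_pred = a * a * P + ql
    # Importance weights from the marginal predictive likelihood
    # p(y_t | z_{1:t}, y_{1:t-1}), with the linear substate integrated out.
    S = c * c * P_pred + r
    w = np.exp(-0.5 * (y[t] - zp - c * m_pred) ** 2 / S) / np.sqrt(S)
    w /= w.sum()
    # Kalman measurement update, one filter per particle.
    K = P_pred * c / S
    m = m_pred + K * (y[t] - zp - c * m_pred)
    P = (1.0 - K * c) * P_pred
    z_filt[t] = np.sum(w * zp)
    # Resample every quantity that is indexed by particle.
    idx = rng.choice(N, size=N, p=w)
    zp, m, P = zp[idx], m[idx], P[idx]

print("RMSE of the nonlinear substate estimate:",
      np.sqrt(np.mean((z_filt - z) ** 2)))

Because the weights use the predictive likelihood with the linear substate integrated out, the Monte Carlo variance is concentrated on the low-dimensional nonlinear substate, which is the source of the accuracy gain referred to in the abstract.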

Place, publisher, year, edition, pages
Linköping University Electronic Press, 2013. 42 p.
Linköping Studies in Science and Technology. Dissertations, ISSN 0345-7524 ; 1530
Keyword [en]
Bayesian learning, System identification, Sequential Monte Carlo, Markov chain Monte Carlo, Particle MCMC, Particle filters, Particle smoothers
National Category
Control Engineering; Probability Theory and Statistics
URN: urn:nbn:se:liu:diva-97692. DOI: 10.3384/diss.diva-97692. ISBN: 978-91-7519-559-9 (print).
Public defence
2013-10-25, Visionen, Hus B, Campus Valla, Linköpings universitet, Linköping, 10:15 (English)
Swedish Research Council
Available from: 2013-10-08 Created: 2013-09-19 Last updated: 2013-10-08. Bibliographically approved.

Open Access in DiVA

fulltext (782 kB): FULLTEXT01.pdf, application/pdf

By author/editor
Lindsten, Fredrik
By organisation
Automatic Control; The Institute of Technology

