Backward simulation methods for Monte Carlo statistical inference
Lindsten, Fredrik (Linköping University, Department of Electrical Engineering, Automatic Control; The Institute of Technology)
Schön, Thomas B. (Linköping University, Department of Electrical Engineering, Automatic Control; The Institute of Technology)
2013 (English). In: Foundations and Trends in Machine Learning, ISSN 1935-8237, Vol. 6, no. 1, p. 1-143. Article in journal (Refereed). Published.
Abstract [en]

Monte Carlo methods, in particular those based on Markov chains and on interacting particle systems, are by now tools that are routinely used in machine learning. These methods have had a profound impact on statistical inference in a wide range of application areas where probabilistic models are used. Moreover, there are many algorithms in machine learning which are based on the idea of processing the data sequentially, first in the forward direction and then in the backward direction. In this tutorial we will review a branch of Monte Carlo methods based on the forward-backward idea, referred to as backward simulators. These methods are useful for learning and inference in probabilistic models containing latent stochastic processes. The theory and practice of backward simulation algorithms have undergone significant development in recent years, and the algorithms keep finding new applications. The foundation for these methods is sequential Monte Carlo (SMC). SMC-based backward simulators are capable of addressing smoothing problems in sequential latent variable models, such as general, nonlinear/non-Gaussian state-space models (SSMs). However, we will also show that the underlying backward simulation idea is by no means restricted to SSMs. Furthermore, backward simulation plays an important role in recent developments of Markov chain Monte Carlo (MCMC) methods. Particle MCMC is a systematic way of using SMC within MCMC. In this framework, backward simulation gives us a way to significantly improve the performance of the samplers. We review and discuss several related backward-simulation-based methods for state inference as well as learning of static parameters, using both frequentist and Bayesian approaches.
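To make the backward simulation idea concrete, the following is a minimal sketch of a forward filter / backward simulator (FFBSi-style smoother) on a toy linear-Gaussian state-space model. The model, parameter values, particle counts, and function names are assumptions chosen for illustration only; they are not taken from the tutorial itself.

```python
import numpy as np

# Toy linear-Gaussian SSM (assumed for illustration):
#   x_{t+1} = A * x_t + v_t,  v_t ~ N(0, Q)
#   y_t     = x_t + e_t,      e_t ~ N(0, R)
A, Q, R = 0.5, 0.1, 0.5
rng = np.random.default_rng(seed=1)


def simulate(T):
    """Generate a synthetic state trajectory and observations."""
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = A * x[t - 1] + rng.normal(scale=np.sqrt(Q))
    y = x + rng.normal(scale=np.sqrt(R), size=T)
    return x, y


def bootstrap_pf(y, N):
    """Forward pass: bootstrap particle filter, storing particles and weights."""
    T = len(y)
    X = np.zeros((T, N))   # particles
    W = np.zeros((T, N))   # normalized importance weights
    X[0] = rng.normal(scale=1.0, size=N)
    for t in range(T):
        if t > 0:
            # Multinomial resampling followed by propagation through the dynamics.
            anc = rng.choice(N, size=N, p=W[t - 1])
            X[t] = A * X[t - 1, anc] + rng.normal(scale=np.sqrt(Q), size=N)
        # Weight by the observation likelihood p(y_t | x_t).
        logw = -0.5 * (y[t] - X[t]) ** 2 / R
        w = np.exp(logw - logw.max())
        W[t] = w / w.sum()
    return X, W


def backward_simulate(X, W, M):
    """Backward pass: draw M trajectories approximately from the joint smoothing distribution."""
    T, N = X.shape
    trajs = np.zeros((M, T))
    for m in range(M):
        # Initialize at the final time from the filter weights.
        j = rng.choice(N, p=W[-1])
        trajs[m, -1] = X[-1, j]
        for t in range(T - 2, -1, -1):
            # Backward weights: filter weights reweighted by the transition
            # density f(x_{t+1} | x_t) evaluated at the already-sampled next state.
            logf = -0.5 * (trajs[m, t + 1] - A * X[t]) ** 2 / Q
            bw = W[t] * np.exp(logf - logf.max())
            bw /= bw.sum()
            j = rng.choice(N, p=bw)
            trajs[m, t] = X[t, j]
    return trajs


if __name__ == "__main__":
    x_true, y = simulate(T=100)
    X, W = bootstrap_pf(y, N=500)
    trajs = backward_simulate(X, W, M=50)
    print("smoothed mean vs truth (RMSE):",
          np.sqrt(np.mean((trajs.mean(axis=0) - x_true) ** 2)))
```

For this linear-Gaussian toy model the exact smoothing distribution is of course available in closed form; the point of the sketch is only the forward-then-backward structure, which carries over unchanged to nonlinear/non-Gaussian models.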

Place, publisher, year, edition, pages
2013. Vol. 6, no 1, p. 1-143
Keywords [en]
Bayesian learning, Markov chain Monte Carlo, Nonlinear signal processing, Particle smoothing, Sequential Monte Carlo
National Category
Control Engineering; Probability Theory and Statistics
Identifiers
URN: urn:nbn:se:liu:diva-98294
DOI: 10.1561/2200000045
OAI: oai:DiVA.org:liu-98294
DiVA, id: diva2:654562
Projects
CNDM; CADICS
Funder
Swedish Research Council
Available from: 2013-10-07. Created: 2013-10-07. Last updated: 2013-10-08.
In thesis
1. Particle filters and Markov chains for learning of dynamical systems
2013 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

Sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC) methods provide computational tools for systematic inference and learning in complex dynamical systems, such as nonlinear and non-Gaussian state-space models. This thesis builds upon several methodological advances within these classes of Monte Carlo methods.

Particular emphasis is placed on the combination of SMC and MCMC in so-called particle MCMC algorithms. These algorithms rely on SMC for generating samples from the often highly autocorrelated state trajectory. A specific particle MCMC algorithm, referred to as particle Gibbs with ancestor sampling (PGAS), is suggested. By making use of backward sampling ideas, albeit implemented in a forward-only fashion, PGAS enjoys good mixing even when using seemingly few particles in the underlying SMC sampler. This results in a computationally competitive particle MCMC algorithm. As illustrated in this thesis, PGAS is a useful tool for both Bayesian and frequentist parameter inference as well as for state smoothing. The PGAS sampler is successfully applied to the classical problem of Wiener system identification, and it is also used for inference in the challenging class of non-Markovian latent variable models.

Many nonlinear models encountered in practice contain some tractable substructure. As a second problem considered in this thesis, we develop Monte Carlo methods capable of exploiting such substructures to obtain more accurate estimators than would otherwise be obtained. For the filtering problem, this can be done by using the well-known Rao-Blackwellized particle filter (RBPF). The RBPF is analysed in terms of asymptotic variance, resulting in an expression for the performance gain offered by Rao-Blackwellization. Furthermore, a Rao-Blackwellized particle smoother is derived, capable of addressing the smoothing problem in so-called mixed linear/nonlinear state-space models. The idea of Rao-Blackwellization is also used to develop an online algorithm for Bayesian parameter inference in nonlinear state-space models with affine parameter dependencies.
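The ancestor sampling step that gives PGAS its forward-only flavour can be sketched as follows. This is a minimal, assumed illustration of one conditional particle filter sweep with ancestor sampling on the same toy linear-Gaussian model as in the previous sketch; it is not the thesis' implementation, and all parameter values and function names are illustrative.

```python
import numpy as np

# Same assumed toy model:
#   x_{t+1} = A * x_t + v_t,  v_t ~ N(0, Q);  y_t = x_t + e_t,  e_t ~ N(0, R)
A, Q, R = 0.5, 0.1, 0.5
rng = np.random.default_rng(seed=2)


def cpf_ancestor_sampling(y, x_ref, N):
    """One conditional SMC sweep: particle N-1 is forced to follow the reference
    trajectory x_ref, but its ancestry is resampled (ancestor sampling)."""
    T = len(y)
    X = np.zeros((T, N))
    A_idx = np.zeros((T, N), dtype=int)        # ancestor indices
    W = np.zeros((T, N))

    X[0, :-1] = rng.normal(scale=1.0, size=N - 1)
    X[0, -1] = x_ref[0]                        # condition on the reference
    logw = -0.5 * (y[0] - X[0]) ** 2 / R
    W[0] = np.exp(logw - logw.max()); W[0] /= W[0].sum()

    for t in range(1, T):
        # Resample ancestors for the N-1 "free" particles and propagate them.
        A_idx[t, :-1] = rng.choice(N, size=N - 1, p=W[t - 1])
        X[t, :-1] = A * X[t - 1, A_idx[t, :-1]] + rng.normal(scale=np.sqrt(Q), size=N - 1)

        # Ancestor sampling for the reference particle: draw its ancestor with
        # probability proportional to w_{t-1}^i * f(x_ref[t] | x_{t-1}^i).
        logf = -0.5 * (x_ref[t] - A * X[t - 1]) ** 2 / Q
        as_w = W[t - 1] * np.exp(logf - logf.max()); as_w /= as_w.sum()
        A_idx[t, -1] = rng.choice(N, p=as_w)
        X[t, -1] = x_ref[t]

        logw = -0.5 * (y[t] - X[t]) ** 2 / R
        W[t] = np.exp(logw - logw.max()); W[t] /= W[t].sum()

    # Trace back one trajectory to return as the next reference trajectory.
    k = rng.choice(N, p=W[-1])
    traj = np.zeros(T)
    for t in range(T - 1, -1, -1):
        traj[t] = X[t, k]
        k = A_idx[t, k]
    return traj


if __name__ == "__main__":
    # Iterating this kernel with y fixed yields a Markov chain whose
    # trajectory samples target the joint smoothing distribution.
    T = 100
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = A * x[t - 1] + rng.normal(scale=np.sqrt(Q))
    y = x + rng.normal(scale=np.sqrt(R), size=T)

    x_ref = np.zeros(T)                        # arbitrary initial reference
    for _ in range(200):
        x_ref = cpf_ancestor_sampling(y, x_ref, N=20)
    print("final reference trajectory RMSE vs truth:",
          np.sqrt(np.mean((x_ref - x) ** 2)))
```

Note that only the ancestor index of the conditioned particle is redrawn at each step; the reference states themselves are kept, which is why the sweep needs no explicit backward pass yet still benefits from the backward sampling idea.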

Place, publisher, year, edition, pages
Linköping University Electronic Press, 2013. p. 42
Series
Linköping Studies in Science and Technology. Dissertations, ISSN 0345-7524 ; 1530
Keywords
Bayesian learning, System identification, Sequential Monte Carlo, Markov chain Monte Carlo, Particle MCMC, Particle filters, Particle smoothers
National Category
Control Engineering; Probability Theory and Statistics
Identifiers
urn:nbn:se:liu:diva-97692 (URN)
10.3384/diss.diva-97692 (DOI)
978-91-7519-559-9 (ISBN)
Public defence
2013-10-25, Visionen, Hus B, Campus Valla, Linköpings universitet, Linköping, 10:15 (English)
Projects
CNDM; CADICS
Funder
Swedish Research Council
Available from: 2013-10-08. Created: 2013-09-19. Last updated: 2024-01-08. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text

Authority records

Lindsten, Fredrik; Schön, Thomas B.
