Speeding up MCMC by Delayed Acceptance and Data Subsampling
Linköping University, Department of Computer and Information Science, Statistics. Linköping University, Faculty of Science & Engineering. Research Division, Sveriges Riksbank, Stockholm, Sweden.
Discipline of Business Analytics, University of Sydney, Camperdown NSW, Australia.
Linköping University, Department of Computer and Information Science, The Division of Statistics and Machine Learning. Linköping University, Faculty of Science & Engineering. (Division of Statistics and Machine Learning (STIMA)) ORCID iD: 0000-0003-2786-2519
Australian School of Business, University of New South Wales, Sydney NSW, Australia.
2017 (English). In: Journal of Computational and Graphical Statistics, ISSN 1061-8600, E-ISSN 1537-2715. Article in journal (Refereed). Published.
Abstract [en]

The complexity of the Metropolis–Hastings (MH) algorithm arises from the requirement of a likelihood evaluation for the full dataset in each iteration. One proposed solution speeds up the algorithm by a delayed acceptance approach, where the acceptance decision proceeds in two stages. In the first stage, an estimate of the likelihood based on a random subsample determines whether the draw is likely to be accepted and, if so, the second stage uses the full-data likelihood to decide on final acceptance. Evaluating the full-data likelihood is thus avoided for draws that are unlikely to be accepted. We propose a more precise likelihood estimator that incorporates auxiliary information about the full-data likelihood while operating only on a sparse set of the data. We prove that the resulting delayed acceptance MH is more efficient. The caveat of this approach is that the full dataset still needs to be evaluated in the second stage. We therefore propose to substitute this evaluation with an estimate and to construct a state-dependent approximation of it for use in the first stage. This results in an algorithm that (i) can use a smaller subsample size m by leveraging recent advances in pseudo-marginal MH (PMMH) and (ii) is provably within O(m^-2) of the true posterior.
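
The two-stage screening idea can be illustrated with a minimal sketch. The code below is a toy illustration, not the authors' method: it uses a plain simple-random-sampling estimate of the log-likelihood for an assumed Gaussian toy model in the first stage and still evaluates the full data in the second stage, whereas the paper's estimator incorporates auxiliary (control-variate style) information and replaces the second-stage full-data evaluation with a pseudo-marginal estimate. All names and parameter values here are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

# Toy setting (assumed for illustration): y_i ~ N(theta, 1), prior theta ~ N(0, 10^2).
n, m = 10_000, 200                      # full dataset size and subsample size, m << n
y = rng.normal(1.0, 1.0, size=n)

def loglik_full(theta):
    # Expensive full-data log-likelihood (up to an additive constant).
    return -0.5 * np.sum((y - theta) ** 2)

def loglik_estimate(theta, idx):
    # Cheap unbiased estimate from a simple random subsample of size m.
    return -0.5 * (n / m) * np.sum((y[idx] - theta) ** 2)

def log_prior(theta):
    return -0.5 * theta ** 2 / 100.0

theta, step, draws = 0.0, 0.02, []
for _ in range(5_000):
    prop = theta + step * rng.normal()
    idx = rng.choice(n, size=m, replace=False)      # fresh subsample each iteration

    # Stage 1: screen the proposal with the subsampled surrogate ratio.
    log_r_tilde = (loglik_estimate(prop, idx) + log_prior(prop)
                   - loglik_estimate(theta, idx) - log_prior(theta))
    if np.log(rng.uniform()) >= log_r_tilde:
        draws.append(theta)                         # early rejection: full data never evaluated
        continue

    # Stage 2: correct with the full-data ratio divided by the surrogate ratio,
    # so the chain still targets the exact posterior.
    log_r = (loglik_full(prop) + log_prior(prop)
             - loglik_full(theta) - log_prior(theta))
    if np.log(rng.uniform()) < log_r - log_r_tilde:
        theta = prop
    draws.append(theta)

print("posterior mean estimate:", np.mean(draws[1_000:]))

Reusing the realized surrogate ratio in the stage-2 correction is what keeps this sketch targeting the exact posterior, while most rejections occur in stage 1 without touching the full dataset. The paper's contribution is a much sharper first-stage estimator and a pseudo-marginal second stage that avoids the full-data evaluation entirely, at the cost of being within O(m^-2) of the true posterior.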

Place, publisher, year, edition, pages
Taylor & Francis Group, 2017.
Keyword [en]
Bayesian inference, Delayed acceptance MCMC, Large data, Markov chain Monte Carlo, Survey sampling
National Category
Probability Theory and Statistics
Identifiers
URN: urn:nbn:se:liu:diva-140873
DOI: 10.1080/10618600.2017.1307117
Scopus ID: 2-s2.0-85026419157
OAI: oai:DiVA.org:liu-140873
DiVA: diva2:1141080
Funder
Swedish Foundation for Strategic Research, RIT 15-0097
Available from: 2017-09-13. Created: 2017-09-13. Last updated: 2017-09-20. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text
Scopus

Search in DiVA

By author/editor
Quiroz, Matias; Villani, Mattias
By organisation
Statistics; Faculty of Science & Engineering; The Division of Statistics and Machine Learning
In the same journal
Journal of Computational and Graphical Statistics
On the subject
Probability Theory and Statistics
