The empirical KL-measure of MCMC convergence
Manuscript (preprint) (Other academic)
A new measure, based on comparing the empirical distributions of subsequences or parallel runs with that of the full sequence of a Markov chain Monte Carlo simulation, is proposed as a criterion of stability or convergence. The measure is also put forward as a loss function for optimising the design of a Markov chain, including its proposal function. The comparison of empirical distributions is based on a Kullback-Leibler (KL) type distance over value sets (cells) defined by the output data. The singularity problem for such a measure is removed in a simple way.
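As a rough illustration only (not the manuscript's exact construction), the Python sketch below computes a KL-type distance of this kind between a subsequence and the full output. The equal-mass quantile cells, the pseudocount used to remove the singularity, and the name empirical_kl are all assumptions made for the example.

import numpy as np

def empirical_kl(sub, full, n_cells=20):
    # interior cell boundaries: equal-mass quantile cells defined by
    # the full output (value sets defined by the output data)
    edges = np.quantile(full, np.linspace(0.0, 1.0, n_cells + 1))[1:-1]
    # cell counts for the subsequence and the full sequence; the small
    # pseudocount keeps every cell frequency positive (an assumed
    # simple fix for the singularity problem)
    p = np.bincount(np.searchsorted(edges, sub), minlength=n_cells) + 0.5
    q = np.bincount(np.searchsorted(edges, full), minlength=n_cells) + 0.5
    p, q = p / p.sum(), q / q.sum()
    # KL-type distance between the two empirical cell distributions
    return float(np.sum(p * np.log(p / q)))

# usage: toy AR(1) chain; compare its second half against the full run
rng = np.random.default_rng(0)
x = np.empty(10_000)
x[0] = 0.0
for t in range(1, x.size):
    x[t] = 0.9 * x[t - 1] + rng.standard_normal()
print(empirical_kl(x[x.size // 2:], x))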
The leading term in a series expansion of the measure admits an interpretation in terms of the relative uncertainty of cell frequencies, measured by their average coefficient of variation. The validity of the leading term is studied by simulation in two analytically tractable cases with Markov dependency and selected acceptance rates. The agreement between the leading term and the KL-measure is close, in particular when the simulations are extensive enough to give stable results. Comparisons with established criteria turn out favourably in the examples studied.
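A minimal sketch of the kind of expansion the abstract refers to, assuming the measure behaves to second order like the classical KL divergence (the manuscript's exact derivation and the precise form of its coefficient-of-variation summary may differ):

% Second-order (chi-squared) expansion of a KL-type distance between
% empirical cell frequencies p_i and reference frequencies q_i
\[
  D(p \,\|\, q) \;=\; \sum_i p_i \log\frac{p_i}{q_i}
  \;\approx\; \frac{1}{2} \sum_i \frac{(p_i - q_i)^2}{q_i}.
\]
% With E[p_i] = q_i, taking expectations gives
\[
  \mathbb{E}\, D(p \,\|\, q)
  \;\approx\; \frac{1}{2} \sum_i q_i \,\frac{\operatorname{Var}(p_i)}{q_i^{2}}
  \;=\; \frac{1}{2} \sum_i q_i \, c_i^{2},
\]
% i.e. half a weighted average of the squared coefficients of variation
% c_i = sd(p_i)/q_i of the cell frequencies, which matches the abstract's
% reading of the leading term as relative uncertainty of cell frequencies.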
Keywords: Convergence diagnostics, Kullback-Leibler distance, proposal distribution, parallel chains, single chain
Identifiers: URN: urn:nbn:se:liu:diva-86859; OAI: oai:DiVA.org:liu-86859; DiVA: diva2:582901