Approximate inference in probabilistic graphical models (PGMs) can be grouped into deterministic methods and Monte-Carlo-based methods. The former can often provide accurate and rapid inferences, but are typically associated with biases that are hard to quantify. The latter enjoy asymptotic consistency, but can suffer from high computational costs. In this paper we present a way of bridging the gap between deterministic and stochastic inference. Specifically, we suggest an efficient sequential Monte Carlo (SMC) algorithm for PGMs which can leverage the output from deterministic inference methods. While generally applicable, we show explicitly how this can be done with loopy belief propagation, expectation propagation, and Laplace approximations. The resulting algorithm can be viewed as a post-correction of the biases associated with these methods and, indeed, numerical results show clear improvements over the baseline deterministic methods as well as over "plain" SMC.
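To illustrate the core idea, below is a minimal, hypothetical sketch (not the paper's actual algorithm or model): an SMC sampler on a toy chain-structured PGM where a per-node Gaussian approximation, standing in for the output of a deterministic method such as EP, loopy BP, or a Laplace approximation, is used as the proposal distribution, and the importance weights post-correct its bias. The model, the proposal choice `q_mean`/`q_std`, and all names are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
T, N = 10, 1000  # chain length, number of particles

# Unnormalized node potential of a toy chain PGM: Gaussian transition
# times a nonlinear observation likelihood (illustrative, not the paper's model).
def log_potential(x_prev, x, y):
    log_trans = norm.logpdf(x, loc=0.9 * x_prev, scale=1.0)
    log_obs = norm.logpdf(y, loc=np.tanh(x), scale=0.5)
    return log_trans + log_obs

# Simulate synthetic data from the toy model
x_true = np.zeros(T)
y = np.zeros(T)
for t in range(T):
    x_true[t] = 0.9 * (x_true[t - 1] if t > 0 else 0.0) + rng.normal()
    y[t] = np.tanh(x_true[t]) + 0.5 * rng.normal()

# Stand-in for a deterministic method's output: per-node Gaussian
# approximations of the marginals (assumed given in practice, e.g. from
# EP, loopy BP, or a Laplace approximation; here a crude heuristic).
q_mean, q_std = 0.9 * np.tanh(y), 1.0

x_prev = np.zeros(N)
logZ = 0.0
for t in range(T):
    # Propose particles from the deterministic approximation
    x = q_mean[t] + q_std * rng.standard_normal(N)
    # Importance weights correct the proposal/target mismatch,
    # i.e. post-correct the bias of the deterministic approximation
    logw = log_potential(x_prev, x, y[t]) - norm.logpdf(x, q_mean[t], q_std)
    m = logw.max()
    w = np.exp(logw - m)
    logZ += m + np.log(w.mean())  # running log-normalizing-constant estimate
    w /= w.sum()
    # Multinomial resampling
    idx = rng.choice(N, size=N, p=w)
    x_prev = x[idx]

print("SMC estimate of log normalizing constant:", logZ)
```

A better deterministic approximation yields proposals closer to the target marginals, lower-variance weights, and hence a more accurate SMC estimate; even a crude approximation, however, is corrected asymptotically by the weighting-and-resampling steps.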
Funding: Swedish Foundation for Strategic Research (SSF), via the project Probabilistic Modeling and Inference for Machine Learning [ICA16-0015]; Swedish Research Council (VR), via the projects Learning of Large-Scale Probabilistic Dynamical Models [2016-04278] and NewLEADS - New Directions in Learning Dynamical Systems [621-2016-06079]; Academy of Finland [274740, 284513, 312605].