Search publications in DiVA
1 - 7 of 7
  • 1.
    Lindsten, Fredrik
    Uppsala University, Sweden.
    Johansen, A. M.
    University of Warwick, England.
    Andersson Naesseth, Christian
    Linköpings universitet, Institutionen för systemteknik, Reglerteknik. Linköpings universitet, Tekniska fakulteten.
    Kirkpatrick, B.
    Intrepid Net Comp, MT USA.
    Schön, T. B.
    Uppsala University, Sweden.
    Aston, J. A. D.
    University of Cambridge, England.
    Bouchard-Cote, A.
    University of British Columbia, Canada.
    Divide-and-Conquer With Sequential Monte Carlo (2017). In: Journal of Computational and Graphical Statistics, ISSN 1061-8600, E-ISSN 1537-2715, Vol. 26, no. 2, pp. 445-458. Journal article (Refereed)
    Abstract [en]

    We propose a novel class of Sequential Monte Carlo (SMC) algorithms, appropriate for inference in probabilistic graphical models. This class of algorithms adopts a divide-and-conquer approach based upon an auxiliary tree-structured decomposition of the model of interest, turning the overall inferential task into a collection of recursively solved subproblems. The proposed method is applicable to a broad class of probabilistic graphical models, including models with loops. Unlike a standard SMC sampler, the proposed divide-and-conquer SMC employs multiple independent populations of weighted particles, which are resampled, merged, and propagated as the method progresses. We illustrate empirically that this approach can outperform standard methods in terms of the accuracy of the posterior expectation and marginal likelihood approximations. Divide-and-conquer SMC also opens up novel parallel implementation options and the possibility of concentrating the computational effort on the most challenging subproblems. We demonstrate its performance on a Markov random field and on a hierarchical logistic regression problem. Supplementary materials including proofs and additional numerical results are available online.
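
A minimal, heavily simplified sketch of the divide-and-conquer SMC idea summarized in the entry above, assuming a toy two-variable target: two independent particle populations handle the leaf subproblems and are then merged (reweighted by the coupling factor and resampled) at the root. The toy target, proposals, and all function names are illustrative assumptions, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)
N = 2000  # particles per leaf population


def smc_leaf(log_target, sample_proposal, log_proposal):
    """One leaf subproblem, reduced to a single importance-sampling step."""
    x = sample_proposal(N)
    logw = log_target(x) - log_proposal(x)
    return x, logw


def merge(pop_a, pop_b, log_coupling):
    """Merge two independent weighted populations at an internal tree node:
    pair particles across populations, multiply their weights, add the
    coupling factor between the subproblems, then resample."""
    (xa, lwa), (xb, lwb) = pop_a, pop_b
    logw = lwa + lwb + log_coupling(xa, xb)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)            # multinomial resampling
    return np.column_stack([xa[idx], xb[idx]])


# Toy target: p(x1, x2) proportional to N(x1; 0, 1) N(x2; 0, 1) exp(-(x1 - x2)^2 / 2),
# decomposed into two Gaussian leaves that are coupled only at the root.
leaf = lambda: smc_leaf(lambda x: -0.5 * x**2,
                        lambda n: rng.normal(0.0, 2.0, n),
                        lambda x: -x**2 / 8.0)
root = merge(leaf(), leaf(), lambda a, b: -0.5 * (a - b) ** 2)
print("posterior mean estimate:", root.mean(axis=0))   # close to (0, 0) by symmetry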

  • 2.
    Magnusson, Måns
    Linköpings universitet, Institutionen för datavetenskap, Statistik och maskininlärning. Linköpings universitet, Tekniska fakulteten.
    Jonsson, Leif
    Linköpings universitet, Institutionen för datavetenskap. Linköpings universitet, Tekniska fakulteten. Ericsson Res, Sweden.
    Villani, Mattias
    Linköpings universitet, Institutionen för datavetenskap, Statistik och maskininlärning. Linköpings universitet, Tekniska fakulteten.
    Broman, David
    School of Information and Communication Technology, Royal Institute of Technology KTH, Stockholm, Sweden.
    Sparse Partially Collapsed MCMC for Parallel Inference in Topic Models (2018). In: Journal of Computational and Graphical Statistics, ISSN 1061-8600, E-ISSN 1537-2715, Vol. 27, no. 2, pp. 449-463. Journal article (Refereed)
    Abstract [en]

    Topic models, and more specifically the class of Latent Dirichlet Allocation (LDA), are widely used for probabilistic modeling of text. MCMC sampling from the posterior distribution is typically performed using a collapsed Gibbs sampler. We propose a parallel sparse partially collapsed Gibbs sampler and compare its speed and efficiency to state-of-the-art samplers for topic models on five well-known text corpora of differing sizes and properties. In particular, we propose and compare two different strategies for sampling the parameter block with latent topic indicators. The experiments show that the increase in statistical inefficiency from only partial collapsing is smaller than commonly assumed, and can be more than compensated by the speedup from parallelization and sparsity on larger corpora. We also prove that the partially collapsed samplers scale well with the size of the corpus. The proposed algorithm is fast, efficient, exact, and can be used in more modeling situations than the ordinary collapsed sampler.
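
As a rough illustration of the partially collapsed Gibbs scheme described in the abstract above, the sketch below alternates between sampling topic indicators given the topic-word matrix Phi (with the document-topic proportions integrated out) and drawing Phi from its Dirichlet conditional. The sparsity exploitation and the parallel partitioning over documents that the paper is actually about are omitted; the toy corpus, priors, and variable names are assumptions.

import numpy as np

rng = np.random.default_rng(1)
K, V, alpha, beta = 5, 50, 0.1, 0.01                   # topics, vocabulary size, priors
docs = [rng.integers(0, V, size=rng.integers(20, 60)) for _ in range(30)]

# Random initialisation of topic indicators and of the topic-word matrix Phi.
z = [rng.integers(0, K, size=len(d)) for d in docs]
phi = rng.dirichlet(np.full(V, beta), size=K)           # K x V


def sample_indicators(doc, zd, phi):
    """z_i | z_{-i}, Phi (theta collapsed): p(k) proportional to (n_dk^{-i} + alpha) * phi[k, w]."""
    ndk = np.bincount(zd, minlength=K).astype(float)
    for i, w in enumerate(doc):
        ndk[zd[i]] -= 1.0
        p = (ndk + alpha) * phi[:, w]
        zd[i] = rng.choice(K, p=p / p.sum())
        ndk[zd[i]] += 1.0
    return zd


for it in range(50):
    # Given Phi, documents are conditionally independent: this loop is what the
    # paper parallelises across cores.
    for d, doc in enumerate(docs):
        z[d] = sample_indicators(doc, z[d], phi)
    # Phi | z: Dirichlet draws from the topic-word counts.
    nkw = np.zeros((K, V))
    for doc, zd in zip(docs, z):
        np.add.at(nkw, (zd, doc), 1.0)
    phi = np.array([rng.dirichlet(row) for row in nkw + beta])

print("top words for topic 0:", np.argsort(phi[0])[-5:])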

  • 3.
    Nott, David J.
    National University of Singapore.
    Tan, Siew Li
    National University of Singapore.
    Villani, Mattias
    Linköpings universitet, Institutionen för datavetenskap, Statistik. Linköpings universitet, Tekniska högskolan.
    Kohn, Robert
    University of New South Wales, Sydney, Australia.
    Regression density estimation with variational methods and stochastic approximation (2012). In: Journal of Computational and Graphical Statistics, ISSN 1061-8600, E-ISSN 1537-2715, Vol. 21, no. 3, pp. 797-820. Journal article (Refereed)
    Abstract [en]

    Regression density estimation is the problem of flexibly estimating a response distribution as a function of covariates. An important approach to regression density estimation uses finite mixture models and our article considers flexible mixtures of heteroscedastic regression (MHR) models where the response distribution is a normal mixture, with the component means, variances and mixture weights all varying as a function of covariates. Our article develops fast variational approximation methods for inference. Our motivation is that alternative computationally intensive MCMC methods for fitting mixture models are difficult to apply when it is desired to fit models repeatedly in exploratory analysis and model choice. Our article makes three contributions. First, a variational approximation for MHR models is described where the variational lower bound is in closed form. Second, the basic approximation can be improved by using stochastic approximation methods to perturb the initial solution to attain higher accuracy. Third, the advantages of our approach for model choice and evaluation compared to MCMC based approaches are illustrated. These advantages are particularly compelling for time series data where repeated refitting for one step ahead prediction in model choice and diagnostics and in rolling window computations is very common. Supplemental materials for the article are available online.

    Download full text (pdf)
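
The entry above concerns mixtures of heteroscedastic regressions (MHR), where component means, variances, and mixture weights all depend on covariates. The snippet below only evaluates one plausible parameterization of that density (linear means, log-linear variances, softmax weights); the paper's closed-form variational lower bound and its stochastic-approximation refinement are not reproduced, and the parameter values are made up for illustration.

import numpy as np


def mhr_logpdf(y, x, betas, gammas, deltas):
    """log p(y | x) for an MHR model with K components.

    betas, gammas, deltas: (K, p) coefficient arrays for the component means,
    log-variances, and (softmax) mixture weights, respectively."""
    mu = betas @ x                                  # (K,) component means
    var = np.exp(gammas @ x)                        # (K,) component variances
    logits = deltas @ x
    logw = logits - np.logaddexp.reduce(logits)     # log mixture weights
    log_comp = -0.5 * (np.log(2 * np.pi * var) + (y - mu) ** 2 / var)
    return np.logaddexp.reduce(logw + log_comp)


# Tiny usage example with K = 2 components and an intercept plus one covariate.
x = np.array([1.0, 0.3])
betas = np.array([[0.0, 1.0], [2.0, -1.0]])
gammas = np.array([[-1.0, 0.5], [0.0, 0.0]])
deltas = np.array([[0.0, 0.0], [1.0, -2.0]])
print(mhr_logpdf(0.5, x, betas, gammas, deltas))
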
  • 4.
    Quiroz, Matias
    Linköpings universitet, Institutionen för datavetenskap, Statistik. Linköpings universitet, Tekniska fakulteten. Research Division, Sveriges Riksbank, Stockholm, Sweden.
    Tran, Minh-Ngoc
    Discipline of Business Analytics, University of Sydney, Camperdown NSW, Australia.
    Villani, Mattias
    Linköpings universitet, Institutionen för datavetenskap, Statistik och maskininlärning. Linköpings universitet, Tekniska fakulteten.
    Kohn, Robert
    Australian School of Business, University of New South Wales, Sydney NSW, Australia.
    Speeding up MCMC by Delayed Acceptance and Data Subsampling (2018). In: Journal of Computational and Graphical Statistics, ISSN 1061-8600, E-ISSN 1537-2715, Vol. 27, no. 1, pp. 12-22. Journal article (Refereed)
    Abstract [en]

    The complexity of the Metropolis–Hastings (MH) algorithm arises from the requirement of a likelihood evaluation for the full dataset in each iteration. One solution has been proposed to speed up the algorithm by a delayed acceptance approach where the acceptance decision proceeds in two stages. In the first stage, an estimate of the likelihood based on a random subsample determines if it is likely that the draw will be accepted and, if so, the second stage uses the full data likelihood to decide upon final acceptance. Evaluating the full data likelihood is thus avoided for draws that are unlikely to be accepted. We propose a more precise likelihood estimator that incorporates auxiliary information about the full data likelihood while only operating on a sparse set of the data. We prove that the resulting delayed acceptance MH is more efficient. The caveat of this approach is that the full dataset needs to be evaluated in the second stage. We therefore propose to substitute this evaluation by an estimate and construct a state-dependent approximation thereof to use in the first stage. This results in an algorithm that (i) can use a smaller subsample m by leveraging on recent advances in Pseudo-Marginal MH (PMMH) and (ii) is provably within O(m^-2) of the true posterior.

    Download full text (pdf)
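
A schematic sketch of the delayed-acceptance Metropolis-Hastings mechanism described in the abstract above: a cheap surrogate built from a data subsample screens proposals in a first stage, and only promoted proposals are checked against the full-data posterior, so the chain still targets the exact posterior. The paper's refinements (the control-variate likelihood estimator, the state-dependent first-stage approximation, and the pseudo-marginal second stage) are not included; the toy model and all names are assumptions.

import numpy as np

rng = np.random.default_rng(2)


def delayed_acceptance_step(theta, loglik_full, loglik_surrogate, log_prior, step=0.02):
    theta_prop = theta + step * rng.normal()        # symmetric random-walk proposal

    # Stage 1: screen with the cheap surrogate posterior ratio.
    log_surr = (loglik_surrogate(theta_prop) + log_prior(theta_prop)
                - loglik_surrogate(theta) - log_prior(theta))
    if np.log(rng.uniform()) >= log_surr:
        return theta                                # rejected without a full-data evaluation

    # Stage 2: correct with (full ratio) / (surrogate ratio); this keeps the
    # exact posterior as the invariant distribution.
    log_full = (loglik_full(theta_prop) + log_prior(theta_prop)
                - loglik_full(theta) - log_prior(theta))
    return theta_prop if np.log(rng.uniform()) < log_full - log_surr else theta


# Toy example: posterior for the mean of Gaussian data; the surrogate is a
# fixed random subsample scaled up to the full sample size.
data = rng.normal(1.0, 1.0, size=10_000)
sub = rng.choice(data, size=200, replace=False)
scale = data.size / sub.size

loglik_full = lambda th: -0.5 * np.sum((data - th) ** 2)
loglik_surr = lambda th: -0.5 * scale * np.sum((sub - th) ** 2)
log_prior = lambda th: -0.5 * th ** 2 / 100.0

theta, draws = 0.0, []
for _ in range(3_000):
    theta = delayed_acceptance_step(theta, loglik_full, loglik_surr, log_prior)
    draws.append(theta)
print("posterior mean estimate:", np.mean(draws[500:]))   # should be close to 1.0
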
  • 5.
    Quiroz, Matias
    Univ Technol Sydney, Australia; Sveriges Riksbank, Sweden.
    Tran, Minh-Ngoc
    Univ Sydney, Australia.
    Villani, Mattias
    Linköpings universitet, Institutionen för datavetenskap, Statistik och maskininlärning. Linköpings universitet, Filosofiska fakulteten. Stockholm Univ, Sweden.
    Kohn, Robert
    Univ New South Wales, Australia.
    Dang, Khue-Dung
    Univ Technol Sydney, Australia.
    The Block-Poisson Estimator for Optimally Tuned Exact Subsampling MCMC (2021). In: Journal of Computational and Graphical Statistics, ISSN 1061-8600, E-ISSN 1537-2715, Vol. 30, no. 4, pp. 877-888. Journal article (Refereed)
    Abstract [en]

    Speeding up Markov chain Monte Carlo (MCMC) for datasets with many observations by data subsampling has recently received considerable attention. A pseudo-marginal MCMC method is proposed that estimates the likelihood by data subsampling using a block-Poisson estimator. The estimator is a product of Poisson estimators, allowing us to update a single block of subsample indicators in each MCMC iteration so that a desired correlation is achieved between the logs of successive likelihood estimates. This is important since pseudo-marginal MCMC with positively correlated likelihood estimates can use substantially smaller subsamples without adversely affecting the sampling efficiency. The block-Poisson estimator is unbiased but not necessarily positive, so the algorithm runs the MCMC on the absolute value of the likelihood estimator and uses an importance sampling correction to obtain consistent estimates of the posterior mean of any function of the parameters. Our article derives guidelines to select the optimal tuning parameters for our method and shows that it compares very favorably to regular MCMC without subsampling, and to two other recently proposed exact subsampling approaches in the literature. Supplementary materials for this article are available online.
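
The block-Poisson estimator in the entry above is built from products of Poisson estimators of exp(psi). The sketch below shows only that basic single-block building block, under the assumption that unbiased estimates of psi (e.g. from data subsampling) are available; the blocking with persistent random numbers, the sign handling, and the importance-sampling correction described in the abstract are omitted.

import numpy as np

rng = np.random.default_rng(3)


def poisson_estimator(draw_psi_hat, a, lam):
    """Unbiased estimator of exp(psi) given a stream of unbiased estimates of
    psi: E[prod_{j<=J} (psi_hat_j - a) / lam] = exp(psi - a - lam) for
    J ~ Poisson(lam), hence the exp(a + lam) factor. Note the result can be
    negative, which is why the paper runs the MCMC on its absolute value."""
    J = rng.poisson(lam)
    prod = 1.0
    for _ in range(J):
        prod *= (draw_psi_hat() - a) / lam
    return np.exp(a + lam) * prod


# Toy check: psi = 2 observed only through noisy unbiased estimates.
psi, noise_sd = 2.0, 0.5
draw = lambda: psi + noise_sd * rng.normal()
estimates = [poisson_estimator(draw, a=1.0, lam=3.0) for _ in range(100_000)]
print("mean estimate:", np.mean(estimates), "  exp(psi):", np.exp(psi))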

  • 6.
    Sidén, Per
    Linköpings universitet, Institutionen för datavetenskap, Statistik och maskininlärning. Linköpings universitet, Filosofiska fakulteten.
    Lindgren, Finn
    Univ Edinburgh, Scotland.
    Bolin, David
    Chalmers, Sweden; Univ Gothenburg, Sweden.
    Villani, Mattias
    Linköpings universitet, Institutionen för datavetenskap, Statistik och maskininlärning. Linköpings universitet, Filosofiska fakulteten.
    Efficient Covariance Approximations for Large Sparse Precision Matrices (2018). In: Journal of Computational and Graphical Statistics, ISSN 1061-8600, E-ISSN 1537-2715, Vol. 27, no. 4, pp. 898-909. Journal article (Refereed)
    Abstract [en]

    The use of sparse precision (inverse covariance) matrices has become popular because they allow for efficient algorithms for joint inference in high-dimensional models. Many applications require the computation of certain elements of the covariance matrix, such as the marginal variances, which may be nontrivial to obtain when the dimension is large. This article introduces a fast Rao-Blackwellized Monte Carlo sampling-based method for efficiently approximating selected elements of the covariance matrix. The variance and confidence bounds of the approximations can be precisely estimated without additional computational costs. Furthermore, a method that iterates over subdomains is introduced, and is shown to additionally reduce the approximation errors to practically negligible levels in an application on functional magnetic resonance imaging data. Both methods have low memory requirements, which is typically the bottleneck for competing direct methods.

    Download full text (pdf)
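
A small dense toy version of the Rao-Blackwellized Monte Carlo estimator of marginal variances sketched in the abstract above, using the identity Var(x_i) = 1/Q_ii + Var(E[x_i | x_-i]) and samples x ~ N(0, Q^{-1}). The sparse linear algebra, the confidence bounds, and the iterative subdomain correction from the paper are left out, and the precision matrix is an arbitrary example.

import numpy as np

rng = np.random.default_rng(4)

# Example precision matrix: 1D random-walk structure plus a nugget (positive definite).
n = 200
Q = 2.1 * np.eye(n) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)

# Draw samples x ~ N(0, Q^{-1}) via the Cholesky factor Q = L L^T, x = L^{-T} z.
L = np.linalg.cholesky(Q)
n_samples = 100
X = np.linalg.solve(L.T, rng.normal(size=(n, n_samples)))

# Rao-Blackwellized estimate of the marginal variances:
# E[x_i | x_{-i}] = x_i - (Q x)_i / Q_ii  and  Var(x_i | x_{-i}) = 1 / Q_ii.
d = np.diag(Q)
cond_mean = X - (Q @ X) / d[:, None]
var_rb = 1.0 / d + np.mean(cond_mean ** 2, axis=1)

var_exact = np.diag(np.linalg.inv(Q))               # feasible only in this toy setting
print("max relative error:", np.max(np.abs(var_rb - var_exact) / var_exact))
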
  • 7.
    Svahn, Caroline
    Linköpings universitet, Institutionen för datavetenskap, Statistik och maskininlärning. Linköpings universitet, Filosofiska fakulteten.
    Sysoev, Oleg
    Linköpings universitet, Institutionen för datavetenskap, Statistik och maskininlärning. Linköpings universitet, Filosofiska fakulteten.
    Selective Imputation of Covariates in High Dimensional Censored Data (2022). In: Journal of Computational and Graphical Statistics, ISSN 1061-8600, E-ISSN 1537-2715, Vol. 31, no. 4, pp. 1397-1405. Journal article (Refereed)
    Abstract [en]

    Efficient modeling of censored data, that is, data which are restricted by some detection limit or truncation, is important for many applications. Ignoring the censoring can be problematic as valuable information may be missing, and restoration of these censored values may significantly improve the quality of models. There are many scenarios where one may encounter censored data: survival data, interval-censored data, or data with a lower limit of detection. Strategies to handle censored data are plentiful; however, little effort has been made to handle high-dimensional censored data. In this article, we present a selective multiple imputation approach for predictive modeling when a large number of covariates are subject to censoring. Our method allows for iterative, subject-wise selection of covariates to impute in order to achieve a fast and accurate predictive model. The algorithm furthermore selects values for imputation which are likely to provide important information if imputed. In contrast to previously proposed methods, our approach is fully nonparametric and therefore very flexible. We demonstrate that, in comparison to previous work, our model achieves faster execution and often comparable accuracy in a simulated example as well as when predicting signal strength in radio network data. Supplementary materials for this article are available online.

    Download full text (pdf)
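
Purely as a toy illustration of the selective idea in the entry above (impute only those censored covariates that look most useful for prediction), the sketch below ranks left-censored columns by a crude correlation score and fills in only the top-ranked ones. The ranking rule, the half-normal tail fill, and the simulated data are all placeholders for the paper's nonparametric, subject-wise procedure, not the authors' algorithm.

import numpy as np

rng = np.random.default_rng(5)

# Simulated data: p covariates, a response, and left-censoring of each column
# at its 30th percentile (only "value < limit" is known for censored entries).
n, p = 500, 40
X_true = rng.normal(size=(n, p))
y = 2.0 * X_true[:, 0] + X_true[:, 1] - X_true[:, 2] + 0.5 * rng.normal(size=n)
limits = np.quantile(X_true, 0.30, axis=0)
censored = X_true < limits
X_obs = np.where(censored, limits, X_true)

# Rank columns by |corr| between their uncensored part and the response, as a
# cheap stand-in for the paper's importance-driven selection.
scores = np.array([
    abs(np.corrcoef(X_obs[~censored[:, j], j], y[~censored[:, j]])[0, 1])
    for j in range(p)
])
selected = np.argsort(scores)[::-1][:5]             # impute only the 5 highest-ranked columns
print("columns selected for imputation:", selected)

# Crude fill for the selected columns only: draw censored entries from a
# half-normal tail below the detection limit; unselected columns stay at the limit.
X_imp = X_obs.copy()
for j in selected:
    mask = censored[:, j]
    X_imp[mask, j] = limits[j] - np.abs(rng.normal(0.0, 1.0, mask.sum()))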