On approximations and computations in probabilistic classification and in learning of graphical models
Ekdahl, Magnus
Linköpings universitet, Matematiska institutionen, Matematisk statistik. Linköpings universitet, Tekniska högskolan.
2007 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Model based probabilistic classification is heavily used in data mining and machine learning. For computational learning, however, these models may require approximation steps. One popular approximation in classification is to model the class conditional densities by factorization, which in the independence case is usually called the ’Naïve Bayes’ classifier. In general, probabilistic independence cannot model all distributions exactly, and little has been published on how much a discrete distribution can differ from the independence assumption. In this dissertation the approximation quality of factorizations is analyzed in two articles.
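
The factorization step can be made concrete with a small numerical sketch (a toy example; the 2x2 table below is invented and is not taken from the thesis): it builds a class-conditional distribution over two binary features, forms the Naïve Bayes product-of-marginals approximation, and measures how far the two differ in the maximum norm.

    # Toy sketch: distance between a small discrete class-conditional
    # distribution and its Naive Bayes (independence) factorization.
    # The joint table is invented purely for illustration.
    import numpy as np

    joint = np.array([[0.30, 0.10],   # P(X1=i, X2=j | class), a 2x2 toy example
                      [0.10, 0.50]])
    p1 = joint.sum(axis=1)            # marginal of X1 given the class
    p2 = joint.sum(axis=0)            # marginal of X2 given the class
    naive = np.outer(p1, p2)          # independence (Naive Bayes) approximation

    print("max deviation from independence:", float(np.abs(joint - naive).max()))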

A specific class of factorizations is that represented by graphical models. Several challenges arise from the use of statistical methods for learning graphical models from data; examples include the rapid growth in the number of graphical model structures as a function of the number of nodes, and the fact that different graphical models can determine equivalent statistical models. In one article an algorithm for learning graphical models is presented. In the final article an algorithm for clustering parts of DNA strings is developed, and a graphical representation for the remaining DNA part is learned.
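
The growth of the structure space mentioned above is super-exponential. As an orientation point (a standard combinatorial fact, not a result of the thesis), the sketch below uses Robinson's recursion to count the labelled directed acyclic graphs on n nodes.

    # Robinson's recursion for the number of labelled DAGs on n nodes,
    # illustrating how quickly the space of graphical model structures grows.
    from functools import lru_cache
    from math import comb

    @lru_cache(maxsize=None)
    def n_dags(n: int) -> int:
        if n == 0:
            return 1
        return sum((-1) ** (k + 1) * comb(n, k) * 2 ** (k * (n - k)) * n_dags(n - k)
                   for k in range(1, n + 1))

    for n in range(1, 8):
        print(n, n_dags(n))   # 1, 3, 25, 543, 29281, 3781503, 1138779265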

Place, publisher, year, edition, pages
Matematiska institutionen, 2007. p. 22
Series
Linköping Studies in Science and Technology. Dissertations, ISSN 0345-7524 ; 1141
Keywords [en]
Mathematical statistics, factorizations, probabilistic classification, nodes, DNA strings
National subject category
Probability Theory and Statistics
Identifiers
URN: urn:nbn:se:liu:diva-11429 ISBN: 978-91-85895-58-8 (print) OAI: oai:DiVA.org:liu-11429 DiVA, id: diva2:17846
Public defence
2007-12-14, Visionen, Hus B, Campus Valla, Linköpings universitet, Linköping, 10:15 (English)
Opponent
Available from: 2008-03-31 Created: 2008-03-31 Last updated: 2012-11-21
List of papers
1. Bounds for the Loss in Probability of Correct Classification Under Model Based Approximation
2006 (English) In: Journal of Machine Learning Research, ISSN 1532-4435, Vol. 7, p. 2449-2480. Article in journal (Refereed) Published
Abstract [en]

In many pattern recognition/classification problems the true class conditional model and class probabilities are approximated, for reasons of reducing complexity and/or of statistical estimation. The approximated classifier is expected to have worse performance, here measured by the probability of correct classification. We present an analysis valid in general, and easily computable formulas for estimating the degradation in probability of correct classification when compared to the optimal classifier. An example of an approximation is the Naïve Bayes classifier. We show that the performance of the Naïve Bayes classifier depends on the degree of functional dependence between the features and labels. We also provide a sufficient condition for zero loss of performance.
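
As a toy illustration of the quantity being bounded (invented numbers; this is not the article's formulas, just the loss computed exactly on a small example), the sketch below compares the probability of correct classification of the optimal classifier with that of a Naïve Bayes plug-in on a two-feature problem with a strong feature interaction.

    # Toy computation: exact probability of correct classification for the
    # optimal (Bayes) classifier versus a Naive Bayes approximation, on a
    # small invented problem with two binary features and two classes.
    import numpy as np

    # p[c, x1, x2] = P(class = c, X1 = x1, X2 = x2)
    p = np.array([[[0.20, 0.05],
                   [0.05, 0.20]],
                  [[0.05, 0.15],
                   [0.15, 0.15]]])
    assert np.isclose(p.sum(), 1.0)

    # Optimal classifier: pick argmax_c P(c, x) for every feature combination.
    bayes_correct = p.max(axis=0).sum()

    # Naive Bayes plug-in: P(c) * P(x1 | c) * P(x2 | c).
    pc = p.sum(axis=(1, 2))
    px1_c = p.sum(axis=2) / pc[:, None]
    px2_c = p.sum(axis=1) / pc[:, None]
    q = pc[:, None, None] * px1_c[:, :, None] * px2_c[:, None, :]

    nb_pick = q.argmax(axis=0)        # class chosen by Naive Bayes at each x
    nb_correct = sum(p[nb_pick[x1, x2], x1, x2]
                     for x1 in range(2) for x2 in range(2))
    print(f"optimal: {bayes_correct:.3f}  naive Bayes: {nb_correct:.3f}  "
          f"loss: {bayes_correct - nb_correct:.3f}")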

Keywords
Bayesian networks, naïve Bayes, plug-in classifier, Kolmogorov distance of variation, variational learning
National subject category
Mathematics
Identifiers
urn:nbn:se:liu:diva-13104 (URN)
Available from: 2008-03-31 Created: 2008-03-31
2. Concentrated or non-concentrated discrete distributions are almost independent
2007 (English) Manuscript (preprint) (Other academic)
Abstract [en]

The task of approximating a joint distribution with a product of univariate distributions is important in the theory and applications of classification and learning, probabilistic reasoning, and random algorithms. Evaluating the goodness of this approximation by statistical independence amounts to bounding, uniformly from above, the difference between a joint distribution and the product of its marginals. In this paper we develop a bound that uses information about the most probable state to find a sharp estimate, which is often as sharp as possible. We also examine the extreme cases of concentration and non-concentration, respectively, of the approximated distribution.
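
The title can be illustrated with a toy computation (the distributions below are invented and this is not the paper's bound): at both extremes of concentration the uniform distance to the product of marginals vanishes, while an intermediate distribution can sit far from independence.

    # Toy illustration: uniform distance between a joint distribution over two
    # ternary variables and the product of its marginals, for a fully
    # concentrated, a uniform, and an intermediate distribution.
    import numpy as np

    def dist_from_independence(joint: np.ndarray) -> float:
        p1 = joint.sum(axis=1)
        p2 = joint.sum(axis=0)
        return float(np.abs(joint - np.outer(p1, p2)).max())

    point_mass = np.zeros((3, 3)); point_mass[0, 0] = 1.0   # fully concentrated
    uniform = np.full((3, 3), 1.0 / 9.0)                    # fully spread out
    diagonal = np.diag([1.0 / 3.0] * 3)                     # mass on the diagonal

    for name, joint in [("point mass", point_mass),
                        ("uniform", uniform),
                        ("diagonal", diagonal)]:
        print(f"{name:10s} {dist_from_independence(joint):.4f}")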

National subject category
Mathematics
Identifikatorer
urn:nbn:se:liu:diva-13105 (URN)
Available from: 2008-03-31 Created: 2008-03-31 Last updated: 2014-09-29
3. Parallell interacting MCMC for learning of topologies of graphical models
2008 (English) In: Data Mining and Knowledge Discovery, ISSN 1384-5810, E-ISSN 1573-756X, Vol. 17, no. 3, p. 431-456. Article in journal (Refereed) Published
Abstract [en]

Automated statistical learning of graphical models from data has attracted considerable interest in the machine learning and related literature. Many authors have discussed and/or demonstrated the need for consistent stochastic search methods that would not be as prone to yield locally optimal model structures as simple greedy methods. At the same time, however, most stochastic search methods are based on standard Metropolis–Hastings theory, which necessitates the use of relatively simple random proposals and prevents the utilization of intelligent and efficient search operators. Here we derive an algorithm for learning topologies of graphical models from samples of a finite set of discrete variables by utilizing and further enhancing a recently introduced theory for non-reversible parallel interacting Markov chain Monte Carlo-style computation. In particular, we illustrate how the non-reversible approach allows for a novel type of creativity in the design of search operators. The parallel aspect of our method also illustrates well how the adaptive nature of the search operators helps avoid getting trapped in the vicinity of locally optimal network topologies.
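
For orientation only, the sketch below shows the plain reversible single-chain Metropolis-Hastings baseline over DAG structures that methods such as the one in this article aim to improve upon; it is not the paper's non-reversible parallel interacting algorithm, and its scoring function is a dummy stand-in (a real implementation would plug in a marginal likelihood score computed from data).

    # A plain single-chain Metropolis-Hastings walk over DAG structures with a
    # dummy score. NOT the paper's non-reversible parallel interacting method;
    # a real sampler would replace log_score with a marginal likelihood.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 5                                    # number of nodes

    def is_dag(adj: np.ndarray) -> bool:
        # A directed graph is acyclic iff its adjacency matrix is nilpotent.
        m, power = adj.astype(float), np.eye(n)
        for _ in range(n):
            power = power @ m
            if np.trace(power) > 0:
                return False
        return True

    def log_score(adj: np.ndarray) -> float:
        return -1.0 * adj.sum()              # dummy score: simply prefers sparse graphs

    adj = np.zeros((n, n), dtype=int)        # start from the empty graph
    current = log_score(adj)
    for _ in range(5000):
        i, j = rng.choice(n, size=2, replace=False)
        proposal = adj.copy()
        proposal[i, j] ^= 1                  # flip one directed edge i -> j
        if not is_dag(proposal):
            continue                         # rejected: stay at the current DAG
        new = log_score(proposal)
        if np.log(rng.random()) < new - current:   # symmetric proposal => score ratio
            adj, current = proposal, new
    print("edges in final sampled structure:", int(adj.sum()))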

Keywords
MCMC, Equivalence search, Learning graphical models
National subject category
Mathematics
Identifikatorer
urn:nbn:se:liu:diva-13106 (URN) 10.1007/s10618-008-0099-9 (DOI)
Available from: 2008-03-31 Created: 2008-03-31 Last updated: 2017-12-13
4. A Bayesian random fragment insertion model for de novo detection of DNA regulatory binding regions
2007 (English) Manuscript (preprint) (Other academic)
Abstract [en]

Identification of regulatory binding motifs within DNA sequences is a commonly occurring problem in computational bioinformatics. A wide variety of statistical approaches have been proposed in the literature to either scan for previously known motif types or to attempt de novo identification of a fixed number (typically one) of putative motifs. Most approaches assume the existence of reliable biodatabase information to build a probabilistic a priori description of the motif classes. No method has previously been proposed for finding the number of putative de novo motif types and their positions within a set of DNA sequences. As the number of sequenced genomes from a wide variety of organisms is constantly increasing, there is a clear need for such methods. Here we introduce a Bayesian unsupervised approach for this purpose by using recent advances in the theory of predictive classification and Markov chain Monte Carlo computation. Our modelling framework enables formal statistical inference in large-scale sequence screening, and we illustrate it with a set of examples.
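
To make the screening task concrete (a standard position-weight-matrix scan with invented numbers, and not the paper's fragment insertion model), the sketch below scores every offset of a short DNA string against a motif model versus a uniform background and reports the best-scoring window.

    # A standard motif-scanning building block: log-likelihood ratio of a
    # position weight matrix against a uniform background at each offset of a
    # DNA string. The PWM and the sequence are invented for illustration.
    import numpy as np

    ALPHABET = "ACGT"
    pwm = np.array([[0.7, 0.1, 0.1, 0.1],   # motif position 1: prefers A
                    [0.1, 0.1, 0.7, 0.1],   # motif position 2: prefers G
                    [0.1, 0.7, 0.1, 0.1]])  # motif position 3: prefers C
    background = np.full(4, 0.25)
    seq = "TTAGCATAGCGG"

    def llr(window: str) -> float:
        idx = [ALPHABET.index(b) for b in window]
        return float(np.sum(np.log(pwm[np.arange(len(idx)), idx] / background[idx])))

    w = pwm.shape[0]
    scores = [(i, llr(seq[i:i + w])) for i in range(len(seq) - w + 1)]
    best_i, best_s = max(scores, key=lambda t: t[1])
    print(f"best offset {best_i}, window {seq[best_i:best_i + w]}, llr {best_s:.2f}")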

National subject category
Mathematics
Identifikatorer
urn:nbn:se:liu:diva-13107 (URN)
Available from: 2008-03-31 Created: 2008-03-31 Last updated: 2012-11-21

Open Access in DiVA

Full text is not available in DiVA
