On approximations and computations in probabilistic classification and in learning of graphical models
Linköping University, Department of Mathematics, Mathematical Statistics. Linköping University, The Institute of Technology.
2007 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Model-based probabilistic classification is heavily used in data mining and machine learning. For computational learning, however, these models may need approximation steps. One popular approximation in classification is to model the class-conditional densities by factorization, which in the independence case is usually called the 'Naïve Bayes' classifier. In general, probabilistic independence cannot model all distributions exactly, and little has been published on how much a discrete distribution can differ from the independence assumption. In this dissertation the approximation quality of factorizations is analyzed in two articles.
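The factorization the abstract refers to can be made concrete with a minimal sketch (not the thesis's code; the data and helper names are made up for illustration): a 'Naïve Bayes' classifier approximates each class-conditional joint distribution by the product of its per-feature marginals, estimated here by simple counting.

```python
# Minimal sketch of the 'Naïve Bayes' factorization: approximate
# P(x1, ..., xd | c) by the product of the marginals P(xi | c).
from collections import Counter

def fit_naive_bayes(samples, labels):
    """Estimate P(c) and the per-feature marginals P(x_i | c) by counting."""
    classes = sorted(set(labels))
    n = len(labels)
    prior = {c: labels.count(c) / n for c in classes}
    d = len(samples[0])
    marginals = {c: [Counter() for _ in range(d)] for c in classes}
    counts = {c: 0 for c in classes}
    for x, c in zip(samples, labels):
        counts[c] += 1
        for i, xi in enumerate(x):
            marginals[c][i][xi] += 1

    def predict(x):
        best, best_p = None, -1.0
        for c in classes:
            p = prior[c]  # P(c) * prod_i P(x_i | c)
            for i, xi in enumerate(x):
                p *= marginals[c][i][xi] / counts[c]
            if p > best_p:
                best, best_p = c, p
        return best

    return predict

# Toy data with two binary features; class 0 favours x1 = 0, class 1 favours x1 = 1.
samples = [(0, 0), (0, 1), (0, 0), (1, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1, 1, 1]
predict = fit_naive_bayes(samples, labels)
```

When the features really are conditionally independent given the class, this product is exact; the dissertation's point is to quantify how much is lost when they are not.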

A specific class of factorizations is those represented by graphical models. Several challenges arise when graphical models are learned from data by statistical methods; examples include the rapid growth in the number of graphical model structures as a function of the number of nodes, and the fact that different graphical models can determine equivalent statistical models. In one article an algorithm for learning graphical models is presented. In the final article an algorithm for clustering parts of DNA strings is developed, and a graphical representation of the remaining DNA part is learned.

Place, publisher, year, edition, pages
Matematiska institutionen, 2007. 22 p.
Series
Linköping Studies in Science and Technology. Dissertations, ISSN 0345-7524 ; 1141
Keyword [en]
Mathematical statistics, factorizations, probabilistic classification, nodes, DNA strings
National Category
Probability Theory and Statistics
Identifiers
URN: urn:nbn:se:liu:diva-11429
ISBN: 978-91-85895-58-8 (print)
OAI: oai:DiVA.org:liu-11429
DiVA: diva2:17846
Public defence
2007-12-14, Visionen, Hus B, Campus Valla, Linköpings universitet, Linköping, 10:15 (English)
Opponent
Available from: 2008-03-31 Created: 2008-03-31 Last updated: 2012-11-21
List of papers
1. Bounds for the Loss in Probability of Correct Classification Under Model Based Approximation
2006 (English). In: Journal of Machine Learning Research, ISSN 1532-4435, Vol. 7, pp. 2449-2480. Article in journal (Refereed) Published
Abstract [en]

In many pattern recognition/classification problems the true class-conditional model and class probabilities are approximated, for reasons of reducing complexity and/or of statistical estimation. The approximated classifier is expected to have worse performance, here measured by the probability of correct classification. We present an analysis valid in general, and easily computable formulas for estimating the degradation in probability of correct classification when compared to the optimal classifier. An example of such an approximation is the Naïve Bayes classifier. We show that the performance of the Naïve Bayes classifier depends on the degree of functional dependence between the features and labels, and we also provide a sufficient condition for zero loss of performance.
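The quantity the paper studies, the loss in probability of correct classification under a factorized approximation, can be computed exactly for a small known joint. This is a hedged illustration, not the paper's formulas: the toy distribution and all names are made up, and the loss is evaluated by brute-force enumeration.

```python
# Compare the probability of correct classification (PCC) of the
# Bayes-optimal classifier with that of a product (naïve-Bayes-style)
# approximation, for a toy joint P(x, c) with dependent features.
from itertools import product

prior = {0: 0.5, 1: 0.5}
cond = {  # P(x1, x2 | c): class 0 correlated, class 1 anti-correlated
    0: {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4},
    1: {(0, 0): 0.1, (0, 1): 0.4, (1, 0): 0.4, (1, 1): 0.1},
}

def marginals(p):
    m1 = {v: p[(v, 0)] + p[(v, 1)] for v in (0, 1)}
    m2 = {v: p[(0, v)] + p[(1, v)] for v in (0, 1)}
    return m1, m2

def pcc(classify):
    """Probability of correct classification under the true joint."""
    return sum(prior[c] * cond[c][x]
               for c in (0, 1)
               for x in product((0, 1), repeat=2)
               if classify(x) == c)

def bayes_opt(x):
    return max((0, 1), key=lambda c: prior[c] * cond[c][x])

m = {c: marginals(cond[c]) for c in (0, 1)}

def factorized(x):
    return max((0, 1), key=lambda c: prior[c] * m[c][0][x[0]] * m[c][1][x[1]])
```

Here every marginal is uniform, so the factorized classifier cannot distinguish the classes at all: its PCC is 0.5 against 0.8 for the optimal classifier, a concrete instance of the degradation the paper bounds.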

Keyword
Bayesian networks, naïve Bayes, plug-in classifier, Kolmogorov distance of variation, variational learning
National Category
Mathematics
Identifiers
urn:nbn:se:liu:diva-13104 (URN)
Available from: 2008-03-31 Created: 2008-03-31
2. Concentrated or non-concentrated discrete distributions are almost independent
2007 (English). Manuscript (preprint) (Other academic)
Abstract [en]

The task of approximating a simultaneous distribution with a product of distributions in a single variable is important in the theory and applications of classification and learning, probabilistic reasoning, and random algorithms. Evaluating the goodness of this approximation by statistical independence amounts to uniformly bounding from above the difference between a joint distribution and the product of its marginal distributions. In this paper we develop a bound that uses information about the most probable state to find a sharp estimate, which is often as sharp as possible. We also examine the extreme cases of concentration and non-concentration, respectively, of the approximated distribution.
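The distance in question can be computed by enumeration for small discrete distributions. The sketch below (illustrative only; it does not implement the paper's bound) evaluates the uniform difference between a two-variable joint and the product of its marginals, for one concentrated and one maximally dependent example.

```python
# Uniform (sup-norm) distance between a discrete joint distribution
# over two variables and the product of its marginals.
from itertools import product

def independence_distance(p, support):
    """max_x |P(x1, x2) - P(x1) P(x2)|, computed by enumeration."""
    m1 = {a: sum(p.get((a, b), 0.0) for b in support) for a in support}
    m2 = {b: sum(p.get((a, b), 0.0) for a in support) for b in support}
    return max(abs(p.get((a, b), 0.0) - m1[a] * m2[b])
               for a, b in product(support, repeat=2))

# A concentrated distribution: nearly all mass on a single state.
p_conc = {(0, 0): 0.97, (0, 1): 0.01, (1, 0): 0.01, (1, 1): 0.01}
# A maximally dependent distribution: mass split between two diagonal states.
p_dep = {(0, 0): 0.5, (1, 1): 0.5}
```

The concentrated distribution comes out almost independent (distance below 0.01), while the split one reaches 0.25, matching the paper's theme that concentration forces near-independence.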

National Category
Mathematics
Identifiers
urn:nbn:se:liu:diva-13105 (URN)
Available from: 2008-03-31 Created: 2008-03-31 Last updated: 2014-09-29
3. Parallell interacting MCMC for learning of topologies of graphical models
2008 (English). In: Data Mining and Knowledge Discovery, ISSN 1384-5810, E-ISSN 1573-756X, Vol. 17, no. 3, pp. 431-456. Article in journal (Refereed) Published
Abstract [en]

Automated statistical learning of graphical models from data has attained a considerable degree of interest in the machine learning and related literature. Many authors have discussed and/or demonstrated the need for consistent stochastic search methods that would not be as prone to yielding locally optimal model structures as simple greedy methods. At the same time, however, most stochastic search methods are based on standard Metropolis-Hastings theory, which necessitates relatively simple random proposals and prevents the utilization of intelligent and efficient search operators. Here we derive an algorithm for learning topologies of graphical models from samples of a finite set of discrete variables by utilizing, and further enhancing, a recently introduced theory for non-reversible parallel interacting Markov chain Monte Carlo-style computation. In particular, we illustrate how the non-reversible approach allows for a novel type of creativity in the design of search operators. The parallel aspect of our method also illustrates well the advantage of adaptive search operators in avoiding becoming trapped in the vicinity of locally optimal network topologies.
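For orientation only: the article's method uses non-reversible, parallel interacting chains, which is considerably more elaborate than what fits here. The sketch below instead shows the plain (reversible) Metropolis-Hastings baseline the abstract contrasts itself with: a random walk over graph topologies with simple single-edge-flip proposals. The `score` function is a stand-in, not the authors' structure score.

```python
# Simplified, reversible Metropolis-Hastings walk over graph topologies
# (the baseline the paper improves on, NOT the paper's algorithm).
import math
import random

def score(edges):
    # Stand-in for a marginal-likelihood-based structure score;
    # here it simply prefers sparser graphs (illustrative only).
    return -len(edges)

def mh_structure_search(n_nodes, n_steps, seed=0):
    rng = random.Random(seed)
    pairs = [(i, j) for i in range(n_nodes) for j in range(i + 1, n_nodes)]
    edges = set()  # current topology: a set of undirected edges
    best, best_score = set(edges), score(edges)
    for _ in range(n_steps):
        e = rng.choice(pairs)      # simple random proposal: flip one edge
        proposal = edges ^ {e}     # toggle via symmetric difference
        # Symmetric proposal, so accept with probability min(1, exp(score diff)).
        if math.log(rng.random() + 1e-300) < score(proposal) - score(edges):
            edges = proposal
        if score(edges) > best_score:
            best, best_score = set(edges), score(edges)
    return best, best_score

best, s = mh_structure_search(n_nodes=4, n_steps=200)
```

The restriction visible here, that proposals must be simple enough to keep the acceptance ratio tractable, is exactly what the non-reversible interacting-chain framework of the paper is designed to lift.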

Keyword
MCMC, Equivalence search, Learning graphical models
National Category
Mathematics
Identifiers
urn:nbn:se:liu:diva-13106 (URN)
10.1007/s10618-008-0099-9 (DOI)
Available from: 2008-03-31 Created: 2008-03-31 Last updated: 2012-11-21
4. A Bayesian random fragment insertion model for de novo detection of DNA regulatory binding regions
2007 (English). Manuscript (preprint) (Other academic)
Abstract [en]

Identification of regulatory binding motifs within DNA sequences is a commonly occurring problem in computational bioinformatics. A wide variety of statistical approaches have been proposed in the literature, either to scan for previously known motif types or to attempt de novo identification of a fixed number (typically one) of putative motifs. Most approaches assume the existence of reliable biodatabase information from which to build a probabilistic a priori description of the motif classes. No method has previously been proposed for finding the number of putative de novo motif types and their positions within a set of DNA sequences. As the number of sequenced genomes from a wide variety of organisms is constantly increasing, there is a clear need for such methods. Here we introduce a Bayesian unsupervised approach for this purpose, using recent advances in the theory of predictive classification and Markov chain Monte Carlo computation. Our modelling framework enables formal statistical inference in large-scale sequence screening, and we illustrate it with a set of examples.
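The paper's unsupervised fragment-insertion model is far richer than anything that fits in a snippet, but one basic ingredient of motif detection can be sketched: scoring each position of a sequence under a position weight matrix (PWM) against a uniform background. The PWM values, motif, and sequence below are invented for the demonstration.

```python
# Score every window of a DNA sequence under a position weight matrix
# (PWM) versus a uniform background, and return the best start index.
import math

def best_motif_position(seq, pwm):
    """Return the start index maximizing the PWM log-likelihood ratio."""
    w = len(pwm)

    def llr(i):
        # Sum of per-column log ratios P(base | motif column) / P(base | background).
        return sum(math.log(pwm[k][seq[i + k]] / 0.25) for k in range(w))

    return max(range(len(seq) - w + 1), key=llr)

# Hypothetical 3-column PWM strongly favouring the motif "TAT".
pwm = [
    {"A": 0.05, "C": 0.05, "G": 0.05, "T": 0.85},
    {"A": 0.85, "C": 0.05, "G": 0.05, "T": 0.05},
    {"A": 0.05, "C": 0.05, "G": 0.05, "T": 0.85},
]

seq = "GGCCTATGGCC"
pos = best_motif_position(seq, pwm)
```

Such scanning presupposes a known PWM and a fixed number of motifs; the paper's contribution is precisely to infer the number of motif types and their positions without that prior knowledge.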

National Category
Mathematics
Identifiers
urn:nbn:se:liu:diva-13107 (URN)
Available from: 2008-03-31 Created: 2008-03-31 Last updated: 2012-11-21

Open Access in DiVA

No full text

Authority records

Ekdahl, Magnus