Ekdahl, Magnus
Publications (9 of 9)
Corander, J., Ekdahl, M. & Koski, T. (2008). Parallell interacting MCMC for learning of topologies of graphical models. Data Mining and Knowledge Discovery, 17(3), 431-456
Parallell interacting MCMC for learning of topologies of graphical models
2008 (English) In: Data Mining and Knowledge Discovery, ISSN 1384-5810, E-ISSN 1573-756X, Vol. 17, no. 3, p. 431-456. Article in journal (Refereed). Published
Abstract [en]

Automated statistical learning of graphical models from data has attained a considerable degree of interest in the machine learning and related literature. Many authors have discussed and/or demonstrated the need for consistent stochastic search methods that would not be as prone to yield locally optimal model structures as simple greedy methods. However, at the same time most of the stochastic search methods are based on a standard Metropolis–Hastings theory that necessitates the use of relatively simple random proposals and prevents the utilization of intelligent and efficient search operators. Here we derive an algorithm for learning topologies of graphical models from samples of a finite set of discrete variables by utilizing and further enhancing a recently introduced theory for non-reversible parallel interacting Markov chain Monte Carlo-style computation. In particular, we illustrate how the non-reversible approach allows for a novel type of creativity in the design of search operators. The parallel aspect of our method also illustrates well how the adaptive nature of the search operators helps to avoid becoming trapped in the vicinity of locally optimal network topologies.
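
As a rough illustration of the idea described above, namely several chains searching over graph topologies in parallel and adapting to one another, here is a minimal Python sketch. It is not the paper's algorithm: the score function, edge-flip proposal, acceptance rule, and interaction step are simplified placeholders chosen only to make the parallel-interaction pattern concrete.

    # Minimal sketch of parallel interacting stochastic search over graph
    # topologies. NOT the paper's algorithm; all ingredients are placeholders.
    import random

    N_NODES = 5
    EDGES = [(i, j) for i in range(N_NODES) for j in range(i + 1, N_NODES)]

    def toy_score(graph, target):
        """Placeholder score: edge assignments agreeing with a hidden target."""
        return sum(1 for e in EDGES if (e in graph) == (e in target))

    def flip_edge(graph):
        """Simple structural proposal: toggle one randomly chosen edge."""
        return graph ^ {random.choice(EDGES)}

    def parallel_search(n_chains=8, n_iter=500, seed=0):
        random.seed(seed)
        target = {e for e in EDGES if random.random() < 0.5}  # hidden optimum
        chains = [set() for _ in range(n_chains)]
        for it in range(n_iter):
            for c in range(n_chains):
                proposal = flip_edge(chains[c])
                # Hill-climbing acceptance; a real non-reversible interacting
                # scheme uses a carefully derived acceptance rule instead.
                if toy_score(proposal, target) >= toy_score(chains[c], target):
                    chains[c] = proposal
            # Interaction step: occasionally restart one chain from the current
            # best state, so chains adapt to each other's progress.
            if it % 50 == 0:
                best = max(chains, key=lambda g: toy_score(g, target))
                chains[random.randrange(n_chains)] = set(best)
        best = max(chains, key=lambda g: toy_score(g, target))
        return best, toy_score(best, target), len(EDGES)

    if __name__ == "__main__":
        graph, score, maximum = parallel_search()
        print(f"recovered {score}/{maximum} edge assignments")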

Keywords
MCMC, Equivalence search, Learning graphical models
National Category
Mathematics
Identifiers
urn:nbn:se:liu:diva-13106 (URN) 10.1007/s10618-008-0099-9 (DOI)
Available from: 2008-03-31 Created: 2008-03-31 Last updated: 2017-12-13
Corander, J., Ekdahl, M. & Koski, T. (2007). A Bayesian random fragment insertion model for de novo detection of DNA regulatory binding regions.
A Bayesian random fragment insertion model for de novo detection of DNA regulatory binding regions
2007 (English) Manuscript (preprint) (Other academic)
Abstract [en]

Identification of regulatory binding motifs within DNA sequences is a commonly occurring problem in computational bioinformatics. A wide variety of statistical approaches have been proposed in the literature to either scan for previously known motif types or to attempt de novo identification of a fixed number (typically one) of putative motifs. Most approaches assume the existence of reliable biodatabase information to build a probabilistic a priori description of the motif classes. No method has previously been proposed for finding the number of putative de novo motif types and their positions within a set of DNA sequences. As the number of sequenced genomes from a wide variety of organisms is constantly increasing, there is a clear need for such methods. Here we introduce a Bayesian unsupervised approach for this purpose by using recent advances in the theory of predictive classification and Markov chain Monte Carlo computation. Our modelling framework enables formal statistical inference in large-scale sequence screening, and we illustrate it by a set of examples.
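
To make the scanning half of the problem concrete, here is a minimal Python sketch that scores every window of a DNA sequence by a log-likelihood ratio between a position weight matrix and a uniform background. The matrix below is a made-up example and the sketch covers only fixed-width scanning; the paper's Bayesian fragment insertion model is considerably richer, since it also infers the number of motif types.

    # Minimal motif-scanning sketch; the PWM is a hypothetical example.
    import math

    # P(base | motif position); this 4-column PWM strongly favours "ACGT".
    PWM = [
        {"A": 0.7, "C": 0.1, "G": 0.1, "T": 0.1},
        {"A": 0.1, "C": 0.7, "G": 0.1, "T": 0.1},
        {"A": 0.1, "C": 0.1, "G": 0.7, "T": 0.1},
        {"A": 0.1, "C": 0.1, "G": 0.1, "T": 0.7},
    ]
    BACKGROUND = 0.25  # uniform background probability per base

    def window_score(window):
        """Log-likelihood ratio of one window: PWM vs. background."""
        return sum(math.log(PWM[i][b] / BACKGROUND) for i, b in enumerate(window))

    def scan(sequence, width=len(PWM)):
        """Return (position, score) for every window, best first."""
        scores = [(i, window_score(sequence[i:i + width]))
                  for i in range(len(sequence) - width + 1)]
        return sorted(scores, key=lambda t: -t[1])

    if __name__ == "__main__":
        seq = "TTGACGTACGTTT"
        for pos, s in scan(seq)[:3]:
            print(pos, seq[pos:pos + 4], round(s, 2))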

National Category
Mathematics
Identifiers
urn:nbn:se:liu:diva-13107 (URN)
Available from: 2008-03-31 Created: 2008-03-31 Last updated: 2012-11-21
Ekdahl, M., Koski, T. & Ohlson, M. (2007). Concentrated or non-concentrated discrete distributions are almost independent.
Concentrated or non-concentrated discrete distributions are almost independent
2007 (English) Manuscript (preprint) (Other academic)
Abstract [en]

The task of approximating a joint (simultaneous) distribution with a product of distributions in a single variable is important in the theory and applications of classification and learning, probabilistic reasoning, and random algorithms. Evaluating the goodness of this approximation by statistical independence amounts to bounding, uniformly from above, the difference between a joint distribution and the product of its marginals. In this paper we develop a bound that uses information about the most probable state to find a sharp estimate, which is often as sharp as possible. We also examine the extreme cases of concentration and non-concentration, respectively, of the approximated distribution.
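
As a concrete instance of the quantity being bounded, the following short Python check computes the largest statewise difference between a joint distribution and the product of its marginals for a small two-variable example. The joint is arbitrary and illustrative, not taken from the paper.

    # Compute max_x |P(x) - prod_i P_i(x_i)| for two binary variables.
    from itertools import product

    # P(X1, X2) as a dict over states; a fairly dependent joint.
    joint = {(0, 0): 0.40, (0, 1): 0.10, (1, 0): 0.10, (1, 1): 0.40}

    # Marginals of X1 and X2.
    p1 = {v: sum(p for (a, _), p in joint.items() if a == v) for v in (0, 1)}
    p2 = {v: sum(p for (_, b), p in joint.items() if b == v) for v in (0, 1)}

    # Statewise differences against the independence approximation.
    diffs = {s: abs(joint[s] - p1[s[0]] * p2[s[1]])
             for s in product((0, 1), repeat=2)}
    print(diffs)                # every state differs by 0.15 in this example
    print(max(diffs.values()))  # 0.15: the uniform upper bound of interest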

National Category
Mathematics
Identifiers
urn:nbn:se:liu:diva-13105 (URN)
Available from: 2008-03-31 Created: 2008-03-31 Last updated: 2014-09-29
Ekdahl, M. (2007). On approximations and computations in probabilistic classification and in learning of graphical models. (Doctoral dissertation). Matematiska institutionen
On approximations and computations in probabilistic classification and in learning of graphical models
2007 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Model-based probabilistic classification is heavily used in data mining and machine learning. However, for computational learning these models may require approximation steps. One popular approximation in classification is to model the class-conditional densities by factorization, which in the independence case is usually called the 'Naïve Bayes' classifier. In general, probabilistic independence cannot model all distributions exactly, and not much has been published on how much a discrete distribution can differ from the independence assumption. In this dissertation the approximation quality of factorizations is analyzed in two articles.

A specific class of factorizations is the factorizations represented by graphical models. Several challenges arise from the use of statistical methods for learning graphical models from data. Examples of problems include the increase in the number of graphical model structures as a function of the number of nodes, and the equivalence of statistical models determined by different graphical models. In one article an algorithm for learning graphical models is presented. In the final article an algorithm for clustering parts of DNA strings is developed, and a graphical representation for the remaining DNA part is learned.
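
The growth in the number of structures mentioned above can be made concrete with Robinson's recurrence for counting labeled DAGs, a standard combinatorial result that is background to, not part of, the thesis. A short Python check:

    # Number of labeled DAGs on n nodes (Robinson 1973):
    #   a(n) = sum_{k=1}^{n} (-1)^(k+1) * C(n, k) * 2^(k(n-k)) * a(n-k)
    from functools import lru_cache
    from math import comb

    @lru_cache(maxsize=None)
    def n_dags(n):
        """Count labeled DAGs on n nodes via Robinson's recurrence."""
        if n == 0:
            return 1
        return sum((-1) ** (k + 1) * comb(n, k) * 2 ** (k * (n - k))
                   * n_dags(n - k) for k in range(1, n + 1))

    for n in range(1, 7):
        print(n, n_dags(n))  # 1, 3, 25, 543, 29281, 3781503: superexponential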

Place, publisher, year, edition, pages
Matematiska institutionen, 2007. p. 22
Series
Linköping Studies in Science and Technology. Dissertations, ISSN 0345-7524 ; 1141
Keywords
Mathematical statistics, factorizations, probabilistic classification, nodes, DNA strings
National Category
Probability Theory and Statistics
Identifiers
urn:nbn:se:liu:diva-11429 (URN) 978-91-85895-58-8 (ISBN)
Public defence
2007-12-14, Visionen, Hus B, Campus Valla, Linköpings universitet, Linköping, 10:15 (English)
Available from: 2008-03-31 Created: 2008-03-31 Last updated: 2012-11-21
Ekdahl, M. & Koski, T. (2007). On Concentration of Discrete Distributions with Applications to Supervised Learning of Classifiers. In: Petra Perner (Ed.), Machine Learning and Data Mining in Pattern Recognition: 5th International Conference, MLDM 2007, Leipzig, Germany, July 18-20, 2007. Proceedings (pp. 2-16). Springer Berlin/Heidelberg
On Concentration of Discrete Distributions with Applications to Supervised Learning of Classifiers
2007 (English) In: Machine Learning and Data Mining in Pattern Recognition: 5th International Conference, MLDM 2007, Leipzig, Germany, July 18-20, 2007. Proceedings / [ed] Petra Perner, Springer Berlin/Heidelberg, 2007, p. 2-16. Chapter in book (Refereed)
Abstract [en]

Computational procedures using independence assumptions in various forms are popular in machine learning, although checks on empirical data have given inconclusive results about their impact. Some theoretical understanding of when they work is available, but a definite answer seems to be lacking. This paper derives distributions that maximize the statewise difference to the respective product of marginals. These distributions are, in a sense, the worst distributions for predicting an outcome of the data-generating mechanism by independence. We also restrict the scope of the new theoretical results by showing explicitly that, depending on context, independent ('Naïve') classifiers can be as bad as tossing coins. Regardless of this, independence may beat the generating model in learning supervised classification, and we explicitly provide one such scenario.
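
An explicit instance of the coin-tossing claim is the classic XOR (parity) construction, used here purely for illustration; the paper derives the worst-case distributions in greater generality. A short Python check:

    # Features uniform on {0,1}^2, class y = x1 XOR x2: the optimal (Bayes)
    # classifier is perfect, yet Naive Bayes is reduced to coin tossing.
    from itertools import product

    states = list(product((0, 1), repeat=2))
    joint = {(x1, x2, x1 ^ x2): 0.25 for (x1, x2) in states}

    def p(cond, y):
        """P(feature condition | class y), computed from the joint."""
        py = sum(q for (a, b, c), q in joint.items() if c == y)
        return sum(q for (a, b, c), q in joint.items()
                   if c == y and cond(a, b)) / py

    for (x1, x2) in states:
        # Naive Bayes score: P(y) * P(x1 | y) * P(x2 | y) for each class.
        scores = {y: 0.5 * p(lambda a, b: a == x1, y)
                       * p(lambda a, b: b == x2, y) for y in (0, 1)}
        print((x1, x2), "true class:", x1 ^ x2, "NB scores:", scores)
    # Each feature is marginally independent of the class, so both scores
    # are 0.125 at every state: Naive Bayes must guess (accuracy 1/2),
    # while the optimal classifier has accuracy 1.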

Place, publisher, year, edition, pages
Springer Berlin/Heidelberg, 2007
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 4571
Keywords
independence, classification, supervised learning, pattern recognition, prediction
National Category
Mathematics
Identifiers
urn:nbn:se:liu:diva-38249 (URN) 10.1007/978-3-540-73499-4_2 (DOI) 000248523200001 () 43265 (Local ID) 978-3-540-73498-7 (ISBN) 978-3-540-73499-4 (ISBN) 3-540-73498-8 (ISBN) 43265 (Archive number) 43265 (OAI)
Available from: 2009-10-10 Created: 2009-10-10 Last updated: 2018-01-26. Bibliographically approved
Ekdahl, M. (2006). Approximations of Bayes Classifiers for Statistical Learning of Clusters. (Licentiate dissertation). Matematiska institutionen
Approximations of Bayes Classifiers for Statistical Learning of Clusters
2006 (English) Licentiate thesis, monograph (Other academic)
Abstract [en]

It is rarely possible to use an optimal classifier. Often the classifier used for a specific problem is an approximation of the optimal classifier. Methods are presented for evaluating the performance of an approximation in the model class of Bayesian networks. Specifically, for the approximation of class-conditional independence, a bound on the performance is sharpened.

The class-conditional independence approximation is connected to the minimum description length principle (MDL), which in turn is connected to Jeffreys' prior through commonly used assumptions. One algorithm for unsupervised classification is presented and compared against other unsupervised classifiers on three data sets.
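
For a single Bernoulli variable, the MDL-to-Jeffreys'-prior connection can be made concrete through the Krichevsky-Trofimov sequential form of the Jeffreys-prior marginal likelihood, a standard identity in the MDL literature. The sketch below is background only, not the thesis's algorithm.

    # Jeffreys-prior marginal likelihood of a binary string, computed via
    # the Krichevsky-Trofimov sequential predictor:
    #   P(x_{t+1} = 1 | x_1..x_t) = (n1 + 1/2) / (t + 1)
    import math

    def kt_marginal(bits):
        """Marginal likelihood of a binary string under Jeffreys' prior."""
        prob, n1 = 1.0, 0
        for t, b in enumerate(bits):
            p_one = (n1 + 0.5) / (t + 1)
            prob *= p_one if b else (1 - p_one)
            n1 += b
        return prob

    bits = [1, 0, 1, 1, 0, 1]
    p = kt_marginal(bits)
    # The code length -log2(p) is the Bayesian stochastic-complexity term.
    print(p, -math.log2(p))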

Place, publisher, year, edition, pages
Matematiska institutionen, 2006. p. 86
Series
Linköping Studies in Science and Technology. Thesis, ISSN 0280-7971 ; 1230
Keywords
Pattern Recognition, Stochastic Complexity, Naïve Bayes, Bayesian Network, Classification, Clustering, Chow-Liu trees
National Category
Probability Theory and Statistics
Identifiers
urn:nbn:se:liu:diva-5856 (URN) 91-85497-21-5 (ISBN)
Presentation
2006-04-05, Hus B, Campus Valla, Linköpings universitet, Linköping, 15:15 (English)
Note
Report code: LiU-TEK-LIC 2006:11.
Available from: 2006-02-22 Created: 2006-02-22
Ekdahl, M. & Koski, T. (2006). Bounds for the Loss in Probability of Correct Classification Under Model Based Approximation. Journal of Machine Learning Research, 7, 2449-2480
Bounds for the Loss in Probability of Correct Classification Under Model Based Approximation
2006 (English) In: Journal of Machine Learning Research, ISSN 1532-4435, Vol. 7, p. 2449-2480. Article in journal (Refereed). Published
Abstract [en]

In many pattern recognition/classification problems the true class-conditional model and class probabilities are approximated to reduce complexity and/or for reasons of statistical estimation. The approximated classifier is expected to have worse performance, here measured by the probability of correct classification. We present an analysis valid in general, and easily computable formulas for estimating the degradation in probability of correct classification when compared to the optimal classifier. An example of an approximation is the Naïve Bayes classifier. We show that the performance of Naïve Bayes depends on the degree of functional dependence between the features and labels. We also provide a sufficient condition for zero loss of performance.
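
In the spirit of the paper's question, the following small enumeration compares the probability of correct classification (PCC) of the optimal classifier with that of its independence (Naïve Bayes) approximation on an arbitrary two-feature example; the numbers are illustrative and not taken from the paper.

    # How much PCC is lost when a class-conditional joint is replaced by
    # the product of its marginals? Toy two-feature, two-class example.
    prior = {0: 0.5, 1: 0.5}
    cond = {  # P(x1, x2 | y); class 1 has strongly correlated features
        0: {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25},
        1: {(0, 0): 0.40, (0, 1): 0.10, (1, 0): 0.10, (1, 1): 0.40},
    }

    def marginals(joint):
        m1 = {v: sum(q for (a, _), q in joint.items() if a == v) for v in (0, 1)}
        m2 = {v: sum(q for (_, b), q in joint.items() if b == v) for v in (0, 1)}
        return m1, m2

    nb = {y: marginals(cond[y]) for y in (0, 1)}

    def bayes(x):
        """Optimal classifier: plug in the true class-conditional joint."""
        return max((0, 1), key=lambda y: prior[y] * cond[y][x])

    def naive(x):
        """Plug-in of the independence (Naive Bayes) approximation."""
        def score(y):
            m1, m2 = nb[y]
            return prior[y] * m1[x[0]] * m2[x[1]]
        return max((0, 1), key=score)

    def pcc(classify):
        """Probability of correct classification under the true model."""
        return sum(prior[y] * q for y in (0, 1)
                   for x, q in cond[y].items() if classify(x) == y)

    print("optimal PCC:", pcc(bayes))  # 0.65
    print("naive PCC:", pcc(naive))    # 0.50 here (ties broken toward class 0)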

Keywords
Bayesian networks, naïve Bayes, plug-in classifier, Kolmogorov distance of variation, variational learning
National Category
Mathematics
Identifiers
urn:nbn:se:liu:diva-13104 (URN)
Available from: 2008-03-31 Created: 2008-03-31
Ekdahl, M. & Koski, T. (2006). On the Performance of Approximations of Bayesian Networks in Model-. In: The Annual Workshop of the Swedish Artificial Intelligence Society, 2006 (pp. 73). Umeå: SAIS
On the Performance of Approximations of Bayesian Networks in Model-
2006 (English) In: The Annual Workshop of the Swedish Artificial Intelligence Society, 2006, Umeå: SAIS, 2006, p. 73. Conference paper, Published paper (Refereed)
Abstract [en]

When the true class-conditional model and class probabilities are approximated in a pattern recognition/classification problem, the performance of the optimal classifier is expected to deteriorate. But calculating this reduction is far from trivial in the general case. We present one generalization, together with easily computable formulas for estimating the degradation in performance with respect to the optimal classifier. An example of an approximation is the Naive Bayes classifier. We generalize and sharpen results for evaluating this classifier.

Place, publisher, year, edition, pages
Umeå: SAIS, 2006
Keywords
Plug-in classifiers, Naive Bayes
National Category
Mathematics
Identifiers
urn:nbn:se:liu:diva-34261 (URN) 21109 (Local ID) 21109 (Archive number) 21109 (OAI)
Available from: 2009-10-10 Created: 2009-10-10
Ekdahl, M. (2004). Stokastisk komplexitet i klustringsanalys [Stochastic complexity in cluster analysis]. In: Workshop i tillämpad matematik, 2004.
Stokastisk komplexitet i klustringsanalys [Stochastic complexity in cluster analysis]
2004 (Swedish) In: Workshop i tillämpad matematik, 2004. Conference paper, Published paper (Other academic)
National Category
Mathematics
Identifiers
urn:nbn:se:liu:diva-23222 (URN) 2635 (Local ID) 2635 (Archive number) 2635 (OAI)
Available from: 2009-10-07 Created: 2009-10-07