Publications (2 of 2)
Glaser, P., Widmann, D., Lindsten, F. & Gretton, A. (2023). Fast and Scalable Score-Based Kernel Calibration Tests. In: Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence, PMLR 216, pp. 691-700. Paper presented at the 39th Conference on Uncertainty in Artificial Intelligence (UAI), Pittsburgh, PA, July 31 - August 4, 2023.
Fast and Scalable Score-Based Kernel Calibration Tests
2023 (English). In: Thirty-Ninth Conference on Uncertainty in Artificial Intelligence, PMLR 216, 2023, pp. 691-700. Conference paper, published paper (refereed).
Abstract [en]

We introduce the Kernel Calibration Conditional Stein Discrepancy test (KCCSD test), a non-parametric, kernel-based test for assessing the calibration of probabilistic models with well-defined scores. In contrast to previous methods, our test avoids the need for possibly expensive expectation approximations while providing control over its type-I error. We achieve these improvements by using a new family of kernels for score-based probabilities that can be estimated without probability density samples, and by using a conditional goodness-of-fit criterion for the KCCSD test’s U-statistic. The tractability of the KCCSD test widens the surface area of calibration measures to new promising use-cases, such as regularization during model training. We demonstrate the properties of our test on various synthetic settings.
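At its core, the test statistic is a kernel U-statistic computed over pairs of samples, with a resampling step to control the type-I error. As a rough illustration only, the sketch below assumes the n x n matrix of pairwise kernel evaluations h(z_i, z_j) has already been computed (in the paper this would use the score-based kernels; here it is left abstract), and it uses a standard wild bootstrap for degenerate U-statistics rather than the paper's own calibration of the null; all names are hypothetical.

import numpy as np

def wild_bootstrap_pvalue(gram, num_bootstrap=500, seed=None):
    # gram: (n, n) matrix of pairwise kernel evaluations h(z_i, z_j).
    # The observed statistic is the off-diagonal mean; each bootstrap
    # replicate reweights the entries by Rademacher signs eps_i * eps_j.
    rng = np.random.default_rng(seed)
    n = gram.shape[0]
    off = gram - np.diag(np.diag(gram))        # keep only the i != j terms
    stat = off.sum() / (n * (n - 1))           # unbiased U-statistic estimate
    exceed = 0
    for _ in range(num_bootstrap):
        eps = rng.choice([-1.0, 1.0], size=n)  # Rademacher multipliers
        boot = (eps[:, None] * off * eps[None, :]).sum() / (n * (n - 1))
        exceed += boot >= stat
    return (exceed + 1) / (num_bootstrap + 1)  # small p-value -> reject calibration

Whether the wild bootstrap matches the KCCSD test's exact type-I-error guarantees is an assumption of this sketch, not a claim from the paper.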

Place, publisher, year, edition, pages
PMLR, 2023
National Category
Probability Theory and Statistics; Computer Sciences
Identifiers
urn:nbn:se:liu:diva-204029 (URN); 001222701100065 (ISI)
Conference
39th Conference on Uncertainty in Artificial Intelligence (UAI), Pittsburgh, PA, July 31 - August 4, 2023
Note

Funding agencies: Centre for Interdisciplinary Mathematics (CIM) at Uppsala University, Sweden; Swedish Research Council [621-2016-06079]; Kjell och Märta Beijer Foundation; Gatsby Charitable Foundation

Available from: 2024-06-01. Created: 2024-06-01. Last updated: 2024-09-06. Bibliographically approved.
Widmann, D., Lindsten, F. & Zachariah, D. (2021). Calibration tests beyond classification. In: ICLR 2021 - 9th International Conference on Learning Representations Proceedings, pp. 1-37. Paper presented at the International Conference on Learning Representations (ICLR), virtual conference, May 3 - May 7, 2021.
Calibration tests beyond classification
2021 (English). In: ICLR 2021 - 9th International Conference on Learning Representations Proceedings, International Conference on Learning Representations, ICLR, 2021, pp. 1-37. Conference paper, published paper (refereed).
Abstract [en]

Most supervised machine learning tasks are subject to irreducible prediction errors. Probabilistic predictive models address this limitation by providing probability distributions that represent a belief over plausible targets, rather than point estimates. Such models can be a valuable tool in decision-making under uncertainty, provided that the model output is meaningful and interpretable. Calibrated models guarantee that the probabilistic predictions are neither over- nor under-confident. In the machine learning literature, different measures and statistical tests have been proposed and studied for evaluating the calibration of classification models. For regression problems, however, research has been focused on a weaker condition of calibration based on predicted quantiles for real-valued targets. In this paper, we propose the first framework that unifies calibration evaluation and tests for probabilistic predictive models. It applies to any such model, including classification and regression models of arbitrary dimension. Furthermore, the framework generalizes existing measures and provides a more intuitive reformulation of a recently proposed framework for calibration in multi-class classification.
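For multi-class classification, one concrete instance of such a framework is an unbiased U-statistic estimator of a squared kernel calibration error. A minimal sketch, assuming a tensor-product kernel k((p, y), (q, z)) = kappa(p, q) * 1[y = z] with an illustrative Gaussian kappa on the predicted probability vectors (the kernel choice and bandwidth are assumptions for illustration, not the paper's prescription):

import numpy as np

def skce_unbiased(probs, labels, bandwidth=0.5):
    # probs: (n, m) predicted class probabilities; labels: (n,) integer classes.
    # Averages h((p_i, y_i), (p_j, y_j)) over all pairs i != j, where h
    # integrates the kernel against the predicted distributions in closed form.
    n = probs.shape[0]
    total = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            kappa = np.exp(-np.sum((probs[i] - probs[j]) ** 2)
                           / (2 * bandwidth ** 2))
            h = (float(labels[i] == labels[j])
                 - probs[i][labels[j]]
                 - probs[j][labels[i]]
                 + probs[i] @ probs[j])
            total += kappa * h
    return total / (n * (n - 1))

For a calibrated model the estimate fluctuates around zero; systematically positive values indicate miscalibration, and the same pairwise terms can feed a hypothesis test of calibration.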

Place, publisher, year, edition, pages
International Conference on Learning Representations, ICLR, 2021
Keywords
calibration, uncertainty quantification, framework, integral probability metric, maximum mean discrepancy
National Category
Probability Theory and Statistics
Research subject
Mathematical Statistics
Identifiers
urn:nbn:se:liu:diva-188940 (URN); 2-s2.0-85147937089 (Scopus ID)
Conference
International Conference on Learning Representations, Virtual conference, May 3 - May 7, 2021
Available from: 2020-12-23. Created: 2022-10-03. Last updated: 2024-08-23. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0001-9282-053X
