2025 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]
Recent advances in machine learning (ML) have demonstrated that statistical models can be trained to recognize complicated patterns and make sophisticated decisions, often outperforming human capabilities. However, current ML-based systems sometimes exhibit faulty behavior, which prevents their use in safety-critical applications. Furthermore, the increased autonomy and complexity of ML-based systems raise additional trustworthiness concerns, such as explainability. To mitigate the risks associated with these concerns, the European Union has adopted a legal framework, the AI Act, which governs ML-based products placed on the European market.
To diligently meet the goals set out by the AI Act, manufacturers of critical systems that leverage machine learning need new, rigorous, and scalable approaches to ensuring trustworthiness. One natural pathway towards this aim is to leverage formal methods, a class of reasoning approaches that has proved useful in the past when arguing for safety. Unfortunately, these methods often suffer from computational scalability issues, a problem that becomes even more challenging when they are applied to complex ML-based systems.
This dissertation seeks to assess and improve the trustworthiness of a class of machine learning models called tree ensembles. To this end, a reasoning engine based on abstract interpretation is developed from the ground up, exploiting unique characteristics of tree ensembles for improved runtime performance. The engine is designed to support deductive and abductive reasoning with soundness and completeness guarantees, thereby enabling both formal verification and accurate explanations of why a tree ensemble arrives at a particular prediction.
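To make the underlying idea concrete, the following is a minimal sketch, in Python, of interval-based abstract interpretation over an additive tree ensemble: each input feature is abstracted by an interval, the resulting box is propagated through every tree to find the reachable leaves, and the per-tree leaf ranges are summed to bound the ensemble output. The data structures and function names are illustrative only and do not reflect the engine developed in the thesis.

```python
# Illustrative sketch only; not the thesis implementation.
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class Node:
    feature: Optional[int] = None    # None marks a leaf
    threshold: float = 0.0           # split: x[feature] <= threshold goes left
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    value: float = 0.0               # leaf prediction


def leaf_bounds(node: Node, box: List[Tuple[float, float]]) -> Tuple[float, float]:
    """Return (min, max) over all leaves reachable from `node` for inputs in `box`."""
    if node.feature is None:
        return node.value, node.value
    lo, hi = box[node.feature]
    bounds = []
    if lo <= node.threshold:          # left branch reachable
        bounds.append(leaf_bounds(node.left, box))
    if hi > node.threshold:           # right branch reachable
        bounds.append(leaf_bounds(node.right, box))
    return min(b[0] for b in bounds), max(b[1] for b in bounds)


def ensemble_bounds(trees: List[Node], box: List[Tuple[float, float]]) -> Tuple[float, float]:
    """Sound lower/upper bounds on the additive ensemble output over `box`."""
    lows, highs = zip(*(leaf_bounds(t, box) for t in trees))
    return sum(lows), sum(highs)


if __name__ == "__main__":
    # Two decision stumps over a single feature x[0].
    t1 = Node(feature=0, threshold=0.5, left=Node(value=-1.0), right=Node(value=1.0))
    t2 = Node(feature=0, threshold=0.7, left=Node(value=0.2), right=Node(value=0.8))
    print(ensemble_bounds([t1, t2], [(0.4, 0.6)]))   # -> (-0.8, 1.2)
```

Because every reachable leaf is accounted for, the returned bounds are sound for all inputs in the box; precision can then be regained by splitting the box, which is one reason interval-style abstractions pair well with the axis-aligned splits of decision trees.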
Through extensive experimentation, we demonstrate speedups of several orders of magnitude over current state-of-the-art approaches. More importantly, we show that many classifiers based on tree ensembles are extremely sensitive to additive noise, despite achieving high accuracy. For example, in one case study involving the classification of images of handwritten digits, we find that changing the light intensity of a single pixel by as little as 0.1% causes some tree ensembles to misclassify the depicted digit. However, we also show that it is possible to compute provably correct explanations without superfluous information, called minimal explanations, for predictions made by complex tree ensembles. Such explanations help pinpoint exactly which inputs are relevant to a particular classification. Moreover, we explore approaches for computing explanations that are also globally optimal with respect to a cost function, called minimum explanations. These explanations can be tailored to specific target audiences, e.g., engineers trying to improve robustness, system operators making critical decisions, or incident investigators seeking the root cause of a hazard.
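As an illustration of how such analyses reduce to queries against a verification oracle, the sketch below shows (i) a single-feature robustness check of the kind suggested by the handwritten-digit example and (ii) the standard deletion-based loop for computing a subset-minimal (abductive) explanation. The oracle `prediction_holds` is a hypothetical placeholder for any sound and complete verifier, such as a refined version of the interval engine sketched above; the names and interfaces are assumptions for illustration and are not taken from the thesis.

```python
# Illustrative sketch only; `prediction_holds` is a hypothetical oracle that
# returns True iff every input in the given box keeps the original prediction.
from typing import Callable, List, Tuple

Box = List[Tuple[float, float]]


def robust_to_single_feature(x: List[float], i: int, eps: float,
                             prediction_holds: Callable[[Box], bool]) -> bool:
    """True iff perturbing feature i by at most eps cannot change the prediction."""
    box = [(v, v) for v in x]          # pin all features to the given input
    box[i] = (x[i] - eps, x[i] + eps)  # free only feature i within +/- eps
    return prediction_holds(box)


def minimal_explanation(x: List[float], domain: Box,
                        prediction_holds: Callable[[Box], bool]) -> List[int]:
    """Indices that must stay fixed to x for the prediction to be invariant."""
    kept = list(range(len(x)))                      # start from the full input
    for i in range(len(x)):
        candidate = [j for j in kept if j != i]
        # Pin the candidate features to x; free every other feature over its domain.
        box = [(x[j], x[j]) if j in candidate else domain[j] for j in range(len(x))]
        if prediction_holds(box):                   # feature i is superfluous
            kept = candidate
    return kept                                     # subset-minimal by construction
```

The deletion loop yields a subset-minimal explanation after one oracle call per feature; finding a minimum (globally optimal) explanation with respect to a cost function is a harder combinatorial problem and is not shown here.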
Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2025. p. 45
Series
Linköping Studies in Science and Technology. Dissertations, ISSN 0345-7524 ; 2454
National Category
Artificial Intelligence
Identifiers
urn:nbn:se:liu:diva-213383 (URN)
10.3384/9789181181296 (DOI)
9789181181289 (ISBN)
9789181181296 (ISBN)
Public defence
2025-08-25, TEMCAS, TEMA Building, Campus Valla, Linköping, 13:15 (English)
Note
Funding agencies: This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation. Some computing resources were provided by the Swedish National Infrastructure for Computing (SNIC) and the Swedish National Supercomputer Centre (NSC).
Available from: 2025-04-30 Created: 2025-04-30 Last updated: 2025-05-05 Bibliographically approved