Human Ratings Do Not Reflect Downstream Utility: A Study of Free-Text Explanations for Model Predictions
Kunz, Jenny: Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems; Linköping University, Faculty of Science & Engineering. (NLP) ORCID iD: 0009-0006-1001-0546
Jirénius, Martin: Linköping University, Department of Computer and Information Science; Linköping University, Faculty of Science & Engineering.
Holmström, Oskar: Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems; Linköping University, Faculty of Science & Engineering. (NLP)
Kuhlmann, Marco: Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems; Linköping University, Faculty of Science & Engineering. (NLP) ORCID iD: 0000-0002-2492-9872
2022 (English). In: Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP / [ed] Jasmijn Bastings, Yonatan Belinkov, Yanai Elazar, Dieuwke Hupkes, Naomi Saphra, Sarah Wiegreffe. Association for Computational Linguistics (ACL), 2022, Vol. 5, p. 164-177, article id 2022.blackboxnlp-1.14. Conference paper, Published paper (Refereed).
Abstract [en]

Models able to generate free-text rationales that explain their output have been proposed as an important step towards interpretable NLP for “reasoning” tasks such as natural language inference and commonsense question answering. However, the relative merits of different architectures and types of rationales are not well understood and hard to measure. In this paper, we contribute two insights to this line of research: First, we find that models trained on gold explanations learn to rely on these but, in the case of the more challenging question answering data set we use, fail when given generated explanations at test time. However, additional fine-tuning on generated explanations teaches the model to distinguish between reliable and unreliable information in explanations. Second, we compare explanations by a generation-only model to those generated by a self-rationalizing model and find that, while the former score higher in terms of validity, factual correctness, and similarity to gold explanations, they are not more useful for downstream classification. We observe that the self-rationalizing model is prone to hallucination, which is punished by most metrics but may add useful context for the classification step.
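
The evaluation setup described in the abstract can be pictured as a two-step pipeline: one model generates a free-text explanation, and a downstream classifier then receives the original input together with that explanation. The following is a minimal sketch of this idea using Hugging Face seq2seq models; the model names, prompt formats, and task framing are illustrative assumptions, not the paper's actual configuration.

```python
# Hypothetical sketch of the pipeline: an explanation generator feeds a
# separate downstream classifier. Model names and prompt formats below are
# assumptions for illustration only.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

GEN_NAME = "t5-base"  # assumed explanation generator
CLF_NAME = "t5-base"  # assumed downstream classifier (text-to-text)

gen_tok = AutoTokenizer.from_pretrained(GEN_NAME)
gen_model = AutoModelForSeq2SeqLM.from_pretrained(GEN_NAME)
clf_tok = AutoTokenizer.from_pretrained(CLF_NAME)
clf_model = AutoModelForSeq2SeqLM.from_pretrained(CLF_NAME)


def generate_explanation(question: str, choices: list[str]) -> str:
    """Generate a free-text rationale for a commonsense QA instance."""
    prompt = f"explain: {question} choices: {', '.join(choices)}"
    inputs = gen_tok(prompt, return_tensors="pt")
    output_ids = gen_model.generate(**inputs, max_new_tokens=64)
    return gen_tok.decode(output_ids[0], skip_special_tokens=True)


def classify_with_explanation(question: str, choices: list[str], explanation: str) -> str:
    """Predict an answer given the input plus a (gold or generated) explanation."""
    prompt = (
        f"question: {question} choices: {', '.join(choices)} "
        f"explanation: {explanation}"
    )
    inputs = clf_tok(prompt, return_tensors="pt")
    output_ids = clf_model.generate(**inputs, max_new_tokens=8)
    return clf_tok.decode(output_ids[0], skip_special_tokens=True)


# Training the classifier on gold explanations but feeding it generated ones
# at test time is where the abstract reports failures on the harder QA data;
# additional fine-tuning on generated explanations mitigates this.
```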

Place, publisher, year, edition, pages
Association for Computational Linguistics (ACL), 2022. Vol. 5, p. 164-177, article id 2022.blackboxnlp-1.14
Keywords [en]
Large Language Models, Neural Networks, Transformers, Interpretability, Explainability
National Category
Natural Language Processing; Computer Sciences
Identifiers
URN: urn:nbn:se:liu:diva-195615
DOI: 10.18653/v1/2022.blackboxnlp-1.14
Scopus ID: 2-s2.0-85152897033
OAI: oai:DiVA.org:liu-195615
DiVA, id: diva2:1773126
Conference
BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, December 8, 2022
Available from: 2023-06-22. Created: 2023-06-22. Last updated: 2025-02-01. Bibliographically approved.
In thesis
1. Understanding Large Language Models: Towards Rigorous and Targeted Interpretability Using Probing Classifiers and Self-Rationalisation
2024 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

Large language models (LLMs) have become the base of many natural language processing (NLP) systems due to their performance and easy adaptability to various tasks. However, much about their inner workings is still unknown. LLMs have many millions or billions of parameters, and large parts of their training happen in a self-supervised fashion: They simply learn to predict the next word, or missing words, in a sequence. This is effective for picking up a wide range of linguistic, factual and relational information, but it implies that it is not trivial what exactly is learned, and how it is represented within the LLM. 

In this thesis, I present our work on methods contributing to better understanding LLMs. The work can be grouped into two approaches. The first lies within the field of interpretability, which is concerned with understanding the internal workings of the LLMs. Specifically, we analyse and refine a tool called probing classifiers that inspects the intermediate representations of LLMs, focusing on what roles the various layers of the neural model play. This helps us to get a global understanding of how information is structured in the model. I present our work on assessing and improving the probing methodologies. We developed a framework to clarify the limitations of past methods, showing that all common controls are insufficient. Based on this, we proposed more restrictive probing setups by creating artificial distribution shifts. We developed new metrics for the evaluation of probing classifiers that move the focus from the overall information that the layer contains to differences in information content across the LLM. 
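
As a concrete illustration of the probing-classifier idea discussed above, a lightweight classifier can be trained on frozen hidden states from a single layer of a pretrained model, and its accuracy read as a (noisy) signal of what that layer encodes. The sketch below uses an assumed model and a toy probing task; the thesis's actual tasks, controls, and evaluation metrics are not reproduced here.

```python
# Minimal probing-classifier sketch: a linear probe on frozen hidden states
# from one layer of a pretrained LM. Model, task and data are toy assumptions.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # assumed LM under analysis
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()


def layer_representation(sentence: str, layer: int) -> torch.Tensor:
    """Mean-pool the hidden states of one layer for a single sentence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # hidden_states is a tuple: (embedding layer, layer 1, ..., layer 12)
    return outputs.hidden_states[layer].mean(dim=1).squeeze(0)


# Toy probing task: does the sentence contain a past-tense verb?
sentences = ["She walked home.", "He eats lunch.", "They played chess.", "We run daily."]
labels = [1, 0, 1, 0]

for layer in (1, 6, 12):
    features = torch.stack([layer_representation(s, layer) for s in sentences]).numpy()
    probe = LogisticRegression(max_iter=1000).fit(features, labels)
    print(f"layer {layer:2d}: train accuracy = {probe.score(features, labels):.2f}")
```

Comparing probe accuracies across layers, and against control setups of the kind the paragraph above refers to, is what allows conclusions about where information is encoded rather than merely whether it is present somewhere in the model.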

The second approach is concerned with explainability, specifically with self-rationalising models that generate free-text explanations along with their predictions. This is an instance of local understandability: We obtain justifications for individual predictions. In this setup, however, the generation of the explanations is just as opaque as the generation of the predictions. Therefore, our work in this field focuses on better understanding the properties of the generated explanations. We evaluate the downstream performance of a classifier with explanations generated by different model pipelines and compare it to human ratings of the explanations. Our results indicate that the properties that increase the downstream performance differ from those that humans appreciate when evaluating an explanation. Finally, we annotate explanations generated by an LLM for properties that human explanations typically have and discuss the effects those properties have on different user groups. 
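
To make the contrast between the two setups concrete: a self-rationalising model generates its prediction and explanation jointly as one output sequence, while the pipeline variant uses a generation-only model for the explanation and a separate classifier for the label (as in the earlier sketch). The snippet below assumes a WT5-style joint output format ("<label> explanation: <text>"); the formats actually used in the work summarised above are not reproduced here.

```python
# Illustrative contrast between a self-rationalising model (joint label +
# explanation in one generated sequence) and a generate-then-classify
# pipeline. The "<label> explanation: <text>" convention is an assumption,
# not necessarily the format used in the thesis.

def parse_self_rationalising_output(generated: str) -> tuple[str, str]:
    """Split a jointly generated 'label + explanation' sequence."""
    if " explanation: " in generated:
        label, explanation = generated.split(" explanation: ", 1)
    else:
        label, explanation = generated, ""  # model omitted the rationale
    return label.strip(), explanation.strip()


# Self-rationalising model: one generation step yields both parts.
label, rationale = parse_self_rationalising_output(
    "entailment explanation: a dog sleeping on a sofa is an animal resting indoors"
)
print(label, "|", rationale)

# Pipeline alternative: a generation-only model produces just the rationale,
# and a separate classifier predicts the label from input + rationale.
# The finding summarised above is that explanations humans rate more highly
# are not necessarily the ones that help this downstream classifier more.
```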

While a detailed understanding of the inner workings of LLMs is still unfeasible, I argue that the techniques and analyses presented in this work can help to better understand LLMs, the linguistic knowledge they encode and their decision-making process. Together with knowledge about the models’ architecture, training data and training objective, such techniques can help us develop a robust high-level understanding of LLMs that can guide decisions on their deployment and potential improvements. 

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2024. p. 81
Series
Linköping Studies in Science and Technology. Dissertations, ISSN 0345-7524 ; 2364
National Category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:liu:diva-201985
DOI: 10.3384/9789180754712
ISBN: 9789180754705
ISBN: 9789180754712
Public defence
2024-04-18, Ada Lovelace, B-building, Campus Valla, Linköping, 14:00 (English)
Available from: 2024-04-02. Created: 2024-04-02. Last updated: 2024-08-01. Bibliographically approved.

Open Access in DiVA

fulltext (335 kB), 36 downloads
File information
File name: FULLTEXT02.pdf
File size: 335 kB
Checksum (SHA-512): e13494466e85db2d4697da1eab12c8ae6cd50dde33ea24f3a6539fb8e4686f6f1e1320e6c1510f1cdc72aa58a0782ceae98ce818e87effebe75d2bd8db6045bb
Type: fulltext
Mimetype: application/pdf

Other links

Publisher's full text | Scopus | Paper | Video

Authority records

Kunz, Jenny; Holmström, Oskar; Kuhlmann, Marco
