Verification Staircase: a Design Strategy for Actionable Explanations
Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Faculty of Science & Engineering. ORCID iD: 0000-0002-7014-8874
Sectra AB, Linköping, Sweden.
2020 (English). In: Proceedings of the Workshop on Explainable Smart Systems for Algorithmic Transparency in Emerging Technologies co-located with 25th International Conference on Intelligent User Interfaces (IUI 2020), Cagliari, Italy, March 17, 2020 / [ed] Alison Smith-Renner, Styliani Kleanthous, Brian Lim, Tsvi Kuflik, Simone Stumpf, Jahna Otterbacher, Advait Sarkar, Casey Dugan and Avital Shulner Tal. Aachen: CEUR-WS.org, 2020, Vol. 2582. Conference paper, Published paper (Refereed)
Abstract [en]

What if trust in the output of a predictive model could be acted upon in richer ways than a simple binary decision to accept or reject? Designing assistive AI tools for medical specialists entails supporting a complex but safety-critical decision process. Decisions in this domain can commonly be decomposed into a combination of many smaller decisions. In this paper, we present the Verification Staircase – a design strategy for such scenarios, in which multiple interactive assistive tools are combined to offer the user a nuanced degree of automation. This can support a wide range of prediction-quality scenarios, spanning from unproblematic minor mistakes to misleading major failures. By presenting the information hierarchically, the user can learn how underlying predictions are connected to overall case predictions and, over time, calibrate their trust so that they can choose the appropriate level of automatic support.

Place, publisher, year, edition, pages
Aachen: CEUR-WS.org, 2020. Vol. 2582
Series
CEUR Workshop Proceedings, ISSN 1613-0073
Keywords [en]
digital design, interaction design, machine learning, artificial intelligence, explainable artificial intelligence
National Category
Design; Human Computer Interaction; Computer Sciences
Identifiers
URN: urn:nbn:se:liu:diva-165763
OAI: oai:DiVA.org:liu-165763
DiVA id: diva2:1431361
Conference
Workshop on Explainable Smart Systems for Algorithmic Transparency in Emerging Technologies co-located with 25th International Conference on Intelligent User Interfaces (IUI 2020), Cagliari, Italy, March 17–20, 2020
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Note

Accepted for oral presentation at Workshop on Explainable Smart Systems for Algorithmic Transparency in Emerging Technologies co-located with 25th International Conference on Intelligent User Interfaces (IUI 2020), Cagliari, Italy, March 17, 2020

Available from: 2020-05-20. Created: 2020-05-20. Last updated: 2025-02-25. Bibliographically approved
In thesis
1. Designing with Machine Learning in Digital Pathology: Augmenting Medical Specialists through Interaction Design
2021 (English)Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Recent advancements in machine learning (ML) have led to a dramatic increase in AI capabilities for medical diagnostic tasks. Despite technical advances, developers of predictive AI models struggle to integrate their work into routine clinical workflows. Inefficient human-AI interactions, poor sociotechnical fit, and a lack of interactive strategies for dealing with the imperfect nature of predictions are known factors contributing to this lack of adoption.

User-centred design methods are typically aimed at discovering and realising desirable qualities in use, pragmatically oriented around finding solutions despite the limitations of material and human resources. However, existing methods often rely on designers possessing knowledge of suitable interactive metaphors and idioms, as well as skills in evaluating ideas through low-fidelity prototyping and rapid iteration methods—all of which are challenged by the data-driven nature of machine learning and the unpredictable outputs of AI models.

Using a constructive design research approach, my work explores how we might design systems with AI components that aid clinical decision-making in a human-centred and iterative fashion. Findings are derived from experiments and experiences from four exploratory projects conducted in collaboration with professional physicians, all aiming to probe this design space by producing novel interactive systems for or with ML components.

Contributions include identifying practical and theoretical design challenges, suggesting novel interaction strategies for human-AI collaboration, framing ML competence for designers, and presenting empirical descriptions of the conducted design processes. Specifically, this compilation thesis contains three works that address effective human-machine teaching and two works that address the challenge of designing interactions that afford successful decision-making despite the uncertainty and imperfections inherent in machine predictions.

Finally, two works directly address design researchers working with ML, arguing for a systematic approach to increasing the repertoire available for theoretical annotation and understanding of the properties of ML as a designerly material.

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2021
Series
Linköping Studies in Science and Technology. Dissertations, ISSN 0345-7524 ; 2157
National Category
Design; Human Computer Interaction
Identifiers
URN: urn:nbn:se:liu:diva-176117
DOI: 10.3384/diss.diva-176117
ISBN: 978-91-7929-604-9
Public defence
2021-09-23, K3, Kåkenhus, Campus Norrköping, Norrköping, 09:00 (English)
Available from: 2021-08-30. Created: 2021-06-07. Last updated: 2025-02-25. Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

http://ceur-ws.org/Vol-2582/paper5.pdf

Authority records

Lindvall, Martin
