Visualization for Trust in Machine Learning Revisited: The State of the Field in 2023
Chatzimparmpas, Angelos: Northwestern University, USA. ORCID iD: 0000-0002-9079-2376
Kucher, Kostiantyn: Linköping University, Department of Science and Technology, Media and Information Technology; Linköping University, Faculty of Science & Engineering (iVis, INV). ORCID iD: 0000-0002-1907-7820
Kerren, Andreas: Linköping University, Department of Science and Technology, Media and Information Technology; Linköping University, Faculty of Science & Engineering; Linnaeus University, Department of Computer Science and Media Technology (DM) (iVis, INV). ORCID iD: 0000-0002-0519-2537
2024 (English). In: IEEE Computer Graphics and Applications, ISSN 0272-1716, E-ISSN 1558-1756, Vol. 44, no. 3, p. 99-113. Article in journal (Refereed), Published
Abstract [en]

Visualization for explainable and trustworthy machine learning remains one of the most important and heavily researched fields within information visualization and visual analytics, with various application domains such as medicine, finance, and bioinformatics. After our 2020 state-of-the-art report comprising 200 techniques, we have continuously collected peer-reviewed articles describing visualization techniques, categorized them according to our previously established categorization schema of 119 categories, and made the resulting collection of 542 techniques available in an online survey browser. In this survey article, we present updated findings from new analyses of this dataset as of fall 2023 and discuss trends, insights, and eight open challenges for using visualizations in machine learning. Our results corroborate the rapidly growing trend of visualization techniques for increasing trust in machine learning models over the past three years; for instance, visualization was found to help improve popular model explainability methods and to check new deep learning architectures.

Place, publisher, year, edition, pages
IEEE Computer Society, 2024. Vol. 44, no. 3, p. 99-113
Keywords [en]
trustworthy machine learning, visualization, interpretable machine learning, explainable machine learning
National Category
Computer Sciences
Research subject
Computer Science, Information and software visualization
Identifiers
URN: urn:nbn:se:liu:diva-200588
DOI: 10.1109/MCG.2024.3360881
ISI: 001252800600004
PubMedID: 38294921
OAI: oai:DiVA.org:liu-200588
DiVA, id: diva2:1833507
Funder
ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications
Note

Funding Agencies: ELLIIT environment for strategic research in Sweden

Available from: 2024-02-01. Created: 2024-02-01. Last updated: 2024-08-28.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text; PubMed

Authority records

Kucher, Kostiantyn; Kerren, Andreas

Search in DiVA

By author/editor
Chatzimparmpas, Angelos; Kucher, Kostiantyn; Kerren, Andreas
By organisation
Media and Information Technology; Faculty of Science & Engineering
In the same journal
IEEE Computer Graphics and Applications
Computer Sciences
