FairX: A comprehensive benchmarking tool for model analysis using fairness, utility, and explainability
Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems. Linköping University, Faculty of Science & Engineering. (Reasoning and Learning Lab). ORCID iD: 0000-0001-5307-997X
Northeastern University.
Linköping University, Department of Computer and Information Science, Human-Centered Systems. Linköping University, Faculty of Science & Engineering. ORCID iD: 0000-0001-6356-045X
Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems. Linköping University, Faculty of Science & Engineering. (Reasoning and Learning Lab). ORCID iD: 0000-0002-9595-2471
2024 (English). In: Proceedings of the 2nd Workshop on Fairness and Bias in AI, co-located with 27th European Conference on Artificial Intelligence (ECAI 2024) / [ed] Roberta Calegari, Virginia Dignum, Barry O'Sullivan, CEUR, 2024, Vol. 3808, article id 16. Conference paper, Published paper (Refereed)
Abstract [en]

We present FairX, an open-source Python-based benchmarking tool designed for the comprehensive analysis of models under the umbrella of fairness, utility, and eXplainability (XAI). FairX enables users to train benchmark bias-mitigation models, evaluate their fairness using a wide array of fairness and data-utility metrics, and generate explanations for model predictions, all within a unified framework. Existing benchmarking tools can neither evaluate synthetic data generated by fair generative models nor support training such models. In FairX, we add fair generative models to our fair-model library (alongside pre-processing, in-processing, and post-processing methods), together with evaluation metrics for assessing the quality of synthetic fair data. This version of FairX supports both tabular and image datasets, and it also allows users to provide their own custom datasets. The open-source FairX benchmarking package is publicly available at https://github.com/fahim-sikder/FairX.
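To make the kind of fairness metric mentioned in the abstract concrete, the sketch below computes the demographic parity difference, one standard group-fairness measure. This is an illustrative, self-contained example only; it is not FairX's actual API, which is documented in the linked repository, and the function name and inputs here are assumptions for illustration.

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Absolute difference in positive-prediction rates between two groups.

    0.0 means both groups receive positive predictions at the same rate;
    larger values indicate greater disparity.
    """
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_a = y_pred[sensitive == 0].mean()  # positive rate in group 0
    rate_b = y_pred[sensitive == 1].mean()  # positive rate in group 1
    return abs(rate_a - rate_b)

# Example: group 0 receives 3/4 positive predictions, group 1 receives 1/4
preds = [1, 1, 1, 0, 1, 0, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # → 0.5
```

A benchmarking tool of this kind typically reports many such metrics (e.g., equalized odds, utility scores) side by side so that bias-mitigation methods can be compared on a common footing.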

Place, publisher, year, edition, pages
CEUR, 2024. Vol. 3808, article id 16
Series
CEUR Workshop Proceedings, ISSN 1613-0073
Keywords [en]
Data Fairness, Benchmarking, Synthetic Data, Evaluation
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:liu:diva-209224
OAI: oai:DiVA.org:liu-209224
DiVA, id: diva2:1911064
Conference
2nd Workshop on Fairness and Bias in AI (AEQUITAS), co-located with 27th European Conference on Artificial Intelligence (ECAI 2024)
Funder
Knut and Alice Wallenberg Foundation
Available from: 2024-11-06 Created: 2024-11-06 Last updated: 2024-11-15
Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Paper

Authority records

Sikder, Md Fahim; de Leng, Daniel; Heintz, Fredrik

