Enhancing Tabular GAN Fairness: The Impact of Intersectional Feature Selection
Linköping University, Department of Thematic Studies, The Department of Gender Studies. Linköping University, Faculty of Arts and Sciences.
Linköping University, Department of Thematic Studies, The Department of Gender Studies. Linköping University, Faculty of Arts and Sciences. ORCID iD: 0000-0001-5041-5018
Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. ORCID iD: 0000-0002-9217-9997
Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
2024 (English). Conference paper, Poster (with or without abstract) (Refereed)
Abstract [en]

Traditional GAN (Generative Adversarial Network) architectures often reproduce biases present in their training data, producing synthetic data that may unfairly affect certain subgroups. Past efforts to improve fairness in GANs usually target a single demographic category, such as sex or race, and overlook intersectionality. Our approach addresses this gap by integrating an intersectionality framework with explainability techniques to identify and select problematic sensitive features. These insights are then used to develop intersectional fairness constraints that are integrated into the GAN training process. By addressing intersections of multiple demographic attributes, we aim to enhance fairness while maintaining diverse subgroup representation. Specifically, we adjusted the loss functions of two state-of-the-art GAN models for tabular data to include an intersectional demographic parity constraint. Our evaluations indicate that this approach significantly improves fairness in synthetically generated datasets. Using the Adult and Diabetes datasets, we compared outcomes when considering the intersection of two sensitive features versus a single sensitive attribute, demonstrating that our method captures more complex biases.
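The record contains no code, but the abstract's core mechanism, a demographic parity measure computed over intersectional subgroups and added as a penalty to the generator loss, can be sketched. This is an illustrative reconstruction under assumptions, not the authors' implementation; the function name `intersectional_dp_gap`, the toy batch, and the weight `lam` are all hypothetical.

```python
def intersectional_dp_gap(records, sensitive_keys, outcome_key):
    """Largest pairwise gap in positive-outcome rate across the
    intersectional subgroups defined by `sensitive_keys`.

    A gap of 0.0 means demographic parity: every subgroup (e.g. every
    sex x race combination) receives the positive outcome at the same rate.
    """
    # Partition records by the joint value of the sensitive attributes.
    groups = {}
    for r in records:
        key = tuple(r[k] for k in sensitive_keys)
        groups.setdefault(key, []).append(r[outcome_key])
    # Positive-outcome rate per subgroup, then the worst-case spread.
    rates = [sum(v) / len(v) for v in groups.values()]
    return max(rates) - min(rates)

# Toy synthetic batch in the style of the Adult dataset:
# a binary income outcome over sex x race subgroups.
batch = [
    {"sex": "F", "race": "A", "income": 1},
    {"sex": "F", "race": "A", "income": 0},
    {"sex": "F", "race": "B", "income": 1},
    {"sex": "F", "race": "B", "income": 1},
    {"sex": "M", "race": "A", "income": 0},
    {"sex": "M", "race": "A", "income": 0},
    {"sex": "M", "race": "B", "income": 1},
    {"sex": "M", "race": "B", "income": 0},
]

# The intersectional gap can exceed any single-attribute gap, which is
# the kind of bias single-attribute fairness constraints miss.
print(intersectional_dp_gap(batch, ["sex", "race"], "income"))  # 1.0
print(intersectional_dp_gap(batch, ["sex"], "income"))          # 0.5

# During training, such a penalty would be weighted into the generator
# loss, e.g.: loss = adversarial_loss + lam * intersectional_dp_gap(...)
```

In the toy batch, constraining only `sex` hides a severe disparity that appears once `race` is considered jointly, which mirrors the abstract's motivation for an intersectional constraint.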

Place, publisher, year, edition, pages
2024.
Keywords [en]
synthetic data generation, generative adversarial networks, fairness, machine learning, intersectionality
National Category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:liu:diva-211981
OAI: oai:DiVA.org:liu-211981
DiVA, id: diva2:1941612
Conference
International Conference on Machine Learning and Applications (ICMLA)
Funder
Wallenberg AI, Autonomous Systems and Software Program – Humanity and Society (WASP-HS)
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2025-03-01 Created: 2025-03-01 Last updated: 2025-03-14 Bibliographically approved

Open Access in DiVA

No full text in DiVA

Authority records

Dehdarirad, Tahereh; Johnson, Ericka; Eilertsen, Gabriel; Hajisharif, Saghi
