Publications (10 of 71)
Swedish National Data Service (2025). Managing and publishing synthetic research data.
2025 (English) Report (Other (popular science, discussion, etc.))
Abstract [en]

This document provides guidance on organizing and documenting datasets that contain synthetic data to simplify publication in a research data repository. Unlike datasets collected from the "real world", synthetic data often require additional details to facilitate reproduction and reuse. This document summarizes the essential information that you should provide when sharing synthetic data in a research data repository to ensure that the data can be easily understood and efficiently reused by others.  In many cases, synthetic data must be handled differently if it is based on personal data, and a section specifically addressing synthetic personal data is included. 

Keywords
Synthetic data, Artificial intelligence, Data Management
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-212513 (URN)
10.5281/zenodo.14887525 (DOI)
Funding

Swedish Research Council: Swedish National Data Service (SND), 2021-00165_VR

Linköping University: Verifiering för nyttiggörande (VFN)

Available from: 2025-03-24 Created: 2025-03-24 Last updated: 2025-04-02
Johnson, E., Rayner, D., Kasmire, J., Hennetier, V., Hajisharif, S. & Ström, H. (2025). Metadata/README elements for synthetic structured data made with GenAI: Recommendations to data repositories to encourage transparent, reproducible, and responsible data sharing. AI Policy Exchange Forum (AIPEX)
2025 (English) Report (Other (popular science, discussion, etc.))
Abstract [en]

Publication of AI-generated synthetic structured data in data repositories is beginning to reveal the specific documentation elements that need to accompany synthetic datasets so as to ensure reproducibility and enable data reuse. This document identifies actions that research repositories can take to encourage users to provide AI-generated synthetic datasets with appropriate structure and documentation. The recommendations are specifically for AI-generated data, not (for example) data produced using pre-configured models or missing data created by statistical inference. Additionally, this document discusses metadata/README elements for synthetic structured datasets (tabular and multi-modal) and not textual data from LLMs or images for computer vision.

The document is the result of a workshop held on 23rd January 2025, with participants from the Swedish National Data Service, Linköping University and Manchester University. It also draws on survey responses about current practice from 17 data repositories and a review of existing metadata and README requirements. 

Place, publisher, year, edition, pages
AI Policy Exchange Forum (AIPEX), 2025
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:liu:diva-212766 (URN)
10.63439/MPEW5336 (DOI)
Funder
Wallenberg AI, Autonomous Systems and Software Program – Humanity and Society (WASP-HS)
Available from: 2025-04-02 Created: 2025-04-02 Last updated: 2025-04-11
Lee, F., Hajisharif, S. & Johnson, E. (2025). The ontological politics of synthetic data: Normalities, outliers, and intersectional hallucinations. Big Data and Society, 12(2)
2025 (English) In: Big Data and Society, E-ISSN 2053-9517, Vol. 12, no 2. Article in journal (Refereed) Published
Abstract [en]

Synthetic data is increasingly used as a substitute for real data due to ethical, legal, and logistical reasons. However, the rise of synthetic data also raises critical questions about its entanglement with the politics of classification and the reproduction of social norms and categories. This paper aims to problematize the use of synthetic data by examining how its production is intertwined with the maintenance of certain worldviews and classifications. We argue that synthetic data, like real data, is embedded with societal biases and power structures, leading to the reproduction of existing social inequalities. Through empirical examples, we demonstrate how synthetic data tends to highlight majority elements as the “normal” and minimize minority elements, and that the slight changes to the data structures that create synthetic data will also inevitably result in what we term “intersectional hallucinations.” These hallucinations are inherent to synthetic data and cannot be entirely eliminated without compromising the purpose of creating synthetic datasets. We contend that decisions about synthetic data involve determining which intersections are essential and which can be disregarded, a practice which will imbue these decisions with norms and values. Our study underscores the need for critical engagement with the mathematical and statistical choices in synthetic data production and advocates for careful consideration of the ontological and political implications of these choices during curatorial style production of synthetic structured data.

National Category
Information Systems, Social aspects; Other Computer and Information Science
Identifiers
urn:nbn:se:liu:diva-212985 (URN)
10.1177/20539517251318289 (DOI)
Funder
Wallenberg AI, Autonomous Systems and Software Program – Humanity and Society (WASP-HS)
Available from: 2025-04-14 Created: 2025-04-14 Last updated: 2025-04-14
Dehdarirad, T., Johnson, E., Eilertsen, G. & Hajisharif, S. (2024). Enhancing Tabular GAN Fairness: The Impact of Intersectional Feature Selection. Paper presented at the International Conference on Machine Learning and Applications (ICMLA).
2024 (English) Conference paper, Poster (with or without abstract) (Refereed)
Abstract [en]

Traditional GAN (Generative Adversarial Network) architectures often reproduce biases present in their training data, leading to synthetic data that may unfairly impact certain subgroups. Past efforts to improve fairness in GANs usually target single demographic categories, like sex or race, but overlook intersectionality. Our approach addresses this gap by integrating an intersectionality framework with explainability techniques to identify and select problematic sensitive features. These insights are then used to develop intersectional fairness constraints integrated into the GAN training process. We aim to enhance fairness and maintain diverse subgroup representation by addressing intersections of multiple demographic attributes. Specifically, we adjusted the loss functions of two state-of-the-art GAN models for tabular data, including an intersectional demographic parity constraint. Our evaluations indicate that this approach significantly improves fairness in synthetically generated datasets. We compared the outcomes using the Adult and Diabetes datasets when considering the intersection of two sensitive features versus focusing on a single sensitive attribute, demonstrating the effectiveness of our method in capturing more complex biases.

Keywords
synthetic data generation, generative adversarial networks, fairness, machine learning, intersectionality
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:liu:diva-211981 (URN)
Conference
International Conference on Machine Learning and Applications (ICMLA)
Funder
Wallenberg AI, Autonomous Systems and Software Program – Humanity and Society (WASP-HS)
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2025-03-01 Created: 2025-03-01 Last updated: 2025-03-14 Bibliographically approved
Johnson, E. (2024). ‘Intersectional hallucinations’: why AI struggles to understand that a six-year-old can’t be a doctor or claim a pension. The Conversation UK
2024 (English) In: The Conversation UK, ISSN 2201-5639. Article in journal, Editorial material (Other academic) Published
Place, publisher, year, edition, pages
The Conversation Media Group Ltd, 2024
Identifiers
urn:nbn:se:liu:diva-206844 (URN)
Note

Funding Agencies| Funding for this research has been provided by WASP-HS, Vinnova - The Swedish Innovation Agency, and Linköping University. Ericka Johnson is the co-program director of the Wallenberg AI, Autonomous Systems and Software Program – Humanity and Society (WASP-HS), where she leads the national Graduate School. She and Saghi Hajisharif are co-founders of the spin-off Fair AI Data. 

Available from: 2024-08-23 Created: 2024-08-23 Last updated: 2024-08-23
Johnson, E. (2024). Pepper as Imposter. Science & Technology Studies, 37(3), 62-70
2024 (English) In: Science & Technology Studies, E-ISSN 2243-4690, Vol. 37, no 3, p. 62-70. Article in journal, Editorial material (Other academic) Published
Abstract [en]

“An imposter is commonly understood as a person who pretends to be someone else in order to deceive others” (Vogel et al., 2021: 3). This is the starting point of Woolgar and colleagues’ (2021) recent work on imposters, in which they explore how thinking with imposters can be a useful analytic for social theory, i.e. a tool or lens through which to observe social-material phenomena. In the book, they trace early sociological use of imposters to articulate (underlying and/or performative) social orders, and how impostering was initially seen as an example of deviation from the normal. In these early uses, examples of impostering could be interpreted for clues to which mechanisms held together the social order. However, their reworking of the term impostering moves the figure of the imposter to ‘center stage’ and uses it to explore indeterminacy, uncertainty and disorder, the frictions and disruptions that are actually central to social relations (Vogel et al., 2021: 4). Rather than using it to discover underlying normative mechanisms, this new use of impostering keeps the analytical focus on the messy practices of social relations but also encourages analysis of which other actors are collaborating in the impostering practices, and what purposes the imposter is supposed to serve.

Place, publisher, year, edition, pages
Finnish Society for Science and Technology Studies, 2024
National Category
Gender Studies
Identifiers
urn:nbn:se:liu:diva-202626 (URN)
10.23987/sts.121864 (DOI)
001318229000004 ()
Note

Funding agency: the Wallenberg AI, Autonomous Systems and Software Program – Humanity and Society (WASP-HS) funded by the Marianne and Marcus Wallenberg Foundation

Available from: 2024-04-17 Created: 2024-04-17 Last updated: 2024-10-04
Harrison, K. & Johnson, E. (2023). Affective Corners as a Problematic for Design Interactions. ACM Transactions on Human-Robot Interaction, 12(4), Article ID 41.
2023 (English) In: ACM Transactions on Human-Robot Interaction, E-ISSN 2573-9522, Vol. 12, no 4, article id 41. Article in journal (Refereed) Published
Abstract [en]

Domestic robots are already commonplace in many homes, while humanoid companion robots like Pepper are increasingly becoming part of different kinds of care work. Drawing on fieldwork at a robotics lab, as well as our personal encounters with domestic robots, we use here the metaphor of “hard-to-reach corners” to explore the socio-technical limitations of companion robots and our differing abilities to respond to these limitations. This paper presents “hard-to-reach-corners” as a problematic for design interaction, offering them as an opportunity for thinking about context and intersectional aspects of adaptation.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023
Keywords
Social robotics; design; affect; Law social and behavioural science; Machine learning; Human-centred computing; HCI design and evaluation methods; Robotics; User characteristics
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:liu:diva-195533 (URN)
10.1145/3596452 (DOI)
001077335300001 ()
Note

Funding agencies: Wallenberg AI, Autonomous Systems and Software Program – Humanity and Society (WASP-HS) funded by the Marianne and Marcus Wallenberg Foundation.

Available from: 2023-06-21 Created: 2023-06-21 Last updated: 2023-11-03
Gleisner, J. & Johnson, E. (2023). Caring for affective subjects produced in intimate healthcare examinations. Health, 27(3), 302-322
2023 (English) In: Health, ISSN 1363-4593, Vol. 27, no 3, p. 302-322. Article in journal (Refereed) Published
Abstract [en]

This article is about the feelings – affect – induced by the digital rectal exam of the prostate and the gynaecological bimanual pelvic exam, and the care doctors are or are not instructed to give. The exams are both invasive, intimate exams located at a part of the body often charged with norms and emotions related to gender and sexuality. By using the concept affective subject, we analyse how these examinations are taught to medical students, bringing attention to how bodies and affect are cared for as patients are observed and touched. Our findings show both the role care practices play in generating and handling affect in the students’ learning and the importance of the affect that the exam is (or is not) imagined to produce in the patient. Ours is a material-discursive analysis that includes the material affordances of the patient and doctor bodies in the affective work spaces observed.

Place, publisher, year, edition, pages
Sage Publications, 2023
Keywords
affect, body, care, education, materiality
National Category
Peace and Conflict Studies; Other Social Sciences not elsewhere specified; Gender Studies; Social Anthropology
Identifiers
urn:nbn:se:liu:diva-175925 (URN)
10.1177/13634593211020072 (DOI)
000655995400001 ()
34041941 (PubMedID)
Funder
Swedish Research Council, 2013-8048
Note

Funding: Swedish Research Council; European Commission [Dnr 2013-8048]

Available from: 2021-05-27 Created: 2021-05-27 Last updated: 2025-02-20 Bibliographically approved
Eidenskog, M., Leifler, O., Sefyrin, J., Johnson, E. & Asplund, M. (2023). Changing the world one engineer at a time – unmaking the traditional engineering education when introducing sustainability subjects. International Journal of Sustainability in Higher Education, 24(9), 70-84
2023 (English) In: International Journal of Sustainability in Higher Education, ISSN 1467-6370, E-ISSN 1758-6739, Vol. 24, no 9, p. 70-84. Article in journal (Refereed) Published
Abstract [en]

Purpose: The information technology (IT) sector has been seen as central to society's transformation to a more just and sustainable society, which underlines teachers’ responsibility to foster engineers who can contribute specifically to such ends. This study aims to report an effort to significantly update an existing engineering programme in IT with this ambition and to analyse the effects and challenges associated with the transformation.

Design/methodology/approach: This study is based on a combination of action-oriented research based on implementing key changes to the curriculum; empirical investigations including surveys and interviews with students and teachers, and analysis of these; and a science and technology studies-inspired analysis.

Findings: Respondents were generally positive towards adding topics relating to sustainability. However, in the unmaking of traditional engineering subjects, changes created a conflict between core versus soft subjects in which the core subjects tended to gain the upper hand. This conflict can be turned into productive discussions by focusing on what kinds of engineers the authors educate and how students can be introduced to societal problems as an integrated part of their education.

Practical implications: This study can be helpful for educators in the engineering domain to support them in their efforts to transition from a (narrow) focus on traditional disciplines to one where the bettering of society is at the core.

Originality/value: This study provides a novel approach to the transformation of engineering education through a theoretical analysis seldom used in studies of higher education on a novel case study.

Place, publisher, year, edition, pages
Emerald Group Publishing Ltd, 2023
Keywords
Sustainability; Information technology; Science and technology studies; Software engineering education; Unmaking education
National Category
Didactics
Identifiers
urn:nbn:se:liu:diva-191661 (URN)
10.1108/ijshe-03-2022-0071 (DOI)
000926901000001 ()
Available from: 2023-02-07 Created: 2023-02-07 Last updated: 2023-04-17 Bibliographically approved
Winkle, K., McMillan, D., Arnelid, M., Balaam, M., Harrison, K., Johnson, E. & Leite, I. (2023). Feminist Human-Robot Interaction: Disentangling Power, Principles and Practice for Better, More Ethical HRI. In: Ginevra Castellano, Laurel Riek, Maya Cakmak, Iolanda Leite (Ed.), Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction: . Paper presented at ACM/IEEE Human Robot Interaction 2023, Stockholm 13 March 2023 through 16 March 2023 (pp. 72-82). Association for Computing Machinery (ACM)
2023 (English) In: Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction / [ed] Ginevra Castellano, Laurel Riek, Maya Cakmak, Iolanda Leite, Association for Computing Machinery (ACM), 2023, p. 72-82. Conference paper, Published paper (Refereed)
Abstract [en]

Human-Robot Interaction (HRI) is inherently a human-centric field of technology. The role of feminist theories in related fields (e.g. Human-Computer Interaction, Data Science) is taken as a starting point to present a vision for Feminist HRI which can support better, more ethical everyday HRI practice, as well as a more activist research and design stance. We first define feminist design for an HRI audience and use a set of feminist principles from neighboring fields to examine existing HRI literature, showing the progress that has been made already alongside some additional potential ways forward. Following this we identify a set of reflexive questions to be posed throughout the HRI design, research and development pipeline, encouraging a sensitivity to power and to individuals' goals and values. Importantly, we do not look to present a definitive, fixed notion of Feminist HRI, but rather demonstrate the ways in which bringing feminist principles to our field can lead to better, more ethical HRI, and to discuss how we, the HRI community, might do this in practice.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023
Series
ACM/IEEE International Conference on Human-Robot Interaction (HRI), ISSN 2167-2121, E-ISSN 2167-2148
Keywords
feminism, research methodology, design methodology
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:liu:diva-195859 (URN)
10.1145/3568162.3576973 (DOI)
2-s2.0-85150374447 (Scopus ID)
978-1-4503-9964-7 (ISBN)
Conference
ACM/IEEE Human Robot Interaction 2023, Stockholm 13 March 2023 through 16 March 2023
Note

Funding agencies: Digital Futures, and the Wallenberg AI, Autonomous Systems and Software Program – Humanity and Society (WASP-HS) funded by the Marianne and Marcus Wallenberg Foundation and the Marcus and Amalia Wallenberg Foundation.

Available from: 2023-06-27 Created: 2023-06-27 Last updated: 2023-06-27
Projects
Swedish network for the medical humanities [2021-01887_Forte]; Uppsala University
Identifiers
ORCID iD: orcid.org/0000-0001-5041-5018
