liu.se - Search for publications in DiVA
Heintz, Fredrik, Professor (ORCID iD: orcid.org/0000-0002-9595-2471)
Publications (10 of 125)
Mannila, L., Hallström, J., Nordlöf, C., Heintz, F., Sperling, K. & Stenliden, L. (2025). Framing AI Literacy for K-12 Education: Insights from Multi-Perspective and International Stakeholders. In: ACE '25: Proceedings of the 27th Australasian Computing Education Conference. Paper presented at ACE '25: The 27th Australasian Computing Education Conference, Brisbane, Australia, Feb 12-13, 2025 (pp. 85-94). Association for Computing Machinery (ACM)
Framing AI Literacy for K-12 Education: Insights from Multi-Perspective and International Stakeholders
2025 (English) In: ACE '25: Proceedings of the 27th Australasian Computing Education Conference, Association for Computing Machinery (ACM), 2025, p. 85-94. Conference paper, Published paper (Refereed)
Abstract [en]

National and international policy documents emphasize the need for AI-related competencies “for all”, but there is little clarity on what these competencies should include, and determining what non-experts need to know remains a challenge. AI literacy has become a widely discussed topic in this context, often referring to a set of skills that empower individuals to critically evaluate AI, communicate and collaborate effectively with AI systems, and utilize AI as a tool across diverse contexts, including online environments, homes, schools, and workplaces. However, what AI literacy looks like in practice depends on factors such as age, level of education, and individual background. In this article, we frame AI literacy based on a qualitative analysis of the views of 33 international experts from various disciplines on what AI literacy in K-12 education should encompass. This analysis builds on existing AI literacy frameworks, with a focus on understanding and critically evaluating AI’s role in daily life, recognizing and using AI, and designing AI solutions for everyday problems. The findings show that experts emphasize a wide range of knowledge, skills, and attitudes, highlighting the importance of multiple perspectives when exploring this emerging field.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2025
National Category
Didactics; Computer Systems
Identifiers
urn:nbn:se:liu:diva-212950 (URN), 10.1145/3716640.3716650 (DOI), 001480949300010 (), 9798400714252 (ISBN)
Conference
ACE '25: The 27th Australasian Computing Education Conference, Brisbane, Australia, Feb 12-13, 2025
Funder
Swedish Research Council, 2022-03553
Note

Funding Agencies: Swedish Research Council

Available from: 2025-04-11 Created: 2025-04-11 Last updated: 2025-06-11
Sperling, K., Stenliden, L., Mannila, L., Hallström, J., Nordlöf, C. & Heintz, F. (2025). Perspectives on AI literacy in Middle School Classrooms: An Integrative Review. Postdigital Science and Education, 7, 719-749
Perspectives on AI literacy in Middle School Classrooms: An Integrative Review
2025 (English) In: Postdigital Science and Education, ISSN 2524-485X, Vol. 7, p. 719-749. Article in journal (Refereed) Published
Abstract [en]

AI literacy in school education is booming within the scientific discourse on AI in education. How AI literacy is currently framed serves diverse educational, political, and commercial purposes, influencing how we imagine postdigital classrooms today and in the future. More importantly, how AI literacy emerges in primary education notably impacts how children understand AI and their own agency in a society where AI is ubiquitous. This study reviews how the scientific literature conceptualises AI literacy, focusing on middle school students. An AI-adapted literacy framework (GeST) is used in the analysis to distinguish three perspectives on AI literacy (Generic, Situated, and Transformative). Forty-four papers from 2016-2024 were included in the final descriptive and qualitative analysis, showing exponential growth in scientific papers. While still vaguely defined and poorly theorised, AI literacy materialises into different AI curricula and technology-supported teaching activities. The GeST analysis indicates that AI literacy is primarily viewed as a set of measurable skills related to generalisable theoretical knowledge that is expected to make children more competitive in a globalised and technologised world. Although some papers consider empowering students with specific competencies to challenge AI development, critical considerations of AI in education are less visible. The paper highlights the need to steer the conceptualisation of AI literacy towards a stronger emphasis on critical orientations that enable both students and teachers to examine claims about AI and pose ethical questions about its adoption and use in classrooms and beyond.

Keywords
AI literacy, Middle school, Primary education, Postdigital, K-12 classroom
National Category
Social Sciences; Didactics
Identifiers
urn:nbn:se:liu:diva-216940 (URN), 10.1007/s42438-025-00560-1 (DOI)
Funder
Swedish Research Council, 2022-03553; Linköpings universitet
Available from: 2025-08-25 Created: 2025-08-25 Last updated: 2025-10-01
Sikder, M. F., Ramachandranpillai, R., de Leng, D. & Heintz, F. (2025). Promoting Intersectional Fairness through Knowledge Distillation. In: Inês Lynce, Nello Murano, Mauro Vallati, Serena Villata, Federico Chesani, Michela Milano, Andrea Omicini, Mehdi Dastani (Eds.). Paper presented at 28th European Conference on Artificial Intelligence (ECAI), Bologna, Italy, 2025 (pp. 3427-3434). IOS Press
Promoting Intersectional Fairness through Knowledge Distillation
2025 (English) In: / [ed] Inês Lynce, Nello Murano, Mauro Vallati, Serena Villata, Federico Chesani, Michela Milano, Andrea Omicini, Mehdi Dastani, IOS Press, 2025, p. 3427-3434. Conference paper, Published paper (Refereed)
Abstract [en]

As Artificial Intelligence-driven decision-making systems become increasingly popular, ensuring fairness in their outcomes has emerged as a critical and urgent challenge. AI models, often trained on open-source datasets embedded with human and systemic biases, risk producing decisions that disadvantage certain demographics. This challenge intensifies when multiple sensitive attributes interact, leading to intersectional bias, a compounded and uniquely complex form of unfairness. Over the years, various methods have been proposed to address bias at the data and model levels. However, mitigating intersectional bias in decision-making remains an under-explored challenge. Motivated by this gap, we propose a novel framework that leverages knowledge distillation to promote intersectional fairness. Our approach proceeds in two stages: first, a teacher model is trained solely to maximize predictive accuracy; then, a student model inherits the teacher's representational knowledge while incorporating intersectional fairness constraints. The student model integrates tailored loss functions that enforce parity in false positive rates and demographic distributions across intersectional groups, alongside an adversarial objective that minimizes protected-attribute information within the learned representation. Empirical evaluation across multiple benchmark datasets demonstrates that our approach achieves a 52% increase in accuracy for multi-class classification and a 61% reduction in average false positive rate across intersectional groups, outperforming state-of-the-art models. This distillation-based methodology provides a more stable optimization opportunity than direct fairness approaches, resulting in substantially fairer representations, particularly for multiple sensitive attributes and underrepresented demographic intersections.

Place, publisher, year, edition, pages
IOS Press, 2025
Keywords
Data Fairness, Representation Learning, Intersectional Fairness
National Category
Artificial Intelligence
Identifiers
urn:nbn:se:liu:diva-219032 (URN), 10.3233/FAIA251214 (DOI)
Conference
28th European Conference on Artificial Intelligence (ECAI), Bologna, Italy, 2025
Funder
Knut and Alice Wallenberg Foundation; ELLIIT - The Linköping-Lund Initiative on IT and Mobile Communications
Available from: 2025-10-25 Created: 2025-10-25 Last updated: 2025-10-30
Sikder, M. F., Ramachandran Pillai, R. & Heintz, F. (2025). TransFusion: Generating long, high fidelity time series using diffusion models with transformers. Machine Learning with Applications, 20, Article ID 100652.
TransFusion: Generating long, high fidelity time series using diffusion models with transformers
2025 (English) In: Machine Learning with Applications, E-ISSN 2666-8270, Vol. 20, article id 100652. Article in journal (Refereed) Published
Abstract [en]

The generation of high-quality, long-sequence time-series data is essential due to its wide range of applications. In the past, standalone Recurrent and Convolutional Neural Network based Generative Adversarial Networks (GANs) were used to synthesize time-series data. However, they are inadequate for generating long sequences of time-series data due to limitations in their architecture, such as difficulties in capturing long-range dependencies, limited temporal coherence, and scalability challenges. Furthermore, GANs are well known for training instability and the mode collapse problem. To address this, we propose TransFusion, a diffusion- and transformer-based generative model for generating high-quality, long-sequence time-series data. We extended the sequence length to 384, surpassing the previous limit, and successfully generated high-quality synthetic data. We also introduce two evaluation metrics to assess the quality of the synthetic data as well as its predictive characteristics. TransFusion is evaluated using a diverse set of visual and empirical metrics, consistently outperforming the previous state-of-the-art by a significant margin.

Place, publisher, year, edition, pages
Elsevier BV, 2025
Keywords
Time Series Generation, Generative Models
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:liu:diva-213111 (URN), 10.1016/j.mlwa.2025.100652 (DOI), 001472167500001 ()
Funder
Knut and Alice Wallenberg Foundation
Note

Funding Agencies: Knut and Alice Wallenberg Foundation, Sweden; ELLIIT Excellence Center at Linköping-Lund for Information Technology, Sweden

Available from: 2025-04-18 Created: 2025-04-18 Last updated: 2025-12-18
Wiman, E., Widén, L., Tiger, M. & Heintz, F. (2024). Autonomous 3D Exploration in Large-Scale Environments with Dynamic Obstacles. Paper presented at International Conference on Robotics and Automation, Yokohama, Japan, 13-17 May 2024.
Autonomous 3D Exploration in Large-Scale Environments with Dynamic Obstacles
2024 (English) Conference paper, Poster (with or without abstract) (Other academic)
Abstract [en]

Exploration in dynamic and uncertain real-world environments is an open problem in robotics, and it constitutes a foundational capability of autonomous systems operating in much of the real world. While 3D exploration planning has been extensively studied, the environments are assumed to be static, or only reactive collision avoidance is carried out. We propose a novel approach that not only avoids dynamic obstacles but also includes them in the plan itself, to deliberately exploit the dynamic environment in the agent's favor. The proposed planner, Dynamic Autonomous Exploration Planner (DAEP), extends AEP [1] to explicitly plan with respect to dynamic obstacles. Furthermore, addressing prior errors within AEP in DAEP has resulted in enhanced exploration within static environments. To thoroughly evaluate exploration planners in dynamic settings, we propose a new enhanced benchmark suite with several dynamic environments, including large-scale outdoor environments. DAEP outperforms state-of-the-art planners in dynamic and large-scale environments and is shown to be more effective at both exploration and collision avoidance.

Keywords
3D-exploration, dynamic environments, planning under uncertainty, collision avoidance
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:liu:diva-205049 (URN)
Conference
International Conference on Robotics and Automation, Yokohama, Japan, 13-17 May 2024.
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2024-06-19 Created: 2024-06-19 Last updated: 2025-02-07. Bibliographically approved
Wiman, E., Widén, L., Tiger, M. & Heintz, F. (2024). Autonomous 3D Exploration in Large-Scale Environments with Dynamic Obstacles. In: Zhidong Wang (Ed.), 2024 IEEE International Conference on Robotics and Automation (ICRA). Paper presented at IEEE International Conference on Robotics and Automation (ICRA) 2024, 13-17 May 2024, Yokohama, Japan (pp. 2389-2395). IEEE
Autonomous 3D Exploration in Large-Scale Environments with Dynamic Obstacles
2024 (English) In: 2024 IEEE International Conference on Robotics and Automation (ICRA) / [ed] Zhidong Wang, IEEE, 2024, p. 2389-2395. Conference paper, Published paper (Refereed)
Abstract [en]

Exploration in dynamic and uncertain real-world environments is an open problem in robotics, and it constitutes a foundational capability of autonomous systems operating in much of the real world. While 3D exploration planning has been extensively studied, the environments are assumed to be static, or only reactive collision avoidance is carried out. We propose a novel approach that not only avoids dynamic obstacles but also includes them in the plan itself, to deliberately exploit the dynamic environment in the agent's favor. The proposed planner, Dynamic Autonomous Exploration Planner (DAEP), extends AEP to explicitly plan with respect to dynamic obstacles. Furthermore, addressing prior errors within AEP in DAEP has resulted in enhanced exploration within static environments. To thoroughly evaluate exploration planners in such settings, we propose a new enhanced benchmark suite with several dynamic environments, including large-scale outdoor environments. DAEP outperforms state-of-the-art planners in dynamic and large-scale environments and is shown to be more effective at both exploration and collision avoidance.

Place, publisher, year, edition, pages
IEEE, 2024
Keywords
3D-exploration, dynamic environments, planning under uncertainty, collision avoidance
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:liu:diva-206793 (URN), 10.1109/ICRA57147.2024.10610996 (DOI), 001294576202005 (), 2-s2.0-85202446874 (Scopus ID), 9798350384574 (ISBN), 9798350384581 (ISBN)
Conference
IEEE International Conference on Robotics and Automation (ICRA) 2024, 13-17 May 2024, Yokohama, Japan
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Note

Funding Agencies: Wallenberg AI, Autonomous Systems and Software Program (WASP) - Knut and Alice Wallenberg Foundation; Excellence Center at Linköping-Lund in Information Technology (ELLIIT)

Available from: 2024-08-22 Created: 2024-08-22 Last updated: 2025-02-07. Bibliographically approved
Ramachandranpillai, R., Sikder, M. F., Bergström, D. & Heintz, F. (2024). Bt-GAN: Generating Fair Synthetic Healthdata via Bias-transforming Generative Adversarial Networks. The journal of artificial intelligence research, 79, 1313-1341
Bt-GAN: Generating Fair Synthetic Healthdata via Bias-transforming Generative Adversarial Networks
2024 (English) In: The journal of artificial intelligence research, ISSN 1076-9757, E-ISSN 1943-5037, Vol. 79, p. 1313-1341. Article in journal (Refereed) Published
Abstract [en]

Synthetic data generation offers a promising solution to enhance the usefulness of Electronic Healthcare Records (EHR) by generating realistic de-identified data. However, the existing literature primarily focuses on the quality of synthetic health data, neglecting the crucial aspect of fairness in downstream predictions. Consequently, models trained on synthetic EHR have faced criticism for producing biased outcomes in target tasks. These biases can arise from either (i) spurious correlations between features or (ii) the failure of models to accurately represent sub-groups. To address these concerns, we present Bias-transforming Generative Adversarial Networks (Bt-GAN), a GAN-based synthetic data generator specifically designed for the healthcare domain. To tackle spurious correlations (i), we propose an information-constrained Data Generation Process (DGP) that enables the generator to learn a fair deterministic transformation based on a well-defined notion of algorithmic fairness. To overcome the challenge of capturing exact sub-group representations (ii), we incentivize the generator to preserve sub-group densities through score-based weighted sampling, which compels the generator to learn from underrepresented regions of the data manifold. To evaluate the effectiveness of our proposed method, we conduct extensive experiments using the Medical Information Mart for Intensive Care (MIMIC-III) database. Our results demonstrate that Bt-GAN achieves state-of-the-art accuracy while significantly improving fairness and minimizing bias amplification. Furthermore, we perform an in-depth explainability analysis to provide additional evidence supporting the validity of our study. In conclusion, our research introduces a novel approach to addressing the limitations of synthetic data generation in the healthcare domain. By incorporating fairness considerations and leveraging advanced techniques such as GANs, we pave the way for more reliable and unbiased predictions in healthcare applications.

Place, publisher, year, edition, pages
AAAI Press, 2024
Keywords
Fair data generation, Trustworthy AI, Synthetic data generation, MIMIC-III, EHR
National Category
Computer Systems
Identifiers
urn:nbn:se:liu:diva-203151 (URN), 10.1613/jair.1.15317 (DOI), 001218386100001 ()
Note

Funding Agencies: Knut and Alice Wallenberg Foundation; ELLIIT Excellence Center at Linköping-Lund for Information Technology; TAILOR, an EU project

Available from: 2024-04-30 Created: 2024-04-30 Last updated: 2025-03-30. Bibliographically approved
Carlsen, H., Nykvist, B., Joshi, S. & Heintz, F. (2024). Chasing artificial intelligence in shared socioeconomic pathways. One Earth, 7(1), 18-22
Chasing artificial intelligence in shared socioeconomic pathways
2024 (English) In: One Earth, ISSN 2590-3330, E-ISSN 2590-3322, Vol. 7, no 1, p. 18-22. Article in journal, Editorial material (Other academic) Published
Abstract [en]

The development of artificial intelligence has likely reached an inflection point, with significant implications for how research needs to address emerging technologies and how they drive long-term socioeconomic development of importance for climate change scenarios.

Place, publisher, year, edition, pages
Cell Press, 2024
National Category
Peace and Conflict Studies; Other Social Sciences not elsewhere specified
Identifiers
urn:nbn:se:liu:diva-201493 (URN), 10.1016/j.oneear.2023.12.015 (DOI), 001171139600001 ()
Note

Funding Agencies: Mistra Geopolitics research program [2016/11]

Available from: 2024-03-12 Created: 2024-03-12 Last updated: 2025-02-20
Sikder, M. F., Ramachandranpillai, R., de Leng, D. & Heintz, F. (2024). FairX: A comprehensive benchmarking tool for model analysis using fairness, utility, and explainability. In: Roberta Calegari, Virginia Dignum, Barry O'Sullivan (Eds.), Proceedings of the 2nd Workshop on Fairness and Bias in AI, co-located with 27th European Conference on Artificial Intelligence (ECAI 2024). Paper presented at 2nd Workshop on Fairness and Bias in AI (AEQUITAS), co-located with 27th European Conference on Artificial Intelligence (ECAI 2024). CEUR, 3808, Article ID 16.
FairX: A comprehensive benchmarking tool for model analysis using fairness, utility, and explainability
2024 (English) In: Proceedings of the 2nd Workshop on Fairness and Bias in AI, co-located with 27th European Conference on Artificial Intelligence (ECAI 2024) / [ed] Roberta Calegari, Virginia Dignum, Barry O'Sullivan, CEUR, 2024, Vol. 3808, article id 16. Conference paper, Published paper (Refereed)
Abstract [en]

We present FairX, an open-source Python-based benchmarking tool designed for the comprehensive analysis of models under the umbrella of fairness, utility, and eXplainability (XAI). FairX enables users to train benchmark bias-mitigation models, evaluate their fairness using a wide array of fairness and data-utility metrics, and generate explanations for model predictions, all within a unified framework. Existing benchmarking tools can neither evaluate synthetic data generated by fair generative models nor support training such models. In FairX, we add fair generative models to our fair-model library (pre-processing, in-processing, post-processing), along with evaluation metrics for assessing the quality of synthetic fair data. This version of FairX supports both tabular and image datasets, and it also allows users to provide their own custom datasets. The open-source FairX benchmarking package is publicly available at https://github.com/fahim-sikder/FairX.

Place, publisher, year, edition, pages
CEUR, 2024
Series
CEUR Workshop Proceedings, ISSN 1613-0073
Keywords
Data Fairness, Benchmarking, Synthetic Data, Evaluation
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-209224 (URN), 2-s2.0-85209988687 (Scopus ID)
Conference
2nd Workshop on Fairness and Bias in AI (AEQUITAS), co-located with 27th European Conference on Artificial Intelligence (ECAI 2024)
Funder
Knut and Alice Wallenberg Foundation
Available from: 2024-11-06 Created: 2024-11-06 Last updated: 2025-11-03. Bibliographically approved
Bonte, P., Calbimonte, J.-P., de Leng, D., Dell'Aglio, D., Della Valle, E., Eiter, T., . . . Ziffer, G. (2024). Grounding Stream Reasoning Research. Transactions on Graph Data and Knowledge (TGDK), 2(1), 1-47, Article ID 2.
Grounding Stream Reasoning Research
2024 (English) In: Transactions on Graph Data and Knowledge (TGDK), ISSN 2942-7517, Vol. 2, no 1, p. 1-47, article id 2. Article in journal (Refereed) Published
Abstract [en]

In the last decade, there has been a growing interest in applying AI technologies to implement complex data analytics over data streams. To this end, researchers in various fields have been organising a yearly event called the "Stream Reasoning Workshop" to share perspectives, challenges, and experiences around this topic.

In this paper, the previous organisers of the workshops and other community members provide a summary of the main research results that have been discussed during the first six editions of the event. These results can be categorised into four main research areas: The first is concerned with the technological challenges related to handling large data streams. The second area aims at adapting and extending existing semantic technologies to data streams. The third and fourth areas focus on how to implement reasoning techniques, either considering deductive or inductive techniques, to extract new and valuable knowledge from the data in the stream.

This summary is written not only to provide a crystallisation of the field, but also to point out distinctive traits of the stream reasoning community. Moreover, it also provides a foundation for future research by enumerating a list of use cases and open challenges, to stimulate others to join this exciting research area.

Place, publisher, year, edition, pages
Wadern, Germany: Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik GmbH, 2024
Keywords
Stream Reasoning, Stream Processing, RDF streams, Streaming Linked Data, Continuous query processing, Temporal Logics, High-performance computing, Databases
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-203211 (URN), 10.4230/TGDK.2.1.2 (DOI)
Available from: 2024-05-03 Created: 2024-05-03 Last updated: 2025-04-05. Bibliographically approved