Search for publications in DiVA (liu.se)
Ziemke, Tom, Professor. ORCID iD: orcid.org/0000-0001-6883-2450
Publications (10 of 44)
Chilufya, E. M., Arvola, M., Severinsson, S., Martín Bylund, A., Stenliden, L., Mortazavi, A. & Ziemke, T. (2025). The BookBot Project: Conceptual Design of a Social Robot Facilitating Reading Motivation. In: Asbjørn Følstad, Symeon Papadopoulos, Theo Araujo, Effie L.-C. Law, Ewa Luger, Sebastian Hobert, Petter Bae Brandtzaeg (Ed.), Chatbots and Human-Centered AI: 8th International Workshop, CONVERSATIONS 2024, Thessaloniki, Greece, December 4–5, 2024, Revised Selected Papers. Paper presented at CONVERSATIONS 2024, Thessaloniki, Greece, December 4–5, 2024 (pp. 132-149). Springer
The BookBot Project: Conceptual Design of a Social Robot Facilitating Reading Motivation
2025 (English). In: Chatbots and Human-Centered AI: 8th International Workshop, CONVERSATIONS 2024, Thessaloniki, Greece, December 4–5, 2024, Revised Selected Papers / [ed] Asbjørn Følstad, Symeon Papadopoulos, Theo Araujo, Effie L.-C. Law, Ewa Luger, Sebastian Hobert, Petter Bae Brandtzaeg, Springer, 2025, p. 132-149. Conference paper, Published paper (Refereed)
Abstract [en]

The motivation to read among children in Nordic countries has seen a noticeable decline in recent years. This study explores the design of social robots for stimulating interest in reading among fourth-grade students (age 10–11). We used a combination of conceptual design methods to engage teachers and students from four classes in two Swedish schools in co-creation workshops. Ideas on functions, qualities, and robot designs were generated, and based on these ideas a set of ten distinct design concepts was created: the Facilitator, the Librarian, the Coach, the Buddy, the Assistant, the Narrator, the Creator, the Apprentice, the Portable, and the Gamer. The strengths and weaknesses of the different designs were evaluated, resulting in a final design named ‘The BookBot’, which aims to inspire and engage students in book reading through discussions of book content, character portrayal, and personalised book recommendations.

Place, publisher, year, edition, pages
Springer, 2025
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 15545
Keywords
Conceptual design; Reading motivation; Social robot
National Category
Artificial Intelligence
Identifiers
urn:nbn:se:liu:diva-213350 (URN); 10.1007/978-3-031-88045-2_9 (DOI); 2-s2.0-105002920344 (Scopus ID); 9783031880445 (ISBN); 9783031880452 (ISBN)
Conference
CONVERSATIONS 2024, Thessaloniki, Greece, December 4–5, 2024
Available from: 2025-04-30 Created: 2025-04-30 Last updated: 2025-04-30
Ziemke, T. (2025). The role of intentionality in human-robot interaction. In: Ericka Johnson (Ed.), How that robot made me feel (pp. 37-62). Cambridge: MIT Press
The role of intentionality in human-robot interaction
2025 (English). In: How that robot made me feel / [ed] Ericka Johnson, Cambridge: MIT Press, 2025, p. 37-62. Chapter in book (Other academic)
Place, publisher, year, edition, pages
Cambridge: MIT Press, 2025
National Category
Sociology
Identifiers
urn:nbn:se:liu:diva-215051 (URN); 9780262381437 (ISBN)
Available from: 2025-06-18 Created: 2025-06-18 Last updated: 2025-06-18. Bibliographically approved
Babel, F., Thellman, S. & Ziemke, T. (2024). Who’s Behind the Service Robot? The Impact of Avatars on Priority Decisions. In: Palinko, O. et al. (Ed.). Paper presented at the International Conference on Social Robotics 2024.
Who’s Behind the Service Robot? The Impact of Avatars on Priority Decisions
2024 (English). In: [ed] Palinko, O. et al., 2024. Conference paper, Published paper (Refereed)
Abstract [en]

Service robots, like delivery or telepresence robots, often act as proxies for people, though this representation is not always obvious to observers. For instance, when a delivery robot cuts in line at a grocery store, the identity of the person it represents matters. The lower social status often assigned to service robots (“it’s just a robot, it can wait”) can become problematic in human-robot conflicts if the represented individual (e.g., a senior citizen) is treated unfairly. Previous research indicates that people tend to overlook the human behind the robot, potentially disadvantaging the service recipient. This study aimed to determine whether people are more likely to grant priority to a delivery robot when informed about the person it represents. In a between-subjects virtual reality (VR) experiment, the recipient was either made apparent with an avatar or not, and participants had to choose whether they would give priority to a delivery robot in a grocery store queue. Results showed that while the presence of the avatar increased participants’ self-reported consideration of the robot’s human recipient, it did not lead to higher compliance, as the willingness to grant priority was unexpectedly high in both conditions. Social desirability and the novelty of human-robot interaction (HRI) may explain these outcomes; directions for future research are discussed.

Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 15561
Keywords
human-robot interaction, service robots, avatar, virtual reality, priority decision, conflict, social norms
National Category
Robotics and automation; Human Computer Interaction
Identifiers
urn:nbn:se:liu:diva-213329 (URN)
Conference
International Conference on Social Robotics 2024
Available from: 2025-04-28 Created: 2025-04-28 Last updated: 2025-05-07
Chilufya, E. M., Arvola, M. & Ziemke, T. (2023). A Comparative Study of Physical and Virtual Reality Prototyping of a Migrating Agent Interface. In: Proceedings of the 11th International Conference on Human-Agent Interaction. Paper presented at HAI '23: International Conference on Human-Agent Interaction, Gothenburg, Sweden, December 4-7, 2023 (pp. 369-371). New York, NY, USA: Association for Computing Machinery (ACM)
A Comparative Study of Physical and Virtual Reality Prototyping of a Migrating Agent Interface
2023 (English). In: Proceedings of the 11th International Conference on Human-Agent Interaction, New York, NY, USA: Association for Computing Machinery (ACM), 2023, p. 369-371. Conference paper, Poster (with or without abstract) (Other academic)
Abstract [en]

Prototyping methods are commonly employed iteratively throughout design and product development, typically ranging from early low-fidelity to later high-fidelity prototypes. We present a case study on prototyping a receptionist agent that migrates between three platforms (a monitor on the wall, a mobile phone, and a physical robot). More specifically, we compare virtual reality (VR) and physical (real-world) prototyping methods. The two methods are compared in terms of fidelity and usability. The breadth of features, the degree of functionality, and the interactivity were similar. However, the aesthetic refinement differed. The VR prototyping method also had much higher prerequisites in terms of equipment and skills, and the learning curve for the designer was steep. Both methods were equally efficient in user testing, but the VR method revealed more usability issues in the efficiency category, while the physical-space method revealed more issues in the effectiveness category.

Place, publisher, year, edition, pages
New York, NY, USA: Association for Computing Machinery (ACM), 2023
Keywords
embodiment, interface, design, virtual reality, virtual receptionist, migrating agent, usability, robotic agent, prototyping, fidelity
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:liu:diva-199484 (URN); 10.1145/3623809.3623928 (DOI); 001148034200048 (); 2-s2.0-85180130267 (Scopus ID); 9798400708244 (ISBN)
Conference
HAI '23: International Conference on Human-Agent Interaction, Gothenburg, Sweden, December 4 - 7, 2023
Available from: 2023-12-05 Created: 2023-12-05 Last updated: 2025-05-20. Bibliographically approved
Ziemke, T. & Thellman, S. (2023). How puzzling is the social artifact puzzle? Behavioral and Brain Sciences, 46, Article ID e50.
How puzzling is the social artifact puzzle?
2023 (English). In: Behavioral and Brain Sciences, ISSN 0140-525X, E-ISSN 1469-1825, Vol. 46, article id e50. Article in journal (Other academic), Published
Abstract [en]

In this commentary we would like to question (a) Clark and Fischer's characterization of the “social artifact puzzle” – which we consider less puzzling than the authors, and (b) their account of social robots as depictions involving three physical scenes – which to us seems unnecessarily complex. We contrast the authors' model with a more parsimonious account based on attributions.

National Category
Other Engineering and Technologies
Identifiers
urn:nbn:se:liu:diva-197273 (URN); 10.1017/s0140525x22001571 (DOI)
Funder
ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications
Available from: 2023-08-30 Created: 2023-08-30 Last updated: 2025-02-18
Thellman, S., Pettersson, M., Holmgren, A. & Ziemke, T. (2023). In the eyes of the beheld: Do people think that self-driving cars see what human drivers see? In: Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at the 2023 ACM/IEEE International Conference on Human-Robot Interaction, Stockholm, Sweden, March 13-16, 2023 (pp. 612-616). New York, NY, USA: Association for Computing Machinery (ACM)
In the eyes of the beheld: Do people think that self-driving cars see what human drivers see?
2023 (English). In: Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, New York, NY, USA: Association for Computing Machinery (ACM), 2023, p. 612-616. Conference paper, Poster (with or without abstract) (Refereed)
Abstract [en]

Safe interaction with automated vehicles requires that human road users understand the differences between the capabilities and limitations of human drivers and their artificial counterparts. Here we explore how people judge what self-driving cars versus human drivers can perceive by engaging online study participants in visual perspective taking toward a car pictured in various traffic scenes. The results indicate that people do not expect self-driving cars to differ significantly from human drivers in their capability to perceive objects in the environment. This finding is important because unmet expectations can result in detrimental interaction outcomes, such as traffic accidents. The extent to which people are able to calibrate their expectations remains an open question for future research.

Place, publisher, year, edition, pages
New York, NY, USA: Association for Computing Machinery (ACM), 2023
Keywords
human-vehicle interaction, perceptual belief, mental state attribution, perspective taking
National Category
Robotics and automation; Human Computer Interaction
Identifiers
urn:nbn:se:liu:diva-197633 (URN); 10.1145/3568294.3580158 (DOI); 001054975700121 (); 2-s2.0-85150443119 (Scopus ID); 9781450399708 (ISBN)
Conference
2023 ACM/IEEE International Conference on Human-Robot Interaction, Stockholm, Sweden, March 13 - 16, 2023
Funder
ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications
Note

Funding: ELLIIT, the Excellence Center at Linkoping-Lund in Information Technology

Available from: 2023-09-05 Created: 2023-09-05 Last updated: 2025-02-05
Thellman, S., Holmgren, A., Pettersson, M. & Ziemke, T. (2023). Out of Sight, Out of Mind? Investigating People's Assumptions About Object Permanence in Self-Driving Cars. In: Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at the 2023 ACM/IEEE International Conference on Human-Robot Interaction, Stockholm, Sweden, March 13-16, 2023 (pp. 602-606). New York, NY, USA: ACM Digital Library
Out of Sight, Out of Mind? Investigating People's Assumptions About Object Permanence in Self-Driving Cars
2023 (English). In: Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, New York, NY, USA: ACM Digital Library, 2023, p. 602-606. Conference paper, Poster (with or without abstract) (Refereed)
Abstract [en]

Safe and efficient interaction with autonomous road vehicles requires that human road users, including drivers, cyclists, and pedestrians, understand differences between the capabilities and limitations of self-driving vehicles and those of human drivers. In this study, we explore how people judge the ability of self-driving cars versus human drivers to keep track of out-of-sight objects by engaging online study participants in cognitive perspective taking toward a car in an animated traffic scene. The results indicate that people may expect self-driving cars to have similar object permanence capability as human drivers. This finding is important because unmet expectations on autonomous road vehicles can result in undesirable interaction outcomes, such as traffic accidents.

Place, publisher, year, edition, pages
New York, NY, USA: ACM Digital Library, 2023
Keywords
human-vehicle interaction, perceptual belief, mental state attribution, perspective taking
National Category
Human Computer Interaction; Robotics and automation
Identifiers
urn:nbn:se:liu:diva-197635 (URN); 10.1145/3568294.3580156 (DOI); 001054975700119 (); 2-s2.0-85150444646 (Scopus ID); 9781450399708 (ISBN)
Conference
2023 ACM/IEEE International Conference on Human-Robot Interaction, Stockholm, Sweden, March 13 - 16, 2023
Funder
ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications; Swedish Research Council, 2022-04602
Note

Funding: ELLIIT, the Excellence Center at Linkoping-Lund in Information Technology; Swedish Research Council (VR) grant [2022-04602]

Available from: 2023-09-05 Created: 2023-09-05 Last updated: 2025-02-05. Bibliographically approved
Ziemke, T. (2023). Understanding Social Robots: Attribution of Intentional Agency to Artificial and Biological Bodies. Artificial Life, 29(3), 351-366
Understanding Social Robots: Attribution of Intentional Agency to Artificial and Biological Bodies
2023 (English). In: Artificial Life, ISSN 1064-5462, E-ISSN 1530-9185, Vol. 29, no 3, p. 351-366. Article in journal (Refereed), Published
Abstract [en]

Much research in robotic artificial intelligence (AI) and Artificial Life has focused on autonomous agents as an embodied and situated approach to AI. Such systems are commonly viewed as overcoming many of the philosophical problems associated with traditional computationalist AI and cognitive science, such as the grounding problem (Harnad) or the lack of intentionality (Searle), because they have the physical and sensorimotor grounding that traditional AI was argued to lack. Robot lawn mowers and self-driving cars, for example, more or less reliably avoid obstacles, approach charging stations, and so on—and therefore might be considered to have some form of artificial intentionality or intentional directedness. It should be noted, though, that the fact that robots share physical environments with people does not necessarily mean that they are situated in the same perceptual and social world as humans. For people encountering socially interactive systems, such as social robots or automated vehicles, this poses the nontrivial challenge to interpret them as intentional agents to understand and anticipate their behavior but also to keep in mind that the intentionality of artificial bodies is fundamentally different from their natural counterparts. This requires, on one hand, a “suspension of disbelief ” but, on the other hand, also a capacity for the “suspension of belief.” This dual nature of (attributed) artificial intentionality has been addressed only rather superficially in embodied AI and social robotics research. It is therefore argued that Bourgine and Varela’s notion of Artificial Life as the practice of autonomous systems needs to be complemented with a practice of socially interactive autonomous systems, guided by a better understanding of the differences between artificial and biological bodies and their implications in the context of social interactions between people and technology.

Place, publisher, year, edition, pages
MIT Press, 2023
Keywords
Attribution, embodiment, grounding, human–robot interaction, intentionality, observer’s grounding problem, social robots
National Category
Other Engineering and Technologies
Identifiers
urn:nbn:se:liu:diva-197272 (URN); 10.1162/artl_a_00404 (DOI); 001049363000005 (); 36943757 (PubMedID)
Funder
ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications; Swedish Research Council, 2022-04602
Note

Funding: ELLIIT; Excellence Center at Linkoping-Lund in Information Technology; Swedish Research Council (VR) [2022-04602]

Available from: 2023-08-30 Created: 2023-08-30 Last updated: 2025-02-18
Thellman, S., Marsja, E., Anund, A. & Ziemke, T. (2023). Will It Yield: Expectations on Automated Shuttle Bus Interactions With Pedestrians and Bicyclists. In: HRI '23: Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at the ACM/IEEE International Conference on Human-Robot Interaction, March 13–16, 2023, Stockholm, Sweden (pp. 292-296). Association for Computing Machinery (ACM)
Will It Yield: Expectations on Automated Shuttle Bus Interactions With Pedestrians and Bicyclists
2023 (English). In: HRI '23: Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery (ACM), 2023, p. 292-296. Conference paper, Published paper (Refereed)
Abstract [en]

Autonomous vehicles that operate on public roads need to be predictable to others, including vulnerable road users. In this study, we asked participants to take the perspective of videotaped pedestrians and cyclists crossing paths with an automated shuttle bus, and to (1) judge whether the bus would stop safely in front of them and (2) report whether the bus's actual stopping behavior accorded with their expectations. The results show that participants expected the bus to brake safely in approximately two thirds of the human-vehicle interactions, more so for pedestrians than for cyclists, and that they tended to underestimate rather than overestimate the bus's capability to yield in ways that they considered safe. These findings have implications for the design and implementation of automated shuttle bus services.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023
Keywords
automated shuttles, transport, expectations, vulnerable road users
National Category
Psychology
Identifiers
urn:nbn:se:liu:diva-192876 (URN); 10.1145/3568294.3580091 (DOI); 001054975700054 (); 2-s2.0-85150443012 (Scopus ID); 9781450399708 (ISBN)
Conference
ACM/IEEE International Conference on Human-Robot Interaction, March 13–16, 2023, Stockholm, Sweden
Note

Funding: ELLIIT, the Excellence Center at Linkoping-Lund in Information Technology; Swedish Research Council (VR) [2022-04602]

Available from: 2023-04-04 Created: 2023-04-04 Last updated: 2023-10-11. Bibliographically approved
Axell, C., Berg, A., Hallström, J., Thellman, S. & Ziemke, T. (2022). Artificial Intelligence in Contemporary Children’s Culture: A Case Study. In: David Gill, Jim Tuff, Thomas Kennedy, Shawn Pendergast, Sana Jamil (Ed.), PATT 39: PATT on the Edge Technology, Innovation and Education. Paper presented at PATT 39. PATT on the Edge Technology, Innovation and Education. St. John’s, Newfoundland and Labrador, Canada June 21st-24th, 2022 (pp. 376-386). Memorial University of Newfoundland
Artificial Intelligence in Contemporary Children’s Culture: A Case Study
2022 (English). In: PATT 39: PATT on the Edge Technology, Innovation and Education / [ed] David Gill, Jim Tuff, Thomas Kennedy, Shawn Pendergast, Sana Jamil, Memorial University of Newfoundland, 2022, p. 376-386. Conference paper, Published paper (Refereed)
Abstract [en]

The overall aim of the school subject technology is to develop pupils’ understanding of technological solutions in everyday life. A starting point for this study is that it is important for technology teachers to have knowledge of pupils’ prior conceptions of the subject content, since these can both support and hinder their learning. In a previous study we found that when pupils (age 7) talk about digital technology and programming, they often refer to out-of-school experiences such as films, television programmes and books. Typically, their descriptions include robots with some form of intelligence. Hence, it seems that children’s culture may have an impact on the conceptions they bring to the technology classroom. In light of this, it is vital that technology teachers have knowledge about how robots and artificial intelligence (AI) are portrayed in children’s culture, and how pupils perceive these portrayals. However, knowledge about these aspects of technology in children’s culture is limited. The purpose of this study is to investigate how artifacts with artificial intelligence are portrayed in television programmes and literature aimed at children. This study is the first step in a larger study aiming to examine younger pupils’ conceptions and ideas about artificial intelligence. A novice conception of artificial intelligence can be described as an understanding of what a programmed device may, or may not, “understand” in relation to a human, which includes discerning the differences between the artificial and the human mind. Consequently, as a theoretical framework for investigating how artificial intelligence is portrayed in children’s culture, the concepts of Theory of Mind (ToM) and Theory of Artificial Mind (ToAM) are used. The empirical material presented in this paper, i.e. four children’s books and a popular children’s television programme, was analysed using a qualitative thematic analysis. The results show that the portrayal of AI is ambiguous. The structure and function of the robot have elements of both human and machine, and the human fictional characters sometimes view the robot as a machine, sometimes as a human. In addition, the empirical material as a whole includes portrayals of AI as a threat as well as a saviour. As regards implications, there is a risk that, without real-life experiences of robots, the representations conveyed by children’s books and other media can lead to ambivalent feelings towards real robots.

Place, publisher, year, edition, pages
Memorial University of Newfoundland, 2022
Keywords
Technology Education, Artificial Intelligence, Children’s Culture, Theory of Mind, Theory of Artificial Mind
National Category
Educational Sciences
Identifiers
urn:nbn:se:liu:diva-189717 (URN); 978-0-88901-505-0 (ISBN)
Conference
PATT 39. PATT on the Edge Technology, Innovation and Education. St. John’s, Newfoundland and Labrador, Canada June 21st-24th, 2022
Available from: 2022-11-03 Created: 2022-11-03 Last updated: 2022-11-03