Search for publications in DiVA (liu.se)
1 - 28 of 28
  • 1.
    Ziemke, Tom
    et al.
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Science & Engineering.
    Thellman, Sam
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Science & Engineering.
    How puzzling is the social artifact puzzle? 2023. In: Behavioral and Brain Sciences, ISSN 0140-525X, E-ISSN 1469-1825, Vol. 46, article id e50. Article in journal (Other academic)
    Abstract [en]

    In this commentary we would like to question (a) Clark and Fischer's characterization of the “social artifact puzzle” – which we consider less puzzling than the authors do – and (b) their account of social robots as depictions involving three physical scenes – which to us seems unnecessarily complex. We contrast the authors' model with a more parsimonious account based on attributions.

  • 2.
    Thellman, Sam
    et al.
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Science & Engineering.
    Pettersson, Max
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Arts and Sciences.
    Holmgren, Aksel
    Linköping University, Department of Computer and Information Science. Linköping University, Faculty of Science & Engineering.
    Ziemke, Tom
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Science & Engineering.
    In the eyes of the beheld: Do people think that self-driving cars see what human drivers see? 2023. In: Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, New York, NY, USA: Association for Computing Machinery (ACM), 2023, p. 612-616. Conference paper (Refereed)
    Abstract [en]

    Safe interaction with automated vehicles requires that human road users understand the differences between the capabilities and limitations of human drivers and their artificial counterparts. Here we explore how people judge what self-driving cars versus human drivers can perceive by engaging online study participants in visual perspective taking toward a car pictured in various traffic scenes. The results indicate that people do not expect self-driving cars to differ significantly from human drivers in their capability to perceive objects in the environment. This finding is important because unmet expectations can result in detrimental interaction outcomes, such as traffic accidents. The extent to which people are able to calibrate their expectations remains an open question for future research.

  • 3.
    Thellman, Sam
    et al.
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Science & Engineering.
    Holmgren, Aksel
    Linköping University, Department of Computer and Information Science. Linköping University, Faculty of Science & Engineering.
    Pettersson, Max
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Arts and Sciences.
    Ziemke, Tom
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Science & Engineering.
    Out of Sight, Out of Mind? Investigating People's Assumptions About Object Permanence in Self-Driving Cars. 2023. In: Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, New York, NY, USA: ACM Digital Library, 2023, p. 602-606. Conference paper (Refereed)
    Abstract [en]

    Safe and efficient interaction with autonomous road vehicles requires that human road users, including drivers, cyclists, and pedestrians, understand differences between the capabilities and limitations of self-driving vehicles and those of human drivers. In this study, we explore how people judge the ability of self-driving cars versus human drivers to keep track of out-of-sight objects by engaging online study participants in cognitive perspective taking toward a car in an animated traffic scene. The results indicate that people may expect self-driving cars to have similar object permanence capability as human drivers. This finding is important because unmet expectations on autonomous road vehicles can result in undesirable interaction outcomes, such as traffic accidents.

  • 4.
    Thellman, Sam
    et al.
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Arts and Sciences.
    Marsja, Erik
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences and Learning, Disability Research Division.
    Anund, Anna
    The Swedish National Road and Transport Research Institute, Linköping, Sweden.
    Ziemke, Tom
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Science & Engineering.
    Will It Yield: Expectations on Automated Shuttle Bus Interactions With Pedestrians and Bicyclists. 2023. In: HRI '23: Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery (ACM), 2023, p. 292-296. Conference paper (Refereed)
    Abstract [en]

    Autonomous vehicles that operate on public roads need to be predictable to others, including vulnerable road users. In this study, we asked participants to take the perspective of videotaped pedestrians and cyclists crossing paths with an automated shuttle bus, and to (1) judge whether the bus would stop safely in front of them and (2) report whether the bus's actual stopping behavior accorded with their expectations. The results show that participants expected the bus to brake safely in approximately two-thirds of the human-vehicle interactions, more often for pedestrians than for cyclists, and that they tended to underestimate rather than overestimate the bus's capability to yield in ways that they considered safe. These findings have implications for the design and implementation of automated shuttle bus services.

  • 5.
    Axell, Cecilia
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Division of Learning, Aesthetics, Natural Science. Linköping University, Faculty of Educational Sciences.
    Berg, Astrid
    Linköping University, Faculty of Educational Sciences. Linköping University, Department of Behavioural Sciences and Learning, Division of Learning, Aesthetics, Natural Science.
    Hallström, Jonas
    Linköping University, Department of Behavioural Sciences and Learning, Division of Learning, Aesthetics, Natural Science. Linköping University, Faculty of Educational Sciences.
    Thellman, Sam
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Arts and Sciences.
    Ziemke, Tom
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Science & Engineering.
    Artificial Intelligence in Contemporary Children’s Culture: A Case Study. 2022. In: PATT 39: PATT on the Edge: Technology, Innovation and Education / [ed] David Gill, Jim Tuff, Thomas Kennedy, Shawn Pendergast, Sana Jamil, Memorial University of Newfoundland, 2022, p. 376-386. Conference paper (Refereed)
    Abstract [en]

    The overall aim of the school subject technology is to develop pupils’ understanding of technological solutions in everyday life. A starting point for this study is that it is important for teachers in technology to have knowledge of pupils’ prior conceptions of the subject content, since these can both support and hinder their learning. In a previous study we found that when pupils (age 7) talk about digital technology and programming, they often refer to out-of-school experiences such as films, television programmes and books. Typically, their descriptions include robots with some form of intelligence. Hence, it seems that children’s culture may have an impact on the conceptions they bring to the technology classroom. In light of this, it is vital that technology teachers have knowledge about how robots and artificial intelligence (AI) are portrayed in children’s culture, and how pupils perceive these portrayals. However, knowledge about these aspects of technology in children’s culture is limited.

    The purpose of this study is to investigate how artifacts with artificial intelligence are portrayed in television programmes and literature aimed at children. This study is the first step in a larger study aiming to examine younger pupils’ conceptions and ideas about artificial intelligence. A novice conception of artificial intelligence can be described as an understanding of what a programmed device may, or may not, “understand” in relation to a human, which includes discerning the differences between the artificial and the human mind. Consequently, the concepts of Theory of Mind (ToM) and Theory of Artificial Mind (ToAM) are used as a theoretical framework for investigating how artificial intelligence is portrayed in children’s culture. The empirical material presented in this paper, i.e. four children’s books and a popular children’s television programme, was analysed using a qualitative thematic analysis. The results show that the portrayal of AI is ambiguous. The robots portrayed have elements of both human and machine in structure and function, and the human fictional characters sometimes view the robot as a machine, sometimes as a human. In addition, the empirical material as a whole includes portrayals of AI as a threat as well as a saviour. As regards implications, there is a risk that, without real-life experiences of robots, the representations that children’s books and other media convey can lead to ambivalent feelings towards real robots.

  • 6.
    Ziemke, Tom
    et al.
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Science & Engineering.
    Thellman, Sam
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Science & Engineering.
    Do we really want AI to be human-like? 2022. In: Science Robotics, ISSN 2470-9476, Vol. 7, no 68, article id eadd0641. Article in journal (Other academic)
    Abstract [en]

    Behavioral variability can be used to make robots more human-like, but we propose that it may be wiser to make them less so.

  • 7.
    Thellman, Sam
    et al.
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Arts and Sciences.
    de Graaf, Maartje
    Univ Utrecht, Netherlands.
    Ziemke, Tom
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Science & Engineering.
    Mental State Attribution to Robots: A Systematic Review of Conceptions, Methods, and Findings. 2022. In: ACM Transactions on Human-Robot Interaction, E-ISSN 2573-9522, Vol. 11, no 4, article id 41. Article, review/survey (Refereed)
    Abstract [en]

    The topic of mental state attribution to robots has been approached by researchers from a variety of disciplines, including psychology, neuroscience, computer science, and philosophy. As a consequence, the empirical studies that have been conducted so far exhibit considerable diversity in terms of how the phenomenon is described and how it is approached from a theoretical and methodological standpoint. This literature review addresses the need for a shared scientific understanding of mental state attribution to robots by systematically and comprehensively collating conceptions, methods, and findings from 155 empirical studies across multiple disciplines. The findings of the review include that: (1) the terminology used to describe mental state attribution to robots is diverse but largely homogeneous in usage; (2) the tendency to attribute mental states to robots is determined by factors such as the age and motivation of the human as well as the behavior, appearance, and identity of the robot; (3) there is a computer < robot < human pattern in the tendency to attribute mental states that appears to be moderated by the presence of socially interactive behavior; (4) there are conflicting findings in the empirical literature that stem from different sources of evidence, including self-report and non-verbal behavioral or neurological data. The review contributes toward more cumulative research on the topic and opens up a transdisciplinary discussion about the nature of the phenomenon and what types of research methods are appropriate for investigation.

  • 8.
    Thellman, Sam
    et al.
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Arts and Sciences.
    Thunberg, Sofia
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Arts and Sciences.
    Ziemke, Tom
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Science & Engineering.
    Does Emotional State Affect How People Perceive Robots? 2021. Conference paper (Refereed)
    Abstract [en]

    Emotions serve important regulatory roles in social interaction. Although recognition, modeling, and expression of emotion have been extensively researched in human-robot interaction and related fields, the role of human emotion in perceptions of and interactions with robots has so far received considerably less attention. We here report inconclusive results from a pilot study employing an affect induction procedure to investigate the effect of people's emotional state on their perceptions of human-likeness and mind in robots, as well as attitudes toward robots. We propose a new study design based on the findings from this study.

  • 9.
    Thellman, Sam
    et al.
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Arts and Sciences.
    Thunberg, Sofia
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Arts and Sciences.
    Ziemke, Tom
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Science & Engineering.
    Does Emotional State Affect How People Perceive Robots? 2021. In: HRI '21: Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery (ACM), 2021, p. 113-115. Conference paper (Refereed)
    Abstract [en]

    Emotions serve important regulatory roles in social interaction. Although recognition, modeling, and expression of emotion have been extensively researched in human-robot interaction and related fields, the role of human emotion in perceptions of and interactions with robots has so far received considerably less attention. We here report inconclusive results from a pilot study employing an affect induction procedure to investigate the effect of people's emotional state on their perceptions of human-likeness and mind in robots, as well as attitudes toward robots. We propose a new study design based on the findings from this study.

  • 10.
    Thellman, Sam
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Arts and Sciences.
    Social Robots as Intentional Agents. 2021. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Social robots are robots that are intended for social interaction with people. Because of the societal benefits that they are expected to bring, social robots are likely to become more common. Notably, social robots may be able to perform tasks that require social skills, such as communicating efficiently, coordinating actions, managing relationships, and building trust and rapport. However, robotic systems currently lack most of the technological preconditions for interacting socially. This means that until the necessary technology is developed, humans will have to do most of the work coordinating social interactions with robots. However, social robots are a phenomenon that might also challenge the human ability to interact socially. In particular, the actions of social robots may be less predictable to the ordinary people who will interact with them than the comparable actions of humans. In anticipating the actions of other people, we commonly employ folk-psychological assumptions about what others are likely to believe, want, and intend to do, given the situation that they are in. Folk psychology allows us to make instantaneous, unconscious judgments about the likely actions of others around us, and therefore, to interact socially. However, the application of folk psychology will be challenged in the context of social interaction with robots because of significant differences between humans and robots.

    This thesis addresses the scope and limits of people's ability to interact socially with robots by treating them as intentional agents, i.e., agents whose behavior is most appropriately predicted by attributing it to underlying intentional states, such as beliefs and desires. The thesis provides an analysis of the problem(s) of attributing behavior-congruent intentional states to robots, with a particular focus on the perceptual belief problem, i.e., the problem of understanding what robots know (and do not know) about objects and events in the environment based on their perception. The thesis presents evidence that people's understanding of robots as intentional agents is important to their ability to interact socially with them but that it may also be significantly limited by (1) the extendability of the rich folk-psychological understanding that people have gained from sociocultural experiences with humans and other social animals to interactions with robots, and (2) the integrability of new experiences with robots into a usable and reasonably accurate folk-psychological understanding of them. Studying the formation and application of folk psychology in interactions with robots should therefore be a central undertaking in social robotics research.

    List of papers
    1. Folk-Psychological Interpretation of Human vs. Humanoid Robot Behavior: Exploring the Intentional Stance toward Robots
    2017 (English). In: Frontiers in Psychology, E-ISSN 1664-1078, Vol. 8, article id 1962. Article in journal (Refereed). Published
    Abstract [en]

    People rely on shared folk-psychological theories when judging behavior. These theories guide people's social interactions and therefore need to be taken into consideration in the design of robots and other autonomous systems expected to interact socially with people. It is, however, not yet clear to what degree the mechanisms that underlie people's judgments of robot behavior overlap with or differ from those underlying judgments of human or animal behavior. To explore this issue, participants (N = 90) were exposed to images and verbal descriptions of eight different behaviors exhibited either by a person or a humanoid robot. Participants were asked to rate the intentionality, controllability and desirability of the behaviors, and to judge the plausibility of seven different types of explanations derived from a recently proposed psychological model of lay causal explanation of human behavior. Results indicate: substantially similar judgments of human and robot behavior, both in terms of (1a) ascriptions of intentionality/controllability/desirability and in terms of (1b) plausibility judgments of behavior explanations; (2a) a high level of agreement in judgments of robot behavior, (2b) slightly lower than but still largely similar to the agreement over human behaviors; (3) systematic differences in judgments concerning the plausibility of goals and dispositions as explanations of human vs. humanoid behavior. Taken together, these results suggest that people's intentional stance toward the robot was in this case very similar to their stance toward the human.

    Place, publisher, year, edition, pages
    Frontiers Media SA, 2017
    Keywords
    human-robot interaction; folk psychology; social interaction; intentional stance; attribution theory; intentionality ascription; behavior explanation; social robots
    National Category
    Social Psychology
    Identifiers
    urn:nbn:se:liu:diva-143236 (URN), 10.3389/fpsyg.2017.01962 (DOI), 000415036700001
    Note

    Funding agencies: ELLIIT (Excellence Center at Linköping-Lund in Information Technology); Knowledge Foundation, Stockholm, under SIDUS grant [20140220]

    Available from: 2017-11-27 Created: 2017-11-27 Last updated: 2022-02-10
    2. Human Interpretation of Goal-Directed Autonomous Car Behavior
    2018 (English). In: CogSci 2018: Changing/Minds, 40th Annual Cognitive Science Society Meeting, Madison, Wisconsin, USA, July 25-28, Victoria, British Columbia: Cognitive Science Society, 2018, p. 2235-2240. Conference paper, Published paper (Refereed)
    Abstract [en]

    People increasingly interact with different types of autonomous robotic systems, ranging from humanoid social robots to driverless vehicles. But little is known about how people interpret the behavior of such systems, and in particular if and how they attribute cognitive capacities and mental states to them. In a study concerning people’s interpretations of autonomous car behavior, building on our previous research on human-robot interaction, participants were presented with (1) images of cars – either with or without a driver – exhibiting various goal-directed traffic behaviors, and (2) brief verbal descriptions of that behavior. They were asked to rate the extent to which these behaviors were intentional and judge the plausibility of different types of causal explanations. The results indicate that people (a) view autonomous car behavior as goal-directed, (b) discriminate between intentional and unintentional autonomous car behaviors, and (c) view the causes of autonomous and human traffic behaviors similarly, in terms of both intentionality ascriptions and behavior explanations. However, there was considerably lower agreement in participant ratings of the driverless behaviors, which might indicate an increased difficulty in interpreting goal-directed behavior of autonomous systems.

    Place, publisher, year, edition, pages
    Victoria, British Columbia: Cognitive Science Society, 2018
    Keywords
    Autonomous cars, self-driving, human-robot interaction, folk psychology, attribution, behavior explanation
    National Category
    Robotics, Human Computer Interaction, Computer Sciences
    Identifiers
    urn:nbn:se:liu:diva-154461 (URN), 9780991196784 (ISBN)
    Conference
    The 40th Annual Cognitive Science Society Meeting, Madison, Wisconsin, USA, July 25-28
    Available from: 2019-02-13 Created: 2019-02-13 Last updated: 2021-08-30. Bibliographically approved
    3. The Intentional Stance Toward Robots: Conceptual and Methodological Considerations
    2019 (English). In: CogSci'19. Proceedings of the 41st Annual Conference of the Cognitive Science Society / [ed] A.K. Goel, C.M. Seifert, & C. Freksa, Cognitive Science Society, Inc., 2019, p. 1097-1103. Conference paper, Published paper (Refereed)
    Abstract [en]

    It is well known that people tend to anthropomorphize in interpretations and explanations of the behavior of robots and other interactive artifacts. Scientific discussions of this phenomenon tend to confuse the overlapping notions of folk psychology, theory of mind, and the intentional stance. We provide a clarification of the terminology, outline different research questions, and propose a methodology for making progress in studying the intentional stance toward robots empirically.

    Place, publisher, year, edition, pages
    Cognitive Science Society, Inc., 2019
    Keywords
    human-robot interaction; social cognition; intentional stance; theory of mind; folk psychology; false-belief task
    National Category
    Computer and Information Sciences
    Identifiers
    urn:nbn:se:liu:diva-159108 (URN), 0-9911967-7-5 (ISBN)
    Conference
    The 41st Annual Conference of the Cognitive Science Society, July 24-26, Montreal, Canada
    Note

    The authors would like to thank Fredrik Stjernberg, Robert Johansson, and members of the Cognition & Interaction Lab at Linköping University for valuable input on the ideas presented in this paper.

    Available from: 2019-07-25 Created: 2019-07-25 Last updated: 2021-08-30
    4. The Perceptual Belief Problem: Why Explainability Is a Tough Challenge in Social Robotics
    2021 (English). In: ACM Transactions on Human-Robot Interaction, E-ISSN 2573-9522, Vol. 10, no 3. Article in journal (Refereed). Published
    Abstract [en]

    The explainability of robotic systems depends on people’s ability to reliably attribute perceptual beliefs to robots, i.e., what robots know (or believe) about objects and events in the world based on their perception. However, the perceptual systems of robots are not necessarily well understood by the majority of people interacting with them. In this article, we explain why this is a significant, difficult, and unique problem in social robotics. The inability to judge what a robot knows (and does not know) about the physical environment it shares with people gives rise to a host of communicative and interactive issues, including difficulties in communicating about objects or adapting to events in the environment. The challenge faced by social robotics researchers or designers who want to facilitate appropriate attributions of perceptual beliefs to robots is to shape human–robot interactions so that people understand what robots know about objects and events in the environment. To meet this challenge, we argue, it is necessary to advance our knowledge of when and why people form incorrect or inadequate mental models of robots’ perceptual and cognitive mechanisms. We outline a general approach to studying this empirically and discuss potential solutions to the problem.

    Place, publisher, year, edition, pages
    Association for Computing Machinery (ACM), 2021
    Keywords
    predictability, belief attribution, social robotics, intentional stance, explainability, Human-robot interaction, understandability, mental state attribution, common ground, intentionality
    National Category
    Human Computer Interaction
    Identifiers
    urn:nbn:se:liu:diva-178651 (URN), 10.1145/3461781 (DOI), 000731456900011, 2-s2.0-85111601983 (Scopus ID)
    Available from: 2021-08-25 Created: 2021-08-25 Last updated: 2023-01-09. Bibliographically approved
    5. Do You See what I See? Tracking the Perceptual Beliefs of Robots
    2020 (English). In: iScience, E-ISSN 2589-0042, Vol. 23, no 10, article id 101625. Article in journal (Refereed). Published
    Abstract [en]

    Keeping track of others' perceptual beliefs – what they perceive and know about the current situation – is imperative in many social contexts. In a series of experiments, we set out to investigate people's ability to keep track of what robots know or believe about objects and events in the environment. To this end, we subjected 155 experimental participants to an anticipatory-looking false-belief task where they had to reason about a robot's perceptual capability in order to predict its behavior. We conclude that (1) it is difficult for people to track the perceptual beliefs of a robot whose perceptual capability potentially differs significantly from human perception, (2) people can gradually "tune in" to the unique perceptual capabilities of a robot over time by observing it interact with the environment, and (3) providing people with verbal information about a robot's perceptual capability might not significantly help them predict its behavior.

    Place, publisher, year, edition, pages
    Cell Press, 2020
    National Category
    Robotics
    Identifiers
    urn:nbn:se:liu:diva-172328 (URN), 10.1016/j.isci.2020.101625 (DOI), 000581985500080, 33089112 (PubMedID)
    Note

    Funding agencies: ELLIIT, the Excellence Center at Linköping-Lund in Information Technology

    Available from: 2021-01-07 Created: 2021-01-07 Last updated: 2021-08-30
    6. Mental State Attribution to Robots: A Systematic Review of Conceptions, Methods, and Findings
    (English)Manuscript (preprint) (Other academic)
    Abstract [en]

    The topic of mental state attribution to robots has been approached by researchers from a variety of disciplines, including psychology, neuroscience, computer science, and philosophy. As a consequence, the empirical studies that have been conducted so far exhibit considerable diversity in terms of how the phenomenon is described and how it is approached from a theoretical and methodological standpoint. This literature review addresses the need for a shared scientific understanding of mental state attribution to robots by systematically and comprehensively collating conceptions, methods, and findings from 155 empirical studies across multiple disciplines. The findings of the review include that: (1) the terminology used to describe mental state attribution to robots is diverse but largely homogeneous in usage; (2) the tendency to attribute mental states to robots is determined by factors such as the age and motivation of the human as well as the behavior, appearance, and identity of the robot; (3) there is a computer < robot < human pattern in the tendency to attribute mental states that appears to be moderated by the presence of socially interactive behavior; (4) there are apparent contradictions in the empirical literature that stem from different sources of evidence, including self-report and non-verbal behavioral or neurological data. The review contributes toward more cumulative research on the topic and opens up a transdisciplinary discussion about the nature of the phenomenon and what types of research methods are appropriate for investigation.

    Keywords
    human-robot interaction, folk psychology, mentalizing, theory of mind, intentional stance, mind perception, anthropomorphism
    National Category
    Human Computer Interaction
    Identifiers
    urn:nbn:se:liu:diva-178805 (URN)
    Available from: 2021-08-30 Created: 2021-08-30 Last updated: 2021-09-09. Bibliographically approved
  • 11.
    Thellman, Sam
    et al.
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Arts and Sciences.
    Ziemke, Tom
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Science & Engineering.
    The Perceptual Belief Problem: Why Explainability Is a Tough Challenge in Social Robotics. 2021. In: ACM Transactions on Human-Robot Interaction, E-ISSN 2573-9522, Vol. 10, no 3. Article in journal (Refereed)
    Abstract [en]

    The explainability of robotic systems depends on people’s ability to reliably attribute perceptual beliefs to robots, i.e., what robots know (or believe) about objects and events in the world based on their perception. However, the perceptual systems of robots are not necessarily well understood by the majority of people interacting with them. In this article, we explain why this is a significant, difficult, and unique problem in social robotics. The inability to judge what a robot knows (and does not know) about the physical environment it shares with people gives rise to a host of communicative and interactive issues, including difficulties in communicating about objects or adapting to events in the environment. The challenge faced by social robotics researchers or designers who want to facilitate appropriate attributions of perceptual beliefs to robots is to shape human–robot interactions so that people understand what robots know about objects and events in the environment. To meet this challenge, we argue, it is necessary to advance our knowledge of when and why people form incorrect or inadequate mental models of robots’ perceptual and cognitive mechanisms. We outline a general approach to studying this empirically and discuss potential solutions to the problem.

  • 12.
    Thellman, Sam
    et al.
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Arts and Sciences.
    Giagtzidou, Asenia
    Linköping University, Department of Computer and Information Science. Linköping University, Faculty of Arts and Sciences.
    Silvervarg, Annika
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Arts and Sciences.
    Ziemke, Tom
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Science & Engineering.
    An Implicit, Non-Verbal Measure of Belief Attribution to Robots. 2020. In: HRI '20: Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery (ACM), 2020, p. 473-475. Conference paper (Refereed)
    Abstract [en]

    Studies of mental state attribution to robots usually rely on verbal measures. However, verbal measures are sensitive to people's rationalizations, and the outcomes of such measures are not always reflected in a person's behavior. In light of these limitations, we present the first steps toward developing an alternative, non-verbal measure of belief attribution to robots. We report preliminary findings from a comparative study indicating that the two types of measures (verbal vs. non-verbal) are not always consistent. Notably, the divergence between the two measures was larger when the task of inferring the robot's belief was more difficult.

  • 13.
    Thellman, Sam
    et al.
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Arts and Sciences.
    Silvervarg, Annika
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Arts and Sciences.
    Ziemke, Tom
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Science & Engineering.
    Anthropocentric Attribution Bias in Human Prediction of Robot Behavior. 2020. In: HRI '20: Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery (ACM), 2020, p. 476-478. Conference paper (Refereed)
    Abstract [en]

    In many types of human-robot interactions, people must track the beliefs of robots based on uncertain estimates of robots' perceptual and cognitive capabilities. Did the robot see what happened, and did it understand what it saw? In this paper, we present preliminary experimental evidence that people estimating what a humanoid robot knows or believes about the environment anthropocentrically assume it to have human-like perceptual and cognitive capabilities. However, our results also suggest that people are able to adjust their incorrect assumptions based on observations of the robot.

  • 14.
    Thellman, Sam
    et al.
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Arts and Sciences.
    Ziemke, Tom
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Science & Engineering.
    Do You See what I See? Tracking the Perceptual Beliefs of Robots. 2020. In: iScience, E-ISSN 2589-0042, Vol. 23, no 10, article id 101625. Article in journal (Refereed)
    Abstract [en]

    Keeping track of others' perceptual beliefs – what they perceive and know about the current situation – is imperative in many social contexts. In a series of experiments, we set out to investigate people's ability to keep track of what robots know or believe about objects and events in the environment. To this end, we subjected 155 experimental participants to an anticipatory-looking false-belief task where they had to reason about a robot's perceptual capability in order to predict its behavior. We conclude that (1) it is difficult for people to track the perceptual beliefs of a robot whose perceptual capability potentially differs significantly from human perception, (2) people can gradually "tune in" to the unique perceptual capabilities of a robot over time by observing it interact with the environment, and (3) providing people with verbal information about a robot's perceptual capability might not significantly help them predict its behavior.

  • 15.
    Rueben, Matthew
    et al.
    Univ Southern Calif, CA 90007 USA.
    Nikolaidis, Stefanos
    Univ Southern Calif, CA 90007 USA.
    de Graaf, Maartje
    Univ Utrecht, Netherlands.
    Phillips, Elizabeth
    US Air Force Acad, CO 80840 USA.
    Robert, Lionel
    Univ Michigan, MI 48109 USA.
    Sirkin, David
    Stanford Univ, CA 94305 USA.
    Kwon, Minae
    Stanford Univ, CA 94305 USA.
    Thellman, Sam
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Arts and Sciences.
    Half Day Workshop on Mental Models of Robots. 2020. In: HRI '20: Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery (ACM), 2020, p. 658-659. Conference paper (Refereed)
    Abstract [en]

    Robotic systems are becoming increasingly complex, hindering people from understanding the robot's inner workings [24]. Simply providing the robot's source code may be useful for software and hardware engineers who need to test the system for traceability and verification [3], but not for the non-technical user. Plus, looks can be deceiving: robots that merely resemble humans or animals are perceived differently by users [25]. This workshop aims to provide a forum for researchers from both industry and academia to discuss the user's understanding or mental model of a robot: what the robot is, what it does, and how it works. In many cases it will be useful for robots to estimate each user's mental model and use this information when deciding how to behave during an interaction. Designing more transparent robot actions will also be important, giving users a window into what the robot is "thinking", "feeling", and "intending". We envision a future in which robots can automatically detect and correct inaccurate mental models held by users. This workshop will develop a multidisciplinary vision for the next few years of research in pursuit of that future.

  • 16.
    Blomkvist, Johan
    et al.
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Science & Engineering.
    Thellman, Sam
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Science & Engineering.
    Overkamp, Timothy
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Science & Engineering.
    Holmlid, Stefan
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Science & Engineering.
    Robots in Service Design: Considering uncertainty in social interaction with robots. 2020. In: ServDes.2020: Tensions, Paradoxes and Plurality Conference Proceedings / [ed] Yoko Akama, Liam Fennessy, Sara Harrington, Anna Farago, Linköping University Electronic Press, 2020, p. 56-57. Conference paper (Other academic)
    Abstract [en]

    As robots become more prevalent in society, they will also become part of service systems, and will be among the materials that designers work with. The body of literature on robots in service systems is scarce, in service research as well as in service design research, especially regarding how to understand robots in service, and how design for service is impacted. In this conceptual paper we aim to shed light on how social robots will affect service. We take a look at the current state of robots’ ability to interact socially with people and highlight some of the issues that need to be considered when including social robots as part of service.

    In navigating the social world, people exhibit an intentional stance, in which they rely on assumptions that social behaviour is governed by underlying mental states, such as beliefs and desires. Due to fundamental differences between humans and robots, people's attribution of mental states to robots, such as what a particular robot knows and believes, is often precarious and leads to uncertainty in interactions, partly relating to issues with common ground. Additionally, people might hesitate to initiate interactions with robots, based on considerations of privacy and trust, or due to negative attitudes towards them. Designing for service systems where, e.g., a robot is being introduced requires knowledge and understanding of these issues from a design perspective. Service designers therefore need to consider not only the technical aspects of robots, but also the specific issues that arise in interactions because of them.

  • 17.
    Thellman, Sam
    et al.
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Arts and Sciences.
    Silvervarg, Annika
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Arts and Sciences.
    Ziemke, Tom
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Science & Engineering.
    Some Adults Fail the False-Belief Task When the Believer Is a Robot. 2020. In: HRI '20: Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery (ACM), 2020, p. 479-481. Conference paper (Refereed)
    Abstract [en]

    People's mental models of robots affect their predictions of robot behavior in interactions. The present study highlights some of the uncertainties that enter specifically into people's considerations about the minds and behavior of robots by exploring how people fare in the standard "Sally-Anne" false-belief task from developmental psychology when the protagonist is a robot.

  • 18.
    Thellman, Sam
    et al.
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Arts and Sciences.
    Ziemke, Tom
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Science & Engineering.
    The Intentional Stance Toward Robots: Conceptual and Methodological Considerations. 2019. In: CogSci'19. Proceedings of the 41st Annual Conference of the Cognitive Science Society / [ed] A.K. Goel, C.M. Seifert, & C. Freksa, Cognitive Science Society, Inc., 2019, p. 1097-1103. Conference paper (Refereed)
    Abstract [en]

    It is well known that people tend to anthropomorphize in interpretations and explanations of the behavior of robots and other interactive artifacts. Scientific discussions of this phenomenon tend to confuse the overlapping notions of folk psychology, theory of mind, and the intentional stance. We provide a clarification of the terminology, outline different research questions, and propose a methodology for making progress in studying the intentional stance toward robots empirically.

  • 19.
    Thellman, Sam
    et al.
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Arts and Sciences. Linköping University, Faculty of Science & Engineering.
    Hagman, William
    Linköping University, Department of Behavioural Sciences and Learning, Psychology. Linköping University, Faculty of Arts and Sciences.
    Jonsson, Emma
    Linköping University, Department of Computer and Information Science. Linköping University, Faculty of Science & Engineering.
    Nilsson, Lisa
    Linköping University, Department of Computer and Information Science. Linköping University, Faculty of Science & Engineering.
    Samuelsson, Emma
    Linköping University, Department of Computer and Information Science. Linköping University, Faculty of Science & Engineering.
    Simonsson, Charlie
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Science & Engineering.
    Skönvall, Julia
    Linköping University, Department of Computer and Information Science. Linköping University, Faculty of Science & Engineering.
    Westin, Anna
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Science & Engineering.
    Silvervarg, Annika
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Arts and Sciences.
    He is not more persuasive than her: No gender biases toward robots giving speeches. 2018. In: Proceedings of the 18th International Conference on Intelligent Virtual Agents, New York, NY, USA: ACM Digital Library, 2018, p. 327-328. Conference paper (Refereed)
    Abstract [en]

    The reported study investigated three gender-related effects on the rated persuasiveness of a speech given by a humanoid robot: (1) the female or male gendered voice and visual appearance of the robot, (2) the female or male gender of the participant, and (3) the interaction between robot gender and participant gender. The study employed a measure of persuasiveness based on the Aristotelian modes of persuasion: ethos, pathos and logos. In contrast to previous studies on gender bias toward intelligent virtual agents and robots, the gender of the robot did not influence the rated persuasiveness of the speech, and female participants rated the speech as more persuasive than male participants did overall.

  • 20.
    Petrovych, Veronika
    et al.
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Arts and Sciences. Swedish National Road and Transport Research Institute (VTI), Linköping, Sweden.
    Thellman, Sam
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Arts and Sciences.
    Ziemke, Tom
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Science & Engineering.
    Human Interpretation of Goal-Directed Autonomous Car Behavior. 2018. In: CogSci 2018: Changing/Minds, 40th Annual Cognitive Science Society Meeting, Madison, Wisconsin, USA, July 25-28, Victoria, British Columbia: Cognitive Science Society, 2018, p. 2235-2240. Conference paper (Refereed)
    Abstract [en]

    People increasingly interact with different types of autonomous robotic systems, ranging from humanoid social robots to driverless vehicles. But little is known about how people interpret the behavior of such systems, and in particular if and how they attribute cognitive capacities and mental states to them. In a study concerning people’s interpretations of autonomous car behavior, building on our previous research on human-robot interaction, participants were presented with (1) images of cars – either with or without a driver – exhibiting various goal-directed traffic behaviors, and (2) brief verbal descriptions of that behavior. They were asked to rate the extent to which these behaviors were intentional and judge the plausibility of different types of causal explanations. The results indicate that people (a) view autonomous car behavior as goal-directed, (b) discriminate between intentional and unintentional autonomous car behaviors, and (c) view the causes of autonomous and human traffic behaviors similarly, in terms of both intentionality ascriptions and behavior explanations. However, there was considerably lower agreement in participant ratings of the driverless behaviors, which might indicate an increased difficulty in interpreting goal-directed behavior of autonomous systems.

  • 21.
    Löfgren, Fredrik
    et al.
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems. Linköping University, Faculty of Science & Engineering.
    Thunberg, Sofia
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Arts and Sciences.
    Thellman, Sam
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Arts and Sciences.
    LetterMoose: A Handwriting Tutor Robot. 2018. Conference paper (Refereed)
    Abstract [en]

    We present a simple robotic tutor designed to help raise handwriting competency in school-aged children. "LetterMoose" shows the steps in how a letter is formed by writing on a regular piece of paper. The child is invited to imitate LetterMoose and to scan their own letters using LetterMoose in order to get evaluative feedback (both qualitative and quantitative). We propose that LetterMoose might be particularly useful for helping children with autism attain handwriting competency, as children in this group are more likely to suffer from writing difficulties and may uniquely benefit from interacting with robot technology.

  • 22.
    Thunberg, Sofia
    et al.
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Arts and Sciences.
    Thellman, Sam
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Arts and Sciences.
    Ziemke, Tom
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Science & Engineering.
    Don't Judge a Book by its Cover: A Study of the Social Acceptance of NAO vs. Pepper. 2017. Conference paper (Refereed)
    Abstract [en]

    In an explorative study concerning the social acceptance of two specific humanoid robots, the experimenter asked participants (N = 36) to place a book in an adjacent room. Upon entering the room, participants were confronted by a NAO or a Pepper robot expressing persistent opposition against the idea of placing the book in the room. On average, 72% of participants facing NAO complied with the robot's requests and returned the book to the experimenter. The corresponding figure for the Pepper robot was 50%, which shows that the two robot morphologies had a different effect on participants' social behavior. Furthermore, results from a post-study questionnaire (GODSPEED) indicated that participants perceived NAO as more likable, intelligent, safe and lifelike than Pepper. Moreover, participants used significantly more positive words and fewer negative words to describe NAO than Pepper in an open-ended interview. There was no statistically significant difference between conditions in participants' negative attitudes toward robots in general, as assessed using the NARS questionnaire.

  • 23.
    Thellman, Sam
    et al.
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Science & Engineering.
    Silvervarg, Annika
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Arts and Sciences.
    Ziemke, Tom
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Science & Engineering. University of Skovde, Sweden.
    Folk-Psychological Interpretation of Human vs. Humanoid Robot Behavior: Exploring the Intentional Stance toward Robots. 2017. In: Frontiers in Psychology, E-ISSN 1664-1078, Vol. 8, article id 1962. Article in journal (Refereed)
    Abstract [en]

    People rely on shared folk-psychological theories when judging behavior. These theories guide people's social interactions and therefore need to be taken into consideration in the design of robots and other autonomous systems expected to interact socially with people. It is, however, not yet clear to what degree the mechanisms that underlie people's judgments of robot behavior overlap with or differ from those underlying judgments of human or animal behavior. To explore this issue, participants (N = 90) were exposed to images and verbal descriptions of eight different behaviors exhibited either by a person or a humanoid robot. Participants were asked to rate the intentionality, controllability and desirability of the behaviors, and to judge the plausibility of seven different types of explanations derived from a recently proposed psychological model of lay causal explanation of human behavior. Results indicate: substantially similar judgments of human and robot behavior, both in terms of (1a) ascriptions of intentionality/controllability/desirability and in terms of (1b) plausibility judgments of behavior explanations; (2a) a high level of agreement in judgments of robot behavior, (2b) slightly lower than but still largely similar to the agreement over human behaviors; (3) systematic differences in judgments concerning the plausibility of goals and dispositions as explanations of human vs. humanoid behavior. Taken together, these results suggest that people's intentional stance toward the robot was in this case very similar to their stance toward the human.

  • 24.
    Thellman, Sam
    et al.
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Arts and Sciences. Linköping University, Faculty of Science & Engineering.
    Silvervarg, Annika
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Arts and Sciences.
    Ziemke, Tom
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Science & Engineering.
    Lay causal explanations of human vs. humanoid behavior2017In: Proceedings of the 17th International Conference on Intelligent Virtual Agents, Cham: Springer, 2017, p. 433-436Conference paper (Refereed)
    Abstract [en]

    The present study used a questionnaire-based method for investigating people's interpretations of behavior exhibited by a person and a humanoid robot, respectively. Participants were given images and verbal descriptions of different behaviors and were asked to judge the plausibility of seven causal explanation types. Results indicate that human and robot behavior are explained similarly, but with some significant differences, and with less agreement in the robot case.

  • 25.
    Thellman, Sam
    et al.
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Science & Engineering.
    Ziemke, Tom
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Science & Engineering.
    Social attitudes toward robots are easily manipulated2017In: Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, New York, NY, USA: ACM Digital Library, 2017, p. 299-300Conference paper (Refereed)
    Abstract [en]

    Participants in a study concerning social attitudes toward robots were randomly assigned a questionnaire form displaying a non-, semi- or highly anthropomorphic robot as a hidden intervention. Results indicate that asking people about their attitudes toward "robots" in general -- as done in some studies -- is questionable, given that (a) outcomes can vary significantly depending on the type of robot respondents have in mind, and (b) it is therefore easy to intentionally or unintentionally manipulate results by priming respondents with positive or negative examples.

  • 26.
    Thellman, Sam
    et al.
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Arts and Sciences.
    Lundberg, Jacob
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Science & Engineering.
    Arvola, Mattias
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Science & Engineering.
    Ziemke, Tom
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Science & Engineering.
    What Is It Like to Be a Bot?: Toward More Immediate Wizard-of-Oz Control in Social Human–Robot Interaction2017In: HAI 2017 Proceedings of the 5th International Conference on Human Agent Interaction, New York, NY: ACM Press, 2017, p. 435-438Conference paper (Refereed)
    Abstract [en]

    Several Wizard-of-Oz techniques have been developed to make robots appear autonomous and more social in human-robot interaction. Many of the existing solutions use control interfaces that introduce significant time delays and hamper the robot operator's ability to produce socially appropriate responses in real-time interactions. We present work in progress on a novel wizard control interface designed to overcome these limitations: a motion-tracking-based system which allows the wizard to act as if he or she is the robot. The wizard sees the interaction partner from the robot's perspective, and uses his or her own bodily movements to control the robot. We discuss potential applications and extensions of this system, and conclude by discussing possible methodological advantages and disadvantages.

    Download full text (pdf)
    fulltext
  • 27.
    Thellman, Sam
    et al.
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Science & Engineering.
    Silvervarg, Annika
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Arts and Sciences.
    Gulz, Agneta
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Arts and Sciences.
    Ziemke, Tom
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Science & Engineering.
    Physical vs. Virtual Agent Embodiment and Effects on Social Interaction2016In: Intelligent Virtual Agents: 16th International Conference, IVA 2016, Los Angeles, CA, USA, September 20–23, 2016, Proceedings / [ed] David Traum, William Swartout, Peter Khooshabeh, Stefan Kopp, Stefan Scherer, Anton Leuski, Cham: Springer, 2016, Vol. 10011, p. 412-415Conference paper (Refereed)
    Abstract [en]

    Previous work indicates that physical robots elicit more favorable social responses than virtual agents. These effects have been attributed to physical embodiment. However, a recent meta-analysis by Li [1] suggests that the benefits of robots are due to physical presence rather than physical embodiment. To further explore the importance of presence, we conducted a pilot study investigating the relationship between physical and social presence. The results suggest that the social presence of an artificial agent is important for interaction with people, and that the extent to which an agent is perceived as socially present might be unaffected by whether it is physically or virtually present.

    Download full text (pdf)
    fulltext
  • 28.
    Thellman, Sam
    et al.
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Arts and Sciences.
    de Graaf, Maartje
    Utrecht University, Utrecht, Netherlands.
    Ziemke, Tom
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Science & Engineering.
    Mental State Attribution to Robots: A Systematic Review of Conceptions, Methods, and FindingsManuscript (preprint) (Other academic)
    Abstract [en]

    The topic of mental state attribution to robots has been approached by researchers from a variety of disciplines, including psychology, neuroscience, computer science, and philosophy. As a consequence, the empirical studies conducted so far exhibit considerable diversity in how the phenomenon is described and how it is approached from a theoretical and methodological standpoint. This literature review addresses the need for a shared scientific understanding of mental state attribution to robots by systematically and comprehensively collating conceptions, methods, and findings from 155 empirical studies across multiple disciplines. The findings of the review include that: (1) the terminology used to describe mental state attribution to robots is diverse but largely homogeneous in usage; (2) the tendency to attribute mental states to robots is determined by factors such as the age and motivation of the human as well as the behavior, appearance, and identity of the robot; (3) there is a computer < robot < human pattern in the tendency to attribute mental states, which appears to be moderated by the presence of socially interactive behavior; (4) there are apparent contradictions in the empirical literature that stem from different sources of evidence, including self-report and non-verbal behavioral or neurological data. The review contributes toward more cumulative research on the topic and opens up a transdisciplinary discussion about the nature of the phenomenon and what types of research methods are appropriate for its investigation.
