Managing Delays in Human-Robot Interaction
Pelikan, Hannah. Linköping University, Department of Culture and Society, Division of Language, Culture and Interaction. Linköping University, Faculty of Arts and Sciences. ORCID iD: 0000-0003-0992-5176
Hofstetter, Emily. Linköping University, Department of Culture and Society, Division of Language, Culture and Interaction. Linköping University, Faculty of Arts and Sciences. ORCID iD: 0000-0003-0451-0254
2023 (English). In: ACM Transactions on Computer-Human Interaction, ISSN 1073-0516, E-ISSN 1557-7325, Vol. 30, no. 4, article id 50. Article in journal (Refereed). Published.
Abstract [en]

Delays in the completion of joint actions are sometimes unavoidable. How should a robot communicate that it cannot immediately act or respond in a collaborative task? Drawing on video recordings of a face scanning activity in family homes, we investigate how humans make sense of a Cozmo robot’s delays on a moment-by-moment basis. Cozmo’s sounds and embodied actions are recognized as indicators of delay but encourage human participants to act in ways that undermine the scanning process. In comparing the robot’s delay management strategies with human-human vocal and embodied practices, we demonstrate key differences in the sequences that impact how the robot is understood. The study demonstrates how delay events are accomplished as embodied displays that are distributed across co-participants. We present a framework for making delay transparent through situated explanations, particularly in the form of non-lexical sounds and bodily actions.

Place, publisher, year, edition, pages
ACM Digital Library, 2023. Vol. 30, no. 4, article id 50
Keywords [en]
Conversation analysis, Time delay, Embodiment, Ethnomethodology, System response time, Sound, Engagement
National Category
Human Computer Interaction
Identifiers
URN: urn:nbn:se:liu:diva-193557
DOI: 10.1145/3569890
ISI: 001067749200001
Scopus ID: 2-s2.0-85169759342
OAI: oai:DiVA.org:liu-193557
DiVA id: diva2:1754947
Note

Funding: Swedish Research Council [2016-00827]

Available from: 2023-05-05. Created: 2023-05-05. Last updated: 2024-01-10.
In thesis
1. Robot Sound in Interaction: Analyzing and Designing Sound for Human-Robot Coordination
2023 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

Robots naturally emit sound, but we still know little about how sound can serve as an interface that makes a robot’s behavior explainable to humans. This dissertation draws on insights about human practices for coordinating bodily activities through sound, investigating how they could inform robot design. My work builds on three video corpora, involving i) a Cozmo robot in ten family homes, ii) autonomous public shuttle buses in an urban environment, and iii) a teamwork robot prototype controlled by a researcher and interacting with study participants in an experimental setting. I approached the data from two methodological angles, exploring how they can speak to each other: I first carried out an empirical analysis of the video data from an Ethnomethodology and Conversation Analysis (EMCA) perspective, focusing on how humans make sense of robot sound on a moment-by-moment basis in naturally occurring interaction. Subsequently, taking an Interaction Design perspective, I used my video recordings as a design material for exploring how robot sound could be designed in and for real-time interaction. My work contributes to Human-Robot Interaction through detailed studies of robots in the world (rather than in the lab), focusing on how participants make sense of robot sounds. I present a novel framework for designing sound in and for interaction and a prototyping practice that allows practitioners to embed an EMCA stance into their designs. The dissertation contributes to EMCA by describing how members embed autonomous machines into the social organization of activities and how humans treat robots as participants in the interaction. I make a contribution to the development of EMCA hybrid studies by seeking a synthesis between EMCA and robot interaction design.

Abstract [sv]

Although sound is a natural part of a robot's presence, we still know very little about how sound can be used in interfaces to make robot behavior understandable to humans. This dissertation draws on new insights into how humans use their voices in bodily activities, in order to investigate how this knowledge can be applied in the design of robots. The dissertation builds on three video corpora showing i) the toy robot Cozmo in the homes of ten families with children, ii) two autonomous buses in an urban environment, and iii) a researcher-controlled prototype of a teamwork robot in an experimental setting. The corpora were studied from an ethnomethodological and conversation-analytic perspective (EMCA). The analysis focused on how humans display their understanding of robot sound in naturally occurring interactions. The latter two corpora were also used as material for interaction design, with the aim of exploring how robot sound can be shaped to support real-time interaction. The work contributes to the field of human-robot interaction by offering detailed studies of robots in the world (as opposed to laboratory settings), with a focus on how participants interacting with a robot understand its sounds. The dissertation proposes a new framework for designing sound for interactional purposes and a method for implementing an EMCA stance within design practice. The work also describes how autonomous machines can be embedded in socially organized activities and how robots can be treated as participants in interaction with humans. Finally, the dissertation contributes to the development of EMCA hybrid studies by exploring the possibility of developing an EMCA-informed method for the design of robot interaction.

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2023. p. 113
Series
Linköping Studies in Arts and Sciences, ISSN 0282-9800; 853
Studies in Language and Culture, ISSN 1403-2570; 36
Keywords
Conversation analysis, Ethnomethodology, Human-robot interaction, Interaction design, Multimodality, Non-lexical sounds, Robot sound, Video analysis, Etnometodologi, Icke-lexikala ljud, Interaktionsanalys, Interaktionsdesign, Multimodalitet, Människa-robotinteraktion, Robotljud, Videoanalys
National Category
Human Computer Interaction
Identifiers
URN: urn:nbn:se:liu:diva-193560
DOI: 10.3384/9789180751179
ISBN: 9789180751162
ISBN: 9789180751179
Public defence
2023-06-08, KEY1, Building Key, Campus Valla, Linköping, 10:15 (English)
Available from: 2023-05-05. Created: 2023-05-05. Last updated: 2023-05-05. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Pelikan, Hannah; Hofstetter, Emily
