Can You Read My Mind?: A Participatory Design Study of How a Humanoid Robot Can Communicate Its Intent and Awareness
Linköping University, Department of Computer and Information Science, Human-Centered systems.
2019 (English). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
Abstract [en]

Communication between humans and interactive robots will benefit if people have a clear mental model of the robots' intent and awareness. The aim of this thesis was to investigate how human-robot interaction is affected by manipulating the social cues a robot displays. The research questions were: How do social cues affect mental models of the Pepper robot, and how can a participatory design method be used to investigate how the Pepper robot could communicate intent and awareness? The hypothesis for the second question was that nonverbal cues would be preferred over verbal cues. An existing standard platform, SoftBank's Pepper, was used together with state-of-the-art tasks from the RoboCup@Home challenge. The rule book and observations from the 2018 competition were thematically coded, and the resulting themes were used to create eight scenarios. A participatory design method called PICTIVE was applied in a design study in which five student participants went through three phases (label, sketch, and interview) to create a design for how the robot should communicate intent and awareness. PICTIVE proved a suitable way to elicit a large number of design ideas, although not all scenarios were well suited to the task. The design study confirmed that mediating physical attributes can alter the mental model of a humanoid robot and help reach common ground. It did not confirm the hypothesis that nonverbal cues would be preferred over verbal cues, though it did show that verbal cues alone would not be enough. These findings, however, need to be further tested in live interactions.

Place, publisher, year, edition, pages
2019, p. 67
Keywords [en]
human-robot interaction, social interaction, awareness, intention, hri, humanoid, robot interaction, robot, robocup, participatory design, pepper, theory of mind, common ground
Keywords [sv] (translated to English)
human-robot interaction, social interaction, awareness, intentions, robot interaction
National Category
Human Computer Interaction
Identifiers
URN: urn:nbn:se:liu:diva-158033
ISRN: LIU-IDA/KOGVET-A--19/006--SE
OAI: oai:DiVA.org:liu-158033
DiVA, id: diva2:1329112
Subject / course
Cognitive science
Presentation
2019-06-12, Linköping, 10:00 (Swedish)
Available from: 2019-06-25. Created: 2019-06-24. Last updated: 2019-06-25. Bibliographically approved.

Open Access in DiVA

fulltext (2973 kB), 395 downloads
File information
File name: FULLTEXT01.pdf
File size: 2973 kB
Checksum (SHA-512): defdace78e051edaa262063a00ebd075d8bc6afa318d5bba98033fbf57f85c076399de3a92b13b66a2ce6d5acaffbf52394e9857ebe9cd48b909b740cb9165a2
Type: fulltext
Mimetype: application/pdf
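
For anyone who downloads the full text, the published SHA-512 checksum can be verified locally. The sketch below is a minimal Python example and is not part of the DiVA record; it assumes the file was saved under its record name, FULLTEXT01.pdf, in the current directory.

import hashlib

# SHA-512 checksum copied from the file information above
EXPECTED = (
    "defdace78e051edaa262063a00ebd075d8bc6afa318d5bba98033fbf57f85c07"
    "6399de3a92b13b66a2ce6d5acaffbf52394e9857ebe9cd48b909b740cb9165a2"
)

def sha512_of(path, chunk_size=1 << 20):
    # Hash in chunks so the whole PDF never has to sit in memory.
    digest = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

print("OK" if sha512_of("FULLTEXT01.pdf") == EXPECTED else "MISMATCH")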

Search in DiVA

By author/editor: Thunberg, Sofia
By organisation: Human-Centered systems
Human Computer Interaction

Total: 399 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are no longer available.

Total: 639 hits