Search for publications in DiVA (liu.se)
Qvarfordt, Pernilla
Publications (10 of 12)
Qvarfordt, P. & Zhai, S. (2005). Conversing with the User Based on Eye-Gaze Patterns. In: Conference on Human Factors in Computing Systems (CHI 2005) (p. 221). Portland, U.S.A.: ACM Press
2005 (English). In: Conference on Human Factors in Computing Systems (CHI 2005), Portland, U.S.A.: ACM Press, 2005, p. 221. Conference paper, Published paper (Refereed)
Place, publisher, year, edition, pages
Portland, U.S.A.: ACM Press, 2005
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-30258 (URN); 15768 (Local ID); 15768 (Archive number); 15768 (OAI)
Available from: 2009-10-09 Created: 2009-10-09 Last updated: 2018-01-13
Qvarfordt, P., Beymer, D. & Zhai, S. (2005). RealTourist - A Study of Augmenting Human-Human and Human-Computer Dialogue with Eye-Gaze Overlay. In: INTERACT,2005.
2005 (English). In: INTERACT 2005, 2005. Conference paper, Published paper (Refereed)
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-30256 (URN); 15765 (Local ID); 15765 (Archive number); 15765 (OAI)
Available from: 2009-10-09 Created: 2009-10-09 Last updated: 2018-01-13
Qvarfordt, P., Beymer, D. & Zhai, S. (2005). RealTourist - A study of augmenting human-human and human-computer dialogue with eye-gaze overlay. In: Maria Francesca Costabile and Fabio Paternò (Eds.), Human-Computer Interaction - INTERACT 2005: IFIP TC13 International Conference, Rome, Italy, September 12-16, 2005, Proceedings (pp. 767-780). Springer Berlin/Heidelberg, Vol. 3585
2005 (English). In: Human-Computer Interaction - INTERACT 2005: IFIP TC13 International Conference, Rome, Italy, September 12-16, 2005, Proceedings / [ed] Maria Francesca Costabile and Fabio Paternò, Springer Berlin/Heidelberg, 2005, Vol. 3585, p. 767-780. Chapter in book (Refereed)
Abstract [en]

We developed and studied an experimental system, RealTourist, which lets a user plan a conference trip with the help of a remote tourist consultant who can view the tourist's eye-gaze superimposed onto a shared map. Data collected from the experiment were analyzed in conjunction with a literature review on speech and eye-gaze patterns. This inspective, exploratory research identified various functions of gaze overlay on shared spatial material, including accurate and direct display of the partner's eye-gaze, implicit deictic referencing, interest detection, common focus and topic switching, increased redundancy and ambiguity reduction, and an increase in assurance, confidence, and understanding. This study serves two purposes. The first is to identify patterns that can serve as a basis for designing multimodal human-computer dialogue systems with eye-gaze locus as a contributing channel. The second is to investigate how computer-mediated communication can be supported by the display of the partner's eye-gaze.

Place, publisher, year, edition, pages
Springer Berlin/Heidelberg, 2005
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 3585
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-48136 (URN); 10.1007/11555261_61 (DOI); 978-3-540-28943-2 (ISBN); 978-3-540-31722-7 (ISBN); 3-540-28943-7 (ISBN)
Available from: 2009-10-11 Created: 2009-10-11 Last updated: 2018-02-19. Bibliographically approved
Qvarfordt, P. (2004). Eyes on multimodal interaction. (Doctoral dissertation). Linköping: Linköping University
2004 (English). Doctoral thesis, monograph (Other academic)
Abstract [en]

Advances in technology are making it possible for users to interact with computers by various modalities, often through speech and gesture. Such multimodal interaction is attractive because it mimics the patterns and skills in natural human-human communication. To date, research in this area has primarily focused on giving commands to computers. The focus of this thesis shifts from commands to dialogue interaction. The work presented here is divided into two parts. The first part looks at the impact of the characteristics of the spoken feedback on users' experience of a multimodal dialogue system. The second part investigates whether and how eye-gaze can be utilized in multimodal dialogue systems.

Although multimodal interaction has attracted many researchers, little attention has been paid to how users experience such systems. The first part of this thesis investigates what makes multimodal dialogue systems either human-like or tool-like, and what qualities are most important to users. In a simulated multimodal timetable information system, users were exposed to different levels of spoken feedback. The results showed that the users preferred the system to be either clearly tool-like, with no spoken words, or clearly human-like, with complete and natural utterances. Furthermore, the users' preference for a human-like multimodal system tended to be much higher after they had actually experienced the system than beforehand, when judging based on imagination alone.

Eye-gaze plays a powerful role in human communication. In a computer-mediated collaborative task involving a tourist and a tourist consultant, the second part of this thesis starts by examining the users' eye-gaze patterns and their functions in deictic referencing, interest detection, topic switching, ambiguity reduction, and establishing common ground in a dialogue. Based on the results of this study, an interactive tourist advisor system that encapsulates some of the identified patterns and regularities was developed. In a "stress test" experiment based on eye-gaze patterns only, the developed system conversed with users to help them plan their conference trips. Results demonstrated that eye-gaze can play an assistive role in managing future multimodal human-computer dialogues.

Place, publisher, year, edition, pages
Linköping: Linköping University, 2004. p. 226
Series
Linköping Studies in Science and Technology. Dissertations, ISSN 0345-7524 ; 893
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-24226 (URN); 3823 (Local ID); 91-85295-30-2 (ISBN); 3823 (Archive number); 3823 (OAI)
Public defence
2004-11-19, Visionen, Hus B, Linköpings Universitet, Linköping, 13:15 (Swedish)
Available from: 2009-10-07 Created: 2009-10-07 Last updated: 2018-01-13
Qvarfordt, P. (2003). Spoken feedback in multimodal interaction: effects on user experience of qualities of interaction. In: P. Paggio, K. Jokinen, and A. Jönsson (Eds.), Proceedings of the 1st Nordic Symposium on Multimodal Communication, CST Working Papers, Report No. 6, September 2003 (pp. 21-34).
2003 (English). In: P. Paggio, K. Jokinen, and A. Jönsson (Eds.), Proceedings of the 1st Nordic Symposium on Multimodal Communication, CST Working Papers, Report No. 6, September 2003, p. 21-34. Conference paper, Published paper (Refereed)
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-62054 (URN)
Available from: 2010-11-18 Created: 2010-11-18 Last updated: 2018-01-12
Qvarfordt, P., Jönsson, A. & Dahlbäck, N. (2003). The Role of Spoken Feedback in Experiencing Multimodal Interfaces as Human-like. In: Proceedings of ICMI'03, Vancouver, Canada, 2003. (pp. 250-257).
2003 (English). In: Proceedings of ICMI'03, Vancouver, Canada, 2003, p. 250-257. Conference paper, Published paper (Refereed)
Abstract [en]

Whether user interfaces should be made human-like or tool-like has been debated in the HCI field, and this debate affects the development of multimodal interfaces. However, little empirical work has so far been done to support either view. Even if there is evidence that humans interpret media as they do other humans, this does not mean that humans experience the interfaces as human-like. We studied how people experience a multimodal timetable system with varying degrees of human-like spoken feedback in a Wizard-of-Oz study. The results showed that users' views and preferences leaned significantly towards anthropomorphism after actually experiencing the multimodal timetable system. The more human-like the spoken feedback was, the more the participants preferred the system to be human-like. The results also showed that the users' experience matched their preferences. This shows that in order to appreciate a human-like interface, users have to experience it.

National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-60104 (URN); 10.1145/958432.958478 (DOI); 1-58113-621-8 (ISBN)
Available from: 2010-10-05 Created: 2010-10-05 Last updated: 2014-01-13
Qvarfordt, P. (2003). User experience of spoken feedback in multimodal interaction. (Licentiate dissertation). Linköping: Linköpings universitet
2003 (English). Licentiate thesis, monograph (Other academic)
Abstract [en]

The area of multimodal interaction is fast growing and is showing promising results in making interaction more efficient and robust. These results are mainly based on better recognizers and on studies of how users interact with particular multimodal systems. However, little research has been done on users' subjective experience of using multimodal interfaces, which is an important aspect for the acceptance of multimodal interfaces. The work presented in this thesis focuses on how users experience multimodal interaction, and what qualities are important for the interaction. Traditional user interfaces and speech and multimodal interfaces are often described as having different interaction characters. Traditional user interfaces are often seen as tools, while speech and multimodal interfaces are often described as dialogue partners. Researchers have ascribed different qualities as important for performance and satisfaction for these two interaction characters. These claims are examined by studying how users react to a multimodal timetable system. In this study, spoken feedback was used to make the interaction more human-like. A Wizard-of-Oz method was used to simulate the recognition and generation engines in the timetable system for public transportation. The results from the study showed that users experience the system as having an interaction character, and that spoken feedback influences that experience. The more spoken feedback the system gives, the more users will experience the system as a dialogue partner. The evaluation of the qualities of interaction showed that users preferred either no spoken feedback or elaborated spoken feedback. Limited spoken feedback only distracted the users.

Place, publisher, year, edition, pages
Linköping: Linköpings universitet, 2003. p. 200
Series
Linköping Studies in Science and Technology. Thesis, ISSN 0280-7971 ; 1003
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-42654 (URN); 67663 (Local ID); 91-7373-606-6 (ISBN); 67663 (Archive number); 67663 (OAI)
Presentation
2003-03-27, Visionen, Linköpings universitet, Linköping, 18:00 (Swedish)
Available from: 2009-10-10 Created: 2009-10-10 Last updated: 2018-01-12
Lundberg, J., Ibrahim, A., Jönsson, D., Lindquist, S. & Qvarfordt, P. (2002). "The snatcher catcher" - an interactive refrigerator. In: Proceedings of the second Nordic conference on Human-computer interaction (pp. 209-211).
2002 (English). In: Proceedings of the second Nordic conference on Human-computer interaction, 2002, p. 209-211. Conference paper, Published paper (Other academic)
Abstract [en]

In order to provoke a debate about the use of new technology, we created the Snatcher Catcher, an intrusive interactive refrigerator that keeps a record of the items in it. In this paper we present the fridge and how we used it in a provocative installation. The results showed that the audience was provoked, and that few people wanted to have the fridge in their surroundings.

Keywords
Interactive, intrusion, provocation, everyday computing
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-61609 (URN)
Available from: 2010-11-17 Created: 2010-11-17 Last updated: 2013-09-13
Qvarfordt, P. & Santamarta, L. (2000). First-Personness Approach to Co-operative Multimodal Interaction. In: Tan, Tieniu, Shi, Yuanchun, Gao, Wen (Eds.), Advances in Multimodal Interfaces — ICMI 2000: Third International Conference, Beijing, China, October 14–16, 2000, Proceedings (pp. 650-657). Springer Berlin/Heidelberg, Vol. 1948
2000 (English). In: Advances in Multimodal Interfaces — ICMI 2000: Third International Conference, Beijing, China, October 14–16, 2000, Proceedings / [ed] Tan, Tieniu, Shi, Yuanchun, Gao, Wen, Springer Berlin/Heidelberg, 2000, Vol. 1948, p. 650-657. Conference paper, Published paper (Refereed)
Abstract [en]

Adding natural language to graphical user interfaces is often argued to yield better interaction. However, just adding spoken language might not lead to better interaction. In this article we look more deeply into how spoken language should be used in a co-operative multimodal interface. Based on empirical investigations, we have noticed that efficiency is especially important for multimodal information systems. Our results indicate that efficiency can be divided into functional and linguistic efficiency. Functional efficiency is tightly related to solving the task fast. Linguistic efficiency concerns how to make the contributions meaningful and appropriate in the context. For linguistic efficiency, the user's perception of first-personness [1] is important, as is giving users support for understanding the interface and adapting the responses to the user. In this article the focus is on linguistic efficiency for a multimodal timetable information system.

Place, publisher, year, edition, pages
Springer Berlin/Heidelberg, 2000
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 1948
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-48965 (URN); 10.1007/3-540-40063-X_84 (DOI); 000174117200084 (); 978-3-540-41180-2 (ISBN); 978-3-540-40063-9 (ISBN); 3-540-41180-1 (ISBN)
Conference
Third International Conference, Beijing, China, October 14–16, 2000
Available from: 2009-10-11 Created: 2009-10-11 Last updated: 2018-02-12. Bibliographically approved
Bäckvall, P., Mårtensson, P. & Qvarfordt, P. (2000). Using Fisheye for Navigation on Small Displays. In: Nordic Conference on Computer-Human Interaction, 2000. Paper presented at the First Nordic Conference on Computer-Human Interaction, 23-25 October 2000, Stockholm, Sweden.
2000 (English). In: Nordic Conference on Computer-Human Interaction, 2000. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper we present a solution to the problem of visualising large amounts of hierarchical information on small computer screens. Our solution has been implemented as a prototype for mobile use on a hand-held computer running Microsoft Pocket PC with a screen size of 240x320 pixels. The prototype uses the same information as service engineers use on stationary computers. The visualisation technique we used for displaying information is based on the fisheye technique, which we have found functional on small displays. The prototype is domain independent; the information is easily interchangeable. A consequence of the result presented here is that the possibility of using hand-held computers in different types of contexts increases.

National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-31206 (URN); 16954 (Local ID); 16954 (Archive number); 16954 (OAI)
Conference
First Nordic Conference on Computer-Human Interaction, 23-25 October 2000, Stockholm, Sweden
Available from: 2009-10-09 Created: 2009-10-09 Last updated: 2018-01-13