Search for publications in DiVA
1 - 12 of 12
  • 1. Bäckvall, P
    et al.
    Mårtensson, P
    Qvarfordt, Pernilla
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, NLPLAB - Natural Language Processing Laboratory.
    Using Fisheye for Navigation on Small Displays, 2000. In: Nordic Conference on Computer-Human Interaction, 2000. Conference paper (Refereed)
    Abstract [en]

    In this paper we present a solution to the problem of visualising large amounts of hierarchical information on small computer screens. Our solution has been implemented as a prototype for mobile use on a hand-held computer running Microsoft Pocket PC with a screen size of 240x320 pixels. The prototype uses the same information as service engineers use on stationary computers. The visualisation technique we used for displaying information is based on the fisheye technique, which we found to work well on small displays. The prototype is domain independent; the information is easily interchangeable. A consequence of the results presented here is that hand-held computers become usable in a wider range of contexts.
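
The fisheye technique this abstract refers to is commonly formalised as Furnas's degree-of-interest function, DOI(x | focus) = API(x) - D(x, focus): a node is displayed only if its a priori importance minus its distance from the current focus clears a threshold. The sketch below is a minimal illustration of that general idea, not the paper's prototype; the toy hierarchy, the use of negated depth as importance, and the threshold value are all assumptions.

```python
# Minimal Furnas-style fisheye filter over a toy hierarchy (an assumption
# standing in for the service documentation; not the paper's prototype).
from collections import deque

TREE = {
    "manual": ["engine", "chassis"],
    "engine": ["ignition", "cooling"],
    "chassis": ["brakes"],
    "ignition": [], "cooling": [], "brakes": [],
}

def distances(start):
    """BFS tree distance from `start` to every node (edges undirected)."""
    adj = {n: list(kids) for n, kids in TREE.items()}
    for parent, kids in TREE.items():
        for kid in kids:
            adj[kid].append(parent)          # add child -> parent edges
    dist, queue = {start: 0}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in adj[node]:
            if nxt not in dist:
                dist[nxt] = dist[node] + 1
                queue.append(nxt)
    return dist

def fisheye_view(focus, threshold=-3):
    """Keep nodes with DOI(x | focus) = -depth(x) - D(x, focus) >= threshold."""
    depth = distances("manual")              # negated depth = a priori importance
    dist = distances(focus)
    return [n for n in TREE if -depth[n] - dist[n] >= threshold]

# Focusing on "engine" prunes the distant, low-importance "brakes" leaf:
print(fisheye_view("engine"))
# -> ['manual', 'engine', 'chassis', 'ignition', 'cooling']
```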

  • 2.
    Lundberg, Jonas
    et al.
    Linköping University, Department of Computer and Information Science, NLPLAB - Natural Language Processing Laboratory. Linköping University, The Institute of Technology.
    Ibrahim, Aseel
    Nokia Home Communications, Linköping.
    Jönsson, David
    Nokia Home Communications, Linköping.
    Lindquist, Sinna
    Centre for User Oriented IT- design, Stockholm.
    Qvarfordt, Pernilla
    Linköping University, Department of Computer and Information Science, NLPLAB - Natural Language Processing Laboratory. Linköping University, The Institute of Technology.
    "The snatcher catcher" - an interactive refrigerator2002In: Proceedings of the second Nordic conference on Human-computer interaction, 2002, p. 209-211Conference paper (Other academic)
    Abstract [en]

    In order to provoke a debate about the use of new technology, we created the Snatcher Catcher, an intrusive interactive refrigerator that keeps a record of its contents. In this paper we present the fridge and how we used it in a provocative installation. The results showed that the audience was indeed provoked, and that few people wanted the fridge in their surroundings.

  • 3.
    Qvarfordt, Pernilla
    Linköping University, Department of Computer and Information Science. Linköping University, The Institute of Technology.
    Eyes on multimodal interaction, 2004. Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Advances in technology are making it possible for users to interact with computers through various modalities, often speech and gesture. Such multimodal interaction is attractive because it mimics the patterns and skills in natural human-human communication. To date, research in this area has primarily focused on giving commands to computers. The focus of this thesis shifts from commands to dialogue interaction. The work presented here is divided into two parts. The first part looks at the impact of the characteristics of the spoken feedback on users' experience of a multimodal dialogue system. The second part investigates whether and how eye-gaze can be utilized in multimodal dialogue systems.

    Although multimodal interaction has attracted many researchers, little attention has been paid to how users experience such systems. The first part of this thesis investigates what makes multimodal dialogue systems either human-like or tool-like, and what qualities are most important to users. In a simulated multimodal timetable information system, users were exposed to different levels of spoken feedback. The results showed that the users preferred the system to be either clearly tool-like, with no spoken words, or clearly human-like, with complete and natural utterances. Furthermore, the users' preference for a human-like multimodal system tended to be much higher after they had actually used one than beforehand, when they could only imagine it.

    Eye-gaze plays a powerful role in human communication. Starting from a computer-mediated collaborative task involving a tourist and a tourist consultant, the second part of this thesis examines the users' eye-gaze patterns and their functions in deictic referencing, interest detection, topic switching, ambiguity reduction, and establishing common ground in a dialogue. Based on the results of this study, an interactive tourist advisor system that encapsulates some of the identified patterns and regularities was developed. In a "stress test" experiment based on eye-gaze patterns only, the developed system conversed with users to help them plan their conference trips. The results demonstrated that eye-gaze can play an assistive role in managing future multimodal human-computer dialogues.

  • 4.
    Qvarfordt, Pernilla
    Linköping University, Department of Computer and Information Science, NLPLAB - Natural Language Processing Laboratory. Linköping University, The Institute of Technology.
    Spoken feedback in multimodal interaction: effects on user experience of qualities of interaction, 2003. In: P. Paggio, K. Jokinen, and A. Jönsson (Eds.), Proceedings of the 1st Nordic Symposium on Multimodal Communication, CST Working Papers, Report no. 6, September 2003, p. 21-34. Conference paper (Refereed)
  • 5.
    Qvarfordt, Pernilla
    Linköping University, Department of Computer and Information Science, NLPLAB - Natural Language Processing Laboratory. Linköping University, The Institute of Technology.
    User experience of spoken feedback in multimodal interaction, 2003. Licentiate thesis, monograph (Other academic)
    Abstract [en]

    The area of multimodal interaction is growing fast and is showing promising results in making interaction more efficient and robust. These results are mainly based on better recognizers and on studies of how users interact with particular multimodal systems. However, little research has been done on users' subjective experience of using multimodal interfaces, which is an important aspect of the acceptance of multimodal interfaces. The work presented in this thesis focuses on how users experience multimodal interaction, and what qualities are important for the interaction. Traditional user interfaces on the one hand, and speech and multimodal interfaces on the other, are often described as having different interaction characters (handlingskaraktär): traditional user interfaces are often seen as tools, while speech and multimodal interfaces are often described as dialogue partners. Researchers have ascribed different qualities as important for performance and satisfaction for these two interaction characters. These claims are examined by studying how users react to a multimodal timetable system. In this study, spoken feedback was used to make the interaction more human-like. A Wizard-of-Oz method was used to simulate the recognition and generation engines in the timetable system for public transportation. The results showed that users experience the system as having an interaction character, and that spoken feedback influences that experience: the more spoken feedback the system gives, the more users experience the system as a dialogue partner. The evaluation of the qualities of interaction showed that users preferred either no spoken feedback or elaborate spoken feedback; limited spoken feedback only distracted them.
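
Purely as an illustration of the Wizard-of-Oz method mentioned in this abstract, the sketch below reduces such a setup to a loop in which a hidden human operator types what the user said in place of a speech recognizer, while the system's feedback policy runs unchanged. The feedback levels and canned responses are invented for illustration; this is not the thesis's actual setup.

```python
# Toy Wizard-of-Oz loop: a hidden human operator stands in for the speech
# recognizer. FEEDBACK_LEVEL and the responses are assumptions.
FEEDBACK_LEVEL = "elaborate"   # "none" | "limited" | "elaborate"

def wizard_recognize() -> str:
    # The wizard, not an ASR engine, transcribes the user's utterance.
    return input("[wizard] transcribe user utterance (empty to stop): ")

def spoken_feedback(utterance: str) -> str:
    if FEEDBACK_LEVEL == "none":
        return ""                                   # tool-like: stay silent
    if FEEDBACK_LEVEL == "limited":
        return "Searching..."                       # brief acknowledgement
    return f"You asked about {utterance}; looking up departures now."

while True:
    utterance = wizard_recognize()
    if not utterance:
        break
    print(spoken_feedback(utterance))
```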

  • 6.
    Qvarfordt, Pernilla
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, NLPLAB - Natural Language Processing Laboratory.
    Beymer, David
    Zhai, Shumin
    RealTourist - A Study of Augmenting Human-Human and Human-Computer Dialogue with Eye-Gaze Overlay, 2005. In: INTERACT 2005. Conference paper (Refereed)
  • 7.
    Qvarfordt, Pernilla
    et al.
    Linköping University, Department of Computer and Information Science, NLPLAB - Natural Language Processing Laboratory. Linköping University, The Institute of Technology.
    Beymer, David
    IBM Almaden Research Center, San Jose, CA, USA .
    Zhai, Shumin
    IBM Almaden Research Center, San Jose, CA, USA .
    RealTourist - A study of augmenting human-human and human-computer dialogue with eye-gaze overlay, 2005. In: Human-Computer Interaction - INTERACT 2005: IFIP TC13 International Conference, Rome, Italy, September 12-16, 2005. Proceedings / [ed] Maria Francesca Costabile and Fabio Paternò, Springer Berlin/Heidelberg, 2005, Vol. 3585, p. 767-780. Chapter in book (Refereed)
    Abstract [en]

    We developed and studied an experimental system, RealTourist, which lets a user plan a conference trip with the help of a remote tourist consultant who can view the tourist's eye-gaze superimposed onto a shared map. Data collected from the experiment were analyzed in conjunction with a literature review on speech and eye-gaze patterns. This exploratory research identified various functions of gaze overlay on shared spatial material, including: accurate and direct display of the partner's eye-gaze, implicit deictic referencing, interest detection, common focus and topic switching, increased redundancy and ambiguity reduction, and an increase of assurance, confidence, and understanding. The study serves two purposes. The first is to identify patterns that can serve as a basis for designing multimodal human-computer dialogue systems with eye-gaze locus as a contributing channel. The second is to investigate how computer-mediated communication can be supported by the display of the partner's eye-gaze.
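
As a hedged illustration of the kind of gaze-overlay channel this abstract describes, the sketch below smooths raw eye-tracker samples with a short moving average before the partner's client draws the gaze point on the shared map. It is a minimal hypothetical version, not the RealTourist implementation; the sample format and window size are assumptions.

```python
# Moving-average smoothing of gaze samples before overlay rendering.
# Sample format and window size are assumptions, not RealTourist's.
from collections import deque

class GazeSmoother:
    """Average the last `window` (x, y) gaze samples to steady the marker."""

    def __init__(self, window: int = 5):
        self.samples = deque(maxlen=window)

    def update(self, x: float, y: float) -> tuple:
        self.samples.append((x, y))
        n = len(self.samples)
        return (sum(s[0] for s in self.samples) / n,
                sum(s[1] for s in self.samples) / n)

smoother = GazeSmoother()
for raw in [(102.0, 240.0), (98.0, 238.0), (101.0, 243.0)]:
    x, y = smoother.update(*raw)
    # send (x, y) to the consultant's client, which renders a translucent
    # gaze marker at that position on the shared map
    print(f"overlay gaze marker at ({x:.1f}, {y:.1f})")
```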

  • 8.
    Qvarfordt, Pernilla
    et al.
    Linköping University, Department of Computer and Information Science, NLPLAB - Natural Language Processing Laboratory. Linköping University, The Institute of Technology.
    Jönsson, Arne
    Linköping University, Department of Computer and Information Science. Linköping University, The Institute of Technology.
    Effects of Using Speech in Timetable Information Systems for WWW, 1998. In: Proceedings of ICSLP'98, Sydney, Australia, 1998. Conference paper (Refereed)
  • 9.
    Qvarfordt, Pernilla
    et al.
    Linköping University, Department of Computer and Information Science, NLPLAB - Natural Language Processing Laboratory. Linköping University, The Institute of Technology.
    Jönsson, Arne
    Linköping University, Department of Computer and Information Science. Linköping University, The Institute of Technology.
    Evaluating the Dialogue Component in the Gulan Educational System, 1999. In: Proceedings of Eurospeech'99, Budapest, Hungary, 1999, p. 643-646. Conference paper (Refereed)
  • 10.
    Qvarfordt, Pernilla
    et al.
    Linköping University, Department of Computer and Information Science, NLPLAB - Natural Language Processing Laboratory. Linköping University, The Institute of Technology.
    Jönsson, Arne
    Linköping University, Department of Computer and Information Science, NLPLAB - Natural Language Processing Laboratory. Linköping University, The Institute of Technology.
    Dahlbäck, Nils
    Linköping University, Department of Computer and Information Science, NLPLAB - Natural Language Processing Laboratory. Linköping University, The Institute of Technology.
    The Role of Spoken Feedback in Experiencing Multimodal Interfaces as Human-like, 2003. In: Proceedings of ICMI'03, Vancouver, Canada, 2003, p. 250-257. Conference paper (Refereed)
    Abstract [en]

    Whether user interfaces should be made human-like or tool-like has long been debated in the HCI field, and this debate affects the development of multimodal interfaces. However, little empirical work has so far been done to support either view. Even though there is evidence that humans treat media as they treat other humans, this does not mean that they experience interfaces as human-like. We studied how people experience a multimodal timetable system with varying degrees of human-like spoken feedback in a Wizard-of-Oz study. The results showed that users' views and preferences lean significantly towards anthropomorphism after actually experiencing the multimodal timetable system. The more human-like the spoken feedback, the more the participants preferred the system to be human-like. The results also showed that the users' experience matched their preferences. This shows that in order to appreciate a human-like interface, users have to experience it.

  • 11.
    Qvarfordt, Pernilla
    et al.
    Linköping University, Department of Computer and Information Science, NLPLAB - Natural Language Processing Laboratory. Linköping University, The Institute of Technology.
    Santamarta, Lena
    Linköping University, Department of Computer and Information Science, NLPLAB - Natural Language Processing Laboratory. Linköping University, The Institute of Technology.
    First-Personness Approach to Co-operative Multimodal Interaction, 2000. In: Advances in Multimodal Interfaces — ICMI 2000: Third International Conference, Beijing, China, October 14–16, 2000, Proceedings / [ed] Tan, Tieniu, Shi, Yuanchun, Gao, Wen, Springer Berlin/Heidelberg, 2000, Vol. 1948, p. 650-657. Conference paper (Refereed)
    Abstract [en]

    Adding natural language to graphical user interfaces is often argued to improve interaction. However, just adding spoken language might not lead to a better interaction. In this article we look more closely at how spoken language should be used in a co-operative multimodal interface. Based on empirical investigations, we have noticed that for multimodal information systems, efficiency is especially important. Our results indicate that efficiency can be divided into functional and linguistic efficiency. Functional efficiency is closely related to solving the task quickly. Linguistic efficiency concerns how to make the contributions meaningful and appropriate in the context. For linguistic efficiency, the user's perception of first-personness [1] is important, as is supporting the user's understanding of the interface and adapting the responses to the user. In this article, the focus is on linguistic efficiency for a multimodal timetable information system.

  • 12.
    Qvarfordt, Pernilla
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, NLPLAB - Natural Language Processing Laboratory.
    Zhai, Shumin
    Conversing with the User Based on Eye-Gaze Patterns, 2005. In: Conference on Human Factors in Computing Systems (CHI 2005), Portland, U.S.A.: ACM Press, 2005, p. 221-. Conference paper (Refereed)