Identification of Temporally Varying Areas of Interest in Long-Duration Eye-Tracking Data Sets
Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. (Information Visualization)
Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. (Information Visualization). ORCID iD: 0000-0003-4761-8601
Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. (Information Visualization)
Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Linköping University, Centre for Climate Science and Policy Research, CSPR. (Information Visualization)
2019 (English). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, p. 87-97. Article in journal (Refereed). Published.
Abstract [en]

Eye-tracking has become an invaluable tool for the analysis of working practices in many technological fields of activity. Typically, studies focus on short tasks and use static expected areas of interest (AoIs) in the display to explore subjects' behaviour, making the analyst's task quite straightforward. In long-duration studies, where the observations may last several hours over a complete work session, the AoIs may change over time in response to altering workload, emergencies or other variables, making the analysis more difficult. This work puts forward a novel method to automatically identify spatial AoIs that change over time through a combination of clustering and cluster merging in the temporal domain. A visual analysis system based on the proposed methods is also presented. Finally, we illustrate our approach within the domain of air traffic control, a complex task sensitive to prevailing conditions over long durations, though it is applicable to other domains such as the monitoring of complex systems.
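The abstract describes the method only at a high level: clustering of gaze data combined with cluster merging in the temporal domain. The following Python sketch illustrates the general idea and is not the published algorithm; the cluster_gaze_per_window helper, the window length and the DBSCAN parameters are assumptions made for this example.

```python
# A minimal sketch (assumption: not the paper's exact algorithm) of the first
# step in finding time-varying areas of interest: cluster gaze samples within
# successive fixed-length time windows, so each window gets its own set of
# spatial clusters. Gaze data is assumed to arrive as (timestamp, x, y) rows.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_gaze_per_window(gaze, window_s=60.0, eps_px=40.0, min_samples=20):
    """gaze: array of shape (n, 3) with columns [t_seconds, x_px, y_px]."""
    t0, t1 = gaze[:, 0].min(), gaze[:, 0].max()
    windows = []
    start = t0
    while start < t1:
        mask = (gaze[:, 0] >= start) & (gaze[:, 0] < start + window_s)
        pts = gaze[mask, 1:3]
        # DBSCAN labels each sample with a cluster index; -1 marks noise
        labels = (DBSCAN(eps=eps_px, min_samples=min_samples).fit_predict(pts)
                  if len(pts) >= min_samples else np.full(len(pts), -1))
        windows.append((start, pts, labels))
        start += window_s
    return windows
```

A companion sketch of how such per-window clusters could be merged across time into a symbol sequence is given after the thesis abstract below.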

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2019. p. 87-97
Keywords [en]
Eye-tracking data, areas of interest, clustering, minimum spanning tree, temporal data, spatio-temporal data
National Category
Computer Systems
Identifiers
URN: urn:nbn:se:liu:diva-152714. DOI: 10.1109/TVCG.2018.2865042. ISI: 000452640000009. PubMedID: 30183636. Scopus ID: 2-s2.0-85052788669. OAI: oai:DiVA.org:liu-152714. DiVA id: diva2:1263782.
Note

Funding agencies: Swedish Research Council [2013-4939]; RESKILL project - Swedish Transport Administration; Swedish Maritime Administration; Swedish Air Navigation Service Provider LFV

Available from: 2018-11-16. Created: 2018-11-16. Last updated: 2019-11-25. Bibliographically approved.
In thesis
1. Data Abstraction and Pattern Identification in Time-series Data
2019 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

Data sources such as simulations and sensor networks across many application domains generate large volumes of time-series data whose characteristics evolve over time. Visual data analysis methods can help us explore and understand the underlying patterns present in time-series data but, due to their ever-increasing size, the visual data analysis process can become complex. Large data sets can be handled using data abstraction techniques that transform the raw data into a simpler format while, at the same time, preserving the significant features that are important for the user. When dealing with time-series data, abstraction techniques should also take into account the underlying temporal characteristics.

This thesis focuses on different data abstraction and pattern identification methods, particularly in the cases of large 1D time-series and 2D spatio-temporal time-series data that exhibit spatio-temporal discontinuity. Based on the dimensionality and characteristics of the data, the thesis proposes a variety of efficient, data-adaptive and user-controlled data abstraction methods that transform the raw data into a symbol sequence. The resulting symbol sequence can act as input to different sequence analysis methods from the data mining and machine learning communities to identify interesting patterns of user behavior.
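The transformation into a symbol sequence can be illustrated with a generic quantile-binning scheme. This is an assumed example for illustration only, not necessarily the abstraction method developed in the thesis; the symbolize helper and the four-letter alphabet are placeholders.

```python
# Turn a numeric time-series into a symbol sequence by quantile binning so
# that string-based sequence-analysis tools can operate on a short alphabet.
# (Assumption: this simple scheme stands in for the thesis's own methods.)
import numpy as np

def symbolize(series, alphabet="abcd"):
    """Map each value to a letter according to the quantile bin it falls in."""
    edges = np.quantile(series, np.linspace(0, 1, len(alphabet) + 1)[1:-1])
    bins = np.digitize(series, edges)          # bin index per sample
    return "".join(alphabet[i] for i in bins)

# Example: a noisy sine wave becomes a string such as 'bccddccbba...'
t = np.linspace(0, 4 * np.pi, 200)
print(symbolize(np.sin(t) + 0.05 * np.random.randn(200)))
```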

In the case of very long duration 1D time-series, locally adaptive and user-controlled data approximation methods were presented to simplify the data while, at the same time, retaining the perceptually important features. The simplified data were converted into a symbol sequence, and sketch-based pattern identification was then used to find patterns in the symbolic data using regular-expression-based pattern matching. The method was applied to financial time-series, and patterns such as head-and-shoulders, double-top and triple-top patterns were identified interactively using hand-drawn sketches. Through data smoothing, the data approximation step also enables visualization of inherent patterns in the time-series representation while retaining the perceptually important points.
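The regular-expression matching described above can be illustrated with a toy example. This is not the thesis's sketch-matching system; the slope_symbols helper, the flatness threshold and the double-top regex are assumptions made for this sketch.

```python
# Encode local slopes of a simplified series as symbols ('u' up, 'd' down,
# 'f' flat) and search for a crude double-top shape (rise-fall-rise-fall)
# with a regular expression over that alphabet.
import re
import numpy as np

def slope_symbols(values, flat_tol=0.01):
    diffs = np.diff(values)
    return "".join("f" if abs(d) < flat_tol else ("u" if d > 0 else "d")
                   for d in diffs)

series = np.array([0, 1, 2, 3, 2, 1, 2, 3, 2, 1, 0], dtype=float)
symbols = slope_symbols(series)                    # -> 'uuudduuddd'
double_top = re.compile(r"u{2,}d{2,}u{2,}d{2,}")   # rise, fall, rise, fall
match = double_top.search(symbols)
print(symbols, "->", match.span() if match else "no match")
```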

Very long duration 2D spatio-temporal eye-tracking data sets that exhibit spatio-temporal discontinuity were transformed into symbolic data using scalable clustering and hierarchical cluster merging processes, each of which can be parallelized. The raw data are transformed into a symbol sequence in which each symbol represents a region of interest in the eye-gaze data. The identified regions of interest can also be displayed in a Space-Time Cube (STC) that captures both the temporal and the contextual information. Through interactive filtering, zooming and geometric transformation, the STC representation, together with linked views, enables interactive data exploration. Using different sequence analysis methods, the symbol sequences are analyzed further to identify temporal patterns in the data set. Data collected from air traffic control officers were used as an application example to demonstrate the results.
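As a companion to the per-window clustering sketch given after the article abstract above, the following sketch links per-window clusters across time and emits one region-of-interest symbol per window cluster, so the whole session becomes a string. The published approach performs hierarchical cluster merging (the article keywords mention a minimum spanning tree); plain centroid-distance linking is assumed here instead, and merge_across_windows and link_dist_px are illustrative placeholders.

```python
# Simplified linking of per-window clusters into global areas of interest.
# Each window cluster is attached to the nearest existing AoI centroid if it
# is close enough, otherwise a new AoI is created; the output is a sequence
# of (window_start_time, AoI_symbol) pairs.
import string
import numpy as np

def merge_across_windows(windows, link_dist_px=60.0):
    """windows: list of (start_time, points, labels) per time window."""
    aoi_centroids, sequence = [], []            # AoIs discovered so far
    for start, pts, labels in windows:
        for lab in sorted(set(labels) - {-1}):  # skip DBSCAN noise (-1)
            c = pts[labels == lab].mean(axis=0)
            dists = [np.linalg.norm(c - g) for g in aoi_centroids]
            if dists and min(dists) < link_dist_px:
                idx = int(np.argmin(dists))     # reuse an existing AoI
            else:
                idx = len(aoi_centroids)        # register a new AoI
                aoi_centroids.append(c)
            sequence.append((start, string.ascii_uppercase[idx % 26]))
    return sequence
```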

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2019. p. 58
Series
Linköping Studies in Science and Technology. Dissertations, ISSN 0345-7524 ; 2030
National Category
Media Engineering
Identifiers
urn:nbn:se:liu:diva-162220 (URN). 10.3384/diss.diva-162220 (DOI). 9789179299651 (ISBN).
Public defence
2019-12-13, Domteatern, Visualiseringscenter C, Kungsgatan 54, 602 33 Norrköping, 09:15 (English)
Available from: 2019-11-25. Created: 2019-11-25. Last updated: 2019-11-25. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text · PubMed · Scopus

Authority records

Vrotsou, Katerina; Vitoria, Aida; Johansson, Jimmy; Cooper, Matthew

Search in DiVA

By author/editor
Muthumanickam, Prithiviraj; Vrotsou, Katerina; Vitoria, Aida; Johansson, Jimmy; Cooper, Matthew
By organisation
Media and Information Technology; Faculty of Science & Engineering; Centre for Climate Science and Policy Research, CSPR
In the same journal
IEEE Transactions on Visualization and Computer Graphics
Computer Systems
