Search publications in DiVA (liu.se)
1 - 15 of 15
  • 1.
    Axholt, Magnus
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer. Linköpings universitet, Tekniska högskolan.
    Peterson, Stephen D.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer. Linköpings universitet, Tekniska högskolan.
    Ellis, Stephen
Human Systems Integration Division, NASA Ames Research Center.
    User Boresight Calibration Precision for Large-Format Head-Up Displays (2008). In: Proceedings of the 2008 ACM symposium on Virtual reality software and technology, New York, NY, USA: ACM, 2008, pp. 141-148. Conference paper (Refereed)
    Abstract [en]

    The postural sway in 24 subjects performing a boresight calibration task on a large format head-up display is studied to estimate the impact of human limits on boresight calibration precision and ultimately on static registration errors. The dependent variables, accumulated sway path and omni-directional standard deviation, are analyzed for the calibration exercise and compared against control cases where subjects are quietly standing with eyes open and eyes closed. Findings show that postural stability significantly deteriorates during boresight calibration compared to when the subject is not occupied with a visual task. Analysis over time shows that the calibration error can be reduced by 39% if calibration measurements are recorded in a three second interval at approximately 15 seconds into the calibration session as opposed to an initial reading. Furthermore parameter optimization on experiment data suggests a Weibull distribution as a possible error description and estimation for omni-directional calibration precision. This paper extends previously published preliminary analyses and the conclusions are verified with experiment data that has been corrected for subject inverted pendulum compensatory head rotation by providing a better estimate of the position of the eye. With correction the statistical findings are reinforced.
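
The Weibull fit mentioned above can be illustrated with a short sketch. This is a minimal example under assumed inputs, not the authors' analysis code: the radial error magnitudes are simulated stand-ins, and the fit uses scipy.stats.weibull_min with the location parameter fixed at zero.

```python
# Minimal sketch (not the paper's code): fitting a Weibull distribution to
# omni-directional calibration error magnitudes, as suggested in the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical stand-in data: radial alignment errors in millimetres.
# In the study these would come from tracked eye-point positions during boresighting.
errors_mm = rng.weibull(1.8, size=500) * 4.0

# Fit a two-parameter Weibull (location fixed at 0, since magnitudes are non-negative).
shape, loc, scale = stats.weibull_min.fit(errors_mm, floc=0)
print(f"Weibull shape k = {shape:.2f}, scale lambda = {scale:.2f} mm")

# A percentile of the fitted distribution gives a precision estimate,
# e.g. the radius containing 95% of calibration errors.
print(f"95th percentile error: {stats.weibull_min.ppf(0.95, shape, loc, scale):.2f} mm")
```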

  • 2.
    Axholt, Magnus
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer. Linköpings universitet, Tekniska högskolan.
    Peterson, Stephen D.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer. Linköpings universitet, Tekniska högskolan.
    Ellis, Stephen
Human Systems Integration Division, NASA Ames Research Center.
    User Boresight for AR Calibration: A Preliminary Analysis (2008). In: IEEE Virtual Reality Conference, 2008. VR '08 / [ed] Ming Lin, Anthony Steed, Carolina Cruz-Neira, Piscataway, NJ, USA: IEEE, 2008, pp. 43-46. Conference paper (Refereed)
    Abstract [en]

    The precision with which users can maintain boresight alignment between visual targets at different depths is recorded for 24 subjects using two different boresight targets. Subjects' normal head stability is established using their Romberg coefficients. Weibull distributions are used to describe the probabilities of the magnitude of head positional errors and the three dimensional cloud of errors is displayed by orthogonal two dimensional density plots. These data will lead to an understanding of the limits of user introduced calibration error in augmented reality systems.
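
For context, the Romberg coefficient referred to above is commonly computed as the ratio of postural sway measured with eyes closed to sway measured with eyes open. The sketch below is an assumed, simplified formulation based on accumulated sway path length; the function names and the random stand-in traces are hypothetical, not taken from the paper.

```python
# Illustrative sketch (assumed formulation, not from the paper): computing a
# Romberg coefficient from head-position traces recorded with eyes open/closed.
import numpy as np

def sway_path_length(positions: np.ndarray) -> float:
    """Accumulated sway path: sum of distances between consecutive samples.

    positions: (N, 2) or (N, 3) array of tracked head positions.
    """
    return float(np.sum(np.linalg.norm(np.diff(positions, axis=0), axis=1)))

def romberg_coefficient(eyes_open: np.ndarray, eyes_closed: np.ndarray) -> float:
    """Ratio of sway with eyes closed to sway with eyes open
    (> 1 suggests vision contributes to postural stability)."""
    return sway_path_length(eyes_closed) / sway_path_length(eyes_open)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical 30 s of head positions at 60 Hz (metres), random-walk stand-in data.
    open_trace = np.cumsum(rng.normal(0, 1e-4, size=(1800, 3)), axis=0)
    closed_trace = np.cumsum(rng.normal(0, 2e-4, size=(1800, 3)), axis=0)
    print(f"Romberg coefficient: {romberg_coefficient(open_trace, closed_trace):.2f}")
```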

  • 3.
    Axholt, Magnus
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer. Linköpings universitet, Tekniska högskolan.
    Peterson, Stephen D.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer. Linköpings universitet, Tekniska högskolan.
    Ellis, Stephen R.
    NASA Ames Research Center.
    Visual Alignment Accuracy in Head Mounted Optical See-Through AR Displays: Distribution of Head Orientation Noise (2009). In: Proceedings of the Human Factors and Ergonomics Society 53rd Annual Meeting 2009, San Antonio (TX), USA: Human Factors and Ergonomics Society, 2009, pp. 2024-2028. Conference paper (Refereed)
    Abstract [en]

    The mitigation of registration errors is a central challenge for improving the usability of Augmented Reality systems. While the technical achievements within tracking and display technology continue to improve the conditions for good registration, little research is directed towards understanding the user's visual alignment performance during the calibration process. This paper reports 12 standing subjects' visual alignment performance using an optical see-through head mounted display for viewing directions varied in azimuth (0°, ±30°, ±60°) and elevation (0°, ±10°). Although viewing direction has a statistically significant effect on the shape of the distribution, the effect is small and negligible for practical purposes and can be approximated to a circular distribution with a standard deviation of 0.2° for all viewing directions studied in this paper. In addition to quantifying head aiming accuracy with a head fixed cursor and illustrating the deteriorating accuracy of boresight calibration with increasing viewing direction extremity, the results are applicable for filter design determining the onset and end of head rotation.
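
Reading the reported noise as an isotropic bivariate Gaussian with a 0.2° per-axis standard deviation (an interpretation, not stated explicitly in the abstract), the implied containment radii can be worked out as follows.

```python
# Sketch of what an isotropic ("circular") aiming-noise distribution with a
# 0.2 deg standard deviation implies for containment radii. Interpreting the
# reported value as the per-axis sigma of a bivariate Gaussian is an assumption.
import numpy as np

sigma_deg = 0.2  # per-axis standard deviation reported in the abstract

# For an isotropic bivariate Gaussian, the radial error follows a Rayleigh
# distribution; the radius containing probability p is sigma * sqrt(-2 ln(1 - p)).
for p in (0.50, 0.95):
    radius = sigma_deg * np.sqrt(-2.0 * np.log(1.0 - p))
    print(f"{p:.0%} of aiming errors fall within {radius:.2f} deg "
          f"({radius * 60:.0f} arcmin)")
```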

  • 4.
    Axholt, Magnus
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer. Linköpings universitet, Tekniska högskolan.
    Peterson, Stephen D.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer. Linköpings universitet, Tekniska högskolan.
    Ellis, Stephen R.
    NASA Ames Research Center.
    Visual Alignment Precision in Optical See-Through AR Displays: Implications for Potential Accuracy (2009). In: Proceedings of the ACM/IEEE Virtual Reality International Conference, Association for Computing Machinery (ACM), 2009. Conference paper (Other academic)
    Abstract [en]

    The quality of visual registration achievable with an optical see-through head mounted display (HMD) ultimately depends on the user's targeting precision. This paper presents design guidelines for calibration procedures based on measurements of users' head stability during visual alignment with reference targets. Targeting data was collected from 12 standing subjects who aligned a head fixed cursor presented in a see-through HMD with background targets that varied in azimuth (0°, ±30°, ±60°) and elevation (0°, ±10°). Their data showed that: 1) Both position and orientation data will need to be used to establish calibrations based on nearby reference targets since eliminating body sway effects can improve calibration precision by a factor of 16 and eliminate apparent angular anisotropies. 2) Compensation for body sway can speed the calibration by removing the need to wait for the body sway to abate, and 3) calibration precision can be less than 2 arcmin even for head directions rotated up to 60° with respect to the user's torso provided body sway is corrected. Users of Augmented Reality (AR) applications overlooking large distances may avoid the need to correct for body sway by boresighting on markers at relatively long distances, >> 10 m. These recommendations contrast with those for heads up displays using real images as discussed in previous papers.

  • 5.
    Axholt, Magnus
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Peterson, Stephen
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Ellis, Stephen R.
    Human Systems Integration Division, NASA Ames Research Center.
    User Boresighting for AR Calibration: A Preliminary Analysis (2008). In: Proceedings of the IEEE Virtual Reality Conference 2008, IEEE, 2008, pp. 43-46. Conference paper (Refereed)
    Abstract [en]

    The precision with which users can maintain boresight alignment between visual targets at different depths is recorded for 24 subjects using two different boresight targets. Subjects' normal head stability is established using their Romberg coefficients. Weibull distributions are used to describe the probabilities of the magnitude of head positional errors and the three dimensional cloud of errors is displayed by orthogonal two dimensional density plots. These data will lead to an understanding of the limits of user introduced calibration error in augmented reality systems.

  • 6.
    Axholt, Magnus
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan. Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer.
    Skoglund, Martin
    Linköpings universitet, Institutionen för systemteknik, Reglerteknik. Linköpings universitet, Tekniska högskolan.
    Peterson, Stephen
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan. Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer.
    Cooper, Matthew
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan. Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer.
    Schön, Thomas
    Linköpings universitet, Institutionen för systemteknik, Reglerteknik. Linköpings universitet, Tekniska högskolan.
    Gustafsson, Fredrik
    Linköpings universitet, Institutionen för systemteknik, Reglerteknik. Linköpings universitet, Tekniska högskolan.
    Ynnerman, Anders
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan. Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer.
    Ellis, Stephen
    NASA Ames Research Center, USA.
    Optical See-Through Head Mounted Display: Direct Linear Transformation Calibration Robustness in the Presence of User Alignment Noise (2010). Report (Other academic)
    Abstract [en]

    The correct spatial registration between virtual and real objects in optical see-through augmented reality implies accurate estimates of the user's eyepoint relative to the location and orientation of the display surface. A common approach is to estimate the display parameters through a calibration procedure involving a subjective alignment exercise. Human postural sway and targeting precision contribute to imprecise alignments, which in turn adversely affect the display parameter estimation, resulting in registration errors between virtual and real objects. The technique commonly used has its origin in computer vision, and calibrates stationary cameras using hundreds of correspondence points collected instantaneously in one video frame, where precision is limited only by pixel quantization and image blur. Consequently, the input noise level is several orders of magnitude greater when a human operator manually collects correspondence points one by one. This paper investigates the effect of human alignment noise on view parameter estimation in an optical see-through head mounted display to determine how well a standard camera calibration method performs at greater noise levels than documented in the computer vision literature. Through Monte Carlo simulations we show that it is particularly difficult to estimate the user's eyepoint in depth, but that a greater distribution of correspondence points in depth helps mitigate the effects of human alignment noise.

    Download full text (pdf)
    FULLTEXT01
    Download full text (pdf)
    FULLTEXT03
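
A rough Monte Carlo sketch in the spirit of this study is given below: it estimates a 3x4 projection matrix with a standard Direct Linear Transformation from noisy 2D-3D correspondences and measures how the recovered eye point degrades. The geometry, intrinsic parameters and noise level are illustrative assumptions, not the experimental setup of the report.

```python
# Monte-Carlo sketch: how 2D alignment noise propagates into the eye-point
# estimate of a Direct Linear Transformation (DLT) calibration. Illustrative
# assumptions throughout; not code from the report.
import numpy as np

rng = np.random.default_rng(1)

def dlt(points_3d: np.ndarray, points_2d: np.ndarray) -> np.ndarray:
    """Estimate a 3x4 projection matrix from >= 6 point correspondences."""
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        Xh = [X, Y, Z, 1.0]
        rows.append([*Xh, 0, 0, 0, 0, *(-u * np.array(Xh))])
        rows.append([0, 0, 0, 0, *Xh, *(-v * np.array(Xh))])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    return vt[-1].reshape(3, 4)

def camera_center(P: np.ndarray) -> np.ndarray:
    """The eye point is the right null space of P (P @ C = 0)."""
    _, _, vt = np.linalg.svd(P)
    C = vt[-1]
    return C[:3] / C[3]

# Ground-truth projection: eye point at the origin, looking down +Z,
# 1000-pixel focal length (arbitrary but plausible values).
K = np.array([[1000.0, 0, 500.0], [0, 1000.0, 400.0], [0, 0, 1.0]])
P_true = K @ np.hstack([np.eye(3), np.zeros((3, 1))])

# Correspondence points spread in X, Y and (importantly) in depth Z.
pts_3d = rng.uniform([-1, -1, 2], [1, 1, 10], size=(25, 3))
proj = (P_true @ np.c_[pts_3d, np.ones(len(pts_3d))].T).T
pts_2d = proj[:, :2] / proj[:, 2:3]

# Monte Carlo: add alignment noise to the 2D points and record eye-point error.
noise_px = 5.0   # far larger than typical camera-calibration pixel noise
errors = []
for _ in range(500):
    noisy_2d = pts_2d + rng.normal(0, noise_px, size=pts_2d.shape)
    errors.append(camera_center(dlt(pts_3d, noisy_2d)))
errors = np.abs(np.array(errors))  # the true eye point is the origin
print("mean |error| per axis (x, y, z):", errors.mean(axis=0).round(3))
```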
  • 7.
    Axholt, Magnus
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Skoglund, Martin
    Linköpings universitet, Institutionen för systemteknik, Reglerteknik. Linköpings universitet, Tekniska högskolan.
    Peterson, Stephen
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Cooper, Matthew
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Schön, Thomas
    Linköpings universitet, Institutionen för systemteknik, Reglerteknik. Linköpings universitet, Tekniska högskolan.
    Gustafsson, Fredrik
    Linköpings universitet, Institutionen för systemteknik, Reglerteknik. Linköpings universitet, Tekniska högskolan.
    Ynnerman, Anders
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Ellis, Stephen
    NASA Ames Research Center, USA.
    Optical See-Through Head Mounted Display: Direct Linear Transformation Calibration Robustness in the Presence of User Alignment Noise (2010). In: Proceedings of the 54th Annual Meeting of the Human Factors and Ergonomics Society, 2010. Conference paper (Refereed)
    Abstract [en]

    The correct spatial registration between virtual and real objects in optical see-through augmented reality implies accurate estimates of the user's eyepoint relative to the location and orientation of the display surface. A common approach is to estimate the display parameters through a calibration procedure involving a subjective alignment exercise. Human postural sway and targeting precision contribute to imprecise alignments, which in turn adversely affect the display parameter estimation, resulting in registration errors between virtual and real objects. The technique commonly used has its origin in computer vision, and calibrates stationary cameras using hundreds of correspondence points collected instantaneously in one video frame, where precision is limited only by pixel quantization and image blur. Consequently, the input noise level is several orders of magnitude greater when a human operator manually collects correspondence points one by one. This paper investigates the effect of human alignment noise on view parameter estimation in an optical see-through head mounted display to determine how well a standard camera calibration method performs at greater noise levels than documented in the computer vision literature. Through Monte Carlo simulations we show that it is particularly difficult to estimate the user's eyepoint in depth, but that a greater distribution of correspondence points in depth helps mitigate the effects of human alignment noise.

  • 8.
    Peterson, Stephen D.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer. Linköpings universitet, Tekniska högskolan.
    Stereoscopic Label Placement: Reducing Distraction and Ambiguity in Visually Cluttered Displays (2009). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    With increasing information density and complexity, computer displays may become visually cluttered, adversely affecting overall usability. Text labels can significantly add to visual clutter in graphical user interfaces, but are generally kept legible through specific label placement algorithms that seek visual separation of labels and other objects in the 2D view plane. This work studies an alternative approach: can overlapping labels be visually segregated by distributing them in stereoscopic depth? The fact that we have two forward-looking eyes yields stereoscopic disparity: each eye has a slightly different perspective on objects in the visual field. Disparity is used for depth perception by the human visual system, and is therefore also provided by stereoscopic 3D displays to produce a sense of depth.

    This work has shown that a stereoscopic label placement algorithm yields user performance comparable with existing algorithms that separate labels in the view plane. At the same time, such stereoscopic label placement is subjectively rated significantly less disturbing than traditional methods. Furthermore, it does not allow for potentially ambiguous spatial relationships between labels and background objects inherent to labels separated in the view plane. These findings are important for display systems where disturbance, distraction and ambiguity of the overlay can negatively impact safety and efficiency of the system, including the reference application of this work: an augmented vision system for Air Traffic Control towers.

    List of papers
    1. Objective and Subjective Assessment of Stereoscopically Separated Labels in Augmented Reality
    2009 (English). In: Computers & graphics, ISSN 0097-8493, E-ISSN 1873-7684, Vol. 33, no. 1, pp. 23-33. Journal article (Refereed). Published
    Abstract [en]

    We present a new technique for managing visual clutter caused by overlapping labels in complex information displays. This technique, label layering, utilizes stereoscopic disparity as a means to segregate labels in depth for increased legibility and clarity. By distributing overlapping labels in depth, we have found that selection time during a visual search task in situations with high levels of visual overlap is reduced by 4s or 24%. Our data show that the stereoscopically based depth order of the labels must be correlated with the distance order of their corresponding objects, for practical benefits. An algorithm using our label layering technique accordingly could be an alternative to traditional label placement algorithms that avoid label overlap at the cost of distracting view plane motion, symbology dimming or label size reduction.

    Place, publisher, year, edition, pages
    Elsevier, 2009
    Keywords
    Label placement, User interfaces, Visual clutter, Augmented reality, Air traffic control
    National subject category
    Computer Sciences
    Identifiers
    urn:nbn:se:liu:diva-17515 (URN), 10.1016/j.cag.2008.11.006 (DOI)
    Available from: 2009-03-27 Created: 2009-03-27 Last updated: 2018-01-13 Bibliographically approved
    2. Label Segregation by Remapping Stereoscopic Depth in Far-Field Augmented Reality
    2008 (English). In: 7th IEEE/ACM International Symposium on Mixed and Augmented Reality, 2008. ISMAR 2008 / [ed] Mark A. Livingston, Oliver Bimber, Hideo Saito, Piscataway, NJ, USA: IEEE, 2008, pp. 143-152. Conference paper, Published paper (Refereed)
    Abstract [en]

    This paper describes a novel technique for segregating overlapping labels in stereoscopic see-through displays. The present study investigates the labeling of far-field objects, with distances ranging 100-120 m. At these distances the stereoscopic disparity difference between objects is below 1 arcmin, so labels rendered at the same distance as their corresponding objects appear as if on a flat layer in the display. This flattening is due to limitations of both display and human visual resolution. By remapping labels to predetermined depth layers on the optical path between the observer and the labeled object, an interlayer disparity ranging from 5 to 20 arcmin can be achieved for 5 overlapping labels. The present study evaluates the impact of such depth separation of superimposed layers, and found that a 5 arcmin interlayer disparity yields a significantly lower response time, over 20% on average, in a visual search task compared to correctly registering labels and objects in depth. Notably the performance does not improve when doubling the interlayer disparity to 10 arcmin and, surprisingly, the performance degrades significantly when again doubling the interlayer disparity to 20 arcmin, approximating the performance in situations with no interlayer disparity. These results confirm that our technique can be used to segregate overlapping labels in the far visual field, without the cost associated with traditional label placement algorithms.

    Place, publisher, year, edition, pages
    Piscataway, NJ, USA: IEEE, 2008
    Keywords
    label placement, user interfaces, stereoscopic displays, augmented reality, visual clutter, information layering
    National subject category
    Computer Sciences
    Identifiers
    urn:nbn:se:liu:diva-43009 (URN), 10.1109/ISMAR.2008.4637341 (DOI), 000260993200026 (), 70649 (Local ID), 978-1-4244-2840-3 (ISBN), 978-1-4244-2859-5 (ISBN), 70649 (Archive number), 70649 (OAI)
    Conference
    7th IEEE/ACM International Symposium on Mixed and Augmented Reality, Cambridge, UK, 15-18 Sept. 2008
    Available from: 2009-10-10 Created: 2009-10-10 Last updated: 2018-01-12 Bibliographically approved
    3. Visual Clutter Management in Augmented Reality: Effects of Three Label Separation Methods on Spatial Judgments
    2009 (English). In: IEEE Symposium on 3D User Interfaces (3DUI), Lafayette (LA), USA: IEEE, 2009, pp. 111-118. Conference paper, Published paper (Refereed)
    Abstract [en]

    This paper reports an experiment comparing three label separation methods for reducing visual clutter in Augmented Reality (AR) displays. We contrasted two common methods of avoiding visual overlap by moving labels in the 2D view plane with a third that distributes overlapping labels in stereoscopic depth. The experiment measured user identification performance during spatial judgment tasks in static scenes. The three methods were compared with a control condition in which no label separation method was employed. The results showed significant performance improvements, generally 15-30%, for all three methods over the control; however, these methods were statistically indistinguishable from each other. In-depth analysis showed significant performance degradation when the 2D view plane methods produced potentially confusing spatial correlations between labels and the markers they designate. Stereoscopically separated labels were subjectively judged harder to read than view-plane separated labels. Since measured performance was affected both by label legibility and spatial correlation of labels and their designated objects, it is likely that the improved spatial correlation of stereoscopically separated labels and their designated objects has compensated for poorer stereoscopic text legibility. Future testing with dynamic scenes is expected to more clearly distinguish the three label separation techniques.

    Place, publisher, year, edition, pages
    Lafayette (LA), USA: IEEE, 2009
    Keywords
    Label placement, user interfaces, stereoscopic displays, augmented reality, visual clutter, information layering
    National subject category
    Computer Sciences
    Identifiers
    urn:nbn:se:liu:diva-51073 (URN), 10.1109/3DUI.2009.4811215 (DOI)
    Available from: 2009-10-15 Created: 2009-10-15 Last updated: 2018-01-12 Bibliographically approved
    4. Evaluation of Alternative Label Placement Techniques in Dynamic Virtual Environments
    2009 (English). In: International Symposium on Smart Graphics, Berlin / Heidelberg: Springer, 2009, pp. 43-55. Conference paper, Published paper (Refereed)
    Abstract [en]

    This paper reports on an experiment comparing label placement techniques in a dynamic virtual environment rendered on a stereoscopic display. The labeled objects are in motion, and thus labels need to continuously maintain separation for legibility. The results from our user study show that traditional label placement algorithms, which always strive for full label separation in the 2D view plane, produce motion that disturbs the user in a visual search task. Alternative algorithms maintaining separation in only one spatial dimension are rated less disturbing, even though several modifications are made to traditional algorithms for reducing the amount and salience of label motion. Maintaining depth separation of labels through stereoscopic disparity adjustments is judged the least disturbing, while such separation yields similar user performance to traditional algorithms. These results are important in the design of future 3D user interfaces, where disturbing or distracting motion due to object labeling should be avoided.

    Place, publisher, year, edition, pages
    Berlin / Heidelberg: Springer, 2009
    Series
    Lecture Notes in Computer Science, ISSN 0302-9743; 5531
    Keywords
    Label placement, user interfaces, stereoscopic displays, virtual reality, visual clutter
    National subject category
    Computer Sciences
    Identifiers
    urn:nbn:se:liu:diva-51074 (URN), 10.1007/978-3-642-02115-2_4 (DOI)
    Available from: 2009-10-15 Created: 2009-10-15 Last updated: 2018-01-12 Bibliographically approved
    5. Detection Thresholds for Label Motion in Visually Cluttered Displays
    2010 (English). In: IEEE Virtual Reality Conference (VR), 2010, Piscataway, NJ, USA: IEEE, 2010, pp. 203-206. Conference paper, Published paper (Refereed)
    Abstract [en]

    While label placement algorithms are generally successful in managing visual clutter by preventing label overlap, they can also cause significant label movement in dynamic displays. This study investigates motion detection thresholds for various types of label movement in realistic and complex virtual environments, which can be helpful for designing less salient and disturbing algorithms. Our results show that label movement in stereoscopic depth is less noticeable than similar lateral monoscopic movement, which is inherent to 2D label placement algorithms. Furthermore, label movement can be introduced more readily into the visual periphery (over 15° eccentricity) because of reduced sensitivity in this region. Moreover, under the realistic viewing conditions that we used, motion of isolated labels is more easily detected than that of overlapping labels. This perhaps counterintuitive finding may be explained by visual masking due to the visual clutter arising from the label overlap. The quantitative description of the findings presented in this paper should be useful not only for label placement applications, but also for any cluttered AR or VR application in which designers wish to control the users' visual attention, either making text labels more or less noticeable as needed.

    Place, publisher, year, edition, pages
    Piscataway, NJ, USA: IEEE, 2010
    Series
    IEEE Virtual Reality Annual International Symposium, ISSN 1087-8270
    Keywords
    H.5.2 [Information Systems], User Interfaces, I.3 [Computing Methodologies], Computer Graphics
    National subject category
    Computer Sciences
    Identifiers
    urn:nbn:se:liu:diva-51741 (URN), 10.1109/VR.2010.5444788 (DOI), 000287516000034 (), 978-1-4244-6237-7 (ISBN), 978-1-4244-6236-0 (ISBN)
    Conference
    IEEE Virtual Reality Conference (VR), Waltham, MA, USA, 20-24 March 2010
    Available from: 2009-11-16 Created: 2009-11-16 Last updated: 2018-01-12 Bibliographically approved
    Download full text (pdf)
    Stereoscopic Label Placement : Reducing Distraction and Ambiguity in Visually Cluttered Displays
    Download (pdf)
    Cover
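
The depth-layer remapping described in the thesis and its constituent papers can be related to viewing geometry with a small sketch: given a far-field reference distance and a desired interlayer disparity in arcminutes, it computes where successive label layers would have to be rendered. The 65 mm interpupillary distance and the exact vergence formula are standard assumptions of my own, not values from the papers.

```python
# Geometry sketch for the depth-layer remapping described above. Standard
# vergence geometry with an assumed 65 mm interpupillary distance; not code
# from the thesis.
import math

IPD_M = 0.065            # assumed interpupillary distance (m)
REFERENCE_DIST_M = 110   # far-field objects at roughly 100-120 m
INTERLAYER_ARCMIN = 5.0  # the disparity step reported to work best
N_LAYERS = 5

def vergence_rad(distance_m: float) -> float:
    """Binocular vergence angle subtended by a point at the given distance."""
    return 2.0 * math.atan(IPD_M / (2.0 * distance_m))

def layer_distance(extra_disparity_rad: float) -> float:
    """Distance whose vergence exceeds the reference vergence by the given amount."""
    target = vergence_rad(REFERENCE_DIST_M) + extra_disparity_rad
    return IPD_M / (2.0 * math.tan(target / 2.0))

arcmin_to_rad = math.pi / (180.0 * 60.0)
for layer in range(N_LAYERS):
    d = layer_distance(layer * INTERLAYER_ARCMIN * arcmin_to_rad)
    print(f"layer {layer}: disparity +{layer * INTERLAYER_ARCMIN:.0f} arcmin "
          f"-> render at {d:.1f} m")
```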
  • 9.
    Peterson, Stephen D.
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer. Linköpings universitet, Tekniska högskolan.
    Axholt, Magnus
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer. Linköpings universitet, Tekniska högskolan.
    Cooper, Matthew
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer. Linköpings universitet, Tekniska högskolan.
    Ellis, Stephen R.
    Human Systems Integration Division, NASA Ames Research Center, USA.
    Detection Thresholds for Label Motion in Visually Cluttered Displays (2010). In: IEEE Virtual Reality Conference (VR), 2010, Piscataway, NJ, USA: IEEE, 2010, pp. 203-206. Conference paper (Refereed)
    Abstract [en]

    While label placement algorithms are generally successful in managing visual clutter by preventing label overlap, they can also cause significant label movement in dynamic displays. This study investigates motion detection thresholds for various types of label movement in realistic and complex virtual environments, which can be helpful for designing less salient and disturbing algorithms. Our results show that label movement in stereoscopic depth is less noticeable than similar lateral monoscopic movement, which is inherent to 2D label placement algorithms. Furthermore, label movement can be introduced more readily into the visual periphery (over 15° eccentricity) because of reduced sensitivity in this region. Moreover, under the realistic viewing conditions that we used, motion of isolated labels is more easily detected than that of overlapping labels. This perhaps counterintuitive finding may be explained by visual masking due to the visual clutter arising from the label overlap. The quantitative description of the findings presented in this paper should be useful not only for label placement applications, but also for any cluttered AR or VR application in which designers wish to control the users' visual attention, either making text labels more or less noticeable as needed.

  • 10.
    Peterson, Stephen D.
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer. Linköpings universitet, Tekniska högskolan.
    Axholt, Magnus
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer. Linköpings universitet, Tekniska högskolan.
    Cooper, Matthew
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer. Linköpings universitet, Tekniska högskolan.
    Ellis, Stephen R.
    NASA, Ames Research Center, USA.
    Evaluation of Alternative Label Placement Techniques in Dynamic Virtual Environments (2009). In: International Symposium on Smart Graphics, Berlin / Heidelberg: Springer, 2009, pp. 43-55. Conference paper (Refereed)
    Abstract [en]

    This paper reports on an experiment comparing label placement techniques in a dynamic virtual environment rendered on a stereoscopic display. The labeled objects are in motion, and thus labels need to continuously maintain separation for legibility. The results from our user study show that traditional label placement algorithms, which always strive for full label separation in the 2D view plane, produce motion that disturbs the user in a visual search task. Alternative algorithms maintaining separation in only one spatial dimension are rated less disturbing, even though several modifications are made to traditional algorithms for reducing the amount and salience of label motion. Maintaining depth separation of labels through stereoscopic disparity adjustments is judged the least disturbing, while such separation yields similar user performance to traditional algorithms. These results are important in the design of future 3D user interfaces, where disturbing or distracting motion due to object labeling should be avoided.

  • 11.
    Peterson, Stephen D.
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer. Linköpings universitet, Tekniska högskolan.
    Axholt, Magnus
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer. Linköpings universitet, Tekniska högskolan.
    Cooper, Matthew
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer. Linköpings universitet, Tekniska högskolan.
    Ellis, Stephen R.
    NASA Ames Research Center, USA.
    Visual Clutter Management in Augmented Reality: Effects of Three Label Separation Methods on Spatial Judgments (2009). In: IEEE Symposium on 3D User Interfaces (3DUI), Lafayette (LA), USA: IEEE, 2009, pp. 111-118. Conference paper (Refereed)
    Abstract [en]

    This paper reports an experiment comparing three label separation methods for reducing visual clutter in Augmented Reality (AR) displays. We contrasted two common methods of avoiding visual overlap by moving labels in the 2D view plane with a third that distributes overlapping labels in stereoscopic depth. The experiment measured user identification performance during spatial judgment tasks in static scenes. The three methods were compared with a control condition in which no label separation method was employed. The results showed significant performance improvements, generally 15-30%, for all three methods over the control; however, these methods were statistically indistinguishable from each other. In-depth analysis showed significant performance degradation when the 2D view plane methods produced potentially confusing spatial correlations between labels and the markers they designate. Stereoscopically separated labels were subjectively judged harder to read than view-plane separated labels. Since measured performance was affected both by label legibility and spatial correlation of labels and their designated objects, it is likely that the improved spatial correlation of stereoscopically separated labels and their designated objects has compensated for poorer stereoscopic text legibility. Future testing with dynamic scenes is expected to more clearly distinguish the three label separation techniques.

  • 12.
    Peterson, Stephen D.
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer. Linköpings universitet, Tekniska högskolan.
    Axholt, Magnus
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer. Linköpings universitet, Tekniska högskolan.
    Ellis, Stephen R.
    NASA Ames Research Center.
    Comparing disparity based label segregation in augmented and virtual reality (2008). In: ACM Symposium on Virtual Reality Software and Technology (VRST), New York, NY, USA: ACM, 2008, pp. 285-286. Conference paper (Refereed)
    Abstract [en]

    Recent work has shown that overlapping labels in far-field AR environments can be successfully segregated by remapping them to predefined stereoscopic depth layers. User performance was found to be optimal when setting the interlayer disparity to 5-10 arcmin. The current paper investigates to what extent this label segregation technique, label layering, is affected by important perceptual defects in AR such as registration errors and mismatches in accommodation, visual resolution and contrast. A virtual environment matched to a corresponding AR condition but lacking these problems showed a reduction in average response time by 10%. However, the performance pattern for different label layering parameters was not significantly different in the AR and VR environments, showing robustness of this label segregation technique against such perceptual issues.

  • 13.
    Peterson, Stephen D.
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer. Linköpings universitet, Tekniska högskolan.
    Axholt, Magnus
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer. Linköpings universitet, Tekniska högskolan.
    Ellis, Stephen R.
    Human Systems Integration Division, NASA Ames Research Center.
    Label Segregation by Remapping Stereoscopic Depth in Far-Field Augmented Reality (2008). In: 7th IEEE/ACM International Symposium on Mixed and Augmented Reality, 2008. ISMAR 2008 / [ed] Mark A. Livingston, Oliver Bimber, Hideo Saito, Piscataway, NJ, USA: IEEE, 2008, pp. 143-152. Conference paper (Refereed)
    Abstract [en]

    This paper describes a novel technique for segregating overlapping labels in stereoscopic see-through displays. The present study investigates the labeling of far-field objects, with distances ranging 100-120 m. At these distances the stereoscopic disparity difference between objects is below 1 arcmin, so labels rendered at the same distance as their corresponding objects appear as if on a flat layer in the display. This flattening is due to limitations of both display and human visual resolution. By remapping labels to predetermined depth layers on the optical path between the observer and the labeled object, an interlayer disparity ranging from 5 to 20 arcmin can be achieved for 5 overlapping labels. The present study evaluates the impact of such depth separation of superimposed layers, and found that a 5 arcmin interlayer disparity yields a significantly lower response time, over 20% on average, in a visual search task compared to correctly registering labels and objects in depth. Notably the performance does not improve when doubling the interlayer disparity to 10 arcmin and, surprisingly, the performance degrades significantly when again doubling the interlayer disparity to 20 arcmin, approximating the performance in situations with no interlayer disparity. These results confirm that our technique can be used to segregate overlapping labels in the far visual field, without the cost associated with traditional label placement algorithms.

  • 14.
    Peterson, Stephen D.
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer. Linköpings universitet, Tekniska högskolan.
    Axholt, Magnus
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer. Linköpings universitet, Tekniska högskolan.
    Ellis, Stephen R.
    Human Systems Integration Division, NASA Ames Research Center.
    Managing Visual Clutter: A Generalized Technique for Label Segregation using Stereoscopic Disparity (2008). In: IEEE Virtual Reality Conference, 2008. VR '08, Los Alamitos, CA, USA: IEEE Computer Society, 2008, pp. 169-176. Conference paper (Refereed)
    Abstract [en]

    We present a new technique for managing visual clutter caused by overlapping labels in complex information displays. This technique, "label layering", utilizes stereoscopic disparity as a means to segregate labels in depth for increased legibility and clarity. By distributing overlapping labels in depth, we have found that selection time during a visual search task in situations with high levels of overlap is reduced by four seconds or 24%. Our data show that the depth order of the labels must be correlated with the distance order of their corresponding objects. Since a random distribution of stereoscopic disparity in contrast impairs performance, the benefit is not solely due to the disparity-based image segregation. An algorithm using our label layering technique accordingly could be an alternative to traditional label placement algorithms that avoid label overlap at the cost of distracting motion, symbology dimming or label size reduction.
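
The ordering constraint reported above (label depth order must follow the distance order of the labelled objects) suggests a simple assignment rule. The sketch below is an assumed minimal illustration of that rule, with hypothetical label names; it is not the algorithm from the paper.

```python
# Illustrative sketch of the ordering constraint reported above: within a group
# of overlapping labels, assign depth layers so that the label of the nearest
# object sits on the nearest layer, and so on. Assumed minimal implementation.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Label:
    text: str
    object_distance_m: float       # distance to the labelled object
    layer: Optional[int] = None    # depth layer index, 0 = nearest to the viewer

def assign_layers(overlapping: list) -> list:
    """Give each overlapping label a layer whose order matches object distance order."""
    for layer, label in enumerate(sorted(overlapping, key=lambda l: l.object_distance_m)):
        label.layer = layer
    return overlapping

if __name__ == "__main__":
    # Hypothetical cluster of overlapping aircraft labels (distances in metres).
    cluster = [Label("SAS123", 118.0), Label("DLH456", 104.0), Label("BAW789", 111.0)]
    for lbl in assign_layers(cluster):
        print(f"{lbl.text}: object at {lbl.object_distance_m:.0f} m -> layer {lbl.layer}")
```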

  • 15.
    Peterson, Stephen D.
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer. Linköpings universitet, Tekniska högskolan.
    Axholt, Magnus
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer. Linköpings universitet, Tekniska högskolan.
    Ellis, Stephen R.
    NASA Ames Research Center.
    Objective and Subjective Assessment of Stereoscopically Separated Labels in Augmented Reality (2009). In: Computers & graphics, ISSN 0097-8493, E-ISSN 1873-7684, Vol. 33, no. 1, pp. 23-33. Journal article (Refereed)
    Abstract [en]

    We present a new technique for managing visual clutter caused by overlapping labels in complex information displays. This technique, label layering, utilizes stereoscopic disparity as a means to segregate labels in depth for increased legibility and clarity. By distributing overlapping labels in depth, we have found that selection time during a visual search task in situations with high levels of visual overlap is reduced by 4s or 24%. Our data show that the stereoscopically based depth order of the labels must be correlated with the distance order of their corresponding objects, for practical benefits. An algorithm using our label layering technique accordingly could be an alternative to traditional label placement algorithms that avoid label overlap at the cost of distracting view plane motion, symbology dimming or label size reduction.
