Search for publications in DiVA (liu.se)
Peterson, Stephen D.
Publications (10 of 15)
Peterson, S. D., Axholt, M., Cooper, M. & Ellis, S. R. (2010). Detection Thresholds for Label Motion in Visually Cluttered Displays. In: IEEE Virtual Reality Conference (VR), 2010. Paper presented at IEEE Virtual Reality Conference (VR), Waltham, MA, USA, 20-24 March 2010 (pp. 203-206). Piscataway, NJ, USA: IEEE
Detection Thresholds for Label Motion in Visually Cluttered Displays
2010 (English). In: IEEE Virtual Reality Conference (VR), 2010. Piscataway, NJ, USA: IEEE, 2010, p. 203-206. Conference paper, Published paper (Refereed)
Abstract [en]

While label placement algorithms are generally successful in managing visual clutter by preventing label overlap, they can also cause significant label movement in dynamic displays. This study investigates motion detection thresholds for various types of label movement in realistic and complex virtual environments, findings that can inform the design of less salient and less disturbing algorithms. Our results show that label movement in stereoscopic depth is less noticeable than similar lateral monoscopic movement, which is inherent to 2D label placement algorithms. Furthermore, label movement can be introduced more readily into the visual periphery (over 15° eccentricity) because of reduced sensitivity in this region. Moreover, under the realistic viewing conditions that we used, motion of isolated labels is more easily detected than that of overlapping labels. This perhaps counterintuitive finding may be explained by visual masking due to the visual clutter arising from the label overlap. The quantitative description of the findings presented in this paper should be useful not only for label placement applications, but also for any cluttered AR or VR application in which designers wish to control the users' visual attention, making text labels more or less noticeable as needed.

Place, publisher, year, edition, pages
Piscataway, NJ, USA: IEEE, 2010
Series
IEEE Virtual Reality Annual International Symposium, ISSN 1087-8270
Keywords
H.5.2 [Information Systems], User Interfaces, I.3 [Computing Methodologies], Computer Graphics
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-51741 (URN), 10.1109/VR.2010.5444788 (DOI), 000287516000034 (), 978-1-4244-6237-7 (ISBN), 978-1-4244-6236-0 (ISBN)
Conference
IEEE Virtual Reality Conference (VR), Waltham, MA, USA, 20-24 March 2010
Available from: 2009-11-16. Created: 2009-11-16. Last updated: 2018-01-12. Bibliographically approved
Axholt, M., Skoglund, M., Peterson, S., Cooper, M., Schön, T., Gustafsson, F., . . . Ellis, S. (2010). Optical See-Through Head Mounted Display: Direct Linear Transformation Calibration Robustness in the Presence of User Alignment Noise. Linköping: Linköping University Electronic Press
Optical See-Through Head Mounted Display: Direct Linear Transformation Calibration Robustness in the Presence of User Alignment Noise
2010 (English). Report (Other academic)
Abstract [en]

The correct spatial registration between virtual and real objects in optical see-through augmented reality implies accurate estimates of the user's eyepoint relative to the location and orientation of the display surface. A common approach is to estimate the display parameters through a calibration procedure involving a subjective alignment exercise. Human postural sway and targeting precision contribute to imprecise alignments, which in turn adversely affect the display parameter estimation, resulting in registration errors between virtual and real objects. The technique commonly used has its origin in computer vision, and calibrates stationary cameras using hundreds of correspondence points collected instantaneously in one video frame, where precision is limited only by pixel quantization and image blur. Consequently, the input noise level is several orders of magnitude greater when a human operator manually collects correspondence points one by one. This paper investigates the effect of human alignment noise on view parameter estimation in an optical see-through head mounted display to determine how well a standard camera calibration method performs at greater noise levels than documented in the computer vision literature. Through Monte Carlo simulations we show that it is particularly difficult to estimate the user's eyepoint in depth, but that a greater distribution of correspondence points in depth helps mitigate the effects of human alignment noise.

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2010. p. 7
Series
LiTH-ISY-R, ISSN 1400-3902 ; 3003
Keywords
Head-mounted display, Calibration, Direct linear transform, Robustness
National Category
Control Engineering
Identifiers
urn:nbn:se:liu:diva-97748 (URN), LiTH-ISY-R-3003 (ISRN)
Available from: 2013-09-23. Created: 2013-09-23. Last updated: 2015-09-22
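The Monte Carlo approach described in the abstract above can be sketched roughly as follows. This is a minimal illustration, not the report's actual experiment: the ideal pinhole setup (focal length 1, eye at the origin), the 0.005 alignment-noise scale, and the point counts are all assumptions chosen for the sketch.

```python
import numpy as np

def dlt(points3d, points2d):
    """Estimate a 3x4 projection matrix from 3D-2D correspondences
    with the basic Direct Linear Transformation."""
    A = []
    for (X, Y, Z), (u, v) in zip(points3d, points2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)  # singular vector of smallest singular value

def eyepoint(P):
    """The camera (eye) center is the right null vector of P."""
    _, _, Vt = np.linalg.svd(P)
    c = Vt[-1]
    return c[:3] / c[3]

rng = np.random.default_rng(0)
# correspondence points spread both laterally and in depth, eye at the origin
pts3d = rng.uniform([-1.0, -1.0, 2.0], [1.0, 1.0, 6.0], size=(50, 3))
pts2d = pts3d[:, :2] / pts3d[:, 2:3]   # ideal pinhole projection, f = 1

errs = []
for _ in range(200):
    # perturb the 2D alignments, mimicking human postural sway / aiming noise
    noisy = pts2d + rng.normal(scale=0.005, size=pts2d.shape)
    errs.append(np.abs(eyepoint(dlt(pts3d, noisy))))
mean_err = np.mean(errs, axis=0)       # per-axis eyepoint error (x, y, z)
```

Widening the depth range of `pts3d` and re-running illustrates the report's mitigation: a greater spread of correspondence points in depth tends to reduce the depth component of the eyepoint error.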
Axholt, M., Skoglund, M., Peterson, S., Cooper, M., Schön, T., Gustafsson, F., . . . Ellis, S. (2010). Optical See-Through Head Mounted Display: Direct Linear Transformation Calibration Robustness in the Presence of User Alignment Noise. In: Proceedings of the 54th Annual Meeting of the Human Factors and Ergonomics Society. Paper presented at 54th Annual Meeting of the Human Factors and Ergonomics Society, San Francisco, USA, 27 September-1 October, 2010.
Optical See-Through Head Mounted Display: Direct Linear Transformation Calibration Robustness in the Presence of User Alignment Noise
2010 (English). In: Proceedings of the 54th Annual Meeting of the Human Factors and Ergonomics Society, 2010. Conference paper, Published paper (Refereed)
Abstract [en]

The correct spatial registration between virtual and real objects in optical see-through augmented reality implies accurate estimates of the user's eyepoint relative to the location and orientation of the display surface. A common approach is to estimate the display parameters through a calibration procedure involving a subjective alignment exercise. Human postural sway and targeting precision contribute to imprecise alignments, which in turn adversely affect the display parameter estimation, resulting in registration errors between virtual and real objects. The technique commonly used has its origin in computer vision, and calibrates stationary cameras using hundreds of correspondence points collected instantaneously in one video frame, where precision is limited only by pixel quantization and image blur. Consequently, the input noise level is several orders of magnitude greater when a human operator manually collects correspondence points one by one. This paper investigates the effect of human alignment noise on view parameter estimation in an optical see-through head mounted display to determine how well a standard camera calibration method performs at greater noise levels than documented in the computer vision literature. Through Monte Carlo simulations we show that it is particularly difficult to estimate the user's eyepoint in depth, but that a greater distribution of correspondence points in depth helps mitigate the effects of human alignment noise.

Keywords
Head-mounted display, Calibration, Direct linear transform, Robustness
National Category
Control Engineering
Identifiers
urn:nbn:se:liu:diva-60435 (URN), 9780945289371 (ISBN)
Conference
54th Annual Meeting of the Human Factors and Ergonomics Society, San Francisco, USA, 27 September-1 October, 2010
Available from: 2010-10-13. Created: 2010-10-13. Last updated: 2015-09-22. Bibliographically approved
Peterson, S. D., Axholt, M., Cooper, M. & Ellis, S. R. (2009). Evaluation of Alternative Label Placement Techniques in Dynamic Virtual Environments. In: International Symposium on Smart Graphics (pp. 43-55). Berlin / Heidelberg: Springer
Evaluation of Alternative Label Placement Techniques in Dynamic Virtual Environments
2009 (English). In: International Symposium on Smart Graphics, Berlin / Heidelberg: Springer, 2009, p. 43-55. Conference paper, Published paper (Refereed)
Abstract [en]

This paper reports on an experiment comparing label placement techniques in a dynamic virtual environment rendered on a stereoscopic display. The labeled objects are in motion, so labels must continuously maintain separation to remain legible. The results from our user study show that traditional label placement algorithms, which always strive for full label separation in the 2D view plane, produce motion that disturbs the user in a visual search task. Alternative algorithms that maintain separation in only one spatial dimension are rated less disturbing, even when the traditional algorithms are modified to reduce the amount and salience of label motion. Maintaining depth separation of labels through stereoscopic disparity adjustments is judged the least disturbing, while such separation yields user performance similar to that of traditional algorithms. These results are important in the design of future 3D user interfaces, where disturbing or distracting motion due to object labeling should be avoided.

Place, publisher, year, edition, pages
Berlin / Heidelberg: Springer, 2009
Series
Lecture Notes in Computer Science, ISSN 0302-9743 ; 5531
Keywords
Label placement, user interfaces, stereoscopic displays, virtual reality, visual clutter
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-51074 (URN)10.1007/978-3-642-02115-2_4 (DOI)
Available from: 2009-10-15. Created: 2009-10-15. Last updated: 2018-01-12. Bibliographically approved
Peterson, S. D., Axholt, M. & Ellis, S. R. (2009). Objective and Subjective Assessment of Stereoscopically Separated Labels in Augmented Reality. Computers & Graphics, 33(1), 23-33
Open this publication in new window or tab >>Objective and Subjective Assessment of Stereoscopically Separated Labels in Augmented Reality
2009 (English). In: Computers & Graphics, ISSN 0097-8493, E-ISSN 1873-7684, Vol. 33, no 1, p. 23-33. Article in journal (Refereed). Published
Abstract [en]

We present a new technique for managing visual clutter caused by overlapping labels in complex information displays. This technique, label layering, utilizes stereoscopic disparity as a means to segregate labels in depth for increased legibility and clarity. By distributing overlapping labels in depth, we have found that selection time during a visual search task in situations with high levels of visual overlap is reduced by 4 s, or 24%. Our data show that, for practical benefit, the stereoscopically based depth order of the labels must be correlated with the distance order of their corresponding objects. An algorithm using our label layering technique could accordingly be an alternative to traditional label placement algorithms, which avoid label overlap at the cost of distracting view-plane motion, symbology dimming or label size reduction.

Place, publisher, year, edition, pages
Elsevier, 2009
Keywords
Label placement, User interfaces, Visual clutter, Augmented reality, Air traffic control
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-17515 (URN)10.1016/j.cag.2008.11.006 (DOI)
Available from: 2009-03-27. Created: 2009-03-27. Last updated: 2018-01-13. Bibliographically approved
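The paper's central constraint, that the stereoscopic depth order of the labels must follow the distance order of their objects, can be sketched as a tiny assignment step. The dict-based label records, the field names, and the 5-arcmin interlayer spacing below are illustrative assumptions for this sketch, not the paper's implementation.

```python
def assign_depth_layers(labels, interlayer_arcmin=5.0):
    """Map overlapping labels to stereoscopic disparity layers so that the
    label whose object is nearest also sits on the nearest layer, keeping
    the depth order of labels correlated with object distance order."""
    ordered = sorted(labels, key=lambda lab: lab["object_dist_m"])
    # layer i gets a relative disparity offset of i * interlayer_arcmin
    return {lab["name"]: i * interlayer_arcmin for i, lab in enumerate(ordered)}

# hypothetical air-traffic labels with their objects' distances
layers = assign_depth_layers([
    {"name": "AF123", "object_dist_m": 4200.0},
    {"name": "SK45",  "object_dist_m": 1800.0},
    {"name": "DL7",   "object_dist_m": 9100.0},
])
# SK45 -> 0.0, AF123 -> 5.0, DL7 -> 10.0 (arcmin, relative to nearest layer)
```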
Peterson, S. D. (2009). Stereoscopic Label Placement: Reducing Distraction and Ambiguity in Visually Cluttered Displays. (Doctoral dissertation). Linköping: Linköping University Electronic Press
Stereoscopic Label Placement: Reducing Distraction and Ambiguity in Visually Cluttered Displays
2009 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

With increasing information density and complexity, computer displays may become visually cluttered, adversely affecting overall usability. Text labels can significantly add to visual clutter in graphical user interfaces, but are generally kept legible through specific label placement algorithms that seek visual separation of labels and other objects in the 2D view plane. This work studies an alternative approach: can overlapping labels be visually segregated by distributing them in stereoscopic depth? The fact that we have two forward-looking eyes yields stereoscopic disparity: each eye has a slightly different perspective on objects in the visual field. Disparity is used for depth perception by the human visual system, and is therefore also provided by stereoscopic 3D displays to produce a sense of depth.

This work has shown that a stereoscopic label placement algorithm yields user performance comparable with existing algorithms that separate labels in the view plane. At the same time, such stereoscopic label placement is subjectively rated significantly less disturbing than traditional methods. Furthermore, it does not allow for potentially ambiguous spatial relationships between labels and background objects inherent to labels separated in the view plane. These findings are important for display systems where disturbance, distraction and ambiguity of the overlay can negatively impact safety and efficiency of the system, including the reference application of this work: an augmented vision system for Air Traffic Control towers.

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2009. p. 75
Series
Linköping Studies in Science and Technology. Dissertations, ISSN 0345-7524 ; 1293
Keywords
Label placement, user interfaces, stereoscopic displays, augmented reality, virtual reality, visual clutter
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-51742 (URN)978-91-7393-469-5 (ISBN)
Public defence
2009-12-18, K3, Kåkenhus, Campus Norrköping, Linköpings universitet, Norrköping, 14:15 (English)
Available from: 2009-12-07. Created: 2009-11-16. Last updated: 2020-02-19. Bibliographically approved
Axholt, M., Peterson, S. D. & Ellis, S. R. (2009). Visual Alignment Accuracy in Head Mounted Optical See-Through AR Displays: Distribution of Head Orientation Noise. In: Proceedings of the Human Factors and Ergonomics Society 53rd Annual Meeting 2009. Paper presented at 53rd Human Factors and Ergonomics Society Annual Meeting 2009, HFES 2009; San Antonio, TX; United States (pp. 2024-2028). San Antonio (TX), USA: Human Factors and Ergonomics Society
Visual Alignment Accuracy in Head Mounted Optical See-Through AR Displays: Distribution of Head Orientation Noise
2009 (English). In: Proceedings of the Human Factors and Ergonomics Society 53rd Annual Meeting 2009, San Antonio (TX), USA: Human Factors and Ergonomics Society, 2009, p. 2024-2028. Conference paper, Published paper (Refereed)
Abstract [en]

The mitigation of registration errors is a central challenge for improving the usability of Augmented Reality systems. While the technical achievements within tracking and display technology continue to improve the conditions for good registration, little research is directed towards understanding the user's visual alignment performance during the calibration process. This paper reports 12 standing subjects' visual alignment performance using an optical see-through head mounted display for viewing directions varied in azimuth (0°, ±30°, ±60°) and elevation (0°, ±10°). Although viewing direction has a statistically significant effect on the shape of the distribution, the effect is small and negligible for practical purposes and can be approximated to a circular distribution with a standard deviation of 0.2° for all viewing directions studied in this paper. In addition to quantifying head aiming accuracy with a head fixed cursor and illustrating the deteriorating accuracy of boresight calibration with increasing viewing direction extremity, the results are applicable for filter design determining the onset and end of head rotation.

Place, publisher, year, edition, pages
San Antonio (TX), USA: Human Factors and Ergonomics Society, 2009
National Category
Computer Engineering
Identifiers
urn:nbn:se:liu:diva-52854 (URN), 10.1177/154193120905302710 (DOI), 978-161567623-1 (ISBN)
Conference
53rd Human Factors and Ergonomics Society Annual Meeting 2009, HFES 2009; San Antonio, TX; United States
Available from: 2010-01-12. Created: 2010-01-12. Last updated: 2018-01-12. Bibliographically approved
Axholt, M., Peterson, S. D. & Ellis, S. R. (2009). Visual Alignment Precision in Optical See-Through AR Displays: Implications for Potential Accuracy. In: Proceedings of the ACM/IEEE Virtual Reality International Conference. Paper presented at ACM/IEEE Virtual Reality International Conference, 2009. Association for Computing Machinery (ACM)
Visual Alignment Precision in Optical See-Through AR Displays: Implications for Potential Accuracy
2009 (English). In: Proceedings of the ACM/IEEE Virtual Reality International Conference, Association for Computing Machinery (ACM), 2009. Conference paper, Published paper (Other academic)
Abstract [en]

The quality of visual registration achievable with an optical see-through head mounted display (HMD) ultimately depends on the user's targeting precision. This paper presents design guidelines for calibration procedures based on measurements of users' head stability during visual alignment with reference targets. Targeting data was collected from 12 standing subjects who aligned a head fixed cursor presented in a see-through HMD with background targets that varied in azimuth (0°, ±30°, ±60°) and elevation (0°, ±10°). Their data showed that: 1) both position and orientation data will need to be used to establish calibrations based on nearby reference targets, since eliminating body sway effects can improve calibration precision by a factor of 16 and eliminate apparent angular anisotropies; 2) compensation for body sway can speed the calibration by removing the need to wait for the body sway to abate; and 3) calibration precision can be less than 2 arcmin even for head directions rotated up to 60° with respect to the user's torso, provided body sway is corrected. Users of Augmented Reality (AR) applications overlooking large distances may avoid the need to correct for body sway by boresighting on markers at relatively long distances, >> 10 m. These recommendations contrast with those for heads-up displays using real images as discussed in previous papers.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2009
National Category
Computer Engineering
Identifiers
urn:nbn:se:liu:diva-52848 (URN)
Conference
ACM/IEEE Virtual Reality International Conference, 2009
Available from: 2010-01-12. Created: 2010-01-12. Last updated: 2018-01-12. Bibliographically approved
Peterson, S. D., Axholt, M., Cooper, M. & Ellis, S. R. (2009). Visual Clutter Management in Augmented Reality: Effects of Three Label Separation Methods on Spatial Judgments. In: IEEE Symposium on 3D User Interfaces (3DUI) (pp. 111-118). Lafayette (LA), USA: IEEE
Visual Clutter Management in Augmented Reality: Effects of Three Label Separation Methods on Spatial Judgments
2009 (English). In: IEEE Symposium on 3D User Interfaces (3DUI), Lafayette (LA), USA: IEEE, 2009, p. 111-118. Conference paper, Published paper (Refereed)
Abstract [en]

This paper reports an experiment comparing three label separation methods for reducing visual clutter in Augmented Reality (AR) displays. We contrasted two common methods of avoiding visual overlap by moving labels in the 2D view plane with a third that distributes overlapping labels in stereoscopic depth. The experiment measured user identification performance during spatial judgment tasks in static scenes. The three methods were compared with a control condition in which no label separation method was employed. The results showed significant performance improvements, generally 15-30%, for all three methods over the control; however, these methods were statistically indistinguishable from each other. In-depth analysis showed significant performance degradation when the 2D view plane methods produced potentially confusing spatial correlations between labels and the markers they designate. Stereoscopically separated labels were subjectively judged harder to read than view-plane separated labels. Since measured performance was affected both by label legibility and spatial correlation of labels and their designated objects, it is likely that the improved spatial correlation of stereoscopically separated labels and their designated objects has compensated for poorer stereoscopic text legibility. Future testing with dynamic scenes is expected to more clearly distinguish the three label separation techniques.

Place, publisher, year, edition, pages
Lafayette (LA), USA: IEEE, 2009
Keywords
Label placement, user interfaces, stereoscopic displays, augmented reality, visual clutter, information layering
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-51073 (URN)10.1109/3DUI.2009.4811215 (DOI)
Available from: 2009-10-15. Created: 2009-10-15. Last updated: 2018-01-12. Bibliographically approved
Peterson, S. D., Axholt, M. & Ellis, S. R. (2008). Comparing disparity based label segregation in augmented and virtual reality. In: ACM Symposium on Virtual Reality Software and Technology (VRST) (pp. 285-286). New York, NY, USA: ACM
Comparing disparity based label segregation in augmented and virtual reality
2008 (English). In: ACM Symposium on Virtual Reality Software and Technology (VRST), New York, NY, USA: ACM, 2008, p. 285-286. Conference paper, Published paper (Refereed)
Abstract [en]

Recent work has shown that overlapping labels in far-field AR environments can be successfully segregated by remapping them to predefined stereoscopic depth layers. User performance was found to be optimal when setting the interlayer disparity to 5-10 arcmin. The current paper investigates to what extent this label segregation technique, label layering, is affected by important perceptual defects in AR such as registration errors and mismatches in accommodation, visual resolution and contrast. A virtual environment matched to a corresponding AR condition but lacking these problems showed a reduction in average response time by 10%. However, the performance pattern for different label layering parameters was not significantly different in the AR and VR environments, showing robustness of this label segregation technique against such perceptual issues.

Place, publisher, year, edition, pages
New York, NY, USA: ACM, 2008
Keywords
label placement, mixed reality, stereoscopic displays, user interfaces, visual clutter
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-43264 (URN), 10.1145/1450579.1450655 (DOI), 73242 (Local ID), 73242 (Archive number), 73242 (OAI)
Available from: 2009-10-10. Created: 2009-10-10. Last updated: 2018-01-12
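For orientation, the 5-10 arcmin interlayer disparities discussed in the abstract above relate to physical depth through simple small-angle stereo geometry. The 6.5 cm interocular distance, the 1 m screen distance, and the 1.025 m layer distance in this sketch are illustrative assumptions, not values taken from the paper.

```python
import math

def disparity_arcmin(ipd_m, screen_dist_m, layer_dist_m):
    """Small-angle approximation of the angular disparity (in arcmin) of a
    point at layer_dist_m relative to the screen plane at screen_dist_m,
    for an observer with interocular distance ipd_m."""
    disp_rad = ipd_m * (1.0 / screen_dist_m - 1.0 / layer_dist_m)
    return math.degrees(disp_rad) * 60.0

# pushing a label layer from a 1 m screen plane back to about 1.025 m
d = disparity_arcmin(0.065, 1.0, 1.025)   # ≈ 5.45 arcmin
```

With these assumed viewing parameters, a 5-arcmin interlayer disparity corresponds to a depth step of only a few centimetres near the screen plane, which is why small disparity offsets suffice to segregate label layers.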