Axholt, Magnus
Publications (10 of 17)
Axholt, M., Skoglund, M. A., O’Connell, S. D., Cooper, M. D., Ellis, S. R. & Ynnerman, A. (2011). Accuracy of Eyepoint Estimation in Optical See-Through Head-Mounted Displays Using the Single Point Active Alignment Method. Paper presented at IEEE Virtual Reality Conference 2012, Orange County (CA), USA.
Accuracy of Eyepoint Estimation in Optical See-Through Head-Mounted Displays Using the Single Point Active Alignment Method
2011 (English). Conference paper, Published paper (Other academic)
Abstract [en]

This paper studies the accuracy of the estimated eyepoint of an Optical See-Through Head-Mounted Display (OST HMD) calibrated using the Single Point Active Alignment Method (SPAAM). Quantitative evaluation of calibration procedures for OST HMDs is complicated as it is currently not possible to share the subject’s view. Temporarily replacing the subject’s eye with a camera during the calibration or evaluation stage has been proposed, but the uncertainty of a correct eyepoint estimation remains. In the experiment reported in this paper, subjects were used for all stages of calibration and the results were verified with a 3D measurement device. The nine participants constructed 25 visual alignments per calibration, after which the estimated pinhole camera model was decomposed into its intrinsic and extrinsic parameters using two common methods. Unique to this experiment, compared to previous evaluations, is the measurement device used to cup the subject’s eyeball. It measures the eyepoint location relative to the head tracker, thereby establishing the calibration accuracy of the estimated eyepoint location. As the results on accuracy are expressed as individual pinhole camera parameters, rather than as a compounded registration error, this paper complements previously published work on parameter variance: the former denotes bias, while the latter represents noise. Results indicate that the calibrated eyepoint is on average 5 cm away from its measured location and exhibits a vertical bias which potentially causes dipvergence in stereoscopic vision for objects located further away than 5.6 m. The paper closes with a discussion of the suitability of the traditional pinhole camera model for OST HMD calibration.
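For readers unfamiliar with the underlying computation, the following is a minimal sketch (Python with NumPy/SciPy) of the kind of pipeline the abstract describes: a Direct Linear Transformation estimate of the pinhole projection from SPAAM-style 3D-2D alignments, followed by decomposition into intrinsic and extrinsic parameters. The RQ decomposition shown is one standard decomposition method; the paper's two specific methods and its data are not reproduced here, and all names are illustrative.

```python
import numpy as np
from scipy.linalg import rq

def estimate_projection(world_pts, image_pts):
    """DLT estimate of a 3x4 projection matrix from n >= 6 correspondences."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    # least-squares solution: right singular vector of the smallest singular value
    return Vt[-1].reshape(3, 4)

def decompose(P):
    """Split P ~ K [R | -R C] into intrinsics K, rotation R, and eyepoint C."""
    M = P[:, :3]
    K, R = rq(M)                      # RQ factorization: M = K R
    S = np.diag(np.sign(np.diag(K)))  # force positive focal lengths
    K, R = K @ S, S @ R               # S is its own inverse, so K R is unchanged
    C = -np.linalg.inv(M) @ P[:, 3]   # camera center = estimated eyepoint
    return K / K[2, 2], R, C
```

With 25 alignments per calibration, `estimate_projection` would receive 25 world/image pairs; the returned `C` is the quantity the paper compares against the externally measured eyepoint location.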

Keywords
Accuracy, Single Point Active Alignment Method, Visual Alignment, Calibration, Augmented Reality
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-72054 (URN)
Conference
IEEE Virtual Reality Conference 2012, Orange County (CA), USA
Available from: 2011-11-14 Created: 2011-11-14 Last updated: 2015-09-22. Bibliographically approved
Axholt, M., Skoglund, M., O'Connell, S., Cooper, M., Ellis, S. & Ynnerman, A. (2011). Parameter Estimation Variance of the Single Point Active Alignment Method in Optical See-Through Head Mounted Display Calibration. In: Michitaka Hirose, Benjamin Lok, Aditi Majumder and Dieter Schmalstieg (Ed.), Proceedings of the IEEE Virtual Reality Conference. Paper presented at the IEEE Virtual Reality Conference, Singapore, Republic of Singapore (pp. 27-34). Piscataway, NJ, USA: IEEE
Parameter Estimation Variance of the Single Point Active Alignment Method in Optical See-Through Head Mounted Display Calibration
2011 (English). In: Proceedings of the IEEE Virtual Reality Conference / [ed] Michitaka Hirose, Benjamin Lok, Aditi Majumder and Dieter Schmalstieg, Piscataway, NJ, USA: IEEE, 2011, p. 27-34. Conference paper, Published paper (Refereed)
Abstract [en]

The parameter estimation variance of the Single Point Active Alignment Method (SPAAM) is studied through an experiment where 11 subjects are instructed to create alignments using an Optical See-Through Head Mounted Display (OSTHMD) such that three separate correspondence point distributions are acquired. Modeling the OSTHMD and the subject's dominant eye as a pinhole camera, the findings show that a correspondence point distribution well distributed along the user's line of sight yields less variant parameter estimates. The estimated eyepoint location is studied in particular detail. The findings of the experiment are complemented with simulated data which show that image plane orientation is sensitive to the number of correspondence points. The simulated data also illustrate some interesting properties of the numerical stability of the calibration problem as a function of alignment noise, number of correspondence points, and correspondence point distribution.
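As a rough illustration of the numerical-stability point, the sketch below (hypothetical parameters, not the paper's simulation) builds the DLT design matrix for synthetic correspondence points that are either confined to a thin depth slab or spread along the line of sight, and compares condition numbers; the depth-spread configuration is typically far better conditioned.

```python
import numpy as np

rng = np.random.default_rng(0)

def dlt_matrix(world_pts, image_pts):
    """Stack the two DLT equations contributed by each correspondence."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    return np.asarray(rows)

def synth(n, z_lo, z_hi, f=1000.0):
    """Random points in a viewing frustum, imaged by an ideal pinhole camera."""
    z = rng.uniform(z_lo, z_hi, n)
    x = rng.uniform(-0.3, 0.3, n) * z
    y = rng.uniform(-0.3, 0.3, n) * z
    pts = np.column_stack([x, y, z])
    img = f * pts[:, :2] / pts[:, 2:3]    # u = f x/z, v = f y/z
    return pts, img

for z_lo, z_hi in [(2.0, 2.1), (0.5, 10.0)]:   # thin slab vs spread in depth
    A = dlt_matrix(*synth(25, z_lo, z_hi))
    print(f"depth {z_lo}-{z_hi} m: cond(A) = {np.linalg.cond(A):.2e}")
```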

Place, publisher, year, edition, pages
Piscataway, NJ, USA: IEEE, 2011
Series
IEEE Virtual Reality Conference, ISSN 1087-8270
Keywords
single point active alignment method, camera resectioning, calibration, optical see-through head mounted display, augmented reality
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-67233 (URN); 10.1109/VR.2011.5759432 (DOI); 000297260400004 (ISI); 978-1-4577-0037-8 (ISBN, online); 978-1-4577-0039-2 (ISBN, print)
Conference
IEEE Virtual Reality Conference, Singapore, Republic of Singapore
Available from: 2011-04-04 Created: 2011-04-04 Last updated: 2015-09-22. Bibliographically approved
Axholt, M. (2011). Pinhole Camera Calibration in the Presence of Human Noise. (Doctoral dissertation). Linköping: Linköping University Electronic Press
Pinhole Camera Calibration in the Presence of Human Noise
2011 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

The research work presented in this thesis is concerned with the analysis of the human body as a calibration platform for estimation of a pinhole camera model used in Augmented Reality environments mediated through an Optical See-Through Head-Mounted Display. Since the quality of the calibration ultimately depends on a subject’s ability to construct visual alignments, the research effort is initially centered on user studies investigating human-induced noise, such as postural sway and head aiming precision. Knowledge about subject behavior is then applied to a sensitivity analysis in which simulations are used to determine the impact of user noise on camera parameter estimation.

Quantitative evaluation of the calibration procedure is challenging since the current state of the technology does not permit access to the user’s view and measurements in the image plane as seen by the user. In an attempt to circumvent this problem, researchers have previously placed a camera in the eye socket of a mannequin and performed both calibration and evaluation using the auxiliary signal from the camera. However, such a method does not reflect the impact of human noise during the calibration stage, and the calibration is not transferable to a human as the eyepoints of the mannequin and the intended user may not coincide. The experiments performed in this thesis use human subjects for all stages of calibration and evaluation. Moreover, some of the measurable camera parameters are verified with an external reference, addressing not only calibration precision but also accuracy.

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2011. p. 113
Series
Linköping Studies in Science and Technology. Dissertations, ISSN 0345-7524 ; 1402
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-72055 (URN); 978-91-7393-053-6 (ISBN)
Public defence
2011-11-04, Domteatern, Visualiseringscenter C, Kungsgatan 54, Norrköping, 09:30 (English)
Available from: 2011-11-14 Created: 2011-11-14 Last updated: 2019-12-19. Bibliographically approved
Peterson, S. D., Axholt, M., Cooper, M. & Ellis, S. R. (2010). Detection Thresholds for Label Motion in Visually Cluttered Displays. In: IEEE Virtual Reality Conference (VR), 2010. Paper presented at IEEE Virtual Reality Conference (VR), Waltham, MA, USA, 20-24 March 2010 (pp. 203-206). Piscataway, NJ, USA: IEEE
Detection Thresholds for Label Motion in Visually Cluttered Displays
2010 (English). In: IEEE Virtual Reality Conference (VR), 2010, Piscataway, NJ, USA: IEEE, 2010, p. 203-206. Conference paper, Published paper (Refereed)
Abstract [en]

While label placement algorithms are generally successful in managing visual clutter by preventing label overlap, they can also cause significant label movement in dynamic displays. This study investigates motion detection thresholds for various types of label movement in realistic and complex virtual environments, which can inform the design of less salient and less disturbing placement algorithms. Our results show that label movement in stereoscopic depth is less noticeable than the similar lateral monoscopic movement inherent to 2D label placement algorithms. Furthermore, label movement can be introduced more readily into the visual periphery (over 15° eccentricity) because of reduced sensitivity in this region. Moreover, under the realistic viewing conditions that we used, motion of isolated labels is more easily detected than that of overlapping labels. This perhaps counterintuitive finding may be explained by visual masking due to the visual clutter arising from the label overlap. The quantitative description of the findings presented in this paper should be useful not only for label placement applications, but also for any cluttered AR or VR application in which designers wish to control the users’ visual attention, making text labels more or less noticeable as needed.

Place, publisher, year, edition, pages
Piscataway, NJ, USA: IEEE, 2010
Series
IEEE Virtual Reality Annual International Symposium, ISSN 1087-8270
Keywords
H.5.2 [Information Systems], User Interfaces, I.3 [Computing Methodologies], Computer Graphics
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-51741 (URN); 10.1109/VR.2010.5444788 (DOI); 000287516000034 (ISI); 978-1-4244-6237-7 (ISBN); 978-1-4244-6236-0 (ISBN)
Conference
IEEE Virtual Reality Conference (VR), Waltham, MA, USA, 20-24 March 2010
Available from: 2009-11-16 Created: 2009-11-16 Last updated: 2018-01-12. Bibliographically approved
Axholt, M., Skoglund, M., Peterson, S., Cooper, M., Schön, T., Gustafsson, F., . . . Ellis, S. (2010). Optical See-Through Head Mounted Display: Direct Linear Transformation Calibration Robustness in the Presence of User Alignment Noise. Linköping: Linköping University Electronic Press
Optical See-Through Head Mounted Display: Direct Linear Transformation Calibration Robustness in the Presence of User Alignment Noise
2010 (English). Report (Other academic)
Abstract [en]

The correct spatial registration between virtual and real objects in optical see-through augmented reality implies accurate estimates of the user’s eyepoint relative to the location and orientation of the display surface. A common approach is to estimate the display parameters through a calibration procedure involving a subjective alignment exercise. Human postural sway and targeting precision contribute to imprecise alignments, which in turn adversely affect the display parameter estimation, resulting in registration errors between virtual and real objects. The technique commonly used has its origin in computer vision, and calibrates stationary cameras using hundreds of correspondence points collected instantaneously in one video frame, where precision is limited only by pixel quantization and image blur. In contrast, the input noise level is several orders of magnitude greater when a human operator manually collects correspondence points one by one. This paper investigates the effect of human alignment noise on view parameter estimation in an optical see-through head mounted display to determine how well a standard camera calibration method performs at greater noise levels than documented in the computer vision literature. Through Monte Carlo simulations we show that it is particularly difficult to estimate the user’s eyepoint in depth, but that a greater distribution of correspondence points in depth helps mitigate the effects of human alignment noise.
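A toy Monte Carlo run in the spirit of the abstract might look like the following sketch; the geometry, noise level, and trial count are assumptions for illustration, not the paper's simulation parameters. It perturbs synthetic 2D alignments with human-scale noise and observes how the recovered eyepoint scatters per axis.

```python
import numpy as np

rng = np.random.default_rng(1)

def dlt_eyepoint(world, img):
    """DLT estimate of the camera center (eyepoint) from correspondences."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world, img):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    P = Vt[-1].reshape(3, 4)
    return -np.linalg.inv(P[:, :3]) @ P[:, 3]

# ideal pinhole at the origin looking down +z; the true eyepoint is (0, 0, 0)
world = rng.uniform([-1.0, -1.0, 2.0], [1.0, 1.0, 6.0], size=(25, 3))
img = 1200.0 * world[:, :2] / world[:, 2:3]

errors = []
for _ in range(1000):
    noisy = img + rng.normal(0.0, 4.0, img.shape)   # ~4 px alignment noise
    errors.append(dlt_eyepoint(world, noisy))
errors = np.asarray(errors)
print("per-axis eyepoint std [m]:", errors.std(axis=0))  # z is often worst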

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2010. p. 7
Series
LiTH-ISY-R, ISSN 1400-3902 ; 3003
Keywords
Head-mounted display, Calibration, Direct linear transform, Robustness
National Category
Control Engineering
Identifiers
urn:nbn:se:liu:diva-97748 (URN); LiTH-ISY-R-3003 (ISRN)
Available from: 2013-09-23 Created: 2013-09-23 Last updated: 2015-09-22
Axholt, M., Skoglund, M., Peterson, S., Cooper, M., Schön, T., Gustafsson, F., . . . Ellis, S. (2010). Optical See-Through Head Mounted Display: Direct Linear Transformation Calibration Robustness in the Presence of User Alignment Noise. In: Proceedings of the 54th Annual Meeting of the Human Factors and Ergonomics Society. Paper presented at the 54th Annual Meeting of the Human Factors and Ergonomics Society, San Francisco, USA, 27 September-1 October, 2010.
Optical See-Through Head Mounted Display: Direct Linear Transformation Calibration Robustness in the Presence of User Alignment Noise
2010 (English). In: Proceedings of the 54th Annual Meeting of the Human Factors and Ergonomics Society, 2010. Conference paper, Published paper (Refereed)
Abstract [en]

The correct spatial registration between virtual and real objects in optical see-through augmented reality implies accurate estimates of the user’s eyepoint relative to the location and orientation of the display surface. A common approach is to estimate the display parameters through a calibration procedure involving a subjective alignment exercise. Human postural sway and targeting precision contribute to imprecise alignments, which in turn adversely affect the display parameter estimation, resulting in registration errors between virtual and real objects. The technique commonly used has its origin in computer vision, and calibrates stationary cameras using hundreds of correspondence points collected instantaneously in one video frame, where precision is limited only by pixel quantization and image blur. In contrast, the input noise level is several orders of magnitude greater when a human operator manually collects correspondence points one by one. This paper investigates the effect of human alignment noise on view parameter estimation in an optical see-through head mounted display to determine how well a standard camera calibration method performs at greater noise levels than documented in the computer vision literature. Through Monte Carlo simulations we show that it is particularly difficult to estimate the user’s eyepoint in depth, but that a greater distribution of correspondence points in depth helps mitigate the effects of human alignment noise.

Keywords
Head-mounted display, Calibration, Direct linear transform, Robustness
National Category
Control Engineering
Identifiers
urn:nbn:se:liu:diva-60435 (URN); 9780945289371 (ISBN)
Conference
54th Annual Meeting of the Human Factors and Ergonomics Society, San Francisco, USA, 27 September-1 October, 2010
Available from: 2010-10-13 Created: 2010-10-13 Last updated: 2015-09-22. Bibliographically approved
Peterson, S. D., Axholt, M., Cooper, M. & Ellis, S. R. (2009). Evaluation of Alternative Label Placement Techniques in Dynamic Virtual Environments. In: International Symposium on Smart Graphics (pp. 43-55). Berlin / Heidelberg: Springer
Evaluation of Alternative Label Placement Techniques in Dynamic Virtual Environments
2009 (English). In: International Symposium on Smart Graphics, Berlin / Heidelberg: Springer, 2009, p. 43-55. Conference paper, Published paper (Refereed)
Abstract [en]

This paper reports on an experiment comparing label placement techniques in a dynamic virtual environment rendered on a stereoscopic display. The labeled objects are in motion, and thus labels need to continuously maintain separation for legibility. The results from our user study show that traditional label placement algorithms, which always strive for full label separation in the 2D view plane, produce motion that disturbs the user in a visual search task. Alternative algorithms maintaining separation in only one spatial dimension are rated less disturbing, even though several modifications are made to traditional algorithms for reducing the amount and salience of label motion. Maintaining depth separation of labels through stereoscopic disparity adjustments is judged the least disturbing, while such separation yields similar user performance to traditional algorithms. These results are important in the design of future 3D user interfaces, where disturbing or distracting motion due to object labeling should be avoided.

Place, publisher, year, edition, pages
Berlin / Heidelberg: Springer, 2009
Series
Lecture Notes in Computer Science, ISSN 0302-9743 ; 5531
Keywords
Label placement, user interfaces, stereoscopic displays, virtual reality, visual clutter
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-51074 (URN); 10.1007/978-3-642-02115-2_4 (DOI)
Available from: 2009-10-15 Created: 2009-10-15 Last updated: 2018-01-12. Bibliographically approved
Peterson, S. D., Axholt, M. & Ellis, S. R. (2009). Objective and Subjective Assessment of Stereoscopically Separated Labels in Augmented Reality. Computers & Graphics, 33(1), 23-33
Objective and Subjective Assessment of Stereoscopically Separated Labels in Augmented Reality
2009 (English). In: Computers & Graphics, ISSN 0097-8493, E-ISSN 1873-7684, Vol. 33, no. 1, p. 23-33. Article in journal (Refereed), Published
Abstract [en]

We present a new technique for managing visual clutter caused by overlapping labels in complex information displays. This technique, label layering, utilizes stereoscopic disparity as a means to segregate labels in depth for increased legibility and clarity. By distributing overlapping labels in depth, we have found that selection time during a visual search task in situations with high levels of visual overlap is reduced by 4 s, or 24%. Our data show that the stereoscopically based depth order of the labels must be correlated with the distance order of their corresponding objects if practical benefits are to be realized. An algorithm using our label layering technique could accordingly be an alternative to traditional label placement algorithms, which avoid label overlap at the cost of distracting view plane motion, symbology dimming, or label size reduction.
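To make the label layering idea concrete, here is a hypothetical sketch: labels receive stereoscopic screen parallax offsets whose depth order follows the distance order of their objects, the requirement the abstract identifies. The parallax expression is standard screen-plane stereo geometry; the IPD, screen distance, and depth range values are assumptions, not the paper's.

```python
from dataclasses import dataclass

IPD_M = 0.065        # interocular distance in metres (assumed)
SCREEN_M = 0.7       # viewer-to-screen distance in metres (assumed)

@dataclass
class Label:
    text: str
    object_dist_m: float        # distance of the labeled object
    depth_m: float = SCREEN_M   # stereoscopic depth assigned to the label

def parallax(depth_m, ipd=IPD_M, screen=SCREEN_M):
    """Signed screen parallax: positive behind the screen plane, negative in front."""
    return ipd * (depth_m - screen) / depth_m

def layer(labels, near=0.6, far=0.9):
    """Spread labels over [near, far] metres so that label depth order
    matches object distance order (the paper's key requirement)."""
    ordered = sorted(labels, key=lambda lab: lab.object_dist_m)
    step = (far - near) / max(len(ordered) - 1, 1)
    for i, lab in enumerate(ordered):
        lab.depth_m = near + i * step
    return ordered

labels = [Label("AF1423", 8000), Label("SK912", 3000), Label("DL77", 12000)]
for lab in layer(labels):
    print(f"{lab.text}: depth {lab.depth_m:.2f} m, "
          f"parallax {1000 * parallax(lab.depth_m):+.2f} mm")
```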

Place, publisher, year, edition, pages
Elsevier, 2009
Keywords
Label placement, User interfaces, Visual clutter, Augmented reality, Air traffic control
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-17515 (URN); 10.1016/j.cag.2008.11.006 (DOI)
Available from: 2009-03-27 Created: 2009-03-27 Last updated: 2018-01-13. Bibliographically approved
Axholt, M., Peterson, S. D. & Ellis, S. R. (2009). Visual Alignment Accuracy in Head Mounted Optical See-Through AR Displays: Distribution of Head Orientation Noise. In: Proceedings of the Human Factors and Ergonomics Society 53rd Annual Meeting 2009. Paper presented at the 53rd Human Factors and Ergonomics Society Annual Meeting 2009, HFES 2009, San Antonio, TX, United States (pp. 2024-2028). San Antonio (TX), USA: Human Factors and Ergonomics Society
Visual Alignment Accuracy in Head Mounted Optical See-Through AR Displays: Distribution of Head Orientation Noise
2009 (English). In: Proceedings of the Human Factors and Ergonomics Society 53rd Annual Meeting 2009, San Antonio (TX), USA: Human Factors and Ergonomics Society, 2009, p. 2024-2028. Conference paper, Published paper (Refereed)
Abstract [en]

The mitigation of registration errors is a central challenge for improving the usability of Augmented Reality systems. While the technical achievements within tracking and display technology continue to improve the conditions for good registration, little research is directed towards understanding the user’s visual alignment performance during the calibration process. This paper reports 12 standing subjects’ visual alignment performance using an optical see-through head mounted display for viewing directions varied in azimuth (0°, ±30°, ±60°) and elevation (0°, ±10°). Although viewing direction has a statistically significant effect on the shape of the distribution, the effect is small and negligible for practical purposes, and the distribution can be approximated as circular with a standard deviation of 0.2° for all viewing directions studied in this paper. In addition to quantifying head aiming accuracy with a head fixed cursor and illustrating the deteriorating accuracy of boresight calibration with increasing viewing direction extremity, the results are applicable to filter design for determining the onset and end of head rotation.
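The summary statistic reported above can be illustrated with a short sketch: given repeated azimuth/elevation aiming errors, near-equal per-axis standard deviations and near-zero correlation justify the circular approximation. The data below is synthetic, generated at the paper's reported 0.2° level, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(2)
# synthetic aiming errors in degrees: (azimuth, elevation) per alignment
samples = rng.normal(0.0, 0.2, size=(500, 2))

centered = samples - samples.mean(axis=0)
std_az, std_el = centered.std(axis=0, ddof=1)
corr = np.corrcoef(centered.T)[0, 1]

print(f"azimuth SD: {std_az:.3f} deg, elevation SD: {std_el:.3f} deg")
print(f"azimuth/elevation correlation: {corr:+.3f}")
# near-equal SDs and near-zero correlation support treating the spread
# as a circular distribution, as the paper's 0.2 deg approximation does
```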

Place, publisher, year, edition, pages
San Antonio (TX), USA: Human Factors and Ergonomics Society, 2009
National Category
Computer Engineering
Identifiers
urn:nbn:se:liu:diva-52854 (URN); 10.1177/154193120905302710 (DOI); 978-161567623-1 (ISBN)
Conference
53rd Human Factors and Ergonomics Society Annual Meeting 2009, HFES 2009; San Antonio, TX; United States
Available from: 2010-01-12 Created: 2010-01-12 Last updated: 2018-01-12. Bibliographically approved
Axholt, M., Peterson, S. D. & Ellis, S. R. (2009). Visual Alignment Precision in Optical See-Through AR Displays: Implications for Potential Accuracy. In: Proceedings of the ACM/IEEE Virtual Reality International Conference. Paper presented at the ACM/IEEE Virtual Reality International Conference, 2009. Association for Computing Machinery (ACM)
Visual Alignment Precision in Optical See-Through AR Displays: Implications for Potential Accuracy
2009 (English). In: Proceedings of the ACM/IEEE Virtual Reality Conference, Association for Computing Machinery (ACM), 2009. Conference paper, Published paper (Other academic)
Abstract [en]

The quality of visual registration achievable with an optical see-through head mounted display (HMD) ultimately depends on the user’s targeting precision. This paper presents design guidelines for calibration procedures based on measurements of users’ head stability during visual alignment with reference targets. Targeting data was collected from 12 standing subjects who aligned a head fixed cursor presented in a see-through HMD with background targets that varied in azimuth (0°, ±30°, ±60°) and elevation (0°, ±10°). Their data showed that: 1) both position and orientation data will need to be used to establish calibrations based on nearby reference targets, since eliminating body sway effects can improve calibration precision by a factor of 16 and eliminate apparent angular anisotropies; 2) compensation for body sway can speed the calibration by removing the need to wait for the body sway to abate; and 3) calibration precision can be less than 2 arcmin even for head directions rotated up to 60° with respect to the user’s torso, provided body sway is corrected. Users of Augmented Reality (AR) applications overlooking large distances may avoid the need to correct for body sway by boresighting on markers at relatively long distances, >> 10 m. These recommendations contrast with those for heads-up displays using real images as discussed in previous papers.
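The geometric intuition behind the final recommendation can be shown in a few lines: positional sway of amplitude s displaces a boresight alignment on a target at distance d by roughly atan(s/d), so markers well beyond 10 m shrink the sway-induced angular error toward the arcminute regime. The sway amplitude used below is an assumed value for illustration.

```python
import math

SWAY_M = 0.02   # assumed postural sway amplitude, ~2 cm

for dist_m in [1, 5, 10, 50, 100]:
    # angular alignment error caused by translating the eye by SWAY_M
    # while boresighting a marker at dist_m
    err_deg = math.degrees(math.atan2(SWAY_M, dist_m))
    print(f"marker at {dist_m:>3} m: sway-induced error ~ {60 * err_deg:5.1f} arcmin")
```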

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2009
National Category
Computer Engineering
Identifiers
urn:nbn:se:liu:diva-52848 (URN)
Conference
ACM/IEEE Virtual Reality International Conference, 2009
Available from: 2010-01-12 Created: 2010-01-12 Last updated: 2018-01-12. Bibliographically approved