Toolset for Run-time Dataset Collection of Deep-scene Information
Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering. ORCID iD: 0000-0003-1367-1594
2020 (English). In: Symposium on Modelling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS), Springer, 2020, p. 224-236. Conference paper, Published paper (Refereed)
Abstract [en]

Virtual reality (VR) provides many exciting new application opportunities, but also presents new challenges. In contrast to 360° videos, which only allow a user to select their viewing direction, fully immersive VR lets users move around and interact with objects in the virtual world. To deliver such services most effectively, it is therefore important to understand how users move around in relation to such objects. In this paper, we present a methodology and software tool for generating run-time datasets capturing a user's interactions with such 3D environments, evaluate and compare different object identification methods that we implement within the tool, and use datasets collected with the tool to demonstrate example uses. The tool was developed in Unity and integrates easily with existing Unity applications through periodic calls that extract information about the environment using different ray-casting methods. The software tool and example datasets are made available with this paper.
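The periodic ray-casting calls described in the abstract can be illustrated with a minimal Unity C# sketch. This is not the paper's actual toolset or API; the class name, fields, and sampling logic below are assumptions, showing only how a script might periodically cast a ray along the user's viewing direction and record which object it hits.

```csharp
using UnityEngine;

// Hypothetical illustration only (not the paper's toolset): periodically cast a
// ray along the user's viewing direction and record which object it hits.
public class GazeSampler : MonoBehaviour
{
    public float sampleInterval = 0.1f; // seconds between samples (assumed value)
    public float maxDistance = 100f;    // maximum ray length (assumed value)

    void Start()
    {
        // Schedule periodic sampling, mirroring the "periodic calls" in the abstract.
        InvokeRepeating(nameof(Sample), 0f, sampleInterval);
    }

    void Sample()
    {
        Camera cam = Camera.main;
        if (cam == null) return;

        // Single forward ray from the headset camera; the paper evaluates several
        // ray-casting methods, and this is just the simplest case.
        Ray ray = new Ray(cam.transform.position, cam.transform.forward);
        if (Physics.Raycast(ray, out RaycastHit hit, maxDistance))
        {
            // The real toolset would append such samples to a run-time dataset;
            // here the identified object and hit point are simply logged.
            Debug.Log($"{Time.time:F3}s: {hit.collider.gameObject.name} at {hit.point}");
        }
    }
}
```

In the actual toolset, samples like these would be written to a dataset, and the single camera ray would be replaced or complemented by the different ray-casting methods the paper compares.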

Place, publisher, year, edition, pages
Springer, 2020. p. 224-236
Keywords [en]
Deep-scene data collection, Virtual reality, Unity, Light-weight ray-casting
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:liu:diva-179794
DOI: 10.1007/978-3-030-68110-4_15
Scopus ID: 2-s2.0-85101849782
ISBN: 9783030681098 (print)
OAI: oai:DiVA.org:liu-179794
DiVA, id: diva2:1599787
Conference
28th International Symposium, MASCOTS 2020, Nice, France, November 17–19, 2020
Funder
Swedish Research Council
Available from: 2021-10-01. Created: 2021-10-01. Last updated: 2021-10-29. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Aaro, Gustav; Roos, Daniel; Carlsson, Niklas
