A Crowdsourcing System for Integrated and Reproducible Evaluation in Scientific Visualization
Linköping University, Department of Science and Technology, Media and Information Technology; Linköping University, Faculty of Science & Engineering (Scientific Visualization Group). ORCID iD: 0000-0002-5693-2830
Linköping University, Department of Science and Technology, Media and Information Technology; Linköping University, Faculty of Science & Engineering (Scientific Visualization Group)
Visual Computing Research Group, Ulm University
2016 (English). In: 2016 IEEE Pacific Visualization Symposium (PacificVis), IEEE Computer Society, 2016, pp. 40-47. Conference paper (Refereed).
Abstract [en]

User evaluations have gained increasing importance in visualization research in recent years, as in many cases they are the only way to support the claims made by visualization researchers. Unfortunately, recent literature reviews show that, compared to algorithmic performance evaluations, the number of user evaluations is still very low. Reasons for this are the amount of time required to conduct such studies, together with the difficulties involved in participant recruitment and result reporting. While it has been shown that the quality of evaluation results and the simplified participant recruitment offered by crowdsourcing platforms make this technology a viable alternative to lab experiments for evaluating visualizations, the time required to conduct and report such evaluations remains very high. In this paper, we propose a software system that integrates the conducting, analysis, and reporting of crowdsourced user evaluations directly into the scientific visualization development process. With the proposed system, researchers can conduct and analyze quantitative evaluations on a large scale through an evaluation-centric user interface with only a few mouse clicks. It thus becomes possible to perform iterative evaluations during algorithm design, which potentially leads to better results than the time-consuming user evaluations traditionally conducted at the end of the design process. Furthermore, the system is built around a centralized database, which supports easy reuse of old evaluation designs and the reproduction of old evaluations with new or additional stimuli, both of which are driving challenges in scientific visualization research. We describe the system's design and the considerations made during the design process, and demonstrate the system by conducting three user evaluations, all of which have previously been published in the visualization literature.

Place, publisher, year, edition, pages
IEEE Computer Society, 2016, pp. 40-47.
National Category
Other Computer and Information Science
URN: urn:nbn:se:liu:diva-128702
DOI: 10.1109/PACIFICVIS.2016.7465249
ISBN: 9781509014514
OAI: diva2:931644
Pacific Visualization Symposium (PacificVis), 19-22 April 2016, Taipei, Taiwan
Available from: 2016-05-30. Created: 2016-05-30. Last updated: 2016-06-14. Bibliographically approved.

Open Access in DiVA

Fulltext: FULLTEXT01.pdf, 5472 kB, checksum SHA-512, type fulltext, mimetype application/pdf
Appendix: IMAGE01.pdf, 2373 kB, checksum SHA-512, type image, mimetype application/pdf

By author/editor: Englund, Rickard; Kottravel, Sathish
By organisation: Media and Information Technology; Faculty of Science & Engineering