liu.se: Search for publications in DiVA
1 - 50 of 78
  • 1.
    Bae, S. Sandra
    et al.
    Univ Colorado, CO 80309 USA.
    Fujiwara, Takanori
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Do, Ellen Yi-Luen
    Univ Colorado, CO 80309 USA.
    Rivera, Michael L.
    Univ Colorado, CO 80309 USA.
    Szafir, Danielle Albers
    Univ North Carolina Chapel Hill, NC USA.
    A Computational Design Pipeline to Fabricate Sensing Network Physicalizations (2024). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 30, no 1, p. 913-923. Article in journal (Refereed)
    Abstract [en]

    Interaction is critical for data analysis and sensemaking. However, designing interactive physicalizations is challenging as it requires cross-disciplinary knowledge in visualization, fabrication, and electronics. Interactive physicalizations are typically produced in an unstructured manner, resulting in unique solutions for a specific dataset, problem, or interaction that cannot be easily extended or adapted to new scenarios or future physicalizations. To mitigate these challenges, we introduce a computational design pipeline to 3D print network physicalizations with integrated sensing capabilities. Networks are ubiquitous, yet their complex geometry also requires significant engineering considerations to provide intuitive, effective interactions for exploration. Using our pipeline, designers can readily produce network physicalizations supporting selection, the most critical atomic operation for interaction, by touch through capacitive sensing and computational inference. Our computational design pipeline introduces a new design paradigm by concurrently considering the form and interactivity of a physicalization within one cohesive fabrication workflow. We evaluate our approach using (i) computational evaluations, (ii) three usage scenarios focusing on general visualization tasks, and (iii) expert interviews. The design paradigm introduced by our pipeline can lower barriers to physicalization research, creation, and adoption.
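The selection-by-touch mechanism this abstract describes can be illustrated with a minimal sketch (function names and the noise threshold here are illustrative, not the paper's implementation): a touch is attributed to the network node whose capacitance reading deviates most from its resting baseline.

```python
def infer_touched_node(baseline, reading, threshold=5.0):
    """Attribute a touch to the node whose capacitance reading deviates
    most from its resting baseline.

    baseline, reading: dicts mapping node id -> capacitance value.
    Returns the touched node id, or None if no deviation exceeds the
    noise threshold.
    """
    deltas = {node: reading[node] - baseline[node] for node in baseline}
    candidate = max(deltas, key=deltas.get)
    return candidate if deltas[candidate] >= threshold else None

# Resting values for a three-node physicalization, then a touch on node "b".
baseline = {"a": 100.0, "b": 102.0, "c": 98.0}
reading = {"a": 101.0, "b": 140.0, "c": 99.0}
print(infer_touched_node(baseline, reading))  # → b
```

A real pipeline would additionally calibrate baselines over time and disambiguate adjacent electrodes, which is where the paper's computational inference comes in.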

  • 2.
    Bladin, Kalle
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Axelsson, Emil
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Broberg, Erik
    Linköping University, Faculty of Science & Engineering.
    Emmart, Carter
    Amer Museum Nat Hist, NY 10024 USA.
    Ljung, Patric
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Bock, Alexander
    Linköping University, Department of Science and Technology. Linköping University, Faculty of Science & Engineering. NYU, NY 10003 USA.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Globe Browsing: Contextualized Spatio-Temporal Planetary Surface Visualization (2018). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 24, no 1, p. 802-811. Article in journal (Refereed)
    Abstract [en]

    Results of planetary mapping are often shared openly for use in scientific research and mission planning. In its raw format, however, the data is not accessible to non-experts due to the difficulty in grasping the context and the intricate acquisition process. We present work on tailoring and integration of multiple data processing and visualization methods to interactively contextualize geospatial surface data of celestial bodies for use in science communication. As our approach handles dynamic data sources, streamed from online repositories, we are significantly shortening the time between discovery and dissemination of data and results. We describe the image acquisition pipeline, the pre-processing steps to derive a 2.5D terrain, and a chunked level-of-detail, out-of-core rendering approach to enable interactive exploration of global maps and high-resolution digital terrain models. The results are demonstrated for three different celestial bodies. The first case addresses high-resolution map data on the surface of Mars. A second case shows dynamic processes, such as concurrent weather conditions on Earth, that require temporal datasets. As a final example we use data from the New Horizons spacecraft which acquired images during a single flyby of Pluto. We visualize the acquisition process as well as the resulting surface data. Our work has been implemented in the OpenSpace software [8], which enables interactive presentations in a range of environments such as immersive dome theaters, interactive touch tables, and virtual reality headsets.
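The chunked level-of-detail selection described above can be sketched roughly as follows (a simplified stand-in: a real globe renderer projects each chunk's geometric error through the full camera model rather than dividing by distance, and streams chunk data out-of-core):

```python
def select_chunks(chunk, camera_distance_fn, pixel_tolerance=1.0):
    """Recursively select quadtree chunks whose projected geometric error
    is within tolerance; refine (recurse into children) otherwise.

    chunk: dict with keys "error" (geometric error in meters), "center",
           and optional "children" (list of sub-chunks).
    camera_distance_fn: maps a chunk center to its distance from the camera.
    Screen-space error here is simply error / distance -- a stand-in for
    the full perspective projection used in a real terrain renderer.
    """
    screen_error = chunk["error"] / max(camera_distance_fn(chunk["center"]), 1e-9)
    if screen_error <= pixel_tolerance or not chunk.get("children"):
        return [chunk]  # accurate enough, or no finer data available
    selected = []
    for child in chunk["children"]:
        selected += select_chunks(child, camera_distance_fn, pixel_tolerance)
    return selected
```

Close to the surface the recursion descends to high-resolution leaves; from far away the coarse root chunk alone passes the tolerance test.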

  • 3.
    Bock, Alexander
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. University of Utah, USA.
    Axelsson, Emil
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Costa, Jonathas
    New York University, USA.
    Payne, Gene
    University of Utah, USA.
    Acinapura, Micah
    American Museum of Natural History, USA.
    Trakinski, Vivian
    American Museum of Natural History, USA.
    Emmart, Carter
    American Museum of Natural History, USA.
    Silva, Cláudio
    New York University, USA.
    Hansen, Charles
    University of Utah, USA.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV). University of Utah, USA.
    OpenSpace: A System for Astrographics (2020). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 26, no 1, p. 633-642. Article in journal (Refereed)
    Abstract [en]

    Human knowledge about the cosmos is rapidly increasing as instruments and simulations are generating new data supporting the formation of theory and understanding of the vastness and complexity of the universe. OpenSpace is a software system that takes on the mission of providing an integrated view of all these sources of data and supports interactive exploration of the known universe from the millimeter scale showing instruments on spacecraft to billions of light years when visualizing the early universe. The ambition is to support research in astronomy and space exploration, science communication at museums and planetariums, as well as bringing exploratory astrographics to the classroom. There is a multitude of challenges that need to be met in reaching this goal such as the data variety, multiple spatio-temporal scales, collaboration capabilities, etc. Furthermore, the system has to be flexible and modular to enable rapid prototyping and inclusion of new research results or space mission data and thereby shorten the time from discovery to dissemination. To support the different use cases the system has to be hardware agnostic and support a range of platforms and interaction paradigms. In this paper, we describe how OpenSpace meets these challenges in an open source effort that is paving the path for the next generation of interactive astrographics.

  • 4.
    Bock, Alexander
    et al.
    New York University, USA.
    Doraiswamy, Harish
    New York University, USA.
    Silva, Claudio
    New York University, USA.
    Summers, Adam
    University of Washington, USA.
    TopoAngler: Interactive Topology-Based Extraction of Fishes (2018). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 24, no 1, p. 812-821. Article in journal (Refereed)
    Abstract [en]

    We present TopoAngler, a visualization framework that enables an interactive user-guided segmentation of fishes contained in a micro-CT scan. The inherent noise in the CT scan coupled with the often disconnected (and sometimes broken) skeletal structure of fishes makes an automatic segmentation of the volume impractical. To overcome this, our framework combines techniques from computational topology with an interactive visual interface, enabling the human-in-the-loop to effectively extract fishes from the volume. In the first step, the join tree of the input is used to create a hierarchical segmentation of the volume. Through the use of linked views, the visual interface then allows users to interactively explore this hierarchy, and gather parts of individual fishes into a coherent sub-volume, thus reconstructing entire fishes. Our framework was primarily developed for its application to CT scans of fishes, generated as part of the ScanAllFish project, through close collaboration with their lead scientist. However, we expect it to also be applicable in other biological applications where a single dataset contains multiple specimens; a common routine that is now widely followed in laboratories to increase throughput of expensive CT scanners.
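The join-tree hierarchy at the core of this approach can be illustrated on a 1D toy field (illustrative only; the paper operates on 3D micro-CT volumes): components of a superlevel set appear at local maxima, and lowering the threshold merges them — recording those merge events yields the tree.

```python
def superlevel_components(values, threshold):
    """Label connected components of {i : values[i] >= threshold} on a
    1D grid -- the building block of the join-tree hierarchy. Sweeping
    the threshold downward merges components, and recording the merge
    events produces the tree. (1D stand-in for the 3D volume case.)
    """
    labels = [None] * len(values)
    current = -1
    for i, v in enumerate(values):
        if v >= threshold:
            if i == 0 or values[i - 1] < threshold:
                current += 1  # a new component starts here
            labels[i] = current
    return labels

# Two bright "fish" separated by low-intensity background:
field = [0, 5, 6, 1, 0, 7, 8, 2]
print(superlevel_components(field, 4))  # → [None, 0, 0, None, None, 1, 1, None]
```

At threshold 4 the two fish are distinct components; dropping the threshold to 0 would merge everything into one, which is exactly the ambiguity the interactive interface lets the user resolve.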

  • 5.
    Bock, Alexander
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Sundén, Erik
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Liu, Bingchen
    University of Auckland, New Zealand .
    Wuensche, Burkhard
    University of Auckland, New Zealand .
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Coherency-Based Curve Compression for High-Order Finite Element Model Visualization (2012). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 18, no 12, p. 2315-2324. Article in journal (Refereed)
    Abstract [en]

    Finite element (FE) models are frequently used in engineering and life sciences within time-consuming simulations. In contrast with the regular grid structure facilitated by volumetric data sets, as used in medicine or geosciences, FE models are defined over a non-uniform grid. Elements can have curved faces and their interior can be defined through high-order basis functions, which pose additional challenges when visualizing these models. During ray-casting, the uniformly distributed sample points along each viewing ray must be transformed into the material space defined within each element. The computational complexity of this transformation makes a straightforward approach inadequate for interactive data exploration. In this paper, we introduce a novel coherency-based method which supports the interactive exploration of FE models by decoupling the expensive world-to-material space transformation from the rendering stage, thereby allowing it to be performed within a precomputation stage. Therefore, our approach computes view-independent proxy rays in material space, which are clustered to facilitate data reduction. During rendering, these proxy rays are accessed, and it becomes possible to visually analyze high-order FE models at interactive frame rates, even when they are time-varying or consist of multiple modalities. Within this paper, we provide the necessary background about the FE data, describe our decoupling method, and introduce our interactive rendering algorithm. Furthermore, we provide visual results and analyze the error introduced by the presented approach.
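The clustering step used for data reduction can be sketched with a greedy scheme (illustrative only; the paper's coherency-based compression of material-space proxy rays is considerably more sophisticated): nearby precomputed rays share one representative, so only the representatives need to be stored and accessed during rendering.

```python
def cluster_proxy_rays(rays, tolerance):
    """Greedy clustering for data reduction: each ray (a flat list of
    sample coordinates) joins the first representative within
    `tolerance` (mean pointwise distance); otherwise it becomes a new
    representative itself.
    Returns (representatives, assignment) where assignment[i] is the
    representative index for rays[i].
    """
    representatives = []
    assignment = []
    for ray in rays:
        for idx, rep in enumerate(representatives):
            d = sum(abs(a - b) for a, b in zip(ray, rep)) / len(ray)
            if d <= tolerance:
                assignment.append(idx)
                break
        else:
            representatives.append(ray)
            assignment.append(len(representatives) - 1)
    return representatives, assignment
```

Rendering then looks up a ray's representative instead of re-evaluating the expensive world-to-material transformation, which is the decoupling the paper exploits.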

  • 6.
    Brodersen, Anders
    et al.
    University of Aarhus.
    Museth, Ken
    Digital Domain.
    Porumbescu, Serban
    University of California.
    Budge, Brian
    University of California.
    Geometric Texturing Using Level Sets (2008). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 14, no 2, p. 277-288. Article in journal (Refereed)
    Abstract [en]

    We present techniques for warping and blending (or subtracting) geometric textures onto surfaces represented by high resolution level sets. The geometric texture itself can be represented either explicitly as a polygonal mesh or implicitly as a level set. Unlike previous approaches, we can produce topologically connected surfaces with smooth blending and low distortion. Specifically, we offer two different solutions to the problem of adding fine-scale geometric detail to surfaces. Both solutions assume a level set representation of the base surface which is easily achieved by means of a mesh-to-level-set scan conversion. To facilitate our mapping, we parameterize the embedding space of the base level set surface using fast particle advection. We can then warp explicit texture meshes onto this surface at nearly interactive speeds or blend level set representations of the texture to produce high-quality surfaces with smooth transitions.
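The blending and subtraction operations on level sets reduce to pointwise operations on signed distance values; a minimal sketch (the polynomial smooth minimum shown here is one common choice, not necessarily the paper's exact blend):

```python
def union(a, b):
    """CSG union of two signed distance values (negative = inside)."""
    return min(a, b)

def subtract(a, b):
    """Carve shape B out of shape A."""
    return max(a, -b)

def smooth_union(a, b, k):
    """Polynomial smooth minimum: blends the two surfaces over a
    transition band of width k instead of leaving a sharp crease --
    the kind of smooth transition geometric texturing needs."""
    h = max(k - abs(a - b), 0.0) / k
    return min(a, b) - h * h * k * 0.25
```

Evaluating `smooth_union` of the base surface's and the texture's level sets at every grid point yields a new level set whose zero contour is the smoothly textured surface.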

  • 7.
    Bruckner, Stefan
    et al.
    Department of Informatics, University of Bergen, Bergen, Norway.
    Isenberg, Tobias
    AVIZ, INRIA, Saclay, France.
    Ropinski, Timo
    Institute of Media Informatics / Visual Computing Research Group, Ulm University, Ulm, Germany.
    Wiebel, Alexander
    Department of Computer Science, Hochschule Worms, 52788 Worms, Germany.
    A Model of Spatial Directness in Interactive Visualization (2019). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 25, no 8, p. 2514-2528. Article in journal (Refereed)
    Abstract [en]

    We discuss the concept of directness in the context of spatial interaction with visualization. In particular, we propose a model that allows practitioners to analyze and describe the spatial directness of interaction techniques, ultimately to be able to better understand interaction issues that may affect usability. To reach these goals, we distinguish between different types of directness. Each type of directness depends on a particular mapping between different spaces, for which we consider the data space, the visualization space, the output space, the user space, the manipulation space, and the interaction space. In addition to the introduction of the model itself, we also show how to apply it to several real-world interaction scenarios in visualization, and thus discuss the resulting types of spatial directness, without recommending either more direct or more indirect interaction techniques. In particular, we will demonstrate descriptive and evaluative usage of the proposed model, and also briefly discuss its generative usage.

  • 8.
    Bujack, Roxana
    et al.
    Leipzig University, Leipzig, Germany.
    Hotz, Ingrid
    Zuse Institute Berlin, Germany.
    Scheuermann, Gerik
    Leipzig University, Leipzig, Germany.
    Hitzer, Eckhard
    International Christian University, Tokyo, Japan.
    Moment Invariants for 2D Flow Fields Using Normalization in Detail (2015). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 21, no 8, p. 916-929. Article in journal (Refereed)
    Abstract [en]

    The analysis of 2D flow data is often guided by the search for characteristic structures with semantic meaning. One way to approach this question is to identify structures of interest by a human observer, with the goal of finding similar structures in the same or other datasets. The major challenges related to this task are to specify the notion of similarity and define respective pattern descriptors. While the descriptors should be invariant to certain transformations, such as rotation and scaling, they should provide a similarity measure with respect to other transformations, such as deformations. In this paper, we propose to use moment invariants as pattern descriptors for flow fields. Moment invariants are one of the most popular techniques for the description of objects in the field of image recognition. They have recently also been applied to identify 2D vector patterns limited to the directional properties of flow fields. Moreover, we discuss which transformations should be considered for the application to flow analysis. In contrast to previous work, we follow the intuitive approach of moment normalization, which results in a complete and independent set of translation, rotation, and scaling invariant flow field descriptors. They also make it possible to distinguish flow features with different velocity profiles. We apply the moment invariants in a pattern recognition algorithm to a real-world dataset and show that the theoretical results can be extended to discrete functions in a robust way.
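The normalization idea can be sketched for scalar fields (the paper's contribution extends this to 2D vector fields and adds rotation normalization; what follows is just the classic translation- and scale-normalized moment construction):

```python
def raw_moment(field, p, q):
    """Discrete raw moment m_pq = sum over x, y of x^p * y^q * f(x, y),
    for a field given as a list of rows."""
    return sum(x ** p * y ** q * f
               for y, row in enumerate(field)
               for x, f in enumerate(row))

def normalized_central_moment(field, p, q):
    """Central moment taken about the centroid (translation
    normalization) and divided by m_00^((p+q)/2 + 1) (scale
    normalization). Translation-invariant exactly; scale-invariant up
    to discretization error."""
    m00 = raw_moment(field, 0, 0)
    cx = raw_moment(field, 1, 0) / m00
    cy = raw_moment(field, 0, 1) / m00
    mu = sum((x - cx) ** p * (y - cy) ** q * f
             for y, row in enumerate(field)
             for x, f in enumerate(row))
    return mu / m00 ** ((p + q) / 2 + 1)
```

Shifting the pattern inside the grid leaves these descriptors unchanged, which is exactly the property a pattern search across a flow field needs.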

  • 9.
    Chatzimparmpas, Angelos
    et al.
    Department of Computer Science and Media Technology, Linnaeus University, Växjö, Sweden.
    Martins, Rafael Messias
    Department of Computer Science and Media Technology, Linnaeus University, Växjö, Sweden.
    Kucher, Kostiantyn
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Department of Computer Science and Media Technology, Linnaeus University, Växjö, Sweden.
    Kerren, Andreas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Department of Computer Science and Media Technology, Linnaeus University, Växjö, Sweden.
    FeatureEnVi: Visual Analytics for Feature Engineering Using Stepwise Selection and Semi-Automatic Extraction Approaches (2022). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 28, no 4, p. 1773-1791. Article in journal (Refereed)
    Abstract [en]

    The machine learning (ML) life cycle involves a series of iterative steps, from the effective gathering and preparation of the data—including complex feature engineering processes—to the presentation and improvement of results, with various algorithms to choose from in every step. Feature engineering in particular can be very beneficial for ML, leading to numerous improvements such as boosting the predictive results, decreasing computational times, reducing excessive noise, and increasing the transparency behind the decisions taken during the training. Despite that, while several visual analytics tools exist to monitor and control the different stages of the ML life cycle (especially those related to data and algorithms), feature engineering support remains inadequate. In this paper, we present FeatureEnVi, a visual analytics system specifically designed to assist with the feature engineering process. Our proposed system helps users to choose the most important feature, to transform the original features into powerful alternatives, and to experiment with different feature generation combinations. Additionally, data space slicing allows users to explore the impact of features on both local and global scales. FeatureEnVi utilizes multiple automatic feature selection techniques; furthermore, it visually guides users with statistical evidence about the influence of each feature (or subsets of features). The final outcome is the extraction of heavily engineered features, evaluated by multiple validation metrics. The usefulness and applicability of FeatureEnVi are demonstrated with two use cases and a case study. We also report feedback from interviews with two ML experts and a visualization researcher who assessed the effectiveness of our system.
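Stepwise (forward) feature selection, one of the techniques the title names, can be sketched as a greedy loop (`score_fn` and the stopping rule here are illustrative; `score_fn` must accept any subset of features, including the empty one):

```python
def forward_select(features, score_fn, k):
    """Greedy forward selection: repeatedly add the feature that most
    improves score_fn(selected_subset), stopping when k features are
    chosen or no remaining feature improves the score."""
    selected = []
    remaining = list(features)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda f: score_fn(selected + [f]))
        if score_fn(selected + [best]) <= score_fn(selected):
            break  # no feature improves the score any further
        selected.append(best)
        remaining.remove(best)
    return selected
```

A visual analytics layer such as the one the paper describes would expose each step's candidate scores so the user can override the greedy choice.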

  • 10.
    Chatzimparmpas, Angelos
    et al.
    Linnaeus University, Department of Computer Science and Media Technology (DM).
    Martins, Rafael Messias
    Linnaeus University, Department of Computer Science and Media Technology (DM).
    Kucher, Kostiantyn
    Linnaeus University, Department of Computer Science and Media Technology (DM).
    Kerren, Andreas
    Linnaeus University, Department of Computer Science and Media Technology (DM).
    StackGenVis: Alignment of Data, Algorithms, and Models for Stacking Ensemble Learning Using Performance Metrics (2021). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 27, no 2, p. 1547-1557. Article in journal (Refereed)
    Abstract [en]

    In machine learning (ML), ensemble methods—such as bagging, boosting, and stacking—are widely-established approaches that regularly achieve top-notch predictive performance. Stacking (also called "stacked generalization") is an ensemble method that combines heterogeneous base models, arranged in at least one layer, and then employs another metamodel to summarize the predictions of those models. Although it may be a highly-effective approach for increasing the predictive performance of ML, generating a stack of models from scratch can be a cumbersome trial-and-error process. This challenge stems from the enormous space of available solutions, with different sets of data instances and features that could be used for training, several algorithms to choose from, and instantiations of these algorithms using diverse parameters (i.e., models) that perform differently according to various metrics. In this work, we present a knowledge generation model, which supports ensemble learning with the use of visualization, and a visual analytics system for stacked generalization. Our system, StackGenVis, assists users in dynamically adapting performance metrics, managing data instances, selecting the most important features for a given data set, choosing a set of top-performant and diverse algorithms, and measuring the predictive performance. In consequence, our proposed tool helps users to decide between distinct models and to reduce the complexity of the resulting stack by removing overpromising and underperforming models. The applicability and effectiveness of StackGenVis are demonstrated with two use cases: a real-world healthcare data set and a collection of data related to sentiment/stance detection in texts. Finally, the tool has been evaluated through interviews with three ML experts.
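The structure of a one-layer stack can be sketched in a few lines (the accuracy-weighted vote below is a deliberately tiny stand-in for the metamodel; StackGenVis supports arbitrary algorithms and performance metrics):

```python
def fit_blend_weights(base_models, xs, ys):
    """Fit a minimal stacking metamodel: weight each base model by its
    accuracy on held-out data, then predict by weighted vote over the
    base models' outputs."""
    weights = [sum(m(x) == y for x, y in zip(xs, ys)) / len(xs)
               for m in base_models]

    def metamodel(preds):
        score = {}
        for w, p in zip(weights, preds):
            score[p] = score.get(p, 0.0) + w
        return max(score, key=score.get)

    return metamodel

def stack_predict(base_models, metamodel, x):
    """Stacked prediction: feed every base model's output to the metamodel."""
    return metamodel([m(x) for m in base_models])
```

With a strong and a weak base classifier, the weighted vote follows the stronger model when they disagree — the "removing overpromising and underperforming models" step in the paper corresponds to pruning low-weight entries from such a stack.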

  • 11.
    Chen, Min
    et al.
    University of Oxford, England.
    Ebert, David
    Purdue University, IN 47907 USA.
    Hauser, Helwig
    University of Bergen, Norway.
    Heer, Jeffrey
    University of Washington, WA 98195 USA.
    North, Chris
    Virginia Polytech Institute and State University, VA 24061 USA.
    Qu, Huamin
    Hong Kong University of Science and Technology, Peoples R China.
    Shen, Han-Wei
    Ohio State University, OH 43210 USA.
    Tory, Melanie
    University of Victoria, Canada.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    IEEE Visual Analytics Science & Technology Conference, IEEE Information Visualization Conference, and IEEE Scientific Visualization Conference (2014). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 20, no 12, p. XI-XIV. Article in journal (Other academic)

  • 12.
    Costa, Jonathas
    et al.
    NYU, NY 10003 USA.
    Bock, Alexander
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Univ Utah, UT 84112 USA.
    Emmart, Carter
    Amer Museum Nat Hist, NY 10024 USA.
    Hansen, Charles
    Univ Utah, UT 84112 USA.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV). Univ Utah, UT 84112 USA.
    Silva, Claudio
    NYU, NY 10003 USA.
    Interactive Visualization of Atmospheric Effects for Celestial Bodies (2021). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 27, no 2, p. 785-795. Article in journal (Refereed)
    Abstract [en]

    We present an atmospheric model tailored for the interactive visualization of planetary surfaces. As the exploration of the solar system is progressing with increasingly accurate missions and instruments, the faithful visualization of planetary environments is gaining increasing interest in space research, mission planning, and science communication and education. Atmospheric effects are crucial in data analysis and to provide contextual information for planetary data. Our model correctly accounts for the non-linear path of the light inside the atmosphere (in Earth's case), the light absorption effects by molecules and dust particles, such as the ozone layer and the Martian dust, and a wavelength-dependent phase function for Mie scattering. The model focuses on interactivity, versatility, and customization, and a comprehensive set of interactive controls makes it possible to adapt its appearance dynamically. We demonstrate our results using Earth and Mars as examples. However, it can be readily adapted for the exploration of other atmospheres found on, for example, exoplanets. For Earth's atmosphere, we visually compare our results with pictures taken from the International Space Station and against the CIE clear sky model. The Martian atmosphere is reproduced based on available scientific data, feedback from domain experts, and is compared to images taken by the Curiosity rover. The work presented here has been implemented in the OpenSpace system, which enables interactive parameter setting and real-time feedback visualization targeting presentations in a wide range of environments, from immersive dome theaters to virtual reality headsets.
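Two of the building blocks such an atmosphere model rests on, the Rayleigh phase function and Beer-Lambert transmittance, can be sketched directly (straight-path and single-wavelength here; the paper additionally handles the refracted light path, Mie scattering, and wavelength dependence):

```python
import math

def rayleigh_phase(cos_theta):
    """Rayleigh scattering phase function:
    P(theta) = 3 / (16 * pi) * (1 + cos^2(theta))."""
    return 3.0 / (16.0 * math.pi) * (1.0 + cos_theta * cos_theta)

def transmittance(extinction, path_lengths):
    """Beer-Lambert transmittance along a piecewise-constant path:
    T = exp(-sum(sigma_i * ds_i)). A real atmosphere model integrates
    wavelength-dependent extinction along a curved, refracted path; this
    straight-path version shows only the core computation."""
    optical_depth = sum(sigma * ds for sigma, ds in zip(extinction, path_lengths))
    return math.exp(-optical_depth)
```

A renderer evaluates the phase function at the sun-view angle and multiplies by the transmittance of each path segment to accumulate in-scattered light per pixel.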

  • 13.
    Dai, Shaozhang
    et al.
    Monash University, Australia.
    Smiley, Jim
    Monash University, Australia.
    Dwyer, Tim
    Monash University, Australia.
    Ens, Barrett
    Monash University, Australia.
    Besançon, Lonni
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    RoboHapalytics: A Robot Assisted Haptic Controller for Immersive Analytics (2023). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 29, no 1, p. 451-461. Article in journal (Refereed)
    Abstract [en]

    Immersive environments offer new possibilities for exploring three-dimensional volumetric or abstract data. However, typical mid-air interaction offers little guidance to the user in interacting with the resulting visuals. Previous work has explored the use of haptic controls to give users tangible affordances for interacting with the data, but these controls have either: been limited in their range and resolution; were spatially fixed; or required users to manually align them with the data space. We explore the use of a robot arm with hand tracking to align tangible controls under the user’s fingers as they reach out to interact with data affordances. We begin with a study evaluating the effectiveness of a robot-extended slider control compared to a large fixed physical slider and a purely virtual mid-air slider. We find that the robot slider has similar accuracy to the physical slider but is significantly more accurate than mid-air interaction. Further, the robot slider can be arbitrarily reoriented, opening up many new possibilities for tangible haptic interaction with immersive visualisations. We demonstrate these possibilities through three use-cases: selection in a time-series chart; interactive slicing of CT scans; and finally exploration of a scatter plot depicting time-varying socio-economic data.

  • 14.
    Domova, Veronika
    et al.
    Stanford University, CA 94305 USA.
    Vrotsou, Katerina
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    A Model for Types and Levels of Automation in Visual Analytics: A Survey, a Taxonomy, and Examples (2023). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 29, no 8, p. 3550-3568. Article in journal (Refereed)
    Abstract [en]

    The continuous growth in availability and access to data presents a major challenge to the human analyst. As the manual analysis of large and complex datasets is nowadays practically impossible, the need for assisting tools that can automate the analysis process while keeping the human analyst in the loop is imperative. A large and growing body of literature recognizes the crucial role of automation in Visual Analytics and suggests that automation is among the most important constituents for effective Visual Analytics systems. Today, however, there is no appropriate taxonomy or terminology for assessing the extent of automation in a Visual Analytics system. In this article, we aim to address this gap by introducing a model of levels of automation tailored for the Visual Analytics domain. The consistent terminology of the proposed taxonomy could provide common ground for users, readers, and reviewers to describe and compare automation in Visual Analytics systems. Our taxonomy is grounded on a combination of several existing and well-established taxonomies of levels of automation in the human-machine interaction domain and relevant models within the visual analytics field. To exemplify the proposed taxonomy, we selected a set of existing systems from the event-sequence analytics domain and mapped the automation of their visual analytics process stages against the automation levels in our taxonomy.

  • 15.
    Duran Rosich, David
    et al.
    ViRVIG Group, UPC Barcelona, Barcelona, Spain.
    Hermosilla, Pedro
    Visual Computing Group, U. Ulm, Ulm, Germany.
    Ropinski, Timo
    Visual Computing Group, U. Ulm, Ulm, Germany.
    Kozlikova, Barbora
    Masaryk University, Brno, Czech Republic.
    Vinacua, Àlvar
    ViRVIG Group, UPC Barcelona, Barcelona, Spain.
    Vazquez, Pere-Pau
    ViRVIG Group, UPC Barcelona, Barcelona, Spain.
    Visualization of Large Molecular Trajectories2019In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 25, no 1, p. 987-996Article in journal (Refereed)
    Abstract [en]

    The analysis of protein-ligand interactions is a time-intensive task. Researchers have to analyze multiple physico-chemical properties of the protein at once and combine them to derive conclusions about the protein-ligand interplay. Typically, several charts are inspected, and 3D animations can be played side-by-side to obtain a deeper understanding of the data. With the advances in simulation techniques, larger and larger datasets are available, with up to hundreds of thousands of steps. Unfortunately, such large trajectories are very difficult to investigate with traditional approaches. Therefore, the need for special tools that facilitate inspection of these large trajectories becomes substantial. In this paper, we present a novel system for visual exploration of very large trajectories in an interactive and user-friendly way. Several visualization motifs are automatically derived from the data to give the user the information about interactions between protein and ligand. Our system offers specialized widgets to ease and accelerate data inspection and navigation to interesting parts of the simulation. The system is suitable also for simulations where multiple ligands are involved. We have tested the usefulness of our tool on a set of datasets obtained from protein engineers, and we describe the expert feedback.

  • 16.
    Dwyer, Tim
    et al.
    Monash University, Australia.
    Elmqvist, Niklas
    University of Maryland, MD 20742 USA.
    Fisher, Brian
    Simon Fraser University, Canada.
    Franconeri, Steve
    Northwestern University, IL 60208 USA.
    Hotz, Ingrid
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Kirby, Robert M.
    University of Utah, UT 84112 USA.
    Liu, Shixia
    Tsinghua University, Peoples R China.
    Schreck, Tobias
    Graz University of Technology, Austria.
    Yuan, Xiaoru
    Peking University, Peoples R China.
    Message from the VIS Paper Chairs and Guest Editors Preface2018In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 24, no 1, p. XI-XVArticle in journal (Other academic)
  • 17.
    Engel, Dominik
    et al.
    Ulm Univ, Germany.
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Ulm Univ, Germany.
    Deep Volumetric Ambient Occlusion2021In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 27, no 2, p. 1268-1278Article in journal (Refereed)
    Abstract [en]

    We present a novel deep learning based technique for volumetric ambient occlusion in the context of direct volume rendering. Our proposed Deep Volumetric Ambient Occlusion (DVAO) approach can predict per-voxel ambient occlusion in volumetric data sets, while considering global information provided through the transfer function. The proposed neural network only needs to be executed upon change of this global information, and thus supports real-time volume interaction. Accordingly, we demonstrate DVAO's ability to predict volumetric ambient occlusion, such that it can be applied interactively within direct volume rendering. To achieve the best possible results, we propose and analyze a variety of transfer function representations and injection strategies for deep neural networks. Based on the obtained results we also give recommendations applicable in similar volume learning scenarios. Lastly, we show that DVAO generalizes to a variety of modalities, despite being trained on computed tomography data only.

  • 18.
    Etiene, Tiago
    et al.
    University of Utah, UT 84112 USA .
    Jönsson, Daniel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Scheidegger, Carlos
    AT&T Labs Research, NJ 07932 USA.
    Comba, Joao L. D.
    University of Federal Rio Grande do Sul, Brazil .
    Gustavo Nonato, Luis
    University of Sao Paulo, Brazil .
    Kirby, Robert M.
    University of Utah, UT 84112 USA .
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Silva, Claudio T.
    NYU, NY 11201 USA .
    Verifying Volume Rendering Using Discretization Error Analysis2014In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 20, no 1, p. 140-154Article in journal (Refereed)
    Abstract [en]

    We propose an approach for verification of volume rendering correctness based on an analysis of the volume rendering integral, the basis of most DVR algorithms. With respect to the most common discretization of this continuous model (Riemann summation), we make assumptions about the impact of parameter changes on the rendered results and derive convergence curves describing the expected behavior. Specifically, we progressively refine the number of samples along the ray, the grid size, and the pixel size, and evaluate how the errors observed during refinement compare against the expected approximation errors. We derive the theoretical foundations of our verification approach, explain how to realize it in practice, and discuss its limitations. We also report the errors identified by our approach when applied to two publicly available volume rendering packages.
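The refinement idea described in this abstract can be illustrated with a minimal, self-contained sketch (not the authors' code): for a ray with constant emission and constant absorption, the volume rendering integral has a closed form, so the error of a front-to-back Riemann summation can be measured directly and should roughly halve when the step size halves, yielding the expected first-order convergence curve.

```python
import math

def analytic(c, kappa, depth):
    # closed-form emission-absorption integral for constant emission c
    # and constant absorption kappa along a ray of the given depth
    return (c / kappa) * (1.0 - math.exp(-kappa * depth))

def riemann(c, kappa, depth, n):
    # front-to-back compositing with n equidistant samples
    ds = depth / n
    total = 0.0
    transmittance = 1.0
    for _ in range(n):
        total += transmittance * c * ds
        transmittance *= math.exp(-kappa * ds)
    return total

c, kappa, depth = 1.0, 2.0, 1.0
errors = [abs(riemann(c, kappa, depth, n) - analytic(c, kappa, depth))
          for n in (64, 128, 256)]
# successive error ratios approach 2, confirming first-order convergence
ratios = [errors[i] / errors[i + 1] for i in range(2)]
```

A deviation of the observed ratios from the expected convergence order is exactly the kind of discrepancy the verification approach is designed to flag.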

  • 19.
    Falk, Martin
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Swedish e-Science Research Centre (SeRC), Sweden.
    Tobiasson, Victor
    Science for Life Laboratory, Department of Biochemistry and Biophysics, Stockholm University, Solna, Sweden.
    Bock, Alexander
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Swedish e-Science Research Centre (SeRC), Sweden.
    Hansen, Charles
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Kahlert School of Computing, University of Utah.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Swedish e-Science Research Centre (SeRC), Sweden.
    A Visual Environment for Data Driven Protein Modeling and Validation2024In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 30, no 8, p. 5063-5073Article in journal (Refereed)
    Abstract [en]

    In structural biology, validation and verification of new atomic models are crucial and necessary steps which limit the production of reliable molecular models for publications and databases. An atomic model is the result of meticulous modeling and matching and is evaluated using a variety of metrics that provide clues to improve and refine the model so it fits our understanding of molecules and physical constraints. In cryo-electron microscopy (cryo-EM), validation is also part of an iterative modeling process in which there is a need to judge the quality of the model during the creation phase. A shortcoming is that the process and results of the validation are rarely communicated using visual metaphors.

    This work presents a visual framework for molecular validation. The framework was developed in close collaboration with domain experts in a participatory design process. Its core is a novel visual representation based on 2D heatmaps that shows all available validation metrics in a linear fashion, presenting a global overview of the atomic model and providing domain experts with interactive analysis tools. Additional information stemming from the underlying data, such as a variety of local quality measures, is used to guide the user's attention toward regions of higher relevance. Linked with the heatmap is a three-dimensional molecular visualization providing the spatial context of the structures and chosen metrics. Additional views of statistical properties of the structure are included in the visual framework. We demonstrate the utility of the framework and its visual guidance with examples from cryo-EM.

  • 20.
    Falk, Martin
    et al.
    Visualization Res. Center (VISUS), Univ. Stuttgart, Stuttgart.
    Weiskopf, Daniel
    Visualization Res. Center (VISUS), Univ. Stuttgart, Stuttgart.
    Output-Sensitive 3D Line Integral Convolution2008In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 14, no 4, p. 820-834Article in journal (Refereed)
    Abstract [en]

    We propose a largely output-sensitive visualization method for 3D line integral convolution (LIC) whose rendering speed is mainly independent of the data set size and mostly governed by the complexity of the output on the image plane. Our approach of view-dependent visualization tightly links the LIC generation with the volume rendering of the LIC result in order to avoid the computation of unnecessary LIC points: early-ray termination and empty-space leaping techniques are used to skip the computation of the LIC integral in a lazy-evaluation approach; both ray casting and texture slicing can be used as volume-rendering techniques. The input noise is modeled in object space to allow for temporal coherence under object and camera motion. Different noise models are discussed, covering dense representations based on filtered white noise all the way to sparse representations similar to oriented LIC. Aliasing artifacts are avoided by frequency control over the 3D noise and by employing a 3D variant of MIP-mapping. A range of illumination models is applied to the LIC streamlines: different codimension-2 lighting models and a novel gradient-based illumination model that relies on precomputed gradients and does not require any direct calculation of gradients after the LIC integral is evaluated. We discuss the issue of proper sampling of the LIC and volume-rendering integrals by employing a frequency-space analysis of the noise model and the precomputed gradients. Finally, we demonstrate that our visualization approach lends itself to a fast graphics processing unit (GPU) implementation that supports both steady and unsteady flow. Therefore, this 3D LIC method allows users to interactively explore 3D flow by means of high-quality, view-dependent, and adaptive LIC volume visualization.
Applications to flow visualization in combination with feature extraction and focus-and-context visualization are described, a comparison to previous methods is provided, and a detailed performance analysis is included.

  • 21.
    Falk, Martin
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Treanor, Darren
    Linköping University, Center for Medical Image Science and Visualization (CMIV). Leeds Teaching Hospitals NHS Trust, United Kingdom.
    Lundström, Claes
    Linköping University, Center for Medical Image Science and Visualization (CMIV). Sectra, Linköping, Sweden.
    Interactive Visualization of 3D Histopathology in Native Resolution2019In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 25, no 1, p. 1008-1017Article in journal (Refereed)
    Abstract [en]

    We present a visualization application that enables effective interactive visual analysis of large-scale 3D histopathology, that is, high-resolution 3D microscopy data of human tissue. Clinical workflows and research based on pathology have, until now, largely been dominated by 2D imaging. As we will show in the paper, studying volumetric histology data will open up novel and useful opportunities for both research and clinical practice. Our starting point is the current lack of appropriate visualization tools in histopathology, which has been a limiting factor in the uptake of digital pathology. Visualization of 3D histology data does pose difficult challenges in several aspects. The full-color datasets are dense and large in scale, on the order of 100,000 x 100,000 x 100 voxels. This entails serious demands on both rendering performance and user experience design. Despite this, our developed application supports interactive study of 3D histology datasets at native resolution. Our application is based on tailoring and tuning of existing methods, system integration work, as well as a careful study of domain-specific demands emanating from a close participatory design process with domain experts as team members. Results from a user evaluation employing the tool demonstrate a strong agreement among the 14 participating pathologists that 3D histopathology will be a valuable and enabling tool for their work.

  • 22.
    Feng, Louis
    et al.
    University of California, Davis.
    Hotz, Ingrid
    Zuse Institue Berlin.
    Hamann, Bernd
    University of California, Davis, USA.
    Joy, Ken
    University of California, Davis, USA.
    Anisotropic Noise Samples2008In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 14, no 2, p. 342-354Article in journal (Refereed)
    Abstract [en]

    We present a practical approach to generate stochastic anisotropic samples with Poisson-disk characteristic over a two-dimensional domain. In contrast to isotropic samples, we understand anisotropic samples as non-overlapping ellipses whose size and density match a given anisotropic metric. Anisotropic noise samples are useful for many visualization and graphics applications. The spot samples can be used as input for texture generation, e.g., line integral convolution (LIC), but can also be used directly for visualization. The definition of the spot samples using a metric tensor makes them especially suitable for the visualization of tensor fields that can be translated into a metric. Our work combines ideas from sampling theory and mesh generation. To generate these samples with the desired properties we construct a first set of non-overlapping ellipses whose distribution closely matches the underlying metric. This set of samples is used as input for a generalized anisotropic Lloyd relaxation to distribute noise samples more evenly. Instead of computing the Voronoi tessellation explicitly, we introduce a discrete approach which combines the Voronoi cell and centroid computation in one step. Our method supports automatic packing of the elliptical samples, resulting in textures similar to those generated by anisotropic reaction-diffusion methods. We use Fourier analysis tools for quality measurement of uniformly distributed samples. The resulting samples have nice sampling properties, for example, they satisfy a blue noise property where low frequencies in the power spectrum are reduced to a minimum.
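The core constraint described in this abstract, non-overlapping samples whose spacing follows an anisotropic metric, can be sketched with plain dart throwing (the paper uses a generalized Lloyd relaxation on top of such an initial set; the constant metric and all names below are illustrative assumptions):

```python
import random, math

def metric_dist(p, q, sx=1.0, sy=3.0):
    # distance under a constant diagonal metric tensor: stretching the
    # y-axis makes equal-distance contours ellipses instead of circles
    dx, dy = p[0] - q[0], p[1] - q[1]
    return math.hypot(sx * dx, sy * dy)

def dart_throwing(n_tries=2000, r=0.15, seed=1):
    # accept a candidate only if it keeps the anisotropic minimum
    # distance r to every previously accepted sample
    rng = random.Random(seed)
    samples = []
    for _ in range(n_tries):
        cand = (rng.random(), rng.random())
        if all(metric_dist(cand, s) >= r for s in samples):
            samples.append(cand)
    return samples

pts = dart_throwing()
# every accepted pair respects the anisotropic Poisson-disk spacing
assert all(metric_dist(p, q) >= 0.15 for i, p in enumerate(pts) for q in pts[:i])
```

In the paper the metric varies over the domain and the relaxation step evens out the distribution; this fixed-metric rejection test only captures the non-overlap property.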

  • 23.
    Fernstad, Sara Johansson
    et al.
    Newcastle Univ, England.
    Johansson Westberg, Jimmy
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    To Explore What Isn't There - Glyph-Based Visualization for Analysis of Missing Values2022In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 28, no 10, p. 3513-3529Article in journal (Refereed)
    Abstract [en]

    This article contributes a novel visualization method, the Missingness Glyph, for analysis and exploration of missing values in data. Missing values are a common challenge in most data-generating domains and may cause a range of analysis issues. Missingness in data may indicate potential problems in data collection and pre-processing, or highlight important data characteristics. While the development and improvement of statistical methods for dealing with missing data is a research area in its own right, mainly focussing on replacing missing values with estimated values, considerably less focus has been put on visualization of missing values. Nonetheless, visualization and explorative analysis have great potential to support understanding of missingness in data, and to enable novel insights into patterns of missingness in a way that statistical methods are unable to. The Missingness Glyph supports identification of relevant missingness patterns in data, and is evaluated and compared to two other visualization methods in the context of missingness patterns. The results are promising and confirm that the Missingness Glyph in several cases performs better than the alternative visualization methods.

  • 24.
    Feyer, Stefan P.
    et al.
    University of Konstanz, Germany.
    Pinaud, Bruno
    University of Bordeaux, France.
    Kobourov, Stephen
    University of Arizona, USA.
    Brich, Nicolas
    University of Tübingen, Germany.
    Krone, Michael
    University of Tübingen, Germany and New York University, USA.
    Kerren, Andreas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Department of Computer Science and Media Technology, Linnaeus University, Sweden.
    Behrisch, Michael
    Utrecht University, Netherlands.
    Schreiber, Falk
    University of Konstanz, Germany and Monash University, Australia.
    Klein, Karsten
    University of Konstanz, Germany.
    2D, 2.5D, or 3D? An Exploratory Study on Multilayer Network Visualisations in Virtual Reality2024In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 30, no 1, p. 469-479Article in journal (Refereed)
    Abstract [en]

    Relational information between different types of entities is often modelled by a multilayer network (MLN) - a network with subnetworks represented by layers. The layers of an MLN can be arranged in different ways in a visual representation, however, the impact of the arrangement on the readability of the network is an open question. Therefore, we studied this impact for several commonly occurring tasks related to MLN analysis. Additionally, layer arrangements with a dimensionality beyond 2D, which are common in this scenario, motivate the use of stereoscopic displays. We ran a human subject study utilising a Virtual Reality headset to evaluate 2D, 2.5D, and 3D layer arrangements. The study employs six analysis tasks that cover the spectrum of an MLN task taxonomy, from path finding and pattern identification to comparisons between and across layers. We found no clear overall winner. However, we explore the task-to-arrangement space and derive empirical-based recommendations on the effective use of 2D, 2.5D, and 3D layer arrangements for MLNs.

  • 25.
    Heiberg, Einar
    et al.
    Linköping University, Department of Medicine and Care, Clinical Physiology. Linköping University, Faculty of Health Sciences.
    Ebbers, Tino
    Linköping University, Department of Medicine and Care, Clinical Physiology. Linköping University, Faculty of Health Sciences.
    Wigström, Lars
    Linköping University, Department of Medicine and Care, Clinical Physiology. Linköping University, Faculty of Health Sciences.
    Karlsson, Matts
    Linköping University, Department of Biomedical Engineering, Physiological Measurements. Linköping University, The Institute of Technology.
    Three-dimensional flow characterization using vector pattern matching2003In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 9, no 3, p. 313-319Article in journal (Refereed)
    Abstract [en]

    This paper describes a novel method for regional characterization of three-dimensional vector fields using a pattern matching approach. Given a three-dimensional vector field, the goal is to automatically locate, identify, and visualize a selected set of classes of structures or features. Rather than analytically defining the properties that must be fulfilled in a region in order to be classified as a specific structure, a set of idealized patterns for each structure type is constructed. Similarity to these patterns is then defined and calculated. Examples of structures of interest include vortices, swirling flow, diverging or converging flow, and parallel flow. Both medical and aerodynamic applications are presented in this paper.
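As a rough illustration of the pattern-matching idea (the template and the similarity measure below are simplified stand-ins, not the authors' definitions), a local vector-field patch can be scored against an idealized swirl pattern with a normalized inner product:

```python
import numpy as np

def ideal_swirl(n=3):
    # idealized swirling-flow template around the z-axis on an n^3 stencil
    axis = np.linspace(-1, 1, n)
    x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
    return np.stack([-y, x, np.zeros_like(x)], axis=-1)

def similarity(field, template):
    # normalized inner product of the flattened vector patches, in [-1, 1]:
    # +1 for a perfect match, -1 for the reversed pattern
    a, b = field.ravel(), template.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

tpl = ideal_swirl()
assert similarity(tpl, tpl) > 0.999    # identical patch matches perfectly
assert similarity(-tpl, tpl) < -0.999  # reversed swirl is maximally dissimilar
```

Sliding such a template over a 3D vector field and thresholding the score yields candidate regions for the structure class, which is the spirit of the regional characterization the paper describes.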

  • 26.
    Helske, Jouni
    et al.
    Univ Jyvaskyla, Finland.
    Helske, Satu
    Univ Turku, Finland.
    Cooper, Matthew
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Besancon, Lonni
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Can Visualization Alleviate Dichotomous Thinking?: Effects of Visual Representations on the Cliff Effect2021In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 27, no 8, p. 3397-3409Article in journal (Refereed)
    Abstract [en]

    Common reporting styles for statistical results in scientific articles, such as p-values and confidence intervals (CI), have been reported to be prone to dichotomous interpretations, especially with respect to the null hypothesis significance testing framework. For example, when the p-value is small enough or the CIs of the mean effects of a studied drug and a placebo are not overlapping, scientists tend to claim significant differences while often disregarding the magnitudes and absolute differences in the effect sizes. This type of reasoning has been shown to be potentially harmful to science. Techniques relying on the visual estimation of the strength of evidence have been recommended to reduce such dichotomous interpretations, but their effectiveness has also been challenged. We ran two experiments on researchers with expertise in statistical analysis to compare several alternative representations of confidence intervals and used Bayesian multilevel models to estimate the effects of the representation styles on differences in researchers' subjective confidence in the results. We also asked the respondents' opinions and preferences regarding representation styles. Our results suggest that adding visual information to the classic CI representation can decrease the tendency towards dichotomous interpretations - measured as the cliff effect: the sudden drop in confidence around p-value 0.05 - compared with classic CI visualization and textual representation of the CI with p-values. All data and analyses are publicly available at https://github.com/helske/statvis.

  • 27.
    Hernell, Frida
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ljung, Patric
    Siemens Corporation Research.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Local Ambient Occlusion in Direct Volume Rendering2010In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 16, no 4, p. 548-559Article in journal (Refereed)
    Abstract [en]

    This paper presents a novel technique to efficiently compute illumination for Direct Volume Rendering using a local approximation of ambient occlusion to integrate the intensity of incident light for each voxel. An advantage with this local approach is that fully shadowed regions are avoided, a desirable feature in many applications of volume rendering such as medical visualization. Additional transfer function interactions are also presented, for instance, to highlight specific structures with luminous tissue effects and create an improved context for semitransparent tissues with a separate absorption control for the illumination settings. Multiresolution volume management and GPU-based computation are used to accelerate the calculations and support large data sets. The scheme yields interactive frame rates with an adaptive sampling approach for incrementally refined illumination under arbitrary transfer function changes. The illumination effects can give a better understanding of the shape and density of tissues and so has the potential to increase the diagnostic value of medical volume rendering. Since the proposed method is gradient-free, it is especially beneficial at the borders of clip planes, where gradients are undefined, and for noisy data sets.
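The local approximation described in this abstract can be caricatured in a few lines (an assumption-laden sketch, not the paper's method: the paper integrates incident light over a spherical neighborhood on the GPU with multiresolution data; here a plain box filter over transfer-function opacities stands in for that integral):

```python
import numpy as np

def local_ambient_occlusion(opacity, radius=2):
    # approximate per-voxel incident light as one minus the mean opacity
    # in a local cubic neighborhood (box filter instead of a sphere)
    n = opacity.shape[0]
    light = np.empty_like(opacity)
    for i in range(n):
        for j in range(n):
            for k in range(n):
                sl = (slice(max(i - radius, 0), i + radius + 1),
                      slice(max(j - radius, 0), j + radius + 1),
                      slice(max(k - radius, 0), k + radius + 1))
                light[i, j, k] = 1.0 - opacity[sl].mean()
    return light

vol = np.zeros((8, 8, 8))
vol[3:5, 3:5, 3:5] = 1.0  # a dense block inside an empty volume
ao = local_ambient_occlusion(vol)
# voxels inside the dense block receive less ambient light than empty corners
assert ao[4, 4, 4] < ao[0, 0, 0]
```

Because the neighborhood is local, no voxel is ever fully shadowed, which mirrors the property the abstract highlights for medical visualization; the gradient-free nature of the estimate is also visible here, since only opacities enter the computation.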

  • 28.
    Jankowai, Jochen
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Hotz, Ingrid
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Feature Level-Sets: Generalizing Iso-surfaces to Multi-variate Data2020In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 26, no 2, p. 1308-1319Article in journal (Refereed)
    Abstract [en]

    Iso-surfaces or level-sets provide an effective and frequently used means for feature visualization. However, they are restricted to simple features for uni-variate data. The approach does not scale when moving to multi-variate data or when considering more complex feature definitions. In this paper, we introduce the concept of traits and feature level-sets, which can be understood as a generalization of level-sets that includes iso-surfaces and fiber surfaces as special cases. The concept is applicable to a large class of traits defined as subsets in attribute space, which can be arbitrary combinations of points, lines, surfaces and volumes. It is implemented in a system that provides an interface to define traits in an interactive way and multiple rendering options. We demonstrate the effectiveness of the approach using multi-variate data sets of different nature, including vector and tensor data, from different application domains.
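The generalization can be sketched for the simplest kind of trait, a single point in attribute space (the paper allows arbitrary point/line/surface/volume traits; the names below are illustrative): the feature level-set at level r consists of all samples whose attribute-space distance to the trait equals r.

```python
import numpy as np

def attribute_distance(field, trait):
    # field: (..., k) array with k attributes per sample; trait: (k,) point
    # in attribute space; returns the per-sample Euclidean distance
    return np.linalg.norm(field - trait, axis=-1)

# two attributes sampled on a small 2D spatial grid
a = np.linspace(0, 1, 32)
attr = np.stack(np.meshgrid(a, a, indexing="ij"), axis=-1)
d = attribute_distance(attr, np.array([0.5, 0.5]))

# samples near the r = 0.25 feature level-set of this point trait;
# with a 1D attribute space this reduces to an ordinary iso-surface
mask = np.abs(d - 0.25) < 0.02
assert mask.any()
```

Extracting the zero set of `d - r` with a standard iso-surface algorithm then yields the feature level-set geometry in the spatial domain.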

  • 29.
    Johansson, Jimmy
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Forsell, Camilla
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Evaluation of Parallel Coordinates: Overview, Categorization and Guidelines for Future Research2016In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 22, no 1, p. 579-588Article in journal (Refereed)
    Abstract [en]

    The parallel coordinates technique is widely used for the analysis of multivariate data. During recent decades significant research efforts have been devoted to exploring the applicability of the technique and to expanding upon it, resulting in a variety of extensions. Of these many research activities, a surprisingly small number concerns user-centred evaluations investigating actual use and usability issues for different tasks, data and domains. The result is a clear lack of convincing evidence to support and guide uptake by users as well as future research directions. To address these issues this paper contributes a thorough literature survey of what has been done in the area of user-centred evaluation of parallel coordinates. These evaluations are divided into four categories based on a characterization of use derived from the survey. Based on the data from the survey and the categorization, combined with the authors' experience of working with parallel coordinates, a set of guidelines for future research directions is proposed.

  • 30.
    Johansson, Sara
    et al.
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Johansson, Jimmy
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Interactive Dimensionality Reduction Through User-defined Combinations of Quality Metrics2009In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 15, no 6, p. 993-1000Article in journal (Refereed)
    Abstract [en]

    Multivariate data sets including hundreds of variables are increasingly common in many application areas. Most multivariate visualization techniques are unable to display such data effectively, and a common approach is to employ dimensionality reduction prior to visualization. Most existing dimensionality reduction systems focus on preserving one or a few significant structures in data. For many analysis tasks, however, several types of structures can be of high significance and the importance of a certain structure compared to the importance of another is often task-dependent. This paper introduces a system for dimensionality reduction by combining user-defined quality metrics using weight functions to preserve as many important structures as possible. The system aims at effective visualization and exploration of structures within large multivariate data sets and provides enhancement of diverse structures by supplying a range of automatic variable orderings. Furthermore, it enables a quality-guided reduction of variables through an interactive display facilitating investigation of trade-offs between loss of structure and the number of variables to keep. The generality and interactivity of the system is demonstrated through a case scenario.
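The weighted combination of quality metrics can be reduced to a toy example (all metric names, values, and weights below are hypothetical, and the paper's weight functions are richer than this scalar sum): each variable gets a combined score, and the resulting ordering supports quality-guided reduction.

```python
# per-variable quality metrics, as a user-defined set (hypothetical values)
metrics = {
    "var_a": {"correlation": 0.9, "outlierness": 0.2},
    "var_b": {"correlation": 0.4, "outlierness": 0.8},
    "var_c": {"correlation": 0.1, "outlierness": 0.1},
}
# user-defined weights expressing task-dependent importance of each metric
weights = {"correlation": 0.7, "outlierness": 0.3}

def score(metric_values):
    # combined quality score: weighted sum over the chosen metrics
    return sum(weights[name] * value for name, value in metric_values.items())

# order variables by combined quality; trailing variables are candidates
# for removal when trading structure preservation against variable count
ranked = sorted(metrics, key=lambda name: score(metrics[name]), reverse=True)
```

Re-weighting interactively and re-ranking is what lets the analyst explore the trade-off the abstract describes.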

  • 31.
    Jönsson, Daniel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Correlated Photon Mapping for Interactive Global Illumination of Time-Varying Volumetric Data2017In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 23, no 1, p. 901-910Article in journal (Refereed)
    Abstract [en]

    We present a method for interactive global illumination of both static and time-varying volumetric data based on reduction of the overhead associated with re-computation of photon maps. Our method uses the identification of photon traces invariant to changes of visual parameters such as the transfer function (TF), or data changes between time-steps in a 4D volume. This lets us operate on a variant subset of the entire photon distribution. The amount of computation required in the two stages of the photon mapping process, namely tracing and gathering, can thus be reduced to the subset that is affected by a data or visual parameter change. We rely on two different types of information from the original data to identify the regions that have changed. A low-resolution uniform grid containing the minimum and maximum data values of the original data is derived for each time step. Similarly, for two consecutive time-steps, a low-resolution grid containing the difference between the overlapping data is used. We show that this compact metadata can be combined with the transfer function to identify the regions that have changed. Each photon traverses the low-resolution grid to identify if it can be directly transferred to the next photon distribution state or if it needs to be recomputed. An efficient representation of the photon distribution is presented, leading to an order of magnitude improved performance of the raycasting step. The utility of the method is demonstrated in several examples that show visual fidelity, as well as performance. The examples show that visual quality can be retained when the fraction of retraced photons is as low as 40%-50%.
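The invariance test that the min/max metadata enables can be sketched in a few lines (hypothetical names and a 1D opacity-only transfer function; the paper's traversal and photon representation are far more involved): a block needs retracing only if the transfer function actually changed somewhere inside the block's value range.

```python
def block_needs_retrace(vmin, vmax, tf_old, tf_new, n_bins=256):
    # map the block's [vmin, vmax] data range (normalized to [0, 1])
    # onto transfer-function bins
    lo = int(vmin * (n_bins - 1))
    hi = int(vmax * (n_bins - 1))
    # retrace only if the TF differs somewhere inside that range
    return any(tf_old[i] != tf_new[i] for i in range(lo, hi + 1))

tf_a = [0.0] * 256
tf_b = list(tf_a)
tf_b[200] = 0.5  # user edits opacity for high data values only

# a block containing only low values is invariant to this edit,
# while a block spanning high values must be retraced
assert block_needs_retrace(0.0, 0.3, tf_a, tf_b) is False
assert block_needs_retrace(0.6, 0.9, tf_a, tf_b) is True
```

Applying this test per grid cell along each photon trace is what allows the bulk of the photon distribution to be carried over unchanged between transfer-function edits or time-steps.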

    Download full text (pdf)
    fulltext
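    The reuse test sketched in the abstract above, checking whether a photon trace passes only through low-resolution blocks whose [min, max] value range is unaffected by a transfer-function edit, could look roughly as follows (a minimal illustration with hypothetical names, not the paper's implementation):

    ```python
    import numpy as np

    def tf_changed_mask(tf_old, tf_new):
        # Per-data-value flag marking where the transfer function was edited
        # (tf_* are value-indexed RGBA lookup tables).
        return np.any(tf_old != tf_new, axis=-1)

    def block_affected(block_min, block_max, changed):
        # A block is affected if any value in its [min, max] range maps to an
        # edited transfer-function entry.
        return bool(changed[block_min:block_max + 1].any())

    def photon_is_valid(trace_blocks, grid_min, grid_max, changed):
        # A photon trace can be reused only if every low-resolution grid block
        # it traversed is unaffected by the edit.
        return not any(
            block_affected(grid_min[b], grid_max[b], changed) for b in trace_blocks
        )
    ```

    A photon that traversed only unaffected blocks keeps its previous contribution; only the remainder is retraced.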
  • 32.
    Jönsson, Daniel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Falk, Martin
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Intuitive Exploration of Volumetric Data Using Dynamic Galleries2016In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 22, no 1, p. 896-905Article in journal (Refereed)
    Abstract [en]

    In this work we present a volume exploration method designed to be used by novice users and visitors to science centers and museums. The volumetric digitalization of artifacts in museums is of rapidly increasing interest as enhanced user experience through interactive data visualization can be achieved. This is, however, a challenging task since the vast majority of visitors are not familiar with the concepts commonly used in data exploration, such as mapping of visual properties from values in the data domain using transfer functions. Interacting in the data domain is an effective way to filter away undesired information but it is difficult to predict where the values lie in the spatial domain. In this work we make extensive use of dynamic previews instantly generated as the user explores the data domain. The previews allow the user to predict what effect changes in the data domain will have on the rendered image without being aware that visual parameters are set in the data domain. Each preview represents a subrange of the data domain where overview and details are given on demand through zooming and panning. The method has been designed with touch interfaces as the target platform for interaction. We provide a qualitative evaluation performed with visitors to a science center to show the utility of the approach.

    Download full text (pdf)
    fulltext
  • 33.
    Jönsson, Daniel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Historygrams: Enabling Interactive Global Illumination in Direct Volume Rendering using Photon Mapping2012In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 18, no 12, p. 2364-2371Article in journal (Refereed)
    Abstract [en]

    In this paper, we enable interactive volumetric global illumination by extending photon mapping techniques to handle interactive transfer function (TF) and material editing in the context of volume rendering. We propose novel algorithms and data structures for finding and evaluating the parts of a scene affected by these parameter changes, and thus support efficient updates of the photon map. In direct volume rendering (DVR) the ability to explore volume data using parameter changes, such as editable TFs, is of key importance. Advanced global illumination techniques are in most cases computationally too expensive, preventing the desired interactivity. Our technique decreases the amount of computation caused by parameter changes by introducing Historygrams, which allow us to efficiently reuse previously computed photon-media interactions. Along the viewing rays, we utilize properties of the light transport equations to subdivide a view-ray into segments and independently update them when invalid. Unlike segments of a view-ray, photon scattering events within the volumetric medium need to be sequentially updated. Using our Historygram approach, we can identify the first invalid photon interaction caused by a property change, and thus reuse all valid photon interactions. Combining these two novel concepts supports interactive editing of parameters when using volumetric photon mapping in the context of DVR. As a consequence, we can handle arbitrarily shaped and positioned light sources, arbitrary phase functions, bidirectional reflectance distribution functions, and multiple scattering, which has previously not been possible in interactive DVR.

  • 34.
    Jönsson, Daniel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Schön, Thomas
    Uppsala university, Sweden.
    Wrenninge, Magnus
    Department of Science and Technology, Pixar Animation Studios, 512174 Emeryville, California, United States.
    Direct Transmittance Estimation in Heterogeneous Participating Media Using Approximated Taylor Expansions2022In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 28, no 7, p. 2602-2614Article in journal (Refereed)
    Abstract [en]

    Evaluating the transmittance between two points along a ray is a key component in solving the light transport through heterogeneous participating media and entails computing an intractable exponential of the integrated medium's extinction coefficient. While algorithms for estimating this transmittance exist, there is a lack of theoretical knowledge about their behaviour, which also prevents new theoretically sound algorithms from being developed. For this purpose, we introduce a new class of unbiased transmittance estimators based on random sampling or truncation of a Taylor expansion of the exponential function. In contrast to classical tracking algorithms, these estimators are non-analogous to the physical light transport process and directly sample the underlying extinction function without performing incremental advancement. We present several versions of the new class of estimators, based on either importance sampling or Russian roulette, to provide finite unbiased estimators of the infinite Taylor series expansion. We also show that the well-known ratio tracking algorithm can be seen as a special case of the new class of estimators. Lastly, we conduct performance evaluations on both the central processing unit (CPU) and the graphics processing unit (GPU), and the results demonstrate that the new algorithms outperform traditional algorithms for heterogeneous media.

    Download full text (pdf)
    fulltext
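    The Russian-roulette variant mentioned in the abstract can be illustrated with a minimal toy sketch. This is an assumption-laden simplification, not the paper's estimator: the optical depth is treated here as a known scalar, whereas in practice the extinction integral is itself sampled along the ray.

    ```python
    import math
    import random

    def rr_transmittance(optical_depth, q=0.9, rng=random):
        # Unbiased single-sample estimate of exp(-x), x = optical_depth, via
        # Russian-roulette truncation of the Taylor series sum_k (-x)^k / k!.
        # Term k survives with probability q**k and is reweighted by q**-k,
        # so the expectation equals the full infinite series.
        x = optical_depth
        estimate = 0.0
        term = 1.0    # current Taylor term (-x)^k / k!
        weight = 1.0  # survival compensation 1 / q^k
        k = 0
        while True:
            estimate += weight * term
            if rng.random() >= q:  # terminate the series here
                return estimate
            k += 1
            term *= -x / k
            weight /= q
    ```

    Averaging many such samples converges to exp(-x); variance grows with optical depth, which is one reason the paper also considers importance sampling and relates the class to ratio tracking.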
  • 35.
    Kasten, Jens
    et al.
    Zuse Institute Berlin, Germany.
    Reininghaus, Jan
    Zuse Institute Berlin, Germany.
    Hotz, Ingrid
    Zuse Institute Berlin, Germany.
    Hege, Hans-Christian
    Zuse Institute Berlin, Germany.
    Two-dimensional Time-dependent Vortex Regions based on the Acceleration Magnitude2011In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 17, no 12, p. 2080-2087Article in journal (Refereed)
    Abstract [en]

    Acceleration is a fundamental quantity of flow fields that captures Galilean invariant properties of particle motion. Considering the magnitude of this field, minima represent characteristic structures of the flow that can be classified as saddle- or vortex-like. We made the interesting observation that vortex-like minima are enclosed by particularly pronounced ridges. This makes it possible to define boundaries of vortex regions in a parameter-free way. Utilizing scalar field topology, a robust algorithm can be designed to extract such boundaries. They can be arbitrarily shaped. An efficient tracking algorithm allows us to display the temporal evolution of vortices. Various vortex models are used to evaluate the method. We apply our method to two-dimensional model systems from computational fluid dynamics and compare the results to those arising from existing definitions.

  • 36.
    Kratz, Andrea
    et al.
    Zuse Institute Berlin, Germany.
    Baum, Daniel
    Zuse Institute Berlin, Germany.
    Hotz, Ingrid
    Zuse Institute Berlin, Germany.
    Anisotropic Sampling of Planar and Two-Manifold Domains for Texture Generation and Glyph Distribution2013In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 19, no 11, p. 1782-1794Article in journal (Refereed)
    Abstract [en]

    We present a new method for the generation of anisotropic sample distributions on planar and two-manifold domains. Most previous work that is concerned with aperiodic point distributions is designed for isotropically shaped samples. Methods focusing on anisotropic sample distributions are rare, and either they are restricted to planar domains, are highly sensitive to the choice of parameters, or they are computationally expensive. In this paper, we present a time-efficient approach for the generation of anisotropic sample distributions that only depends on intuitive design parameters for planar and two-manifold domains. We employ an anisotropic triangulation that serves as basis for the creation of an initial sample distribution as well as for a gravitational-centered relaxation. Furthermore, we present an approach for interactive rendering of anisotropic Voronoi cells as base element for texture generation. It represents a novel and flexible visualization approach to depict metric tensor fields that can be derived from general tensor fields as well as scalar or vector fields.

  • 37.
    Kreiser, Julian
    et al.
    Visual Computing Group, Ulm University, Ulm, Germany.
    Hann, Alexander
    Department of Internal Medicine I, Ulm University, Ulm, Germany.
    Zizer, Eugen
    Department of Internal Medicine I, Ulm University, Ulm, Germany.
    Ropinski, Timo
    Visual Computing Group, Ulm University, Ulm, Germany.
    Decision Graph Embedding for High-Resolution Manometry Diagnosis2018In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506 (Proceedings of IEEE SciVis 2017), Vol. 24, no 1, p. 873-882Article in journal (Refereed)
    Abstract [en]

    High-resolution manometry is an imaging modality which enables the categorization of esophageal motility disorders. Spatio-temporal pressure data along the esophagus is acquired using a tubular device and multiple test swallows are performed by the patient. Current approaches visualize these swallows as individual instances, despite the fact that aggregated metrics are relevant in the diagnostic process. Based on the current Chicago Classification, which serves as the gold standard in this area, we introduce a visualization supporting an efficient and correct diagnosis. To reach this goal, we propose a novel decision graph representing the Chicago Classification with workflow optimization in mind. Based on this graph, we are further able to prioritize the different metrics used during diagnosis and can exploit this prioritization in the actual data visualization. Thus, different disorders and their related parameters are directly represented and intuitively influence the appearance of our visualization. Within this paper, we introduce our novel visualization, justify the design decisions, and provide the results of a user study we performed with medical students as well as a domain expert. On top of the presented visualization, we further discuss how to derive a visual signature for individual patients that allows us for the first time to perform an intuitive comparison between subjects, in the form of small multiples.

  • 38.
    Kreiser, Julian
    et al.
    Ulm Univ, Germany.
    Hermosilla, Pedro
    Ulm Univ, Germany.
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Ulm Univ, Germany.
    Void Space Surfaces to Convey Depth in Vessel Visualizations2021In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 27, no 10, p. 3913-3925Article in journal (Refereed)
    Abstract [en]

    To enhance depth perception and thus data comprehension, additional depth cues are often used in 3D visualizations of complex vascular structures. There is a variety of different approaches described in the literature, ranging from chromadepth color coding over depth of field to glyph-based encodings. Unfortunately, the majority of existing approaches suffers from the same problem: As these cues are directly applied to the geometry's surface, the display of additional information on the vessel wall, such as other modalities or derived attributes, is impaired. To overcome this limitation we propose Void Space Surfaces, which utilize empty space in between vessel branches to communicate depth and their relative positioning. This allows us to enhance the depth perception of vascular structures without interfering with the spatial data and potentially superimposed parameter information. With this article, we introduce Void Space Surfaces, describe their technical realization, and show their application to various vessel trees. Moreover, we report the outcome of two user studies which we have conducted in order to evaluate the perceptual impact of Void Space Surfaces compared to existing vessel visualization techniques and discuss expert feedback.

  • 39.
    Kronander, Joel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Jönsson, Daniel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Löw, Joakim
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ljung, Patric
    Siemens.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Efficient Visibility Encoding for Dynamic Illumination in Direct Volume Rendering: -2012In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 18, no 3, p. 447-462Article in journal (Refereed)
    Abstract [en]

    We present an algorithm that enables real-time dynamic shading in direct volume rendering using general lighting, including directional lights, point lights and environment maps. Real-time performance is achieved by encoding local and global volumetric visibility using spherical harmonic (SH) basis functions stored in an efficient multi-resolution grid over the extent of the volume. Our method enables high frequency shadows in the spatial domain, but is limited to a low frequency approximation of visibility and illumination in the angular domain. In a first pass, Level Of Detail (LOD) selection in the grid is based on the current transfer function setting. This enables rapid on-line computation and SH projection of the local spherical distribution of visibility information. Using a piecewise integration of the SH coefficients over the local regions, the global visibility within the volume is then computed. By representing the light sources using their SH projections, the integral over lighting, visibility and isotropic phase functions can be efficiently computed during rendering. The utility of our method is demonstrated in several examples showing the generality and interactive performance of the approach.

    Download full text (pdf)
    FULLTEXT02
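    The identity that makes the final rendering step cheap in such SH encodings is that, in an orthonormal basis, the integral of a product of two spherical functions reduces to a dot product of their coefficient vectors. A minimal sketch restricted to SH bands l = 0 and 1, with hypothetical helper names and Monte Carlo projection:

    ```python
    import numpy as np

    def sh_basis(d):
        # Real spherical harmonics for bands l = 0 and 1 at unit direction d.
        x, y, z = d
        return np.array([
            0.28209479,      # Y_0^0
            0.48860251 * y,  # Y_1^-1
            0.48860251 * z,  # Y_1^0
            0.48860251 * x,  # Y_1^1
        ])

    def sh_project(fn, n=20000, seed=0):
        # Monte Carlo projection of a spherical function onto the SH basis.
        rng = np.random.default_rng(seed)
        v = rng.normal(size=(n, 3))
        v /= np.linalg.norm(v, axis=1, keepdims=True)  # uniform on the sphere
        vals = np.array([fn(d) for d in v])
        basis = np.stack([sh_basis(d) for d in v])
        return (4.0 * np.pi / n) * (vals @ basis)

    # Orthonormality makes the shading integral a dot product:
    #   integral L(w) * V(w) dw  ~=  L_coeffs @ V_coeffs
    ```

    Truncating after band 1 discards angular detail, mirroring the low-frequency angular approximation the abstract describes.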
  • 40.
    Lin, Haihan
    et al.
    University of Utah, USA.
    Akbaba, Derya
    University of Utah, USA.
    Meyer, Miriah
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Lex, Alexander
    University of Utah, USA.
    Data Hunches: Incorporating Personal Knowledge into Visualizations2023In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 29, no 1, p. 504-514Article in journal (Refereed)
    Abstract [en]

    The trouble with data is that it frequently provides only an imperfect representation of a phenomenon of interest. Experts who are familiar with their datasets will often make implicit, mental corrections when analyzing a dataset, or will be cautious not to be overly confident about their findings if caveats are present. However, personal knowledge about the caveats of a dataset is typically not incorporated in a structured way, which is problematic if others who lack that knowledge interpret the data. In this work, we define such analysts' knowledge about datasets as data hunches. We differentiate data hunches from uncertainty and discuss types of hunches. We then explore ways of recording data hunches, and, based on a prototypical design, develop recommendations for designing visualizations that support data hunches. We conclude by discussing various challenges associated with data hunches, including the potential for harm and challenges for trust and privacy. We envision that data hunches will empower analysts to externalize their knowledge, facilitate collaboration and communication, and support the ability to learn from others' data hunches.

    Download full text (pdf)
    fulltext
  • 41.
    Lin, Ming C.
    et al.
    Hong Kong University of Science and Technology, People's Republic of China.
    Hu, Shi-Min
    Hong Kong University of Science and Technology, People's Republic of China.
    Qu, Huamin
    Hong Kong University of Science and Technology, People's Republic of China.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology. Linköping University, Center for Medical Image Science and Visualization (CMIV). University of Oxford, England .
    Editorial Material: Untitled2013In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 19, no 10, p. 1605-1605Article in journal (Other academic)
    Abstract [en]

    n/a

  • 42.
    Lindemann, Florian
    et al.
    University of Münster.
    Ropinski, Timo
    Linköping University, Department of Science and Technology. Linköping University, The Institute of Technology.
    About the Influence of Illumination Models on Image Comprehension in Direct Volume Rendering2011In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 17, no 12, p. 1922-1931Article in journal (Refereed)
    Abstract [en]

    In this paper, we present a user study in which we have investigated the influence of seven state-of-the-art volumetric illumination models on the spatial perception of volume rendered images. Within the study, we have compared gradient-based shading with half angle slicing, directional occlusion shading, multidirectional occlusion shading, shadow volume propagation, spherical harmonic lighting as well as dynamic ambient occlusion. To evaluate these models, users had to solve three tasks relying on correct depth as well as size perception. Our motivation for these three tasks was to find relations between the used illumination model, user accuracy and the elapsed time. In an additional task, users had to subjectively judge the output of the tested models. After first reviewing the models and their features, we will introduce the individual tasks and discuss their results. We discovered statistically significant differences in the testing performance of the techniques. Based on these findings, we have analyzed the models and extracted those features which are possibly relevant for the improved spatial comprehension in a relational task. We believe that a combination of these distinctive features could pave the way for a novel illumination model, which would be optimized based on our findings.

  • 43.
    Lindholm, Stefan
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Jönsson, Daniel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Hansen, Charles
    School of Computing, University of Utah, USA.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, The Institute of Technology.
    Boundary Aware Reconstruction of Scalar Fields2014In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 20, no 12, p. 2447-2455Article in journal (Refereed)
    Abstract [en]

    In visualization, the combined role of data reconstruction and its classification plays a crucial role. In this paper we propose a novel approach that improves classification of different materials and their boundaries by combining information from the classifiers at the reconstruction stage. Our approach estimates the targeted materials’ local support before performing multiple material-specific reconstructions that prevent much of the misclassification traditionally associated with transitional regions and transfer function (TF) design. With respect to previously published methods our approach offers a number of improvements and advantages. For one, it does not rely on TFs acting on derivative expressions, therefore it is less sensitive to noisy data and the classification of a single material does not depend on specialized TF widgets or specifying regions in a multidimensional TF. Additionally, improved classification is attained without increasing TF dimensionality, which promotes scalability to multivariate data. These aspects are also key in maintaining low interaction complexity. The results are simple-to-achieve visualizations that better comply with the user’s understanding of discrete features within the studied object.

  • 44.
    Ljung, Patric
    et al.
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Winskog, Calle
    Persson, Anders
    Linköping University, Department of Medical and Health Sciences, Radiology. Linköping University, Faculty of Health Sciences. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Lundström, Claes
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Full Body Virtual Autopsies Using A State-of-the-art Volume Rendering Pipeline2006In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 12, no 5, p. 869-876Article in journal (Other academic)
    Abstract [en]

    This paper presents a procedure for virtual autopsies based on interactive 3D visualizations of large scale, high resolution data from CT-scans of human cadavers. The procedure is described using examples from forensic medicine and the added value and future potential of virtual autopsies is shown from a medical and forensic perspective. Based on the technical demands of the procedure, state-of-the-art volume rendering techniques are applied and refined to enable real-time, full body virtual autopsies involving gigabyte sized data on standard GPUs. The techniques applied include transfer function based data reduction using level-of-detail selection and multi-resolution rendering techniques. The paper also describes a data management component for large, out-of-core data sets and an extension to the GPU-based raycaster for efficient dual TF rendering. Detailed benchmarks of the pipeline are presented using data sets from forensic cases.

  • 45.
    Lu, Hsiao-Ying
    et al.
    University of California, Davis, USA.
    Fujiwara, Takanori
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Chang, Ming-Yi
    Fu Jen Catholic University, Taiwan.
    Fu, Yang-chih
    Institute of Sociology, Academia Sinica, Taiwan.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Ma, Kwan-Liu
    University of California, Davis, USA.
    Visual Analytics of Multivariate Networks with Representation Learning and Composite Variable Construction2024In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, p. 1-16Article in journal (Refereed)
    Abstract [en]

    Multivariate networks are commonly found in real-world data-driven applications. Uncovering and understanding the relations of interest in multivariate networks is not a trivial task. This paper presents a visual analytics workflow for studying multivariate networks to extract associations between different structural and semantic characteristics of the networks (e.g., what are the combinations of attributes largely relating to the density of a social network?). The workflow consists of a neural-network-based learning phase to classify the data based on the chosen input and output attributes, a dimensionality reduction and optimization phase to produce a simplified set of results for examination, and finally an interpreting phase conducted by the user through an interactive visualization interface. A key part of our design is a composite variable construction step that remodels nonlinear features obtained by neural networks into linear features that are intuitive to interpret. We demonstrate the capabilities of this workflow with multiple case studies on networks derived from social media usage and also evaluate the workflow with qualitative feedback from experts.

  • 46.
    Lundin Palmerius, Karljohan
    et al.
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Cooper, Matthew
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Haptic Rendering of Dynamic Volumetric Data2008In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 14, no 2, p. 263-276Article in journal (Refereed)
    Abstract [en]

    With current methods for volume haptics in scientific visualization, features in time-varying data can freely move straight through the haptic probe without generating any haptic feedback: the algorithms are simply not designed to handle variation with time but consider only the instantaneous configuration when the haptic feedback is calculated. This article introduces haptic rendering of dynamic volumetric data to provide a means for haptic exploration of dynamic behavior in volumetric data. We show how haptic feedback can be produced that is consistent with volumetric data moving within the virtual environment and with data that, in itself, evolves over time. Haptic interaction with time-varying data is demonstrated by allowing palpation of a computerized tomography sequence of a beating human heart.

    Download full text (pdf)
    fulltext
  • 47.
    Lundström, Claes
    et al.
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Ljung, Patric
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Local histograms for design of Transfer Functions in Direct Volume Rendering2006In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 12, no 6, p. 1570-1579Article in journal (Other academic)
    Abstract [en]

    Direct Volume Rendering (DVR) is of increasing diagnostic value in the analysis of data sets captured using the latest medical imaging modalities. The deployment of DVR in everyday clinical work, however, has so far been limited. One contributing factor is that current Transfer Function (TF) models can encode only a small fraction of the user's domain knowledge. In this paper, we use histograms of local neighborhoods to capture tissue characteristics. This allows domain knowledge on spatial relations in the data set to be integrated into the TF. As a first example, we introduce Partial Range Histograms in an automatic tissue detection scheme and present its effectiveness in a clinical evaluation. We then use local histogram analysis to perform a classification where the tissue-type certainty is treated as a second TF dimension. The result is an enhanced rendering where tissues with overlapping intensity ranges can be discerned without requiring the user to explicitly define a complex, multidimensional TF.
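    The spirit of this local-histogram classification, deriving a per-voxel certainty from how much of a voxel's neighborhood falls in a tissue's intensity range, can be sketched as follows (a toy version with hypothetical names; the paper's Partial Range Histograms and two-dimensional TF are richer than this):

    ```python
    import numpy as np

    def range_fraction(volume, low, high, radius=1):
        # For every voxel: the fraction of its (2*radius+1)^3 neighborhood whose
        # intensities fall inside [low, high], a simple certainty measure in the
        # spirit of local-histogram tissue classification.
        inside = ((volume >= low) & (volume <= high)).astype(float)
        pad = np.pad(inside, radius, mode="edge")
        out = np.zeros_like(inside)
        count = 0
        for dz in range(2 * radius + 1):
            for dy in range(2 * radius + 1):
                for dx in range(2 * radius + 1):
                    out += pad[dz:dz + volume.shape[0],
                               dy:dy + volume.shape[1],
                               dx:dx + volume.shape[2]]
                    count += 1
        return out / count
    ```

    Such a certainty value could then serve as a second transfer-function dimension, letting tissues with overlapping intensity ranges be separated.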

  • 48.
    Lundström, Claes
    et al.
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Ljung, Patric
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Persson, Anders
    Linköping University, Department of Medical and Health Sciences, Radiology. Linköping University, Faculty of Health Sciences. Östergötlands Läns Landsting, Centre for Medical Imaging, Department of Radiology in Linköping. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Uncertainty Visualization in Medical Volume Rendering Using Probabilistic Animation2007In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 13, no 6, p. 1648-1655Article in journal (Refereed)
    Abstract [en]

    Direct volume rendering has proved to be an effective visualization method for medical data sets and has reached widespread clinical use. The diagnostic exploration, in essence, corresponds to a tissue classification task, which is often complex and time-consuming. Moreover, a major problem is the lack of information on the uncertainty of the classification, which can have dramatic consequences for the diagnosis. In this paper, this problem is addressed by proposing animation methods to convey uncertainty in the rendering. The foundation is a probabilistic Transfer Function model which allows for direct user interaction with the classification. The rendering is animated by sampling the probability domain over time, which results in varying appearance for uncertain regions. A particularly promising application of this technique is a "sensitivity lens" applied to focus regions in the data set. The methods have been evaluated by radiologists in a study simulating the clinical task of stenosis assessment, in which the animation technique is shown to outperform traditional rendering in terms of assessment accuracy.
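The core mechanism described above, sampling the probability domain over time so that uncertain regions vary in appearance between frames, can be sketched as follows. This is an illustrative reduction under the assumption of a single per-voxel tissue probability; the function name and generator structure are not from the paper.

```python
import numpy as np

def animate_classification(prob, n_frames, rng=None):
    """Yield one binary classification per frame by sampling the per-voxel
    tissue probability. Voxels with p near 0 or 1 stay stable across frames;
    uncertain voxels (p near 0.5) flicker between classes, making the
    classification uncertainty visible in the animation."""
    rng = np.random.default_rng(rng)
    for _ in range(n_frames):
        yield rng.random(prob.shape) < prob
```

A "sensitivity lens" would apply this resampling only inside a user-defined focus region, leaving the rest of the rendering static.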

  • 49.
    Lundström, Claes
    et al.
    Linköping University, Center for Medical Image Science and Visualization, CMIV. Linköping University, Department of Science and Technology. Linköping University, The Institute of Technology.
    Rydell, Thomas
    Interact Institute, Norrköping.
    Forsell, Camilla
    Linköping University, Department of Science and Technology. Linköping University, The Institute of Technology.
    Persson, Anders
    Linköping University, Center for Medical Image Science and Visualization, CMIV. Linköping University, Department of Medical and Health Sciences, Radiology. Linköping University, Faculty of Health Sciences. Östergötlands Läns Landsting, Centre for Diagnostics, Department of Radiology in Linköping.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Multi-Touch Table System for Medical Visualization: Application to Orthopedic Surgery Planning2011In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 17, no 12, p. 1775-1784Article in journal (Refereed)
    Abstract [en]

    Medical imaging plays a central role in a vast range of healthcare practices. The usefulness of 3D visualizations has been demonstrated for many types of treatment planning. Nevertheless, full access to 3D renderings outside of the radiology department is still scarce even for many image-centric specialties. Our work stems from the hypothesis that this under-utilization is partly due to existing visualization systems not taking the prerequisites of this application domain fully into account. We have developed a medical visualization table intended to better fit the clinical reality. The overall design goals were twofold: similarity to a real physical situation and a very low learning threshold. This paper describes the development of the visualization table with a focus on key design decisions. The developed features include two novel interaction components for touch tables. A user study including five orthopedic surgeons demonstrates that the system is appropriate and useful for this application domain.

  • 50.
    Läthén, Gunnar
    et al.
    Linköping University, Center for Medical Image Science and Visualization, CMIV. Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Lindholm, Stefan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Center for Medical Image Science and Visualization, CMIV. Linköping University, The Institute of Technology.
    Lenz, Reiner
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Center for Medical Image Science and Visualization, CMIV. Linköping University, The Institute of Technology.
    Persson, Anders
    Linköping University, Department of Medical and Health Sciences, Radiology. Linköping University, Center for Medical Image Science and Visualization, CMIV. Linköping University, Faculty of Health Sciences.
    Borga, Magnus
    Linköping University, Center for Medical Image Science and Visualization, CMIV. Linköping University, Department of Biomedical Engineering, Medical Informatics. Linköping University, The Institute of Technology.
    Automatic Tuning of Spatially Varying Transfer Functions for Blood Vessel Visualization2012In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 18, no 12, p. 2345-2354Article in journal (Refereed)
    Abstract [en]

    Computed Tomography Angiography (CTA) is commonly used in clinical routine for diagnosing vascular diseases. The procedure involves the injection of a contrast agent into the blood stream to increase the contrast between the blood vessels and the surrounding tissue in the image data. CTA is often visualized with Direct Volume Rendering (DVR), where the enhanced image contrast is important for the construction of Transfer Functions (TFs). For increased efficiency, clinical routine heavily relies on preset TFs to simplify the creation of such visualizations for a physician. In practice, however, TF presets often do not yield optimal images due to variations in mixture concentration of contrast agent in the blood stream. In this paper we propose an automatic, optimization-based method that shifts TF presets to account for general deviations and local variations of the intensity of contrast enhanced blood vessels. Some of the advantages of this method are the following. It computationally automates large parts of a process that is currently performed manually. It performs the TF shift locally and can thus optimize larger portions of the image than is possible with manual interaction. The optimization criterion is defined using a well-known vesselness descriptor. The performance of the method is illustrated by clinically relevant CT angiography datasets, displaying both improved structural overviews of vessel trees and improved adaptation to local variations of contrast concentration.
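The optimization idea in this abstract can be illustrated with a simple 1D sketch: among a set of candidate TF shifts, choose the one that maximizes a vesselness-weighted opacity score, so that voxels flagged as vessel-like land inside the visible range of the preset TF. This is a hypothetical reduction for illustration; the paper's actual criterion, vesselness descriptor, and local optimization scheme are more elaborate.

```python
import numpy as np

def best_tf_shift(intensities, vesselness, tf_opacity, shifts):
    """Pick the intensity shift that maximizes vesselness-weighted opacity.

    intensities : per-voxel intensity values (HU in practice)
    vesselness  : per-voxel vessel likelihood in [0, 1]
    tf_opacity  : preset TF opacity as a function of (shifted) intensity
    shifts      : candidate shifts to evaluate (exhaustive scan here)
    """
    def score(shift):
        # Opacity that vessel-like voxels would receive under this shift.
        return float(np.sum(vesselness * tf_opacity(intensities - shift)))
    return max(shifts, key=score)
```

In the paper's setting, this kind of search would be run locally over portions of the volume, producing a spatially varying shift rather than a single global one.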
