liu.se: Search for publications in DiVA
Results 1-50 of 63
  • 1.
    Birkeland, A.
    et al.
    University of Bergen, Norway.
    Solteszova, V.
    University of Bergen, Norway; Christian Michelsen Research, Bergen, Norway.
    Honigmann, D.
    N22 Research and Technology Transfer, Wiener Neustadt, Austria.
    Gilja, O.H.
    Haukeland University Hospital, Bergen, Norway.
    Brekke, S.
    University of Bergen, Norway; Archer - The Well Company, Bergen, Norway.
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Viola, I.
    University of Bergen, Norway; Vienna University of Technology, Vienna, Austria.
    The ultrasound visualization pipeline, 2014. In: Mathematics and Visualization, ISSN 1612-3786, Vol. 37, p. 283-303. Article in journal (Refereed)
    Abstract [en]

    Radiology is one of the main tools in modern medicine. A large number of diseases, ailments and treatments utilize accurate images of the patient. Ultrasound is one of the most frequently used imaging modalities in medicine. Its high spatial resolution, interactive nature and non-invasiveness make it the first choice in many examinations. Image interpretation is one of ultrasound's main challenges. Much training is required to obtain a confident skill level in ultrasound-based diagnostics. State-of-the-art graphics techniques are needed to provide meaningful visualizations of ultrasound in real-time. In this paper we present the process pipeline for ultrasound visualization, including an overview of the tasks performed in the specific steps. To provide insight into the trends of ultrasound visualization research, we have selected a set of significant publications and divided them into a technique-based taxonomy covering the topics pre-processing, segmentation, registration, rendering and augmented reality. For the different technique types we discuss the differences between ultrasound-based techniques and techniques for other modalities.

  • 2.
    Birkeland, Åsmund
    et al.
    Department of Informatics, University of Bergen, Norway.
    Solteszova, Veronika
    Hönigmann, Dieter
    Helge Gilja, Odd
    Brekke, Svein
    Ropinski, Timo
    Viola, Ivan
    The Ultrasound Visualization Pipeline - A Survey, 2012. Other (Other academic)
    Abstract [en]

    Ultrasound is one of the most frequently used imaging modalities in medicine. Its high spatial resolution, interactive nature and non-invasiveness make it the first choice in many examinations. Image interpretation is one of ultrasound's main challenges. Much training is required to obtain a confident skill level in ultrasound-based diagnostics. State-of-the-art graphics techniques are needed to provide meaningful visualizations of ultrasound in real-time. In this paper we present the process pipeline for ultrasound visualization, including an overview of the tasks performed in the specific steps. To provide insight into the trends of ultrasound visualization research, we have selected a set of significant publications and divided them into a technique-based taxonomy covering the topics pre-processing, segmentation, registration, rendering and augmented reality. For the different technique types we discuss the differences between ultrasound-based techniques and techniques for other modalities.

  • 3.
    Bock, Alexander
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kleiner, A.
    iRobot, Pasadena, CA, United States.
    Lundberg, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    An interactive visualization system for urban search & rescue mission planning, 2014. In: 12th IEEE International Symposium on Safety, Security and Rescue Robotics, SSRR 2014 - Symposium Proceedings, Institute of Electrical and Electronics Engineers Inc., 2014, no. 7017652. Conference paper (Refereed)
    Abstract [en]

    We present a visualization system for incident commanders in urban search and rescue scenarios that supports the inspection of, and access path planning in, post-disaster structures. Utilizing point cloud data acquired from unmanned robots, the system allows the assessment of automatically generated paths, whose computation is based on varying risk factors, in an immersive, interactive 3D environment. The incident commander interactively annotates and reevaluates the acquired point cloud based on live feedback. We describe the design considerations and technical realization, and discuss the results of an expert evaluation that we conducted to assess our system.
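The abstract above does not specify how the risk-based paths are computed; as a rough, hypothetical illustration of how varying risk factors can drive access-path generation, here is a minimal Dijkstra sketch over a 2D risk grid (function names, the grid representation, and the `risk_weight` parameter are all invented for illustration, not taken from the paper):

```python
import heapq

def plan_path(risk, start, goal, risk_weight=5.0):
    """Dijkstra over a 2D grid; edge cost = step length + weighted cell risk.

    risk: 2D list of per-cell risk factors in [0, 1] (hypothetical input).
    Returns the lowest-cost path as a list of (row, col) cells, or None.
    """
    rows, cols = len(risk), len(risk[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            # Walk predecessors back to the start to recover the path.
            path = [cell]
            while cell in prev:
                cell = prev[cell]
                path.append(cell)
            return path[::-1]
        if d > dist.get(cell, float("inf")):
            continue  # stale queue entry
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 1.0 + risk_weight * risk[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(pq, (nd, (nr, nc)))
    return None
```

Raising `risk_weight` makes the planner detour around risky cells even when that lengthens the path, which is the kind of trade-off the system presents to the incident commander.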

  • 4.
    Bock, Alexander
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kleiner, Alexander
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems. Linköping University, The Institute of Technology.
    Lundberg, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Supporting Urban Search & Rescue Mission Planning through Visualization-Based Analysis, 2014. In: Proceedings of the Vision, Modeling, and Visualization Conference 2014, Eurographics - European Association for Computer Graphics, 2014. Conference paper (Refereed)
    Abstract [en]

    We propose a visualization system for incident commanders in urban search & rescue scenarios that supports access path planning for post-disaster structures. Utilizing point cloud data acquired from unmanned robots, we provide methods for assessment of automatically generated paths. As data uncertainty and a priori unknown information make fully automated systems impractical, we present a set of viable access paths, based on varying risk factors, in a 3D environment combined with the visual analysis tools enabling informed decisions and trade-offs. Based on these decisions, a responder is guided along the path by the incident commander, who can interactively annotate and reevaluate the acquired point cloud to react to the dynamics of the situation. We describe design considerations for our system, technical realizations, and discuss the results of an expert evaluation.

  • 5.
    Bock, Alexander
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Lang, Norbert
    St. Barbara Hospital, Hamm, Germany.
    Evangelista, Gianpaolo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Lehrke, Ralph
    St. Barbara Hospital, Hamm, Germany.
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Guiding Deep Brain Stimulation Interventions by Fusing Multimodal Uncertainty Regions, 2013. Conference paper (Other academic)
    Abstract [en]

    Deep Brain Stimulation (DBS) is a surgical intervention that is known to reduce or eliminate the symptoms of common movement disorders, such as Parkinson's disease, dystonia, or tremor. During the intervention the surgeon places electrodes inside the patient's brain to stimulate specific regions. Since these regions span only a couple of millimeters, and electrode misplacement has severe consequences, reliable and accurate navigation is of great importance. Usually the surgeon relies on fused CT and MRI data sets, as well as direct feedback from the patient. More recently Microelectrode Recordings (MER), which support navigation by measuring the electric field of the patient's brain, are also used. We propose a visualization system that fuses the different modalities: imaging data, MER and patient checks, as well as the related uncertainties, in an intuitive way to present placement-related information in a consistent view with the goal of supporting the surgeon in the final placement of the stimulating electrode. We will describe the design considerations for our system, the technical realization, present the outcome of the proposed system, and provide an evaluation.

  • 6.
    Bock, Alexander
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Lang, Norbert
    St. Barbara Hospital, Hamm, Germany.
    Evangelista, Gianpaolo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Lehrke, Ralph
    St. Barbara Hospital, Hamm, Germany.
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Supporting Deep Brain Stimulation Interventions by Fusing Microelectrode Recordings with Imaging Data, 2012. Conference paper (Refereed)
  • 7.
    Bock, Alexander
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Mays, M. Leila
    NASA Goddard Space Flight Center, Greenbelt, MD, USA.
    Rastaetter, Lutz
    NASA Goddard Space Flight Center, Greenbelt, MD, USA.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    VCMass: A Framework for Verification of Coronal Mass Ejection Ensemble Simulations, 2014. Conference paper (Refereed)
    Abstract [en]

    Supporting the growing field of space weather forecasting, we propose a framework to analyze ensemble simulations of coronal mass ejections. As the current simulation technique requires manual input, uncertainty is introduced into the simulation pipeline, which leads to inaccurate predictions. Using our system, the analyst can compare ensemble members against ground truth data (arrival time and geo-effectivity) as well as information derived from satellite imagery. The simulations can be compared on a global basis, based on time-resolved quality measures, and as a 3D volumetric rendering with embedded satellite imagery in a multi-view setup. This flexible framework provides the expert with the tools to increase knowledge about the as yet not fully understood principles behind the formation of coronal mass ejections.

  • 8.
    Bock, Alexander
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Pembroke, Asher
    NASA Goddard Space Flight Center, USA.
    Mays, M. Leila
    NASA Goddard Space Flight Center, USA.
    Rastaetter, Lutz
    NASA Goddard Space Flight Center, USA.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Ropinski, Timo
    Ulm University, Germany.
    Visual Verification of Space Weather Ensemble Simulations, 2015. In: 2015 IEEE Scientific Visualization Conference (SciVis), IEEE, 2015, p. 17-24. Conference paper (Refereed)
    Abstract [en]

    We propose a system to analyze and contextualize simulations of coronal mass ejections. As current simulation techniques require manual input, uncertainty is introduced into the simulation pipeline, leading to inaccurate predictions that can be mitigated through ensemble simulations. We provide the space weather analyst with a multi-view system providing visualizations to: 1. compare ensemble members against ground truth measurements, 2. inspect time-dependent information derived from optical flow analysis of satellite images, and 3. combine satellite images with a volumetric rendering of the simulations. This three-tier workflow provides experts with tools to discover correlations between errors in predictions and simulation parameters, thus increasing knowledge about the evolution and propagation of coronal mass ejections that pose a danger to Earth and interplanetary travel.
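As a toy illustration of the first comparison step above (ensemble members vs. a ground-truth arrival time), here is a hedged sketch; the function names and the ranking-by-absolute-error heuristic are invented for illustration and are not the paper's actual quality measures:

```python
from datetime import datetime, timedelta

def arrival_errors_hours(predicted, observed):
    """Signed error (hours) of each ensemble member's predicted CME
    arrival time relative to the observed arrival; positive = late."""
    return [(p - observed).total_seconds() / 3600.0 for p in predicted]

def rank_members(predicted, observed):
    """Order ensemble member indices by absolute arrival-time error,
    best-matching member first."""
    errs = arrival_errors_hours(predicted, observed)
    return sorted(range(len(errs)), key=lambda i: abs(errs[i]))
```

In a multi-view system, such per-member errors could then be correlated with the input parameters of each ensemble member.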

  • 9.
    Bock, Alexander
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Sundén, Erik
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Liu, Bingchen
    University of Auckland, New Zealand .
    Wuensche, Burkhard
    University of Auckland, New Zealand .
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Coherency-Based Curve Compression for High-Order Finite Element Model Visualization, 2012. In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 18, no. 12, p. 2315-2324. Article in journal (Refereed)
    Abstract [en]

    Finite element (FE) models are frequently used in engineering and life sciences within time-consuming simulations. In contrast with the regular grid structure facilitated by volumetric data sets, as used in medicine or geosciences, FE models are defined over a non-uniform grid. Elements can have curved faces and their interior can be defined through high-order basis functions, which pose additional challenges when visualizing these models. During ray-casting, the uniformly distributed sample points along each viewing ray must be transformed into the material space defined within each element. The computational complexity of this transformation makes a straightforward approach inadequate for interactive data exploration. In this paper, we introduce a novel coherency-based method which supports the interactive exploration of FE models by decoupling the expensive world-to-material space transformation from the rendering stage, thereby allowing it to be performed within a precomputation stage. Therefore, our approach computes view-independent proxy rays in material space, which are clustered to facilitate data reduction. During rendering, these proxy rays are accessed, and it becomes possible to visually analyze high-order FE models at interactive frame rates, even when they are time-varying or consist of multiple modalities. Within this paper, we provide the necessary background about the FE data, describe our decoupling method, and introduce our interactive rendering algorithm. Furthermore, we provide visual results and analyze the error introduced by the presented approach.

  • 10.
    Bruckner, Stefan
    et al.
    Department of Informatics, University of Bergen, Bergen, Norway.
    Isenberg, Tobias
    AVIZ, INRIA, Saclay, France.
    Ropinski, Timo
    Institute of Media Informatics / Visual Computing Research Group, Ulm University, Ulm, Germany.
    Wiebel, Alexander
    Department of Computer Science, Hochschule Worms, 52788 Worms, Germany.
    A Model of Spatial Directness in Interactive Visualization, 2018. In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506. Article in journal (Refereed)
    Abstract [en]

    We discuss the concept of directness in the context of spatial interaction with visualization. In particular, we propose a model that allows practitioners to analyze and describe the spatial directness of interaction techniques, ultimately to be able to better understand interaction issues that may affect usability. To reach these goals, we distinguish between different types of directness. Each type of directness depends on a particular mapping between different spaces, for which we consider the data space, the visualization space, the output space, the user space, the manipulation space, and the interaction space. In addition to the introduction of the model itself, we also show how to apply it to several real-world interaction scenarios in visualization, and thus discuss the resulting types of spatial directness, without recommending either more direct or more indirect interaction techniques. In particular, we will demonstrate descriptive and evaluative usage of the proposed model, and also briefly discuss its generative usage.

  • 11.
    Diepenbrock, Stefan
    et al.
    University of Münster.
    Praßni, Jörg-Stefan
    University of Münster.
    Lindemann, Florian
    University of Münster.
    Bothe, Hans-Werner
    University Hospital Münster.
    Ropinski, Timo
    Linköping University, Department of Science and Technology. Linköping University, The Institute of Technology.
    2010 IEEE Visualization Contest Winner: Interactive Planning for Brain Tumor Resections, 2011. In: IEEE Computer Graphics and Applications, ISSN 0272-1716, E-ISSN 1558-1756, Vol. 31, no. 5, p. 6-13. Article in journal (Other academic)
    Abstract [en]

    n/a

  • 12.
    Diepenbrock, Stefan
    et al.
    University of Münster, Germany.
    Praßni, Jörg-Stefan
    University of Münster, Germany.
    Lindemann, Florian
    University of Münster, Germany.
    Bothe, Hans-Werner
    University Hospital Münster, Germany.
    Ropinski, Timo
    University of Münster, Germany.
    Interactive Visualization Techniques for Neurosurgery Planning, 2011. Conference paper (Other academic)
    Abstract [en]

    We present concepts for pre-operative planning of brain tumor resections. The proposed system uses a combination of traditional and novel visualization techniques rendered in real-time on modern GPUs in order to support neurosurgeons during intervention planning. A set of multimodal 2D and 3D renderings conveys the relation between the lesion and the various structures at risk and also depicts data uncertainty. To facilitate efficient interactions while providing a comprehensible visualization, all employed views are linked. Furthermore, the system allows the surgeon to interactively define the access path by clicking in the 3D views as well as to perform distance measurements in 2D and 3D.

  • 13.
    Diepenbrock, Stefan
    et al.
    University of Münster, Germany.
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    From Imprecise User Input to Precise Vessel Segmentations, 2012. In: Eurographics Workshop on Visual Computing for Biology and Medicine, VCBM 2012, The Eurographics Association, 2012, p. 65-72. Conference paper (Refereed)
    Abstract [en]

    Vessel segmentation is an important prerequisite for many medical applications. While automatic vessel segmentation is an active field of research, interaction and visualization techniques for semi-automatic solutions have received far less attention. Nevertheless, since automatic techniques do not generally achieve perfect results, interaction is necessary. Especially for tasks that require an in-detail inspection or analysis of the shape of vascular structures, precise segmentations are essential. However, in many cases these can only be generated by incorporating expert knowledge. In this paper we propose a visual vessel segmentation system that allows the user to interactively generate vessel segmentations. To this end, we employ multiple linked views which allow the user to assess different aspects of the segmentation and which depict its different quality metrics. Based on these quality metrics, the user is guided, can assess the segmentation quality in detail, and can modify the segmentation accordingly. One common modification is the editing of branches, for which we propose a semi-automatic sketch-based interaction metaphor. Additionally, the user can also influence the shape of the vessel wall or the centerline through sketching. To assess the value of our system we discuss feedback from medical experts and have performed a thorough evaluation.

  • 14.
    Diepenbrock, Stefan
    et al.
    University of Münster, Germany.
    Ropinski, Timo
    University of Münster, Germany.
    Hinrichs, Klaus
    University of Münster, Germany.
    Context-aware volume navigation, 2011. In: Pacific Visualization Symposium (PacificVis), 2011 IEEE, IEEE, 2011, p. 11-18. Conference paper (Refereed)
    Abstract [en]

    The trackball metaphor is exploited in many applications where volumetric data needs to be explored. Although it provides an intuitive way to inspect the overall structure of objects of interest, an in-detail inspection can be tedious, or, when cavities occur, even impossible. Therefore, we propose a context-aware navigation technique for the exploration of volumetric data. While navigation techniques for polygonal data require information about the rendered geometry, this strategy is not sufficient in the area of volume rendering. Since rendering parameters, e.g., the transfer function, have a strong influence on the visualized structures, they also affect the features to be explored. To compensate for this effect we propose a novel image-based navigation approach for volumetric data. While being intuitive to use, the proposed technique allows the user to perform complex navigation tasks, in particular to get an overview as well as to perform an in-detail inspection without any navigation mode switches. The technique can be easily integrated into raycasting-based volume renderers, needs no extra data structures, and is independent of the data set as well as the rendering parameters. We discuss the underlying concepts, explain how to enable navigation at interactive frame rates using OpenCL, and evaluate its usability as well as its performance.
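Image-based navigation of this kind typically needs per-pixel depth to turn a mouse position into an anchor on the visible structure. As a generic sketch of that one building block (standard depth-buffer unprojection, not the paper's algorithm; names and conventions are illustrative and assume an OpenGL-style [0, 1] depth range):

```python
import numpy as np

def unproject(px, py, depth, inv_view_proj, width, height):
    """Map a pixel (px, py) and its [0, 1] depth-buffer value back to
    world space, e.g. to pick a rotation pivot on the visible surface."""
    # Pixel -> normalized device coordinates (y flipped, depth to [-1, 1]).
    ndc = np.array([2.0 * px / width - 1.0,
                    1.0 - 2.0 * py / height,
                    2.0 * depth - 1.0,
                    1.0])
    world = inv_view_proj @ ndc       # back through projection * view
    return world[:3] / world[3]       # perspective divide
```

With the pivot fixed to the structure under the cursor, rotation and zoom can adapt to the visualized features rather than to a fixed volume center.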

  • 15.
    Duran Rosich, David
    et al.
    ViRVIG Group, UPC Barcelona, Barcelona, Spain.
    Hermosilla, Pedro
    Visual Computing Group, U. Ulm, Ulm, Germany.
    Ropinski, Timo
    Visual Computing Group, U. Ulm, Ulm, Germany.
    Kozlikova, Barbora
    Masaryk University, Brno, Czech Republic.
    Vinacua, Àlvar
    ViRVIG Group, UPC Barcelona, Barcelona, Spain.
    Vazquez, Pere-Pau
    ViRVIG Group, UPC Barcelona, Barcelona, Spain.
    Visualization of Large Molecular Trajectories, 2018. In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506. Article in journal (Refereed)
    Abstract [en]

    The analysis of protein-ligand interactions is a time-intensive task. Researchers have to analyze multiple physico-chemical properties of the protein at once and combine them to derive conclusions about the protein-ligand interplay. Typically, several charts are inspected, and 3D animations can be played side-by-side to obtain a deeper understanding of the data. With the advances in simulation techniques, larger and larger datasets are available, with up to hundreds of thousands of steps. Unfortunately, such large trajectories are very difficult to investigate with traditional approaches. Therefore, the need for special tools that facilitate inspection of these large trajectories becomes substantial. In this paper, we present a novel system for visual exploration of very large trajectories in an interactive and user-friendly way. Several visualization motifs are automatically derived from the data to give the user the information about interactions between protein and ligand. Our system offers specialized widgets to ease and accelerate data inspection and navigation to interesting parts of the simulation. The system is suitable also for simulations where multiple ligands are involved. We have tested the usefulness of our tool on a set of datasets obtained from protein engineers, and we describe the expert feedback.

  • 16.
    Englund, Rickard
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Quantitative and Qualitative Analysis of the Perception of Semi-Transparent Structures in Direct Volume Rendering, 2018. In: Computer Graphics Forum, ISSN 0167-7055, E-ISSN 1467-8659, Vol. 37, no. 6, p. 174-187. Article in journal (Refereed)
    Abstract [en]

    Direct Volume Rendering (DVR) provides the possibility to visualize volumetric data sets as they occur in many scientific disciplines. In DVR, semi-transparency is used to convey the complexity of the data. Unfortunately, the ambiguities inherent to semi-transparent representations make spatial comprehension of the data challenging. Accordingly, many techniques have been introduced to enhance the spatial comprehension of DVR images. In this paper, we present our findings obtained from two evaluations investigating the perception of semi-transparent structures in volume rendered images. We have conducted a user evaluation in which we compared standard DVR with five techniques previously proposed to enhance the spatial comprehension of DVR images. In this study, we investigated the perceptual performance of these techniques and compared them against each other in a large-scale quantitative user study with 300 participants. Each participant completed micro-tasks designed such that the aggregated feedback gives insight into how well these techniques aid the user in perceiving the depth and shape of objects. To further clarify the findings, we conducted a qualitative evaluation in which we interviewed three experienced visualization researchers, in order to identify the benefits and shortcomings of the individual techniques.

  • 17.
    Englund, Rickard
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ultrasound Surface Extraction Using Radial Basis Functions, 2014. In: Advances in Visual Computing: 10th International Symposium, ISVC 2014, Las Vegas, NV, USA, December 8-10, 2014, Proceedings, Part II / [ed] George Bebis, Springer Publishing Company, 2014, Vol. 8888, p. 163-172. Conference paper (Refereed)
    Abstract [en]

    Data acquired from ultrasound examinations is of interest not only for the physician, but also for the patient. While the physician uses the ultrasound data for diagnostic purposes, the patient might be more interested in beautiful images, for instance in the case of prenatal imaging. Ultrasound data is noisy by nature, and visually compelling 3D renderings are not always trivial to produce. This paper presents a technique which enables the extraction of a smooth surface mesh from ultrasound data by combining previous research in ultrasound processing with research in point cloud surface reconstruction. After filtering the ultrasound data using Variational Classification, we extract a set of surface points. This set of points is then used to train an Adaptive Compactly Supported Radial Basis Function system, a technique for surface reconstruction of noisy laser scan data. The resulting technique can be used to extract surfaces with adjustable smoothness and resolution, and has been tested on various ultrasound datasets.
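The Adaptive Compactly Supported RBF system referenced above is not reproduced here; as a generic sketch of the underlying idea (an implicit surface defined as the zero level set of an RBF interpolant fitted to signed-distance constraints at surface and offset points), using a globally supported Gaussian kernel for simplicity rather than the paper's adaptive compactly supported basis:

```python
import numpy as np

def fit_rbf_implicit(centers, values, eps=2.0):
    """Fit f(x) = sum_i w_i * exp(-eps * |x - c_i|^2) so that
    f(centers[i]) == values[i]; the surface is the zero level set of f.

    centers: (n, d) constraint points (on-surface points get value 0;
    off-surface offset points get their signed distance).
    """
    # Pairwise squared distances -> symmetric positive-definite kernel matrix.
    d2 = ((centers[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    weights = np.linalg.solve(np.exp(-eps * d2), values)

    def f(x):
        q2 = ((np.atleast_2d(x)[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-eps * q2) @ weights

    return f
```

The zero set of the fitted `f` can then be polygonized (e.g. with marching cubes) to obtain a smooth mesh; compactly supported kernels, as in the paper's approach, make the linear system sparse and the evaluation local.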

  • 18.
    Englund, Rickard
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Ropinski, Timo
    Visual Computing Group, Ulm University, Germany.
    Hotz, Ingrid
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Coherence Maps for Blood Flow Exploration, 2016. In: VCBM 16: Eurographics Workshop on Visual Computing for Biology and Medicine, Eurographics - European Association for Computer Graphics, 2016, p. 79-88. Conference paper (Refereed)
    Abstract [en]

    Blood flow data from direct measurements (4D flow MRI) or numerical simulations opens new possibilities for understanding the development of cardiac diseases. However, before this new data can be used in clinical studies or for diagnosis, it is important to develop a notion of the characteristics of typical flow structures. To support this process we developed a novel blood flow clustering and exploration method. The method builds on the concept of coherent flow structures. Coherence maps for cross-sectional slices are defined to show the overall degree of coherence of the flow. In coherent regions the method summarizes the dominant blood flow using a small number of pathline representatives. In contrast to other clustering approaches, the clustering is restricted to coherent regions; pathlines with low coherence values, which are not suitable for clustering, are thus not forced into clusters. The coherence map is based on the Finite-time Lyapunov Exponent (FTLE). It is created on selected planes in the inflow and outflow areas of a region of interest. The FTLE value measures the rate of separation of pathlines originating from such a plane. In contrast to previous work using FTLE, we do not focus on separating extremal lines but on local minima and regions of low FTLE values to extract coherent flow. The coherence map and the extracted clusters serve as the basis for flow exploration. The extracted clusters can be selected and inspected individually. Their flow rate and coherence provide a measure of their significance. Switching off clusters reduces the amount of occlusion and reveals the remaining part of the flow. The non-coherent regions can also be explored by interactive manual pathline seeding in the coherence map.
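The FTLE underlying the coherence map can be sketched generically; this is the standard textbook definition (largest singular value of the flow-map gradient), not the authors' implementation, and the flow map is assumed to be given, e.g. from pathline integration of the measured velocity field:

```python
import numpy as np

def ftle(flow_map, x, y, T, h=1e-4):
    """Finite-time Lyapunov exponent at seed (x, y) for a 2D flow map
    phi: (x, y) -> (x', y') advecting a particle over time T.

    Estimates the flow-map gradient with central differences and returns
    (1/|T|) * ln(largest singular value of the gradient).
    """
    fx1 = np.array(flow_map(x + h, y)); fx0 = np.array(flow_map(x - h, y))
    fy1 = np.array(flow_map(x, y + h)); fy0 = np.array(flow_map(x, y - h))
    J = np.column_stack([(fx1 - fx0) / (2 * h), (fy1 - fy0) / (2 * h)])
    s_max = np.linalg.svd(J, compute_uv=False)[0]
    return np.log(s_max) / abs(T)
```

Evaluating this on a seeding plane yields the map in which the paper's coherent regions appear as areas of low FTLE values.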

  • 19.
    Etiene, Tiago
    et al.
    University of Utah, UT 84112, USA.
    Jönsson, Daniel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Scheidegger, Carlos
    AT&T Labs Research, NJ 07932, USA.
    Comba, Joao L. D.
    Federal University of Rio Grande do Sul, Brazil.
    Gustavo Nonato, Luis
    University of São Paulo, Brazil.
    Kirby, Robert M.
    University of Utah, UT 84112, USA.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Silva, Claudio T.
    NYU, NY 11201, USA.
    Verifying Volume Rendering Using Discretization Error Analysis2014In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 20, no 1, p. 140-154Article in journal (Refereed)
    Abstract [en]

    We propose an approach for verification of volume rendering correctness based on an analysis of the volume rendering integral, the basis of most DVR algorithms. With respect to the most common discretization of this continuous model (Riemann summation), we make assumptions about the impact of parameter changes on the rendered results and derive convergence curves describing the expected behavior. Specifically, we progressively refine the number of samples along the ray, the grid size, and the pixel size, and evaluate how the errors observed during refinement compare against the expected approximation errors. We derive the theoretical foundations of our verification approach, explain how to realize it in practice, and discuss its limitations. We also report the errors identified by our approach when applied to two publicly available volume rendering packages.
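    The refinement idea at the core of this verification approach can be illustrated with a minimal sketch: front-to-back compositing as the common Riemann-sum discretization of the emission-absorption volume rendering integral, evaluated with a progressively refined number of samples along the ray. For constant density and colour the integral has the analytic solution c·(1 − e^(−τD)), so the observed error can be compared against the expected first-order convergence. All names below are illustrative, not taken from the paper.

```python
import math

def dvr_riemann(tau, color, D, n):
    """Front-to-back compositing: the common Riemann-sum discretization
    of the emission-absorption volume rendering integral with n samples
    over a ray of length D."""
    dt = D / n
    C, alpha = 0.0, 0.0
    for i in range(n):
        s = i * dt
        a = min(tau(s) * dt, 1.0)           # first-order segment opacity
        C += (1.0 - alpha) * a * color(s)   # weighted by remaining transparency
        alpha += (1.0 - alpha) * a
    return C

# Constant tau and color give the analytic result I = c * (1 - exp(-tau*D)),
# so the discretization error is directly measurable while the sample
# count along the ray is progressively refined.
tau0, c0, D = 0.5, 1.0, 2.0
exact = c0 * (1.0 - math.exp(-tau0 * D))
errors = [abs(dvr_riemann(lambda s: tau0, lambda s: c0, D, n) - exact)
          for n in (16, 32, 64, 128)]
# errors shrink roughly linearly in the step size (first-order convergence)
```

Doubling the sample count roughly halves the error; a deviation from such an expected convergence curve is exactly the kind of signal the verification approach looks for.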

  • 20.
    Hadwiger, Markus
    et al.
    VRVis Research Center, Vienna, Austria.
    Ljung, Patric
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Siemens Corporate Research, Princeton, USA.
    Rezk Salama, Christof
    University of Siegen, Germany.
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. University of Münster, Germany.
    Advanced illumination techniques for GPU volume raycasting2008In: ACM Siggraph Asia 2008 Courses, 2008, p. 1-11Conference paper (Refereed)
    Abstract [en]

    Volume raycasting techniques are important for both visual arts and visualization. They allow an efficient generation of visual effects and the visualization of scientific data obtained by tomography or numerical simulation. Thanks to their flexibility, experts agree that GPU-based raycasting is the state-of-the-art technique for interactive volume rendering. It will most likely replace existing slice-based techniques in the near future. Volume rendering techniques are also effective for the direct rendering of implicit surfaces used for soft body animation and constructive solid geometry.

    The lecture starts off with an in-depth introduction to the concepts behind GPU-based ray-casting to provide a common base for the following parts. The focus of this course is on advanced illumination techniques which approximate the physically-based light transport more convincingly. Such techniques include interactive implementations of soft and hard shadows, ambient occlusion, and simple Monte-Carlo-based approaches to global illumination, including translucency and scattering. With the proposed techniques, users are able to interactively create convincing images from volumetric data whose visual quality goes far beyond traditional approaches. The optical properties in participating media are defined using the phase function. Many approximations to the physically based light transport applied for rendering natural phenomena such as clouds or smoke assume a rather homogeneous phase function model. For rendering volumetric scans, on the other hand, different phase function models are required to account for both surface-like structures and fuzzy boundaries in the data. Using volume rendering techniques, artists who create medical visualization for science magazines may now work on tomographic scans directly, without the necessity to fall back to creating polygonal models of anatomical structures.

  • 21.
    Hadwiger, Markus
    et al.
    VRVis Research Center, Vienna, Austria.
    Ljung, Patric
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Siemens Corporate Research, Princeton, USA.
    Rezk-Salama, Christof
    University of Siegen, Germany.
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. University of Münster, Germany.
    Advanced Illumination Techniques for GPU-Based Volume Raycasting2009Other (Other academic)
    Abstract [en]

    Volume raycasting techniques are important for both visual arts and visualization. They allow an efficient generation of visual effects and the visualization of scientific data obtained by tomography or numerical simulation. Thanks to their flexibility, experts agree that GPU-based raycasting is the state-of-the-art technique for interactive volume rendering. It will most likely replace existing slice-based techniques in the near future. Volume rendering techniques are also effective for the direct rendering of implicit surfaces used for soft body animation and constructive solid geometry.

    The lecture starts off with an in-depth introduction to the concepts behind GPU-based ray-casting to provide a common base for the following parts. The focus of this course is on advanced illumination techniques which approximate the physically-based light transport more convincingly. Such techniques include interactive implementations of soft and hard shadows, ambient occlusion, and simple Monte-Carlo-based approaches to global illumination, including translucency and scattering. With the proposed techniques, users are able to interactively create convincing images from volumetric data whose visual quality goes far beyond traditional approaches. The optical properties in participating media are defined using the phase function. Many approximations to the physically based light transport applied for rendering natural phenomena such as clouds or smoke assume a rather homogeneous phase function model. For rendering volumetric scans, on the other hand, different phase function models are required to account for both surface-like structures and fuzzy boundaries in the data. Using volume rendering techniques, artists who create medical visualization for science magazines may now work on tomographic scans directly, without the necessity to fall back to creating polygonal models of anatomical structures.

  • 22.
    Henzler, Philipp
    et al.
    Ulm University, Ulm, Germany.
    Rasche, Volker
    Ulm University, Ulm, Germany.
    Ropinski, Timo
    Ulm University, Ulm, Germany.
    Ritschel, Tobias
    University College London, London, United Kingdom.
    Single-image Tomography: 3D Volumes from 2D Cranial X-Rays2018In: Computer Graphics Forum (Proceedings of Eurographics 2018), Vol. 37, no 2, p. 377-388Article in journal (Refereed)
    Abstract [en]

    As many different 3D volumes could produce the same 2D x‐ray image, inverting this process is challenging. We show that recent deep learning‐based convolutional neural networks can solve this task. As the main challenge in learning is the sheer amount of data created when extending the 2D image into a 3D volume, we suggest firstly to learn a coarse, fixed‐resolution volume which is then fused in a second step with the input x‐ray into a high‐resolution volume. To train and validate our approach we introduce a new dataset that comprises close to half a million computer‐simulated 2D x‐ray images of 3D volumes scanned from 175 mammalian species. Future applications of our approach include stereoscopic rendering of legacy x‐ray images, re‐rendering of x‐rays including changes of illumination, view pose or geometry. Our evaluation includes comparison to previous tomography work, previous learning methods using our data, a user study and application to a set of real x‐rays.

  • 23. Hermosilla Casajus, Pedro
    et al.
    Maisch, Sebastian
    Vázquez Alcocer, Pere Pau
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Improving Perception of Molecular Surface Visualizations by Incorporating Translucency Effects2018In: VCBM 2018, 2018, p. 185-195Conference paper (Refereed)
  • 24.
    Hermosilla, Pedro
    et al.
    Ulm University, Germany.
    Ritschel, Tobias
    University College London, United Kingdom.
    Vazquez, Pere-Pau
    Universitat Politècnica de Catalunya, Spain.
    Vinacua, Àlvar
    Universitat Politècnica de Catalunya, Spain.
    Ropinski, Timo
    Ulm University, Germany.
    Monte Carlo Convolution for Learning on Non-Uniformly Sampled Point Clouds2018In: ACM Transactions on Graphics, ISSN 0730-0301, E-ISSN 1557-7368, Vol. 37, no 6Article in journal (Refereed)
    Abstract [en]

    Deep learning systems extensively use convolution operations to process input data. Though convolution is clearly defined for structured data such as 2D images or 3D volumes, this is not true for other data types such as sparse point clouds. Previous techniques have developed approximations to convolutions for restricted conditions. Unfortunately, their applicability is limited and cannot be used for general point clouds. We propose an efficient and effective method to learn convolutions for non-uniformly sampled point clouds, as they are obtained with modern acquisition techniques. Learning is enabled by four key novelties: first, representing the convolution kernel itself as a multilayer perceptron; second, phrasing convolution as a Monte Carlo integration problem; third, using this notion to combine information from multiple samplings at different levels; and fourth, using Poisson disk sampling as a scalable means of hierarchical point cloud learning. The key idea across all these contributions is to guarantee adequate consideration of the underlying non-uniform sample distribution function from a Monte Carlo perspective. To make the proposed concepts applicable to real-world tasks, we furthermore propose an efficient implementation which significantly reduces the GPU memory required during the training process. By employing our method in hierarchical network architectures we can outperform most of the state-of-the-art networks on established point cloud segmentation, classification and normal estimation benchmarks. Furthermore, in contrast to most existing approaches, we also demonstrate the robustness of our method with respect to sampling variations, even when training with uniformly sampled data only. To support the direct application of these concepts, we provide a ready-to-use TensorFlow implementation of these layers at https://github.com/viscom-ulm/MCCNN.
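    The core estimator can be sketched in a few lines: a small MLP (random weights here stand in for learned ones) maps normalized neighbour offsets to kernel weights, and each neighbour's contribution is divided by an estimate of the local sample density, which is what compensates for non-uniform sampling in the Monte Carlo view. Everything below is an illustrative NumPy sketch, not the released TensorFlow implementation.

```python
import numpy as np

def mlp_kernel(offsets, W1, b1, W2, b2):
    """The convolution kernel itself is a multilayer perceptron mapping
    a normalized 3D offset to a scalar weight."""
    h = np.maximum(offsets @ W1 + b1, 0.0)       # ReLU hidden layer
    return (h @ W2 + b2)[:, 0]

def kde_density(points, bandwidth=0.25):
    """Crude Gaussian kernel density estimate of the sample density."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2)).sum(1)

def mc_conv(points, feats, radius, W1, b1, W2, b2):
    """Monte Carlo convolution: neighbour features are weighted by the
    MLP kernel and divided by the estimated density, accounting for the
    underlying non-uniform sample distribution."""
    p = kde_density(points)
    out = np.zeros(len(points))
    for i, x in enumerate(points):
        d = np.linalg.norm(points - x, axis=1)
        nb = np.flatnonzero(d < radius)
        off = (points[nb] - x) / radius          # offsets scaled to the unit ball
        w = mlp_kernel(off, W1, b1, W2, b2)
        out[i] = np.mean(w * feats[nb] / p[nb])
    return out

# Tiny example: a non-uniform cloud (dense cluster + sparse region) and
# random, untrained kernel weights.
rng = np.random.default_rng(0)
pts = np.concatenate([rng.random((64, 3)) * 0.3,
                      rng.random((16, 3)) * 0.7 + 0.3])
W1, b1 = rng.normal(size=(3, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
y = mc_conv(pts, rng.random(len(pts)), 0.2, W1, b1, W2, b2)
```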

  • 25.
    Hermosilla, Pedro
    et al.
    VIRVIG Group, Universitat Politècnica de Catalunya, Barcelona, Spain.
    Vazquez, Pere-Pau
    VIRVIG Group, Universitat Politècnica de Catalunya, Barcelona, Spain.
    Vinacua, Àlvar
    VIRVIG Group, Universitat Politècnica de Catalunya, Barcelona, Spain.
    Ropinski, Timo
    Visual Computing Group, Ulm University, Ulm, Germany.
    A General Illumination Model for Molecular Visualization2018In: Computer Graphics Forum (Proceedings of EuroVis 2018), Vol. 37, no 3, p. 367-378Article in journal (Refereed)
    Abstract [en]

    Several visual representations have been developed over the years to visualize molecular structures, and to enable a better understanding of their underlying chemical processes. Today, the most frequently used atom‐based representations are the Space‐filling, the Solvent Excluded Surface, the Balls‐and‐Sticks, and the Licorice models. While each of these representations has its individual benefits, when applied to large‐scale models spatial arrangements can be difficult to interpret when employing current visualization techniques. In the past it has been shown that global illumination techniques improve the perception of molecular visualizations; unfortunately existing approaches are tailored towards a single visual representation. We propose a general illumination model for molecular visualization that is valid for different representations. With our illumination model, it becomes possible, for the first time, to achieve consistent illumination among all atom‐based molecular representations. The proposed model can be further evaluated in real‐time, as it employs an analytical solution to simulate diffuse light interactions between objects. To be able to derive such a solution for the rather complicated and diverse visual representations, we propose the use of regression analysis together with adapted parameter sampling strategies as well as shape parametrization guided sampling, which are applied to the geometric building blocks of the targeted visual representations. We will discuss the proposed sampling strategies, the derived illumination model, and demonstrate its capabilities when visualizing several dynamic molecules.

  • 26.
    Jönsson, Daniel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ganestam, Per
    Lund University, Department of Computer Science, Sweden.
    Doggett, Michael
    Lund University, Department of Computer Science, Sweden.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Explicit Cache Management for Volume Ray-Casting on Parallel Architectures2012In: Eurographics Symposium on Parallel Graphics and Visualization (2012), Eurographics - European Association for Computer Graphics, 2012, p. 31-40Conference paper (Other academic)
    Abstract [en]

    A major challenge when designing general purpose graphics hardware is to allow efficient access to texture data. Although different rendering paradigms vary with respect to their data access patterns, there is no flexibility when it comes to data caching provided by the graphics architecture. In this paper we focus on volume ray-casting, and show the benefits of algorithm-aware data caching. Our Marching Caches method exploits inter-ray coherence and thus utilizes the memory layout of the highly parallel processors by allowing them to share data through a cache which marches along with the ray front. By exploiting Marching Caches we can apply higher-order reconstruction and enhancement filters to generate more accurate and enriched renderings with an improved rendering performance. We have tested our Marching Caches with seven different filters, e.g., Catmull-Rom, B-spline, ambient occlusion projection, and could show that a speedup of four times can be achieved compared to using the caching implicitly provided by the graphics hardware, and that the memory bandwidth to global memory can be reduced by orders of magnitude. Throughout the paper, we will introduce the Marching Cache concept, provide implementation details and discuss the performance and memory bandwidth impact when using different filters.

  • 27.
    Jönsson, Daniel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Historygrams: Enabling Interactive Global Illumination in Direct Volume Rendering using Photon Mapping2012In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 18, no 12, p. 2364-2371Article in journal (Refereed)
    Abstract [en]

    In this paper, we enable interactive volumetric global illumination by extending photon mapping techniques to handle interactive transfer function (TF) and material editing in the context of volume rendering. We propose novel algorithms and data structures for finding and evaluating parts of a scene affected by these parameter changes, and thus support efficient updates of the photon map. In direct volume rendering (DVR) the ability to explore volume data using parameter changes, such as editable TFs, is of key importance. Advanced global illumination techniques are in most cases computationally too expensive, as they prevent the desired interactivity. Our technique decreases the amount of computation caused by parameter changes by introducing Historygrams, which allow us to efficiently reuse previously computed photon media interactions. Along the viewing rays, we utilize properties of the light transport equations to subdivide a view-ray into segments and independently update them when invalid. Unlike segments of a view-ray, photon scattering events within the volumetric medium need to be updated sequentially. Using our Historygram approach, we can identify the first invalid photon interaction caused by a property change, and thus reuse all valid photon interactions. Combining these two novel concepts supports interactive editing of parameters when using volumetric photon mapping in the context of DVR. As a consequence, we can handle arbitrarily shaped and positioned light sources, arbitrary phase functions, bidirectional reflectance distribution functions and multiple scattering, which has previously not been possible in interactive DVR.
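    The prefix-reuse idea can be sketched abstractly: if each photon interaction is stored together with the data value it sampled, then after a transfer-function edit the first interaction whose value falls into the edited range marks the point from which the path must be re-traced, and everything before it is reused. This is an illustrative sketch of the concept only; `retrace` is a hypothetical stand-in for a renderer's photon tracer, not part of the paper.

```python
def first_invalid(path, lo, hi):
    """Index of the first photon interaction whose sampled data value
    falls inside the edited transfer-function range [lo, hi]; all
    interactions before it remain valid."""
    for i, (_pos, value) in enumerate(path):
        if lo <= value <= hi:
            return i
    return len(path)

def update_path(path, lo, hi, retrace):
    """Reuse the valid prefix and re-trace only the affected suffix.
    Scattering events depend on their predecessors, so they can only be
    recomputed sequentially from the first invalid interaction onward.
    `retrace` is a hypothetical photon tracer continuing from an event."""
    k = first_invalid(path, lo, hi)
    prefix = path[:k]
    start = prefix[-1] if prefix else None
    return prefix + retrace(start)
```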

  • 28.
    Jönsson, Daniel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Steneteg, Peter
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Sundén, Erik
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Englund, Rickard
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Kottravel, Sathish
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Falk, Martin
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Hotz, Ingrid
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Inviwo - A Visualization System with Usage Abstraction Levels2019In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626Article in journal (Refereed)
  • 29.
    Jönsson, Daniel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Sundén, Erik
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    A Survey of Volumetric Illumination Techniques for Interactive Volume Rendering2014In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 33, no 1, p. 27-51Article in journal (Refereed)
    Abstract [en]

    Interactive volume rendering in its standard formulation has become an increasingly important tool in many application domains. In recent years several advanced volumetric illumination techniques to be used in interactive scenarios have been proposed. These techniques claim to have perceptual benefits as well as being capable of producing more realistic volume rendered images. Naturally, they cover a wide spectrum of illumination effects, including varying shading and scattering effects. In this survey, we review and classify the existing techniques for advanced volumetric illumination. The classification will be conducted based on their technical realization, their performance behaviour as well as their perceptual capabilities. Based on the limitations revealed in this review, we will define future challenges in the area of interactive advanced volumetric illumination.

  • 30.
    Jönsson, Daniel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Sundén, Erik
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    State of The Art Report on Interactive Volume Rendering with Volumetric Illumination2012In: Eurographics 2012 - State of the Art Reports / [ed] Marie-Paule Cani and Fabio Ganovelli, Eurographics - European Association for Computer Graphics, 2012, p. 53-74Conference paper (Other academic)
    Abstract [en]

    Interactive volume rendering in its standard formulation has become an increasingly important tool in many application domains. In recent years several advanced volumetric illumination techniques to be used in interactive scenarios have been proposed. These techniques claim to have perceptual benefits as well as being capable of producing more realistic volume rendered images. Naturally, they cover a wide spectrum of illumination effects, including varying shadowing and scattering effects. In this article, we review and classify the existing techniques for advanced volumetric illumination. The classification will be conducted based on their technical realization, their performance behavior as well as their perceptual capabilities. Based on the limitations revealed in this review, we will define future challenges in the area of interactive advanced volumetric illumination.

  • 31.
    Kreiser, Julian
    et al.
    Visual Computing Group, Ulm University, Ulm, Germany.
    Freedman, Jacob
    Karolinska Institutet Stockholm, Stockholm, Sweden.
    Ropinski, Timo
    Visual Computing Group, Ulm University, Ulm, Germany.
    Visually Supporting Multiple Needle Placement in Irreversible Electroporation Interventions2018In: Computer Graphics Forum, Vol. 37, no 6, p. 59-71Article in journal (Refereed)
    Abstract [en]

    Irreversible electroporation (IRE) is a minimally invasive technique for small tumour ablation. Multiple needles are inserted around the planned treatment zone and, depending on the size, inside as well. An applied electric field triggers instant cell death around this zone. To ensure the correct application of IRE, certain criteria need to be fulfilled. The needles' placement in the tissue has to be parallel, at the same depth, and in a pattern which allows the electric field to effectively destroy the targeted lesions. As multiple needles need to synchronously fulfill these criteria, it is challenging for the surgeon to perform a successful IRE. Therefore, we propose a visualization which exploits intuitive visual coding to support the surgeon when conducting IREs. We consider two scenarios: first, to monitor IRE parameters while inserting needles during laparoscopic surgery; second, to validate IRE parameters in post‐placement scenarios using computed tomography. With the help of an easy‐to‐comprehend and lightweight visualization, surgeons are enabled to quickly visually detect what needs to be adjusted. We have evaluated our visualization together with surgeons to investigate the practical use for IRE liver ablations. A quantitative study shows the effectiveness compared to a single 3D view placement method.

  • 32.
    Kreiser, Julian
    et al.
    Visual Computing Group, Ulm University, Ulm, Germany.
    Hann, Alexander
    Department of Internal Medicine I, Ulm University, Ulm, Germany.
    Zizer, Eugen
    Department of Internal Medicine I, Ulm University, Ulm, Germany.
    Ropinski, Timo
    Visual Computing Group, Ulm University, Ulm, Germany.
    Decision Graph Embedding for High-Resolution Manometry Diagnosis2018In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE SciVis 2017), Vol. 24, no 1, p. 873-882Article in journal (Refereed)
    Abstract [en]

    High-resolution manometry is an imaging modality which enables the categorization of esophageal motility disorders. Spatio-temporal pressure data along the esophagus is acquired using a tubular device and multiple test swallows are performed by the patient. Current approaches visualize these swallows as individual instances, despite the fact that aggregated metrics are relevant in the diagnostic process. Based on the current Chicago Classification, which serves as the gold standard in this area, we introduce a visualization supporting an efficient and correct diagnosis. To reach this goal, we propose a novel decision graph representing the Chicago Classification with workflow optimization in mind. Based on this graph, we are further able to prioritize the different metrics used during diagnosis and can exploit this prioritization in the actual data visualization. Thus, different disorders and their related parameters are directly represented and intuitively influence the appearance of our visualization. Within this paper, we introduce our novel visualization, justify the design decisions, and provide the results of a user study we performed with medical students as well as a domain expert. On top of the presented visualization, we further discuss how to derive a visual signature for individual patients that allows us for the first time to perform an intuitive comparison between subjects, in the form of small multiples.

  • 33.
    Kreiser, Julian
    et al.
    Visual Computing Group, Ulm University, Germany.
    Meuschke, Monique
    Department of Simulation and Graphics, University of Magdeburg, Germany.
    Mistelbauer, Gabriel
    Department of Simulation and Graphics, University of Magdeburg, Germany.
    Preim, Bernhard
    Department of Simulation and Graphics, University of Magdeburg, Germany.
    Ropinski, Timo
    Visual Computing Group, Ulm University, Germany.
    A Survey of Flattening-Based Medical Visualization Techniques2018In: Computer Graphics Forum (Proceedings of EuroVis 2018), Vol. 37, no 3, p. 597-624Article in journal (Refereed)
    Abstract [en]

    In many areas of medicine, visualization research can help with task simplification, abstraction or complexity reduction. A common visualization approach is to facilitate parameterization techniques which flatten a usually 3D object into a 2D plane. Within this state of the art report (STAR), we review such techniques used in medical visualization and investigate how they can be classified with respect to the handled data and the underlying tasks. Many of these techniques are inspired by mesh parameterization algorithms which help to project a triangulation in ℝ³ to a simpler domain in ℝ². It is often claimed that this makes complex structures easier to understand and compare by humans and machines. Within this STAR we review such flattening techniques which have been developed for the analysis of the following medical entities: the circulation system, the colon, the brain, tumors, and bones. For each of these five application scenarios, we have analyzed the tasks and requirements, and classified the reviewed techniques with respect to a developed coding system. Furthermore, we present guidelines for the future development of flattening techniques in these areas.

  • 34.
    Lindemann, Florian
    et al.
    University of Münster, Germany.
    Ropinski, Timo
    Linköping University, Department of Science and Technology. Linköping University, The Institute of Technology.
    About the Influence of Illumination Models on Image Comprehension in Direct Volume Rendering2011In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 17, no 12, p. 1922-1931Article in journal (Refereed)
    Abstract [en]

    In this paper, we present a user study in which we have investigated the influence of seven state-of-the-art volumetric illumination models on the spatial perception of volume rendered images. Within the study, we have compared gradient-based shading with half angle slicing, directional occlusion shading, multidirectional occlusion shading, shadow volume propagation, spherical harmonic lighting as well as dynamic ambient occlusion. To evaluate these models, users had to solve three tasks relying on correct depth as well as size perception. Our motivation for these three tasks was to find relations between the used illumination model, user accuracy and the elapsed time. In an additional task, users had to subjectively judge the output of the tested models. After first reviewing the models and their features, we will introduce the individual tasks and discuss their results. We discovered statistically significant differences in the testing performance of the techniques. Based on these findings, we have analyzed the models and extracted those features which are possibly relevant for the improved spatial comprehension in a relational task. We believe that a combination of these distinctive features could pave the way for a novel illumination model, which would be optimized based on our findings.

  • 35.
    Lindemann, Florian
    et al.
    University of Münster, Germany.
    Ropinski, Timo
    University of Münster, Germany.
    Advanced Light Material Interaction for Direct Volume Rendering2010In: VG'10 Proceedings of the 8th IEEE/EG international conference on Volume Graphics, IEEE , 2010, p. 101-108Conference paper (Refereed)
    Abstract [en]

    In this paper we present a heuristic approach for simulating advanced light material interactions in the context of interactive volume rendering. In contrast to previous work, we are able to incorporate complex material functions, which allow us to simulate reflectance and scattering. We exploit a common representation of these material properties based on spherical harmonic basis functions to combine the achieved reflectance and scattering effects with natural lighting conditions, i.e., incorporating colored area light sources. To achieve these goals, we introduce a modified SH projection technique, which is not tailored to a single material category but adapts to the present material. Thus, reflecting and scattering materials as assigned through the transfer function can be captured in a unified approach. We will describe the required extensions to the standard volume rendering integral and present an approximation which allows us to realize the material effects at interactive frame rates. By exploiting a combination of CPU and GPU processing, we are able to modify material properties and can change the illumination conditions interactively. We will demonstrate the outcome of the proposed approach based on renderings of real-world data sets and report the achieved computation times.
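The SH representation the abstract relies on can be illustrated with a minimal sketch: projecting a spherical function onto a real SH basis via Monte-Carlo integration. The basis is restricted to bands 0 and 1, and the constant test function is an illustrative choice, not one of the paper's material functions.

```python
import numpy as np

def sh_basis(dirs):
    # Real spherical harmonics up to band 1 (four coefficients) for
    # unit direction vectors of shape (N, 3).
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    c0 = 0.5 * np.sqrt(1.0 / np.pi)
    c1 = np.sqrt(3.0 / (4.0 * np.pi))
    return np.stack([np.full_like(x, c0), c1 * y, c1 * z, c1 * x], axis=1)

def sh_project(f, n_samples=200_000, seed=0):
    # Monte-Carlo SH projection: c_i = (4*pi/N) * sum_k f(w_k) * Y_i(w_k),
    # with directions w_k drawn uniformly from the unit sphere.
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(n_samples, 3))
    w /= np.linalg.norm(w, axis=1, keepdims=True)
    return (4.0 * np.pi / n_samples) * sh_basis(w).T @ f(w)

# Projecting a constant response of 1 leaves only the band-0 coefficient,
# which equals 2*sqrt(pi); the band-1 coefficients vanish.
coeffs = sh_project(lambda d: np.ones(len(d)))
```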

  • 36.
    Lindholm, Stefan
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Falk, Martin
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Sundén, Erik
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Bock, Alexander
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, The Institute of Technology.
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Hybrid Data Visualization Based On Depth Complexity Histogram Analysis2015In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 34, no 1, p. 74-85Article in journal (Refereed)
    Abstract [en]

    In many cases, only the combination of geometric and volumetric data sets is able to describe a single phenomenon under observation when visualizing large and complex data. When semi-transparent geometry is present, correct rendering results require sorting of transparent structures. Additional complexity is introduced as the contributions from volumetric data have to be partitioned according to the geometric objects in the scene. The A-buffer, an enhanced framebuffer with additional per-pixel information, has previously been introduced to deal with the complexity caused by transparent objects. In this paper, we present an optimized rendering algorithm for hybrid volume-geometry data based on the A-buffer concept. We propose two novel components for modern GPUs that tailor memory utilization to the depth complexity of individual pixels. The proposed components are compatible with modern A-buffer implementations and yield performance gains of up to eight times compared to existing approaches through reduced allocation and reuse of fast cache memory. We demonstrate the applicability of our approach and its performance with several examples from molecular biology, space weather, and medical visualization containing both volumetric data and geometric structures.
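The per-pixel depth complexity that drives the paper's memory tailoring can be sketched as a simple counting step; the tiny four-pixel framebuffer and fragment list below are hypothetical, purely for illustration.

```python
import numpy as np

def depth_complexity_histogram(frag_pixel_ids, n_pixels):
    # counts[p]: number of fragments falling on pixel p (its depth
    # complexity); hist[k]: number of pixels with exactly k fragments.
    # An A-buffer allocator could size per-pixel storage from counts
    # and reason about the whole frame from hist.
    counts = np.bincount(frag_pixel_ids, minlength=n_pixels)
    hist = np.bincount(counts)
    return counts, hist

# Hypothetical 4-pixel framebuffer with six fragments.
counts, hist = depth_complexity_histogram(np.array([0, 0, 1, 3, 3, 3]), 4)
```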

  • 37.
    Liu, Bingchen
    et al.
    University of Auckland, New Zealand.
    Bock, Alexander
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Nash, Martyn
    University of Auckland, New Zealand.
    Nielsen, Poul
    University of Auckland, New Zealand.
    Wünsche, Burkhard
    University of Auckland, New Zealand.
    GPU-Accelerated Direct Volume Rendering of Finite Element Data Sets2012Conference paper (Other academic)
    Abstract [en]

    Direct Volume Rendering of Finite Element models is challenging since the visualisation process is performed in world coordinates, whereas data fields are usually defined over the elements’ material coordinate system. In this paper we present a framework for Direct Volume Rendering of Finite Element models. We present several novel implementations visualising Finite Element data directly without requiring resampling into world coordinates. We evaluate the methods using several biomedical Finite Element models. Our GPU implementation of ray-casting in material coordinates using depth peeling is several orders of magnitude faster than the corresponding CPU approach, and our new ray interpolation approach achieves near-interactive frame rates for high-order finite element models at high resolutions.

  • 38.
    Mensmann, Jörg
    et al.
    University of Münster, Germany.
    Ropinski, Timo
    University of Münster, Germany.
    Hinrichs, Klaus
    University of Münster, Germany.
    A GPU-Supported Lossless Compression Scheme for Rendering Time-Varying Volume Data2010In: VG'10 Proceedings of the 8th IEEE/EG international conference on Volume Graphics, IEEE , 2010, p. 109-116Conference paper (Refereed)
    Abstract [en]

    Since the size of time-varying volumetric data sets typically exceeds the amount of available GPU and main memory, out-of-core streaming techniques are required to support interactive rendering. To deal with the performance bottlenecks of hard-disk transfer rate and graphics bus bandwidth, we present a hybrid CPU/GPU scheme for lossless compression and data streaming that combines a temporal prediction model, which exploits coherence between time steps, and variable-length coding with a fast block compression algorithm. This combination becomes possible by exploiting the CUDA computing architecture for unpacking and assembling data packets on the GPU. The system allows near-interactive performance even for rendering large real-world data sets with a low signal-to-noise ratio, while not degrading image quality. It uses standard volume raycasting and can be easily combined with existing acceleration methods and advanced visualization techniques.
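The temporal-prediction idea can be sketched on a single scanline: encode only the residual against the previous time step, then compress the (mostly zero) residual. The simple run-length coder and the toy values below are illustrative stand-ins; the paper pairs prediction with variable-length coding and a fast block compressor.

```python
import numpy as np

def compress_step(prev, curr):
    # Temporal prediction: encode the residual against the previous time
    # step, then run-length encode it (residuals are mostly zero for
    # temporally coherent volumes). Returns (values, run_lengths).
    residual = curr.astype(np.int32) - prev.astype(np.int32)
    change = np.flatnonzero(np.diff(residual)) + 1
    starts = np.concatenate(([0], change))
    lengths = np.diff(np.concatenate((starts, [residual.size])))
    return residual[starts], lengths

def decompress_step(prev, values, lengths):
    # Invert the run-length coding and add the residual back (lossless).
    residual = np.repeat(values, lengths)
    return (prev.astype(np.int32) + residual).astype(prev.dtype)

prev = np.array([5, 5, 5, 7, 7, 9], dtype=np.uint8)
curr = np.array([5, 5, 6, 8, 8, 9], dtype=np.uint8)
vals, lens = compress_step(prev, curr)
restored = decompress_step(prev, vals, lens)
```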

  • 39.
    Mensmann, Jörg
    et al.
    University of Münster, Germany.
    Ropinski, Timo
    University of Münster, Germany.
    Hinrichs, Klaus
    University of Münster, Germany.
    Slab-Based Raycasting: Efficient Volume Rendering with CUDA2009In: HPG 2009 High Performance Graphics, 2009Conference paper (Other academic)
    Abstract [en]

    GPU-based raycasting is the state-of-the-art rendering technique for interactive volume visualization. The ray traversal is usually implemented in a fragment shader, utilizing the hardware in a way that was not originally intended. New programming interfaces for stream processing, such as CUDA, support a more general programming model and the use of additional device features, which are not accessible through traditional shader programming. We propose a slab-based raycasting technique that is modeled specifically to use these features to accelerate volume rendering. This technique is based on experience gained from comparing fragment shader implementations of basic raycasting to implementations directly translated to CUDA kernels. The comparison covers direct volume rendering with a variety of optional features, e.g., gradient and lighting calculations.

  • 40.
    Mensmann, Jörg
    et al.
    University of Münster, Germany .
    Ropinski, Timo
    University of Münster, Germany .
    Hinrichs, Klaus
    University of Münster, Germany .
    Slab-Based Raycasting: Exploiting GPU Computing for Volume Visualization2011In: Computer Vision, Imaging and Computer Graphics. Theory and Applications / [ed] Paul Richard, José Braz, Springer Berlin/Heidelberg, 2011, p. 246-259Conference paper (Refereed)
    Abstract [en]

    GPU-based raycasting is the state-of-the-art rendering technique for interactive volume visualization. The ray traversal is usually implemented in a fragment shader, utilizing the hardware in a way that was not originally intended. New programming interfaces for stream processing, such as CUDA, support a more general programming model and the use of additional device features, which are not accessible through traditional shader programming. In this paper we propose a slab-based raycasting technique that is modeled specifically to use these features to accelerate volume rendering. This technique is based on experience gained from comparing fragment shader implementations of basic raycasting to implementations directly translated to CUDA kernels. The comparison covers direct volume rendering with a variety of optional features, e.g., gradient and lighting calculations. Our findings are supported by benchmarks of typical volume visualization scenarios. We conclude that new stream processing models can only gain a small performance advantage when directly porting the basic raycasting algorithm. However, they can be advantageous through novel acceleration methods which use the hardware features not available to shader implementations.
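The basic raycasting algorithm that both the fragment-shader and CUDA implementations evaluate per pixel reduces to front-to-back alpha compositing along each ray. The loop below is a generic CPU-side sketch of that core, not code from the paper; the 0.99 termination threshold is a common illustrative choice.

```python
import numpy as np

def composite_ray(colors, alphas):
    # Front-to-back compositing along one ray: accumulate pre-classified
    # sample colors weighted by remaining transparency, with early ray
    # termination once the ray is (nearly) opaque.
    color, alpha = np.zeros(3), 0.0
    for c, a in zip(colors, alphas):
        color += (1.0 - alpha) * a * np.asarray(c, dtype=float)
        alpha += (1.0 - alpha) * a
        if alpha >= 0.99:  # early ray termination
            break
    return color, alpha

# Two half-transparent samples: red in front, blue behind.
color, alpha = composite_ray([[1, 0, 0], [0, 0, 1]], [0.5, 0.5])
```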

  • 41.
    Meyer-Spradow, Jennis
    et al.
    University of Münster, Germany.
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. University of Münster, Germany.
    Mensmann, Jörg
    University of Münster, Germany.
    Hinrichs, Klaus
    University of Münster, Germany.
    Interactive Design and Debugging of GPU-based Volume Visualizations2010In: GRAPP 2010, 2010, p. 239-245Conference paper (Other academic)
    Abstract [en]

    There is a growing need for custom visualization applications to deal with the rising amounts of volume data to be analyzed in fields like medicine, seismology, and meteorology. Visual programming techniques have been used in visualization and other fields to analyze and visualize data in an intuitive manner. However, this additional step of abstraction often results in a performance penalty during the actual rendering. In order to prevent this impact, a careful modularization of the required processing steps is necessary, which provides flexibility and good performance at the same time. In this paper, we will describe the technical foundations as well as the possible applications of such a modularization for GPU-based volume raycasting, which can be considered the state-of-the-art technique for interactive volume rendering. Based on the proposed modularization on a functional level, we will show how to integrate GPU-based volume ray-casting in a visual programming environment in such a way that a high degree of flexibility is achieved without any performance impact.

  • 42.
    Meyer-Spradow, Jennis
    et al.
    University of Münster, Germany.
    Ropinski, Timo
    University of Münster, Germany.
    Mensmann, Jörg
    University of Münster, Germany.
    Hinrichs, Klaus
    University of Münster, Germany.
    Voreen: A Rapid-Prototyping Environment for Ray-Casting-Based Volume Visualizations2009In: IEEE Computer Graphics and Applications, ISSN 0272-1716, E-ISSN 1558-1756, Vol. 29, no 6, p. 6-13Article in journal (Refereed)
    Abstract [en]

    By splitting a complex ray-casting process into different tasks performed on different processors, Voreen provides a lot of flexibility because users can intervene at different points during ray casting. Voreen's object-oriented design lets users easily create customized processor classes that cooperate seamlessly with existing classes. A user-friendly GUI supports rapid prototyping of visualization ideas. We've implemented several applications based on our library. In the future, we'd like to further extend Voreen's capabilities to make visualization prototyping even easier on all abstraction levels. Thus, we plan to realize a set of dedicated processor skeletons, which are solely configured through shader programs and can thus be modified at runtime.

  • 43.
    Meß, Christian
    et al.
    University of Münster, Germany .
    Ropinski, Timo
    University of Münster, Germany .
    Efficient Acquisition and Clustering of Local Histograms for Representing Voxel Neighborhoods2010In: VG'10 Proceedings of the 8th IEEE/EG international conference on Volume Graphics, 2010, p. 117-124Conference paper (Refereed)
    Abstract [en]

    In the past years many interactive volume rendering techniques have been proposed which exploit the neighboring environment of a voxel during rendering. In general, on-the-fly acquisition of this environment is infeasible due to the high amount of data to be taken into account. To bypass this problem we propose a GPU preprocessing pipeline which allows us to acquire and compress the neighborhood information for each voxel. Therefore, we represent the environment around each voxel by generating a local histogram (LH) of the surrounding voxel densities. By performing a vector quantization (VQ), the high number of LHs is then reduced to a few hundred cluster centroids, which are accessed through an index volume. To accelerate the required computationally expensive processing steps, we take advantage of the highly parallel nature of this task and realize it using CUDA. For the LH compression we use an optimized hybrid CPU/GPU implementation of the k-means VQ algorithm. While the assignment of each LH to its nearest centroid is done on the GPU using CUDA, centroid recalculation after each iteration is done on the CPU. Our results demonstrate the applicability of the precomputed data, while the performance is increased by a factor of about 10 compared to previous approaches.
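The LH-plus-VQ pipeline can be sketched in plain numpy: build a local histogram per interior voxel, then quantize the histograms with k-means and keep only the centroid index per voxel. The 3x3x3 neighbourhood, 8 bins, and tiny 4x4x4 volume are illustrative parameter choices, and this sequential sketch stands in for the paper's CUDA/CPU hybrid.

```python
import numpy as np

def local_histograms(volume, n_bins=8):
    # 8-bin local histogram (LH) of the 3x3x3 neighbourhood around each
    # interior voxel; volume values are assumed to lie in [0, 1).
    bins = np.minimum((volume * n_bins).astype(int), n_bins - 1)
    hists = []
    for z in range(1, volume.shape[0] - 1):
        for y in range(1, volume.shape[1] - 1):
            for x in range(1, volume.shape[2] - 1):
                nb = bins[z - 1:z + 2, y - 1:y + 2, x - 1:x + 2]
                hists.append(np.bincount(nb.ravel(), minlength=n_bins))
    return np.array(hists, dtype=float)

def kmeans_vq(hists, k, iters=10, seed=0):
    # Plain k-means vector quantization: returns the centroids plus, per
    # histogram, the index of its nearest centroid (the "index volume").
    rng = np.random.default_rng(seed)
    centroids = hists[rng.choice(len(hists), k, replace=False)]
    for _ in range(iters):
        idx = np.linalg.norm(hists[:, None] - centroids[None], axis=2).argmin(axis=1)
        for j in range(k):
            if np.any(idx == j):
                centroids[j] = hists[idx == j].mean(axis=0)
    return centroids, idx

vol = np.random.default_rng(0).uniform(0.0, 1.0, (4, 4, 4))
lhs = local_histograms(vol)          # 8 interior voxels, 8 bins each
centroids, idx = kmeans_vq(lhs, k=2)
```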

  • 44.
    Nguyen, Khoa
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Bock, Alexander
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ynnerman, Anders
    Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Deriving and Visualizing Uncertainty in Kinetic PET Modeling2012In: Eurographics Workshop on Visual Computing for Biology and Medicine, 2012 / [ed] Timo Ropinski and Anders Ynnerman and Charl Botha and Jos Roerdink, The Eurographics Association , 2012, p. 107-114Conference paper (Refereed)
    Abstract [en]

    Kinetic modeling is the tool of choice when developing new positron emission tomography (PET) tracers for quantitative functional analysis. Several approaches are widely used to facilitate this process. While all these approaches are inherently different, they are still subject to uncertainty arising from various stages of the modeling process. In this paper we propose a novel approach for deriving and visualizing uncertainty in kinetic PET modeling. We distinguish between intra- and inter-model uncertainties. While intra-model uncertainty allows us to derive uncertainty based on a single modeling approach, inter-model uncertainty arises from the differences of the results of different approaches. To derive intra-model uncertainty we exploit covariance matrix analysis. The inter-model uncertainty is derived by comparing the outcome of three standard kinetic PET modeling approaches. We derive and visualize this uncertainty to exploit it as a basis for changing model input parameters with the ultimate goal of reducing the modeling uncertainty and thus obtaining a more realistic model of the tracer under investigation. To support this uncertainty reduction process, we visually link abstract and spatial data by introducing a novel visualization approach based on the ThemeRiver metaphor, which has been modified to support the uncertainty-aware visualization of parameter changes between spatial locations. We have investigated the benefits of the presented concepts by conducting an evaluation with domain experts.
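The covariance-based intra-model uncertainty can be illustrated with a toy fit: a mono-exponential tracer model whose parameter uncertainty is read off the covariance matrix of a least-squares fit. The model, rate constants, and noise level here are invented for the sketch; they are not the paper's kinetic models or data.

```python
import numpy as np

# Illustrative mono-exponential tracer model C(t) = K * exp(-k * t),
# linearised as log C = log K - k * t. Intra-model uncertainty is then
# read off the covariance matrix of the least-squares fit.
t = np.linspace(0.5, 10.0, 20)
rng = np.random.default_rng(1)
conc = 3.0 * np.exp(-0.4 * t) * np.exp(rng.normal(0.0, 0.02, t.size))

coef, cov = np.polyfit(t, np.log(conc), 1, cov=True)
k_est, logK_est = -coef[0], coef[1]
k_std = np.sqrt(cov[0, 0])  # 1-sigma uncertainty of the decay rate
```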

  • 45.
    Parulek, Julius
    et al.
    University of Bergen, Norway.
    Jönsson, Daniel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Bruckner, Stefan
    University of Bergen, Norway.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Viola, Ivan
    University of Bergen, Norway; Vienna University of Technology, Austria.
    Continuous Levels-of-Detail and Visual Abstraction for Seamless Molecular Visualization2014In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 33, no 6, p. 276-287Article in journal (Refereed)
    Abstract [en]

    Molecular visualization is often challenged with rendering of large molecular structures in real time. We introduce a novel approach that enables us to show even large protein complexes. Our method is based on the level-of-detail concept, where we exploit three different abstractions combined in one visualization. Firstly, molecular surface abstraction exploits three different surfaces, solvent-excluded surface (SES), Gaussian kernels and van der Waals spheres, combined as one surface by linear interpolation. Secondly, we introduce three shading abstraction levels and a method for creating seamless transitions between these representations. The SES representation with full shading and added contours stands in focus while on the other side a sphere representation of a cluster of atoms with constant shading and without contours provides the context. Thirdly, we propose a hierarchical abstraction based on a set of clusters formed on molecular atoms. All three abstraction models are driven by one importance function classifying the scene into the near-, mid- and far-field. Moreover, we introduce a methodology to render the entire molecule directly using the A-buffer technique, which further improves the performance. The rendering performance is evaluated on a series of molecules of varying atom counts.
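The near-/mid-/far-field classification driven by the importance function can be sketched as a simple thresholding step; the threshold values below are placeholders of our own, not taken from the paper.

```python
def field_class(importance, near=0.66, far=0.33):
    # Map a scalar importance in [0, 1] to the three abstraction regions
    # that drive the surface, shading and hierarchy levels of detail.
    # Threshold values are illustrative placeholders.
    if importance >= near:
        return "near-field"
    if importance >= far:
        return "mid-field"
    return "far-field"
```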

  • 46.
    Praßni, Jörg-Stefan
    et al.
    University of Münster, Germany.
    Ropinski, Timo
    University of Münster, Germany.
    Hinrichs, Klaus
    University of Münster, Germany.
    Efficient Boundary Detection and Transfer Function Generation in Direct Volume Rendering2009In: VMV 2009, 2009, p. 285-294Conference paper (Other academic)
    Abstract [en]

    In this paper we present an efficient technique for the construction of LH histograms which, in contrast to previous work, does not require an expensive tracking of intensity profiles across boundaries and therefore allows an LH classification in real time. We propose a volume exploration system for the semi-automatic generation of LH transfer functions, which does not require any user interaction within the transfer function domain. During an iterative process the user extracts features by marking them directly in the volume rendered image. The system automatically detects a marked feature's boundary by exploiting our novel LH technique and generates a suitable component transfer function that associates user-specified optical properties with the region representing the boundary in the transfer function space. The component functions thus generated are automatically combined to produce an LH transfer function to be used for rendering.

  • 47.
    Praßni, Jörg-Stefan
    et al.
    University of Münster, Germany.
    Ropinski, Timo
    University of Münster, Germany.
    Hinrichs, Klaus
    University of Münster, Germany.
    Uncertainty-Aware Guided Volume Segmentation2010In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 16, no 6, p. 1358-1365Article in journal (Refereed)
    Abstract [en]

    Although direct volume rendering is established as a powerful tool for the visualization of volumetric data, efficient and reliable feature detection is still an open topic. Usually, a tradeoff between fast but imprecise classification schemes and accurate but time-consuming segmentation techniques has to be made. Furthermore, the issue of uncertainty introduced with the feature detection process is completely neglected by the majority of existing approaches. In this paper we propose a guided probabilistic volume segmentation approach that focuses on the minimization of uncertainty. In an iterative process, our system continuously assesses uncertainty of a random walker-based segmentation in order to detect regions with high ambiguity, to which the user's attention is directed to support the correction of potential misclassifications. This reduces the risk of critical segmentation errors and ensures that information about the segmentation's reliability is conveyed to the user in a dependable way. In order to improve the efficiency of the segmentation process, our technique not only takes into account the volume data to be segmented, but also enables the user to incorporate classification information. An interactive workflow has been achieved by implementing the presented system on the GPU using the OpenCL API. Our results obtained for several medical data sets of different modalities, including brain MRI and abdominal CT, demonstrate the reliability and efficiency of our approach.
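Since a random walker assigns each voxel a probability per label, one simple way to score the ambiguity the system looks for is the normalised entropy of those probabilities; this entropy measure is our illustration of the idea, not necessarily the paper's exact uncertainty estimate.

```python
import numpy as np

def label_uncertainty(probs):
    # probs: (n_voxels, n_labels) label probabilities, e.g. the output of
    # a random-walker segmentation. Normalised Shannon entropy is 0 for
    # unambiguous voxels and 1 for maximally ambiguous ones.
    p = np.clip(probs, 1e-12, 1.0)
    h = -(p * np.log(p)).sum(axis=1)
    return h / np.log(probs.shape[1])

u = label_uncertainty(np.array([[1.0, 0.0], [0.5, 0.5], [0.9, 0.1]]))
# voxels with u near 1 would be flagged for the user's attention
```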

  • 48.
    Praßni, Jörg-Stefan
    et al.
    University of Münster, Germany.
    Ropinski, Timo
    University of Münster, Germany.
    Mensmann, Jörg
    University of Münster, Germany.
    Hinrichs, Klaus
    University of Münster, Germany.
    Shape-based Transfer Functions for Volume Visualization2010In: Pacific Visualization Symposium (PacificVis), 2010 IEEE, IEEE , 2010, p. 9-16Conference paper (Refereed)
    Abstract [en]

    We present a novel classification technique for volume visualization that takes the shape of volumetric features into account. The presented technique enables the user to distinguish features based on their 3D shape and to assign individual optical properties to these. Based on a rough pre-segmentation that can be done by windowing, we exploit the curve-skeleton of each volumetric structure in order to derive a shape descriptor similar to those used in current shape recognition algorithms. The shape descriptor distinguishes three main shape classes: longitudinal, surface-like, and blobby shapes. In contrast to previous approaches, the classification is not performed on a per-voxel level but assigns a uniform shape descriptor to each feature and therefore allows a more intuitive user interface for the assignment of optical properties. By using the proposed technique, it becomes possible, for instance, to distinguish blobby heart structures filled with contrast agents from potentially occluding vessels and rib bones. After introducing the basic concepts, we show how the presented technique performs on real world data, and we discuss current limitations.
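The three shape classes can be illustrated with a toy stand-in for the paper's skeleton-based descriptor: classify a point-sampled feature by the eigenvalue spectrum of its covariance matrix. Both the eigenvalue thresholds and the synthetic point sets are our own illustrative choices, not the curve-skeleton descriptor itself.

```python
import numpy as np

def shape_class(points):
    # Toy descriptor: one dominant covariance eigenvalue -> longitudinal,
    # two dominant eigenvalues -> surface-like, otherwise blobby.
    # Thresholds are illustrative, not from the paper.
    ev = np.sort(np.linalg.eigvalsh(np.cov(points.T)))[::-1]
    ev = ev / ev.sum()
    if ev[0] > 0.8:
        return "longitudinal"
    if ev[0] + ev[1] > 0.95:
        return "surface-like"
    return "blobby"

rng = np.random.default_rng(0)
line = np.column_stack([rng.uniform(-5, 5, 500),
                        rng.normal(0, 0.1, 500), rng.normal(0, 0.1, 500)])
blob = rng.normal(0.0, 1.0, (500, 3))
```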

  • 49.
    Praßni, Jörg-Stefan
    et al.
    University of Münster, Germany.
    Storm, Klaus
    University of Bonn, Germany.
    Ropinski, Timo
    University of Münster, Germany.
    Tiemann, Klaus
    University Hospital of Münster, Germany.
    Single Shot Quantification of Gas-Filled Microbubbles with Ultrasound2010Conference paper (Other academic)
    Abstract [en]

    In this work, we present a novel semi-automatic technique for the quantification of microbubbles (MBs), called "single shot quantification" (SSQ). In contrast to previous approaches, SSQ does not require a scan of the entire organ, but determines the MB concentration by analyzing a time series of ultrasound frames scanned at a single position.

  • 50.
    Ropinski, Timo
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Diepenbrock, Stefan
    University of Münster, Germany.
    Bruckner, Stefan
    Vienna University of Technology, Austria.
    Hinrichs, Klaus
    University of Münster, Germany.
    Groeller, Eduard
    Vienna University of Technology, Austria.
    Unified Boundary-Aware Texturing for Interactive Volume Rendering2012In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 18, no 11, p. 1942-1955Article in journal (Refereed)
    Abstract [en]

    In this paper, we describe a novel approach for applying texture mapping to volumetric data sets. In contrast to previous approaches, the presented technique enables a unified integration of 2D and 3D textures and thus allows us to emphasize material boundaries as well as volumetric regions within a volumetric data set at the same time. One key contribution of this paper is a parametrization technique for volumetric data sets, which takes into account material boundaries and volumetric regions. Using this technique, the resulting parametrizations of volumetric data sets enable texturing effects which create a higher degree of realism in volume rendered images. We evaluate the quality of the parametrization and demonstrate the usefulness of the proposed concepts by combining volumetric texturing with volumetric lighting models to generate photorealistic volume renderings. Furthermore, we show the applicability in the area of illustrative visualization.
