Ropinski, Timo, Professor. ORCID iD: orcid.org/0000-0002-7857-5512
Publications (10 of 63)
Jönsson, D., Steneteg, P., Sundén, E., Englund, R., Kottravel, S., Falk, M., . . . Ropinski, T. (2020). Inviwo - A Visualization System with Usage Abstraction Levels. IEEE Transactions on Visualization and Computer Graphics, 26(11), 3241-3254
2020 (English). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, Vol. 26, no 11, p. 3241-3254. Article in journal (Refereed). Published.
Abstract [en]

The complexity of today's visualization applications demands specific visualization systems tailored for the development of these applications. Frequently, such systems utilize levels of abstraction to improve the application development process, for instance …

Place, publisher, year, edition, pages
IEEE, 2020
Keywords
Data visualization; Visualization; Pipelines; Debugging; Interoperability; Documentation; Games; Visualization systems; data visualization; visual analytics; data analysis; computer graphics; image processing
National Category
Media and Communication Technology
Identifiers
urn:nbn:se:liu:diva-160860 (URN)
10.1109/TVCG.2019.2920639 (DOI)
000574745100009 (ISI)
31180858 (PubMedID)
Funder
Swedish e-Science Research Center; ELLIIT - The Linköping-Lund Initiative on IT and Mobile Communications; Swedish Research Council, 2015-05462; Knut and Alice Wallenberg Foundation, 2013-0076
Note

Funding agencies: Swedish e-Science Research Centre (SeRC); Deutsche Forschungsgemeinschaft (DFG) [RO3408/3-1]; Excellence Center at Linköping and Lund in Information Technology (ELLIIT); Knut and Alice Wallenberg Foundation (KAW)

Available from: 2019-10-10 Created: 2019-10-10 Last updated: 2022-06-03
Hermosilla, P., Vazquez, P.-P., Vinacua, À. & Ropinski, T. (2018). A General Illumination Model for Molecular Visualization. Computer Graphics Forum (Proceedings of EuroVis 2018), 37(3), 367-378
2018 (English). In: Computer Graphics Forum (Proceedings of EuroVis 2018), Vol. 37, no 3, p. 367-378. Article in journal (Refereed). Published.
Abstract [en]

Several visual representations have been developed over the years to visualize molecular structures, and to enable a better understanding of their underlying chemical processes. Today, the most frequently used atom-based representations are the Space-filling, the Solvent Excluded Surface, the Balls-and-Sticks, and the Licorice models. While each of these representations has its individual benefits, when applied to large-scale models, spatial arrangements can be difficult to interpret when employing current visualization techniques. In the past it has been shown that global illumination techniques improve the perception of molecular visualizations; unfortunately, existing approaches are tailored towards a single visual representation. We propose a general illumination model for molecular visualization that is valid for different representations. With our illumination model, it becomes possible, for the first time, to achieve consistent illumination among all atom-based molecular representations. The proposed model can further be evaluated in real time, as it employs an analytical solution to simulate diffuse light interactions between objects. To be able to derive such a solution for the rather complicated and diverse visual representations, we propose the use of regression analysis together with adapted parameter sampling strategies as well as shape-parametrization-guided sampling, which are applied to the geometric building blocks of the targeted visual representations. We discuss the proposed sampling strategies, the derived illumination model, and demonstrate its capabilities when visualizing several dynamic molecules.
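The sample-then-regress strategy described in the abstract can be illustrated with a toy sketch: Monte Carlo samples of how strongly a sphere occludes a point, taken over a shape parameter (radius/distance), are fitted with a polynomial to obtain a cheap closed-form surrogate that could be evaluated per fragment. All function names are illustrative; this is not the paper's actual illumination model.

```python
import numpy as np

rng = np.random.default_rng(42)

def sampled_occlusion(ratio, n=20000):
    # Monte Carlo "measurement": fraction of uniformly sampled directions
    # that hit a sphere with radius/distance = ratio centred along +z.
    dirs = rng.normal(size=(n, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    cos_cut = np.sqrt(1.0 - ratio**2)   # directions inside the cone hit
    return np.mean(dirs[:, 2] > cos_cut)

# Sample the shape parameter and regress a polynomial over the samples.
ratios = np.linspace(0.05, 0.8, 16)
samples = np.array([sampled_occlusion(r) for r in ratios])
coeffs = np.polyfit(ratios, samples, deg=4)

def analytic_occlusion(ratio):
    # Closed-form surrogate, cheap enough for real-time evaluation.
    return np.polyval(coeffs, ratio)
```

The fitted polynomial replaces the expensive sampling at render time; the same recipe extends to the compound shape parameters of sticks, balls, and surfaces.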

Place, publisher, year, edition, pages
Wiley-Blackwell Publishing Inc., 2018
National Category
Medical Biotechnology (with a focus on Cell Biology (including Stem Cell Biology), Molecular Biology, Microbiology, Biochemistry or Biopharmacy)
Identifiers
urn:nbn:se:liu:diva-152559 (URN)
10.1111/cgf.13426 (DOI)
000438024300033 (ISI)
2-s2.0-85050306430 (Scopus ID)
Available from: 2018-11-06 Created: 2018-11-06 Last updated: 2018-11-14. Bibliographically approved.
Bruckner, S., Isenberg, T., Ropinski, T. & Wiebel, A. (2018). A Model of Spatial Directness in Interactive Visualization. IEEE Transactions on Visualization and Computer Graphics
2018 (English). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506. Article in journal (Refereed). Epub ahead of print.
Abstract [en]

We discuss the concept of directness in the context of spatial interaction with visualization. In particular, we propose a model that allows practitioners to analyze and describe the spatial directness of interaction techniques, ultimately to be able to better understand interaction issues that may affect usability. To reach these goals, we distinguish between different types of directness. Each type of directness depends on a particular mapping between different spaces, for which we consider the data space, the visualization space, the output space, the user space, the manipulation space, and the interaction space. In addition to the introduction of the model itself, we also show how to apply it to several real-world interaction scenarios in visualization, and thus discuss the resulting types of spatial directness, without recommending either more direct or more indirect interaction techniques. In particular, we will demonstrate descriptive and evaluative usage of the proposed model, and also briefly discuss its generative usage.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2018
Keywords
Visualization, direct interaction, human-computer interaction (HCI)
National Category
Media Engineering; Interaction Technologies
Identifiers
urn:nbn:se:liu:diva-152558 (URN)
10.1109/TVCG.2018.2848906 (DOI)
2-s2.0-85049079176 (Scopus ID)
Available from: 2018-11-06 Created: 2018-11-06 Last updated: 2018-11-13. Bibliographically approved.
Kreiser, J., Meuschke, M., Mistelbauer, G., Preim, B. & Ropinski, T. (2018). A Survey of Flattening-Based Medical Visualization Techniques. Computer Graphics Forum (Proceedings of EuroVis 2018), 37(3), 597-624
2018 (English). In: Computer Graphics Forum (Proceedings of EuroVis 2018), Vol. 37, no 3, p. 597-624. Article in journal (Refereed). Published.
Abstract [en]

In many areas of medicine, visualization research can help with task simplification, abstraction or complexity reduction. A common visualization approach is to facilitate parameterization techniques which flatten a usually 3D object into a 2D plane. Within this state of the art report (STAR), we review such techniques used in medical visualization and investigate how they can be classified with respect to the handled data and the underlying tasks. Many of these techniques are inspired by mesh parameterization algorithms which help to project a triangulation in ℝ³ to a simpler domain in ℝ². It is often claimed that this makes complex structures easier to understand and compare by humans and machines. Within this STAR we review such flattening techniques which have been developed for the analysis of the following medical entities: the circulation system, the colon, the brain, tumors, and bones. For each of these five application scenarios, we have analyzed the tasks and requirements, and classified the reviewed techniques with respect to a developed coding system. Furthermore, we present guidelines for the future development of flattening techniques in these areas.
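The simplest instance of the parameterizations surveyed above is unrolling a tubular surface (as used for vessels or the colon) into the plane; the sketch below is a generic illustration, not any specific technique from the survey.

```python
import numpy as np

def flatten_cylinder(points):
    # Unroll a cylindrical surface into the plane: (x, y, z) -> (r*theta, z).
    # Arc lengths along the surface are preserved, which is the kind of
    # distortion guarantee many flattening techniques aim for.
    x, y, z = np.asarray(points, dtype=float).T
    r = np.hypot(x, y)              # radial distance from the axis
    theta = np.arctan2(y, x)        # angle around the axis
    return np.column_stack([r * theta, z])
```

For example, the surface point (1, 0, 2) maps to (0, 2), and (0, 1, 3) maps to (π/2, 3); real techniques replace this closed form with a mesh parameterization that minimizes distortion numerically.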

Place, publisher, year, edition, pages
Wiley-Blackwell Publishing Inc., 2018
National Category
Other Medical Engineering
Identifiers
urn:nbn:se:liu:diva-152560 (URN)
10.1111/cgf.13445 (DOI)
000438024300051 (ISI)
2-s2.0-85050273338 (Scopus ID)
Available from: 2018-11-06 Created: 2018-11-06 Last updated: 2018-11-14. Bibliographically approved.
Kreiser, J., Hann, A., Zizer, E. & Ropinski, T. (2018). Decision Graph Embedding for High-Resolution Manometry Diagnosis. IEEE Transactions on Visualization and Computer Graphics, 24(1), 873-882
2018 (English). In: IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE SciVis 2017), ISSN 1077-2626, E-ISSN 1941-0506, Vol. 24, no 1, p. 873-882. Article in journal (Refereed). Published.
Abstract [en]

High-resolution manometry is an imaging modality which enables the categorization of esophageal motility disorders. Spatio-temporal pressure data along the esophagus is acquired using a tubular device and multiple test swallows are performed by the patient. Current approaches visualize these swallows as individual instances, despite the fact that aggregated metrics are relevant in the diagnostic process. Based on the current Chicago Classification, which serves as the gold standard in this area, we introduce a visualization supporting an efficient and correct diagnosis. To reach this goal, we propose a novel decision graph representing the Chicago Classification with workflow optimization in mind. Based on this graph, we are further able to prioritize the different metrics used during diagnosis and can exploit this prioritization in the actual data visualization. Thus, different disorders and their related parameters are directly represented and intuitively influence the appearance of our visualization. Within this paper, we introduce our novel visualization, justify the design decisions, and provide the results of a user study we performed with medical students as well as a domain expert. On top of the presented visualization, we further discuss how to derive a visual signature for individual patients that allows us for the first time to perform an intuitive comparison between subjects, in the form of small multiples.
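A decision graph of the kind described above can be encoded as nodes whose predicates inspect aggregated swallow metrics and route to the next node, with leaves as diagnoses. The metric names and thresholds below are placeholders for illustration only, not the actual Chicago Classification criteria.

```python
# Hypothetical decision graph: each node maps a metrics dict to the
# next node name; names not present as keys are leaves (diagnoses).
DECISION_GRAPH = {
    "start": lambda m: "outflow" if m["median_irp"] > m["irp_limit"] else "peristalsis",
    "outflow": lambda m: "diagnosis_A" if m["panesophageal"] else "diagnosis_B",
    "peristalsis": lambda m: "diagnosis_C" if m["failed_ratio"] >= 0.5 else "normal",
}

def classify(metrics):
    # Walk the graph from the start node until a leaf is reached.
    node = "start"
    while node in DECISION_GRAPH:
        node = DECISION_GRAPH[node](metrics)
    return node
```

Making the graph an explicit data structure is what allows the visualization to prioritize metrics: the distance of a metric from the start node, and how many diagnoses depend on it, can be read directly off the graph.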

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2018
Keywords
Small multiples, manometry, Chicago Classification
National Category
Other Medical Engineering
Identifiers
urn:nbn:se:liu:diva-152564 (URN)
10.1109/TVCG.2017.2744299 (DOI)
000418038400086 (ISI)
2-s2.0-85028702559 (Scopus ID)
Available from: 2018-11-06 Created: 2018-11-06 Last updated: 2018-11-13. Bibliographically approved.
Hermosilla Casajus, P., Maisch, S., Vázquez Alcocer, P. P. & Ropinski, T. (2018). Improving Perception of Molecular Surface Visualizations by Incorporating Translucency Effects. In: VCBM 2018. Paper presented at Eurographics Workshop on Visual Computing for Biology and Medicine, Granada, Spain, September 20-21, 2018 (pp. 185-195). The Eurographics Association
2018 (English). In: VCBM 2018, The Eurographics Association, 2018, p. 185-195. Conference paper, Published paper (Refereed).
Place, publisher, year, edition, pages
The Eurographics Association, 2018
Series
Eurographics Workshop on Visual Computing for Biomedicine, ISSN 2070-5786
National Category
Medical Biotechnology (with a focus on Cell Biology (including Stem Cell Biology), Molecular Biology, Microbiology, Biochemistry or Biopharmacy)
Identifiers
urn:nbn:se:liu:diva-152566 (URN)
10.2312/vcbm.20181244 (DOI)
978-3-03868-056-7 (ISBN)
Conference
Eurographics Workshop on Visual Computing for Biology and Medicine, Granada, Spain, September 20 – 21, 2018
Available from: 2018-11-06 Created: 2018-11-06 Last updated: 2020-07-02
Hermosilla, P., Ritschel, T., Vazquez, P.-P., Vinacua, À. & Ropinski, T. (2018). Monte Carlo Convolution for Learning on Non-Uniformly Sampled Point Clouds. ACM Transactions on Graphics, 37(6)
2018 (English). In: ACM Transactions on Graphics, ISSN 0730-0301, E-ISSN 1557-7368, Vol. 37, no 6. Article in journal (Refereed). Published.
Abstract [en]

Deep learning systems extensively use convolution operations to process input data. Though convolution is clearly defined for structured data such as 2D images or 3D volumes, this is not true for other data types such as sparse point clouds. Previous techniques have developed approximations to convolutions for restricted conditions. Unfortunately, their applicability is limited and they cannot be used for general point clouds. We propose an efficient and effective method to learn convolutions for non-uniformly sampled point clouds, as they are obtained with modern acquisition techniques. Learning is enabled by four key novelties: first, representing the convolution kernel itself as a multilayer perceptron; second, phrasing convolution as a Monte Carlo integration problem; third, using this notion to combine information from multiple samplings at different levels; and fourth, using Poisson disk sampling as a scalable means of hierarchical point cloud learning. The key idea across all these contributions is to guarantee adequate consideration of the underlying non-uniform sample distribution function from a Monte Carlo perspective. To make the proposed concepts applicable to real-world tasks, we furthermore propose an efficient implementation which significantly reduces the GPU memory required during the training process. By employing our method in hierarchical network architectures we can outperform most of the state-of-the-art networks on established point cloud segmentation, classification and normal estimation benchmarks. Furthermore, in contrast to most existing approaches, we also demonstrate the robustness of our method with respect to sampling variations, even when training with uniformly sampled data only. To support the direct application of these concepts, we provide a ready-to-use TensorFlow implementation of these layers at https://github.com/viscom-ulm/MCCNN.
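The first two novelties can be sketched in a few lines: the kernel is a small MLP over relative point offsets, and the convolution is a Monte Carlo estimate in which each sample is divided by its density to compensate for non-uniform sampling. This NumPy sketch is illustrative only; the paper's actual implementation is the TensorFlow code at the linked repository.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_kernel(offsets, w1, b1, w2, b2):
    # The convolution kernel itself is a tiny MLP that maps relative
    # point offsets to scalar weights.
    hidden = np.maximum(offsets @ w1 + b1, 0.0)   # ReLU layer
    return (hidden @ w2 + b2).ravel()

def monte_carlo_conv(points, features, center, radius, params, density):
    # Monte Carlo estimate of the convolution at `center`: average the
    # kernel-weighted features, dividing each sample by its local density
    # to account for the non-uniform sample distribution.
    offsets = (points - center) / radius
    mask = np.linalg.norm(offsets, axis=1) <= 1.0  # receptive field
    w = mlp_kernel(offsets[mask], *params)
    contrib = w * features[mask] / density[mask]
    return contrib.sum() / max(mask.sum(), 1)
```

As a sanity check: with a kernel that outputs a constant 1, unit features, and unit density, the estimate is exactly 1 regardless of how the points are distributed, which is precisely the sampling invariance the density division is meant to provide.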

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2018
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-152556 (URN)
10.1145/3272127.3275110 (DOI)
Available from: 2018-11-06 Created: 2018-11-06 Last updated: 2018-12-14. Bibliographically approved.
Englund, R. & Ropinski, T. (2018). Quantitative and Qualitative Analysis of the Perception of Semi-Transparent Structures in Direct Volume Rendering. Computer graphics forum (Print), 37(6), 174-187
2018 (English). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 37, no 6, p. 174-187. Article in journal (Refereed). Published.
Abstract [en]

Direct Volume Rendering (DVR) provides the possibility to visualize volumetric data sets as they occur in many scientific disciplines. With DVR, semi-transparency is employed to convey the complexity of the data. Unfortunately, semi-transparency introduces challenges in spatial comprehension of the data, as semi-transparent representations are inherently ambiguous. Accordingly, many techniques have been introduced to enhance the spatial comprehension of DVR images. In this paper, we present our findings obtained from two evaluations investigating the perception of semi-transparent structures in volume-rendered images. We conducted a user evaluation in which we compared standard DVR with five techniques previously proposed to enhance the spatial comprehension of DVR images. In this study, we investigated the perceptual performance of these techniques and compared them against each other in a large-scale quantitative user study with 300 participants. Each participant completed micro-tasks designed such that the aggregated feedback gives insight into how well these techniques aid the user in perceiving the depth and shape of objects. To further clarify the findings, we conducted a qualitative evaluation in which we interviewed three experienced visualization researchers, in order to find out whether we could identify the benefits and shortcomings of the individual techniques.

Place, publisher, year, edition, pages
Wiley-Blackwell, 2018
Keywords
scientific visualization, volume visualization, Computing methodologies → Perception; Human-centred computing → Scientific visualization
National Category
Media and Communication Technology
Identifiers
urn:nbn:se:liu:diva-149551 (URN)
10.1111/cgf.13320 (DOI)
000437272800012 (ISI)
Note

Funding agencies: Excellence Center at Linkoping and Lund in Information Technology (ELLIIT); Swedish e-Science Research Centre (SeRC)

Available from: 2018-07-05 Created: 2018-07-05 Last updated: 2018-08-02
Henzler, P., Rasche, V., Ropinski, T. & Ritschel, T. (2018). Single-image Tomography: 3D Volumes from 2D Cranial X-Rays. Computer Graphics Forum (Proceedings of Eurographics 2018), 37(2), 377-388
2018 (English). In: Computer Graphics Forum (Proceedings of Eurographics 2018), Vol. 37, no 2, p. 377-388. Article in journal (Refereed). Published.
Abstract [en]

As many different 3D volumes could produce the same 2D x-ray image, inverting this process is challenging. We show that recent deep learning-based convolutional neural networks can solve this task. As the main challenge in learning is the sheer amount of data created when extending the 2D image into a 3D volume, we suggest first learning a coarse, fixed-resolution volume which is then fused in a second step with the input x-ray into a high-resolution volume. To train and validate our approach we introduce a new dataset that comprises close to half a million computer-simulated 2D x-ray images of 3D volumes scanned from 175 mammalian species. Future applications of our approach include stereoscopic rendering of legacy x-ray images and re-rendering of x-rays with changes of illumination, view pose or geometry. Our evaluation includes comparison to previous tomography work, previous learning methods using our data, a user study, and application to a set of real x-rays.
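The two-stage shape of the pipeline (coarse fixed-resolution volume, then fusion with the high-resolution input) can be sketched as below. The "coarse network" is replaced here by a trivial back-projection stub, since the point is the data flow, not the learned components; all function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def predict_coarse_volume(xray, depth=16):
    # Stand-in for the learned coarse network: back-project a downsampled
    # x-ray uniformly along the depth axis at a fixed low resolution.
    small = xray[::4, ::4]
    return np.repeat(small[None, :, :], depth, axis=0) / depth

def upsample(volume, factor=4):
    # Nearest-neighbour in-plane upsampling of the coarse volume.
    return volume.repeat(factor, axis=1).repeat(factor, axis=2)

def fuse(coarse_up, xray, eps=1e-8):
    # Second stage: redistribute the high-resolution x-ray intensity along
    # each ray according to the (normalised) coarse depth profile.
    profile = coarse_up / (coarse_up.sum(axis=0, keepdims=True) + eps)
    return profile * xray[None, :, :]

xray = rng.random((64, 64)) + 0.5     # synthetic stand-in for an input x-ray
volume = fuse(upsample(predict_coarse_volume(xray)), xray)
```

By construction, summing the fused volume along the depth axis reproduces the input image, so the reconstruction stays consistent with the x-ray it was inverted from.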

Place, publisher, year, edition, pages
Wiley-Blackwell Publishing Inc., 2018
National Category
Other Medical Engineering
Identifiers
urn:nbn:se:liu:diva-152561 (URN)
10.1111/cgf.13369 (DOI)
000434085600034 (ISI)
2-s2.0-85051549169 (Scopus ID)
Available from: 2018-11-06 Created: 2018-11-06 Last updated: 2018-11-14. Bibliographically approved.
Duran Rosich, D., Hermosilla, P., Ropinski, T., Kozlikova, B., Vinacua, À. & Vazquez, P.-P. (2018). Visualization of Large Molecular Trajectories. IEEE Transactions on Visualization and Computer Graphics
2018 (English). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506. Article in journal (Refereed). Epub ahead of print.
Abstract [en]

The analysis of protein-ligand interactions is a time-intensive task. Researchers have to analyze multiple physico-chemical properties of the protein at once and combine them to derive conclusions about the protein-ligand interplay. Typically, several charts are inspected, and 3D animations can be played side by side to obtain a deeper understanding of the data. With the advances in simulation techniques, larger and larger datasets are available, with up to hundreds of thousands of steps. Unfortunately, such large trajectories are very difficult to investigate with traditional approaches, so special tools that facilitate inspection of these large trajectories are needed. In this paper, we present a novel system for visual exploration of very large trajectories in an interactive and user-friendly way. Several visualization motifs are automatically derived from the data to inform the user about interactions between protein and ligand. Our system offers specialized widgets to ease and accelerate data inspection and navigation to interesting parts of the simulation. The system is also suitable for simulations where multiple ligands are involved. We have tested the usefulness of our tool on a set of datasets obtained from protein engineers, and we describe the expert feedback.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2018
Keywords
Molecular visualization, simulation inspection, long trajectories
National Category
Medical Biotechnology (with a focus on Cell Biology (including Stem Cell Biology), Molecular Biology, Microbiology, Biochemistry or Biopharmacy)
Identifiers
urn:nbn:se:liu:diva-152557 (URN)
10.1109/TVCG.2018.2864851 (DOI)
30207955 (PubMedID)
2-s2.0-85053116654 (Scopus ID)
Available from: 2018-11-06 Created: 2018-11-06 Last updated: 2018-11-13. Bibliographically approved.