Publications (10 of 10)
Jönsson, D., Steneteg, P., Sundén, E., Englund, R., Kottravel, S., Falk, M., . . . Ropinski, T. (2019). Inviwo - A Visualization System with Usage Abstraction Levels. IEEE Transactions on Visualization and Computer Graphics
Inviwo - A Visualization System with Usage Abstraction Levels
2019 (English). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626. Article in journal (Refereed). Published.
Place, publisher, year, edition, pages
IEEE, 2019
National Category
Media and Communication Technology
Identifiers
urn:nbn:se:liu:diva-160860 (URN), 10.1109/TVCG.2019.2920639 (DOI)
Funder
Swedish e‐Science Research Center; ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications; Swedish Research Council, 2015-05462; Knut and Alice Wallenberg Foundation, 2013-0076
Available from: 2019-10-10 Created: 2019-10-10 Last updated: 2019-10-10
Jönsson, D., Sundén, E., Läthén, G. & W. Hachette, I. (2017). Method and system for volume rendering of medical images. Google Patents, US9552663B2.
Method and system for volume rendering of medical images
2017 (English). Patent (Other (popular science, discussion, etc.)).
Place, publisher, year, edition, pages
Google Patents, 2017
National Category
Media and Communication Technology
Identifiers
urn:nbn:se:liu:diva-160861 (URN)
Patent
US US9552663B2
Note

US Patent 9,552,663

Available from: 2019-10-10 Created: 2019-10-10 Last updated: 2019-10-10
Kottravel, S., Falk, M., Sundén, E. & Ropinski, T. (2015). Coverage-Based Opacity Estimation for Interactive Depth of Field in Molecular Visualization. In: IEEE Pacific Visualization Symposium (PacificVis 2015): . Paper presented at IEEE Pacific Visualization Symposium (PacificVis) (pp. 255-262). IEEE Computer Society
Coverage-Based Opacity Estimation for Interactive Depth of Field in Molecular Visualization
2015 (English). In: IEEE Pacific Visualization Symposium (PacificVis 2015), IEEE Computer Society, 2015, p. 255-262. Conference paper, Published paper (Refereed).
Abstract [en]

In this paper, we introduce coverage-based opacity estimation to achieve Depth of Field (DoF) effects when visualizing molecular dynamics (MD) data. The proposed algorithm is a novel object-based approach which eliminates many of the shortcomings of state-of-the-art image-based DoF algorithms. Based on observations derived from a physically-correct reference renderer, coverage-based opacity estimation exploits semi-transparency to simulate the blur inherent to DoF effects. It achieves high quality DoF effects, by augmenting each atom with a semi-transparent shell, which has a radius proportional to the distance from the focal plane of the camera. Thus, each shell represents an additional coverage area whose opacity varies radially, based on our observations derived from the results of multi-sampling DoF algorithms. By using the proposed technique, it becomes possible to generate high quality visual results, comparable to those achieved through ground-truth multi-sampling algorithms. At the same time, we obtain a significant speedup which is essential for visualizing MD data as it enables interactive rendering. In this paper, we derive the underlying theory, introduce coverage-based opacity estimation and demonstrate how it can be applied to real world MD data in order to achieve DoF effects. We further analyze the achieved results with respect to performance as well as quality and show that they are comparable to images generated with modern distributed ray tracing engines.
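
A minimal Python sketch of the shell idea described above, assuming a linear growth of the shell radius with focal-plane distance and a quadratic radial opacity falloff; the function names, the blur_scale constant, and the falloff profile are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def shell_radius(atom_depth, atom_radius, focal_depth, blur_scale=0.05):
    # Shell radius grows with the atom's distance from the focal plane
    # (the linear relation and blur_scale are assumptions for illustration).
    return atom_radius + blur_scale * abs(atom_depth - focal_depth)

def shell_opacity(r, atom_radius, shell_r, core_alpha=1.0):
    # Radially decreasing opacity of the semi-transparent coverage shell.
    # The paper derives the profile from a multi-sampling reference renderer;
    # a smooth quadratic falloff is assumed here instead.
    if r <= atom_radius:
        return core_alpha                    # opaque atom core
    if r >= shell_r:
        return 0.0                           # outside the coverage shell
    t = (r - atom_radius) / (shell_r - atom_radius)
    return core_alpha * (1.0 - t) ** 2

# Example: an atom of radius 0.5 placed 2.0 units behind the focal plane.
R = shell_radius(atom_depth=7.0, atom_radius=0.5, focal_depth=5.0)
print(R, [round(shell_opacity(r, 0.5, R), 3) for r in np.linspace(0.0, R, 5)])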

Place, publisher, year, edition, pages
IEEE Computer Society, 2015
Series
IEEE Pacific Visualization Symposium, ISSN 2165-8765
Keywords
molecular visualization, depth of field, opacity
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:liu:diva-128013 (URN), 10.1109/PACIFICVIS.2015.7156385 (DOI), 000380542200037, 978-1-4673-6879-7 (ISBN)
Conference
IEEE Pacific Visualization Symposium (PacificVis)
Funder
ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications; Swedish e‐Science Research Center
Available from: 2016-05-16 Created: 2016-05-16 Last updated: 2019-11-07. Bibliographically approved
Lindholm, S., Falk, M., Sundén, E., Bock, A., Ynnerman, A. & Ropinski, T. (2015). Hybrid Data Visualization Based On Depth Complexity Histogram Analysis. Computer graphics forum (Print), 34(1), 74-85
Hybrid Data Visualization Based On Depth Complexity Histogram Analysis
2015 (English). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 34, no 1, p. 74-85. Article in journal (Refereed). Published.
Abstract [en]

In many cases, only the combination of geometric and volumetric data sets is able to describe a single phenomenon under observation when visualizing large and complex data. When semi-transparent geometry is present, correct rendering results require sorting of transparent structures. Additional complexity is introduced as the contributions from volumetric data have to be partitioned according to the geometric objects in the scene. The A-buffer, an enhanced framebuffer with additional per-pixel information, has previously been introduced to deal with the complexity caused by transparent objects. In this paper, we present an optimized rendering algorithm for hybrid volume-geometry data based on the A-buffer concept. We propose two novel components for modern GPUs that tailor memory utilization to the depth complexity of individual pixels. The proposed components are compatible with modern A-buffer implementations and yield performance gains of up to eight times compared to existing approaches through reduced allocation and reuse of fast cache memory. We demonstrate the applicability of our approach and its performance with several examples from molecular biology, space weather, and medical visualization containing both, volumetric data and geometric structures.
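
The A-buffer concept referred to above can be illustrated with a small CPU-side Python sketch: transparent fragments are collected per pixel, sorted by depth, and blended front to back; the closing depth-complexity histogram hints at the per-pixel information the paper exploits to tailor fast cache memory. All names and the toy data are assumptions, not the authors' GPU implementation.

from collections import defaultdict

a_buffer = defaultdict(list)           # pixel -> list of (depth, rgba) fragments

def store_fragment(pixel, depth, rgba):
    a_buffer[pixel].append((depth, rgba))

def resolve(pixel):
    # Sort the pixel's fragments by depth and blend front to back.
    color, alpha = [0.0, 0.0, 0.0], 0.0
    for _, (r, g, b, a) in sorted(a_buffer[pixel]):
        w = (1.0 - alpha) * a
        color = [c + w * s for c, s in zip(color, (r, g, b))]
        alpha += w
    return color, alpha

store_fragment((10, 12), 0.3, (1.0, 0.0, 0.0, 0.5))   # far red fragment
store_fragment((10, 12), 0.1, (0.0, 0.0, 1.0, 0.5))   # near blue fragment
print(resolve((10, 12)))

# Depth-complexity histogram: how many pixels store how many fragments.
histogram = defaultdict(int)
for fragments in a_buffer.values():
    histogram[len(fragments)] += 1
print(dict(histogram))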

Place, publisher, year, edition, pages
John Wiley & Sons, 2015
National Category
Computer and Information Sciences, Computer Sciences
Identifiers
urn:nbn:se:liu:diva-110238 (URN), 10.1111/cgf.12460 (DOI), 000350145600008
Note

On the day of the defence date the status of this publication was Manuscript.

Available from: 2014-09-04 Created: 2014-09-04 Last updated: 2019-12-04. Bibliographically approved
Sundén, E., Kottravel, S. & Ropinski, T. (2015). Multimodal volume illumination. Computers & graphics, 50, 47-60
Multimodal volume illumination
2015 (English). In: Computers & graphics, ISSN 0097-8493, E-ISSN 1873-7684, Vol. 50, p. 47-60. Article in journal (Refereed). Published.
Abstract [en]

Despite the increasing importance of multimodal volumetric data acquisition and the recent progress in advanced volume illumination, interactive multimodal volume illumination remains an open challenge. As a consequence, the perceptual benefits of advanced volume illumination algorithms cannot be exploited when visualizing multimodal data - a scenario where increased data complexity urges for improved spatial comprehension. The two main factors hindering the application of advanced volumetric illumination models to multimodal data sets are rendering complexity and memory consumption. Solving the volume rendering integral by considering multimodal illumination increases the sampling complexity. At the same time, the increased storage requirements of multimodal data sets forbid to exploit precomputation results, which are often facilitated by advanced volume illumination algorithms to reduce the amount of per-frame computations. In this paper, we propose an interactive volume rendering approach that supports advanced illumination when visualizing multimodal volumetric data sets. The presented approach has been developed with the goal to simplify and minimize per-sample operations, while at the same time reducing the memory requirements. We will show how to exploit illumination-importance metrics, to compress and transform multimodal data sets into an illumination-aware representation, which is accessed during rendering through a novel light-space-based volume rendering algorithm. Both, data transformation and rendering algorithm, are closely intervened by taking compression errors into account during rendering. We describe and analyze the presented approach in detail, and apply it to real-world multimodal data sets from biology, medicine, meteorology and engineering.
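
A toy Python sketch of the compression step, assuming the illumination-importance metric is simply the larger classified opacity of two co-registered modalities and that the winning sample is quantized to 8 bits; the transfer functions, volume sizes, and metric are placeholders rather than the paper's actual scheme.

import numpy as np

rng = np.random.default_rng(0)
mod_a = rng.random((32, 32, 32), dtype=np.float32)   # modality A, normalized values
mod_b = rng.random((32, 32, 32), dtype=np.float32)   # modality B, normalized values

def opacity_tf_a(v):                                  # placeholder transfer functions
    return np.clip(2.0 * (v - 0.5), 0.0, 1.0)

def opacity_tf_b(v):
    return np.clip(1.5 * v, 0.0, 1.0)

# Per-voxel importance: which modality contributes more opacity after classification.
importance_a = opacity_tf_a(mod_a) >= opacity_tf_b(mod_b)

combined = np.where(importance_a, mod_a, mod_b)         # keep only the dominant sample
labels = importance_a.astype(np.uint8)                  # which modality "won" each voxel
compressed = np.round(combined * 255).astype(np.uint8)  # 8-bit quantization

# During rendering, one fetch from `compressed` (plus the label) replaces two
# full-precision fetches, which keeps the per-sample work and memory low.
print(compressed.nbytes + labels.nbytes, "bytes instead of", mod_a.nbytes + mod_b.nbytes)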

Place, publisher, year, edition, pages
Elsevier, 2015
Keywords
Volume rendering; Volumetric illumination; Multimodal visualization
National Category
Media Engineering
Identifiers
urn:nbn:se:liu:diva-120865 (URN), 10.1016/j.cag.2015.05.004 (DOI), 000358818100005
Available from: 2015-08-28 Created: 2015-08-28 Last updated: 2017-12-04
Jönsson, D., Sundén, E., Ynnerman, A. & Ropinski, T. (2014). A Survey of Volumetric Illumination Techniques for Interactive Volume Rendering. Computer graphics forum (Print), 33(1), 27-51
A Survey of Volumetric Illumination Techniques for Interactive Volume Rendering
2014 (English). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 33, no 1, p. 27-51. Article in journal (Refereed). Published.
Abstract [en]

Interactive volume rendering in its standard formulation has become an increasingly important tool in many application domains. In recent years several advanced volumetric illumination techniques to be used in interactive scenarios have been proposed. These techniques claim to have perceptual benefits as well as being capable of producing more realistic volume rendered images. Naturally, they cover a wide spectrum of illumination effects, including varying shading and scattering effects. In this survey, we review and classify the existing techniques for advanced volumetric illumination. The classification will be conducted based on their technical realization, their performance behaviour as well as their perceptual capabilities. Based on the limitations revealed in this review, we will define future challenges in the area of interactive advanced volumetric illumination.

Place, publisher, year, edition, pages
Wiley, 2014
Keywords
volume rendering; rendering; volume visualization; visualization; illumination rendering
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-105757 (URN), 10.1111/cgf.12252 (DOI), 000331694100004
Available from: 2014-04-07 Created: 2014-04-04 Last updated: 2017-12-05
Sundén, E., Bock, A., Jönsson, D., Ynnerman, A. & Ropinski, T. (2014). Interaction Techniques as a Communication Channel when Presenting 3D Visualizations. In: : . Paper presented at IEEE VIS International Workshop on 3DVis. IEEE
Interaction Techniques as a Communication Channel when Presenting 3D Visualizations
2014 (English). Conference paper, Published paper (Refereed).
Abstract [en]

In this position paper we discuss the usage of various interaction technologies with focus on the presentations of 3D visualizations involving a presenter and an audience. While an interaction technique is commonly evaluated from a user perspective, we want to shift the focus from a sole analysis of the naturalness and the ease-of-use for the user, to focus on how expressive and understandable the interaction technique is when witnessed by the audience. The interaction process itself can be considered to be a communication channel and a more expressive interaction technique might make it easier for the audience to comprehend the presentation. Thus, while some natural interaction techniques for interactive visualization are easy to perform by the presenter, they may be less beneficial when interacting with the visualization in front of (and for) an audience. Our observations indicate that the suitability of an interaction technique as a communication channel is highly dependent on the setting in which the interaction takes place. Therefore, we analyze different presentation scenarios in an exemplary fashion and discuss how beneficial and comprehensive the involved techniques are for the audience. We argue that interaction techniques complement the visualization in an interactive presentation scenario as they also serve as an important communication channel, and should therefore also be observed from an audience perspective rather than exclusively a user perspective.

Place, publisher, year, edition, pages
IEEE, 2014
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-117774 (URN), 10.1109/3DVis.2014.7160102 (DOI), 000412474600010, 9781479968268 (ISBN)
Conference
IEEE VIS International Workshop on 3DVis
Available from: 2015-05-08 Created: 2015-05-08 Last updated: 2018-10-08. Bibliographically approved
Bock, A., Sundén, E., Liu, B., Wuensche, B. & Ropinski, T. (2012). Coherency-Based Curve Compression for High-Order Finite Element Model Visualization. IEEE Transactions on Visualization and Computer Graphics, 18(12), 2315-2324
Coherency-Based Curve Compression for High-Order Finite Element Model Visualization
2012 (English). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 18, no 12, p. 2315-2324. Article in journal (Refereed). Published.
Abstract [en]

Finite element (FE) models are frequently used in engineering and life sciences within time-consuming simulations. In contrast with the regular grid structure facilitated by volumetric data sets, as used in medicine or geosciences, FE models are defined over a non-uniform grid. Elements can have curved faces and their interior can be defined through high-order basis functions, which pose additional challenges when visualizing these models. During ray-casting, the uniformly distributed sample points along each viewing ray must be transformed into the material space defined within each element. The computational complexity of this transformation makes a straightforward approach inadequate for interactive data exploration. In this paper, we introduce a novel coherency-based method which supports the interactive exploration of FE models by decoupling the expensive world-to-material space transformation from the rendering stage, thereby allowing it to be performed within a precomputation stage. Therefore, our approach computes view-independent proxy rays in material space, which are clustered to facilitate data reduction. During rendering, these proxy rays are accessed, and it becomes possible to visually analyze high-order FE models at interactive frame rates, even when they are time-varying or consist of multiple modalities. Within this paper, we provide the necessary background about the FE data, describe our decoupling method, and introduce our interactive rendering algorithm. Furthermore, we provide visual results and analyze the error introduced by the presented approach.
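
A compact Python sketch of the decoupling idea, assuming each proxy ray is a fixed-length curve of material-space samples and that plain k-means is used for clustering; the dimensions, cluster count, and lookup scheme are illustrative assumptions, not the paper's exact method.

import numpy as np

rng = np.random.default_rng(1)
n_rays, n_samples = 2000, 16
# Precomputation stage: view-independent proxy rays, already expressed in
# material space, so the expensive world-to-material transform is done once.
proxy_rays = rng.random((n_rays, n_samples), dtype=np.float32)

def kmeans(data, k, iters=20):
    # Minimal k-means used here to cluster similar proxy rays for data reduction.
    centers = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        dist = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        for c in range(k):
            members = data[labels == c]
            if len(members):
                centers[c] = members.mean(axis=0)
    return centers, labels

centers, labels = kmeans(proxy_rays, k=32)

def material_samples(ray_id):
    # Rendering stage: fetch the precomputed cluster representative instead of
    # transforming every sample point of the viewing ray per frame.
    return centers[labels[ray_id]]

print(material_samples(0).shape)   # (16,) material-space samples, no per-frame transform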

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2012
Keywords
Finite element visualization, GPU-based ray-casting
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-86633 (URN), 10.1109/TVCG.2012.206 (DOI), 000310143100035
Note

Funding Agencies: Swedish Research Council (VR), 2011-4113; Excellence Center at Linköping and Lund in Information Technology (ELLIIT); Swedish e-Science Research Centre (SeRC)

Available from: 2012-12-20 Created: 2012-12-20 Last updated: 2018-05-21
Jönsson, D., Sundén, E., Ynnerman, A. & Ropinski, T. (2012). State of The Art Report on Interactive Volume Rendering with Volumetric Illumination. In: Marie-Paule Cani and Fabio Ganovelli (Ed.), Eurographics 2012 - State of the Art Reports: . Paper presented at Eurographics 2012 (pp. 53-74). Eurographics - European Association for Computer Graphics
State of The Art Report on Interactive Volume Rendering with Volumetric Illumination
2012 (English). In: Eurographics 2012 - State of the Art Reports / [ed] Marie-Paule Cani and Fabio Ganovelli, Eurographics - European Association for Computer Graphics, 2012, p. 53-74. Conference paper, Oral presentation only (Other academic).
Abstract [en]

Interactive volume rendering in its standard formulation has become an increasingly important tool in many application domains. In recent years several advanced volumetric illumination techniques to be used in interactive scenarios have been proposed. These techniques claim to have perceptual benefits as well as being capable of producing more realistic volume rendered images. Naturally, they cover a wide spectrum of illumination effects, including varying shadowing and scattering effects. In this article, we review and classify the existing techniques for advanced volumetric illumination. The classification will be conducted based on their technical realization, their performance behavior as well as their perceptual capabilities. Based on the limitations revealed in this review, we will define future challenges in the area of interactive advanced volumetric illumination.

Place, publisher, year, edition, pages
Eurographics - European Association for Computer Graphics, 2012
Series
Eurographics 2012 - State of the Art Reports, ISSN 1017-4656
Keywords
Volume rendering, Volume Illumination
National Category
Other Engineering and Technologies not elsewhere specified
Identifiers
urn:nbn:se:liu:diva-78321 (URN), 10.2312/conf/EG2012/stars/053-074 (DOI)
Conference
Eurographics 2012
Projects
CADICS, CMIV
Available from: 2012-06-08 Created: 2012-06-08 Last updated: 2017-03-17
Sundén, E., Ynnerman, A. & Ropinski, T. (2011). Image Plane Sweep Volume Illumination. IEEE Transactions on Visualization and Computer Graphics, 17(12), 2125-2134
Image Plane Sweep Volume Illumination
2011 (English). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 17, no 12, p. 2125-2134. Article in journal (Refereed). Published.
Abstract [en]

In recent years, many volumetric illumination models have been proposed, which have the potential to simulate advanced lighting effects and thus support improved image comprehension. Although volume ray-casting is widely accepted as the volume rendering technique which achieves the highest image quality, so far no volumetric illumination algorithm has been designed to be directly incorporated into the ray-casting process. In this paper we propose image plane sweep volume illumination (IPSVI), which allows the integration of advanced illumination effects into a GPU-based volume ray-caster by exploiting the plane sweep paradigm. Thus, we are able to reduce the problem complexity and achieve interactive frame rates, while supporting scattering as well as shadowing. Since all illumination computations are performed directly within a single rendering pass, IPSVI does not require any preprocessing nor does it need to store intermediate results within an illumination volume. It therefore has a significantly lower memory footprint than other techniques. This makes IPSVI directly applicable to large data sets. Furthermore, the integration into a GPU-based ray-caster allows for high image quality as well as improved rendering performance by exploiting early ray termination. This paper discusses the theory behind IPSVI, describes its implementation, demonstrates its visual results and provides performance measurements.
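
A 2D toy sketch in Python of the sweep idea, assuming a single directional light entering from the top of the image so that the sweep order is simply top to bottom; the reduction to 2D, the exponential attenuation, and all array names are simplifying assumptions rather than the IPSVI implementation.

import numpy as np

rng = np.random.default_rng(2)
density = rng.random((64, 80)).astype(np.float32) * 0.1  # rows = scanlines, columns = samples along each viewing ray

light = np.ones(density.shape[1], dtype=np.float32)      # light entering the topmost scanline
image = np.zeros(density.shape[0], dtype=np.float32)     # one grayscale value per scanline

for row in range(density.shape[0]):                      # sweep scanlines in light order
    tau = density[row]                                    # optical depths along this viewing ray
    # Transmittance toward the eye, accumulated along the ray itself.
    transmittance = np.exp(-np.cumsum(np.concatenate(([0.0], tau[:-1]))))
    # Emission at each sample is lit by the light swept in from previous scanlines.
    image[row] = np.sum(transmittance * tau * light)
    # Attenuate the light before it reaches the next scanline; no illumination
    # volume is stored, only this 1D buffer travels with the sweep.
    light *= np.exp(-tau)

print(image[:4])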

Place, publisher, year, edition, pages
IEEE, 2011
Keywords
Interactive volume rendering, GPU-based ray-casting, Advanced illumination.
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-72047 (URN), 10.1109/TVCG.2011.211 (DOI), 000296241900043
Available from: 2011-11-17 Created: 2011-11-14 Last updated: 2017-12-08. Bibliographically approved