Enhancing Salient Features in Volumetric Data Using Illumination and Transfer Functions
Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. (C-Research)
ORCID iD: 0000-0002-5220-633X
2016 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

The visualization of volume data is a fundamental component in the medical domain. Volume data is used in the clinical workflow to diagnose patients and is therefore of utmost importance. The amount of data is rapidly increasing as sensors, such as computed tomography scanners, become capable of measuring more details and gathering more data over time. Unfortunately, the increasing amount of data makes it computationally challenging to interactively apply high-quality methods that increase shape and depth perception. Furthermore, methods for exploring volume data have mostly been designed for experts, which prevents novice users from exploring the data. This thesis aims to address these challenges by introducing efficient methods for enhancing salient features through high-quality illumination, as well as methods for intuitive volume data exploration.

Humans interpret the world around them by observing how light interacts with objects. Shadows enable us to better judge distances, while shifts in color enable us to better distinguish objects and identify their shape. These concepts are equally applicable to computer-generated content. Perception in volume data visualization can therefore be improved by simulating real-world light interaction. This thesis presents efficient methods that are capable of interactively simulating realistic light propagation in volume data. In particular, this work shows how a multi-resolution grid can be used to encode the attenuation of light from all directions using spherical harmonics, and thereby enable advanced interactive dynamic light configurations. Two methods are also presented that allow photon mapping calculations to be focused on visually changing areas. The results demonstrate that photon mapping can be used in interactive volume visualization for both static and time-varying volume data.

Efficient and intuitive exploration of volume data requires methods that are easy to use and reflect the objects that were measured. A value collected by a sensor commonly represents the materials existing within a small neighborhood around a location. Recreating the original materials is difficult since the value represents a mixture of them; this is referred to as the partial-volume problem. A method is presented that derives knowledge from the user in order to reconstruct the original materials in a way that is more in line with what the user would expect. Sharp boundaries are visualized where the certainty is high, while uncertain areas are visualized with fuzzy boundaries. The volume exploration process of mapping data values to optical properties through the transfer function has traditionally been complex and performed by expert users. A study at a science center showed that visitors favor the presented dynamic gallery method over the most commonly used transfer function editor.
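The transfer function referred to above is, in its simplest one-dimensional form, a mapping from a scalar data value to a color and an opacity. As a minimal sketch of that concept in Python (the control points below are invented for illustration and are not taken from the thesis):

```python
# Minimal 1D transfer function sketch: maps a scalar data value to
# (r, g, b, opacity) by linear interpolation between control points.
# Control points and values here are illustrative, not from the thesis.

from bisect import bisect_right

def make_transfer_function(control_points):
    """control_points: sorted list of (value, (r, g, b, a)) tuples."""
    xs = [v for v, _ in control_points]
    cs = [c for _, c in control_points]

    def tf(x):
        if x <= xs[0]:
            return cs[0]
        if x >= xs[-1]:
            return cs[-1]
        i = bisect_right(xs, x) - 1
        t = (x - xs[i]) / (xs[i + 1] - xs[i])
        return tuple(a + t * (b - a) for a, b in zip(cs[i], cs[i + 1]))

    return tf

# Example: map low densities to transparent blue, high to opaque white.
tf = make_transfer_function([
    (0.0, (0.0, 0.0, 1.0, 0.0)),
    (1.0, (1.0, 1.0, 1.0, 1.0)),
])
```

Narrowing the opaque control points to a small value range is exactly the kind of data-domain interaction that isolates one material in the rendered image.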

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2016. 61 p.
Series
Linköping Studies in Science and Technology. Dissertations, ISSN 0345-7524 ; 1789
National Category
Media and Communication Technology; Computer Science; Media Engineering; Other Computer and Information Science
Identifiers
URN: urn:nbn:se:liu:diva-131023
DOI: 10.3384/diss.diva-131023
ISBN: 9789176856895 (print)
OAI: oai:DiVA.org:liu-131023
DiVA: diva2:971687
Public defence
2016-10-21, Domteatern, Visualiseringscenter C, Kungsgatan 54, Norrköping, 09:30 (English)
Opponent
Supervisors
Available from: 2016-10-04 Created: 2016-09-05 Last updated: 2016-10-04. Bibliographically approved
List of papers
1. A Survey of Volumetric Illumination Techniques for Interactive Volume Rendering
2014 (English). In: Computer Graphics Forum, ISSN 0167-7055, E-ISSN 1467-8659, Vol. 33, no. 1, pp. 27-51. Article in journal (Refereed). Published
Abstract [en]

Interactive volume rendering in its standard formulation has become an increasingly important tool in many application domains. In recent years, several advanced volumetric illumination techniques for use in interactive scenarios have been proposed. These techniques promise perceptual benefits as well as more realistic volume-rendered images, and they cover a wide spectrum of illumination effects, including varying shading and scattering effects. In this survey, we review and classify the existing techniques for advanced volumetric illumination. The classification is based on their technical realization, their performance behaviour, and their perceptual capabilities. Based on the limitations revealed in this review, we define future challenges in the area of interactive advanced volumetric illumination.

Place, publisher, year, edition, pages
Wiley, 2014
Keyword
volume rendering; volume visualization; visualization; illumination
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-105757 (URN)
10.1111/cgf.12252 (DOI)
000331694100004 ()
Available from: 2014-04-07 Created: 2014-04-04 Last updated: 2017-12-05
2. Efficient Visibility Encoding for Dynamic Illumination in Direct Volume Rendering
2012 (English). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 18, no. 3, pp. 447-462. Article in journal (Refereed). Published
Abstract [en]

We present an algorithm that enables real-time dynamic shading in direct volume rendering using general lighting, including directional lights, point lights and environment maps. Real-time performance is achieved by encoding local and global volumetric visibility using spherical harmonic (SH) basis functions stored in an efficient multi-resolution grid over the extent of the volume. Our method enables high-frequency shadows in the spatial domain, but is limited to a low-frequency approximation of visibility and illumination in the angular domain. In a first pass, Level Of Detail (LOD) selection in the grid is based on the current transfer function setting. This enables rapid on-line computation and SH projection of the local spherical distribution of visibility information. Using a piecewise integration of the SH coefficients over the local regions, the global visibility within the volume is then computed. By representing the light sources using their SH projections, the integral over lighting, visibility and isotropic phase functions can be efficiently computed during rendering. The utility of our method is demonstrated in several examples showing the generality and interactive performance of the approach.
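The central ingredient of the method above is projecting a spherical visibility distribution onto a low-order spherical harmonic (SH) basis. The following is an illustrative Monte Carlo projection onto SH bands 0-1 in Python; the sample count, the two-band truncation and the example occluder are assumptions for the sketch, not the paper's implementation:

```python
# Sketch of projecting a spherical visibility function onto low-order
# real spherical harmonics (bands 0-1), as used for compact directional
# visibility storage. Sampling scheme and band count are illustrative.

import math
import random

def sh_basis(x, y, z):
    # Real SH basis, bands 0 and 1 (4 coefficients).
    return (0.282095,            # Y_0^0
            0.488603 * y,        # Y_1^{-1}
            0.488603 * z,        # Y_1^0
            0.488603 * x)        # Y_1^1

def project_sh(f, n_samples=20000, seed=1):
    """Monte Carlo projection of f(x, y, z) onto 4 SH coefficients."""
    rng = random.Random(seed)
    coeffs = [0.0] * 4
    for _ in range(n_samples):
        # Uniform direction on the unit sphere.
        z = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(1.0 - z * z)
        x, y = r * math.cos(phi), r * math.sin(phi)
        fv = f(x, y, z)
        for i, b in enumerate(sh_basis(x, y, z)):
            coeffs[i] += fv * b
    w = 4.0 * math.pi / n_samples   # uniform-sphere MC weight
    return [c * w for c in coeffs]

def eval_sh(coeffs, x, y, z):
    """Reconstruct the function value in a given direction."""
    return sum(c * b for c, b in zip(coeffs, sh_basis(x, y, z)))

# Visibility blocked in the lower hemisphere (e.g. an occluder below).
def vis(x, y, z):
    return 1.0 if z > 0.0 else 0.0

c = project_sh(vis)
```

Two bands give only a smooth approximation of the step function, which mirrors the paper's limitation to low-frequency visibility in the angular domain.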

Place, publisher, year, edition, pages
IEEE, 2012
Keyword
Volumetric Illumination, Precomputed Radiance Transfer, Volume Rendering
National Category
Other Computer and Information Science
Identifiers
urn:nbn:se:liu:diva-66839 (URN)
10.1109/TVCG.2011.35 (DOI)
000299281700010 ()
Projects
CADICS, MOVIII
Note
©2011 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes, or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works, must be obtained from the IEEE. Joel Kronander, Daniel Jönsson, Joakim Löw, Patric Ljung, Anders Ynnerman and Jonas Unger, Efficient Visibility Encoding for Dynamic Illumination in Direct Volume Rendering, 2011, IEEE Transactions on Visualization and Computer Graphics. http://dx.doi.org/10.1109/TVCG.2011.35
Available from: 2011-03-24 Created: 2011-03-21 Last updated: 2017-12-11. Bibliographically approved
3. Historygrams: Enabling Interactive Global Illumination in Direct Volume Rendering using Photon Mapping
2012 (English). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 18, no. 12, pp. 2364-2371. Article in journal (Refereed). Published
Abstract [en]

In this paper, we enable interactive volumetric global illumination by extending photon mapping techniques to handle interactive transfer function (TF) and material editing in the context of volume rendering. We propose novel algorithms and data structures for finding and evaluating the parts of a scene affected by these parameter changes, and thus support efficient updates of the photon map. In direct volume rendering (DVR), the ability to explore volume data through parameter changes, such as editable TFs, is of key importance. Advanced global illumination techniques are in most cases computationally too expensive, as they prevent the desired interactivity. Our technique decreases the amount of computation caused by parameter changes by introducing Historygrams, which allow us to efficiently reuse previously computed photon-media interactions. Along the viewing rays, we utilize properties of the light transport equations to subdivide a view ray into segments and independently update them when invalid. Unlike the segments of a view ray, photon scattering events within the volumetric medium need to be updated sequentially. Using our Historygram approach, we can identify the first invalid photon interaction caused by a property change, and thus reuse all valid photon interactions. Combining these two novel concepts supports interactive editing of parameters when using volumetric photon mapping in the context of DVR. As a consequence, we can handle arbitrarily shaped and positioned light sources, arbitrary phase functions, bidirectional reflectance distribution functions, and multiple scattering, which has previously not been possible in interactive DVR.
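The reuse logic described above can be caricatured as follows: if each photon interaction records which transfer-function bin it sampled, a TF edit invalidates a path only from its first interaction in a changed bin onward. The bin-history representation below is an illustrative simplification, not the paper's actual Historygram data structure:

```python
# Sketch of the Historygram idea: each photon path records, per
# interaction, which transfer-function bin it sampled. When the TF is
# edited, only interactions that sampled a changed bin become invalid,
# and all interactions before the first invalid one can be reused.
# Data structures here are illustrative, not the paper's exact ones.

def first_invalid_interaction(path_bins, changed_bins):
    """Return the index of the first photon interaction that sampled a
    changed TF bin, or len(path_bins) if the whole path is reusable."""
    for i, b in enumerate(path_bins):
        if b in changed_bins:
            return i
    return len(path_bins)

def reusable_prefixes(photon_paths, old_tf, new_tf):
    """For each path (list of TF-bin indices), how many leading
    interactions survive a TF edit from old_tf to new_tf."""
    changed = {i for i, (a, b) in enumerate(zip(old_tf, new_tf)) if a != b}
    return [first_invalid_interaction(p, changed) for p in photon_paths]

# Example: a 4-bin TF where only bin 2 was edited.
old_tf = [0.0, 0.1, 0.5, 0.9]
new_tf = [0.0, 0.1, 0.7, 0.9]
paths = [[0, 1, 3], [1, 2, 3], [2, 0, 1]]
print(reusable_prefixes(paths, old_tf, new_tf))  # [3, 1, 0]
```

The sequential dependence the abstract mentions is visible here: once an interaction is invalid, everything after it must be recomputed, because each scattering event depends on the previous one.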

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2012
Keyword
Volume rendering, photon mapping, global illumination, participating media
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-86634 (URN)
10.1109/TVCG.2012.232 (DOI)
000310143100040 ()
Projects
CADICS, CMIV
Note

Funding agencies: Excellence Center at Linköping and Lund in Information Technology (ELLIIT); Swedish e-Science Research Centre (SeRC)

Available from: 2012-12-20 Created: 2012-12-20 Last updated: 2017-12-06
4. Correlated Photon Mapping for Interactive Global Illumination of Time-Varying Volumetric Data
2017 (English). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 23, no. 1, pp. 901-910. Article in journal (Refereed). Published
Abstract [en]

We present a method for interactive global illumination of both static and time-varying volumetric data, based on reducing the overhead associated with re-computing photon maps. Our method identifies photon traces that are invariant to changes of visual parameters, such as the transfer function (TF), or to data changes between time-steps in a 4D volume. This lets us operate on only the variant subset of the entire photon distribution. The amount of computation required in the two stages of the photon mapping process, namely tracing and gathering, can thus be reduced to the subset that is affected by a data or visual parameter change. We rely on two different types of information from the original data to identify the regions that have changed. A low-resolution uniform grid containing the minimum and maximum data values of the original data is derived for each time-step. Similarly, for two consecutive time-steps, a low-resolution grid containing the difference between the overlapping data is used. We show that this compact metadata can be combined with the transfer function to identify the regions that have changed. Each photon traverses the low-resolution grid to determine whether it can be transferred directly to the next photon distribution state or needs to be recomputed. An efficient representation of the photon distribution is presented, leading to an order-of-magnitude performance improvement of the raycasting step. The utility of the method is demonstrated in several examples that show visual fidelity as well as performance. The examples show that visual quality can be retained when the fraction of retraced photons is as low as 40%-50%.
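The min/max metadata test described above can be sketched as follows: a grid cell is affected by a TF edit only if some value in its [min, max] range maps differently under the new TF, and a photon needs retracing only if it crossed an affected cell. The representations below are simplified stand-ins for the paper's structures:

```python
# Sketch of using a low-resolution min/max grid together with a TF edit
# to decide which photons need retracing. A grid cell is affected only
# if some value in its [min, max] range maps differently under the new
# TF; a photon is retraced only if it crossed an affected cell.

def tf_changed_in_range(old_tf, new_tf, lo, hi):
    """old_tf/new_tf: lists indexed by integer data value."""
    return any(old_tf[v] != new_tf[v] for v in range(lo, hi + 1))

def affected_cells(minmax_grid, old_tf, new_tf):
    """minmax_grid: dict cell_id -> (min_value, max_value)."""
    return {cell for cell, (lo, hi) in minmax_grid.items()
            if tf_changed_in_range(old_tf, new_tf, lo, hi)}

def photons_to_retrace(photon_cells, affected):
    """photon_cells: dict photon_id -> set of traversed grid cells."""
    return {p for p, cells in photon_cells.items() if cells & affected}

grid = {"A": (0, 3), "B": (4, 7), "C": (2, 5)}
old_tf = [0.0] * 8
new_tf = [0.0] * 8
new_tf[5] = 1.0                            # edit affects value 5 only
aff = affected_cells(grid, old_tf, new_tf)  # cells whose range holds 5
```

The same interval test works for temporal changes by storing per-cell difference bounds between consecutive time-steps instead of a TF edit.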

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2017
Keyword
Volume rendering, photon mapping, global illumination, participating media
National Category
Media Engineering
Identifiers
urn:nbn:se:liu:diva-131022 (URN)
10.1109/TVCG.2016.2598430 (DOI)
000395537600093 ()
27514045 (PubMedID)
2-s2.0-84999158356 (Scopus ID)
Projects
SeRC, CMIV
Note

Funding agencies: Swedish e-Science Research Centre (SeRC); Swedish Research Council (VR) grant 2016-05462; Knut and Alice Wallenberg Foundation (KAW) grant 2016-0076

Available from: 2016-09-05 Created: 2016-09-05 Last updated: 2017-04-20Bibliographically approved
5. Boundary Aware Reconstruction of Scalar Fields
2014 (English). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 20, no. 12, pp. 2447-2455. Article in journal (Refereed). Published
Abstract [en]

In visualization, the interplay between data reconstruction and classification plays a crucial role. In this paper we propose a novel approach that improves the classification of different materials and their boundaries by combining information from the classifiers at the reconstruction stage. Our approach estimates the targeted materials' local support before performing multiple material-specific reconstructions that prevent much of the misclassification traditionally associated with transitional regions and transfer function (TF) design. Compared with previously published methods, our approach offers a number of improvements and advantages. For one, it does not rely on TFs acting on derivative expressions, and is therefore less sensitive to noisy data; moreover, the classification of a single material does not depend on specialized TF widgets or on specifying regions in a multidimensional TF. Additionally, improved classification is attained without increasing TF dimensionality, which promotes scalability to multivariate data. These aspects are also key in maintaining low interaction complexity. The results are simple-to-achieve visualizations that better comply with the user's understanding of discrete features within the studied object.
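The core intuition, that classification should inform reconstruction rather than follow it, can be shown with a toy one-dimensional example. The nearest-value classifier and the sample pair below are invented for illustration; the paper's actual method estimates local material support and uses material-specific reconstruction filters:

```python
# Toy illustration of why reconstructing before classifying causes
# misclassification in transition regions: interpolating across an
# air/bone boundary produces intermediate values that look like a third
# material (soft tissue) that is not actually present.

def classify(v, materials):
    """Nearest-material hard classifier; materials: one value each."""
    return min(range(len(materials)), key=lambda m: abs(v - materials[m]))

def naive_reconstruct(a, b, t, materials):
    """Interpolate first, classify after: transition values can be
    assigned to a material absent from both samples."""
    return classify(a + t * (b - a), materials)

def boundary_aware(a, b, t, materials):
    """Classify first, then take the nearer sample's material: only
    materials present at the two samples can appear."""
    return classify(a, materials) if t < 0.5 else classify(b, materials)

# Three materials: air (0), soft tissue (100), bone (200).
mats = [0, 100, 200]
# Midpoint of an air/bone boundary interpolates to 100 and is
# misclassified as soft tissue by the naive scheme.
assert naive_reconstruct(0, 200, 0.5, mats) == 1   # phantom material
assert boundary_aware(0, 200, 0.5, mats) == 2      # bone, as expected
```

This is the partial-volume problem from the thesis abstract in miniature; the paper's per-material reconstructions generalize the "classify first" side of this comparison.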

Place, publisher, year, edition, pages
IEEE Press, 2014
National Category
Computer and Information Science; Computer Science
Identifiers
urn:nbn:se:liu:diva-110227 (URN)
10.1109/TVCG.2014.2346351 (DOI)
000344991700090 ()
Available from: 2014-09-04 Created: 2014-09-04 Last updated: 2017-12-05. Bibliographically approved
6. Intuitive Exploration of Volumetric Data Using Dynamic Galleries
2016 (English). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 22, no. 1, pp. 896-905. Article in journal (Refereed). Published
Abstract [en]

In this work we present a volume exploration method designed to be used by novice users and visitors to science centers and museums. The volumetric digitization of artifacts in museums is of rapidly increasing interest, as it enables an enhanced user experience through interactive data visualization. This is, however, a challenging task, since the vast majority of visitors are not familiar with the concepts commonly used in data exploration, such as mapping visual properties from values in the data domain using transfer functions. Interacting in the data domain is an effective way to filter away undesired information, but it is difficult to predict where the values lie in the spatial domain. In this work we make extensive use of dynamic previews, instantly generated as the user explores the data domain. The previews allow the user to predict what effect changes in the data domain will have on the rendered image, without being aware that visual parameters are set in the data domain. Each preview represents a subrange of the data domain, where overview and details are given on demand through zooming and panning. The method has been designed with touch interfaces as the target platform for interaction. We provide a qualitative evaluation, performed with visitors to a science center, to show the utility of the approach.
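The gallery mechanics described above can be sketched as follows: the data range is split into subranges, each subrange yields one preview transfer function, and zooming re-subdivides a subrange for detail on demand. The equal subdivision and box-shaped opacity below are assumptions for the sketch, not the paper's exact scheme:

```python
# Sketch of the dynamic-gallery idea: partition the data range into
# subranges and derive one preview transfer function per subrange, so a
# visitor picks images instead of editing a TF directly.

def gallery_subranges(lo, hi, n):
    """Split [lo, hi] into n equal preview subranges."""
    step = (hi - lo) / n
    return [(lo + i * step, lo + (i + 1) * step) for i in range(n)]

def preview_tf(subrange):
    """A TF showing only values inside the subrange (box opacity)."""
    a, b = subrange
    return lambda v: 1.0 if a <= v < b else 0.0

def zoom(subrange, n):
    """Zooming into a preview re-subdivides it for detail on demand."""
    return gallery_subranges(subrange[0], subrange[1], n)

panels = gallery_subranges(0.0, 1.0, 4)   # four gallery previews
tf = preview_tf(panels[1])                # visitor taps the 2nd panel
```

Each panel would be rendered with its own preview TF, so the visitor sees directly which structures live in which part of the data range without ever seeing a histogram or a TF editor.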

Place, publisher, year, edition, pages
IEEE Computer Society, 2016
Keyword
Transfer function; scalar fields; volume rendering; touch interaction; visualization; user interfaces
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:liu:diva-123054 (URN)
10.1109/TVCG.2015.2467294 (DOI)
000364043400095 ()
26390481 (PubMedID)
Note

Funding agencies: Swedish Research Council (VR) [2011-5816]; Excellence Center at Linköping and Lund in Information Technology (ELLIIT); Linnaeus Environment CADICS; Swedish e-Science Research Centre (SeRC)

Available from: 2015-12-04 Created: 2015-12-03 Last updated: 2017-12-01

Open Access in DiVA

Fulltext: Enhancing Salient Features in Volumetric Data Using Illumination and Transfer Functions, FULLTEXT01.pdf (2275 kB, application/pdf)
Cover: COVER01.pdf (2814 kB, application/pdf)

By author/editor: Jönsson, Daniel
By organisation: Media and Information Technology; Faculty of Science & Engineering
