Concurrent Volume Visualization of Real-Time fMRI
Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology. Linköping University, Center for Medical Image Science and Visualization (CMIV).
Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
Linköping University, Department of Biomedical Engineering, Medical Informatics. Linköping University, The Institute of Technology. Linköping University, Center for Medical Image Science and Visualization (CMIV).
Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology. Linköping University, Center for Medical Image Science and Visualization (CMIV).
2010 (English). In: Proceedings of the 8th IEEE/EG International Symposium on Volume Graphics / [ed] Ruediger Westermann and Gordon Kindlmann, Goslar, Germany: Eurographics - European Association for Computer Graphics, 2010, pp. 53-60. Conference paper, published paper (refereed).
Abstract [en]

We present a novel approach to interactive and concurrent volume visualization of functional Magnetic Resonance Imaging (fMRI). While the patient is in the scanner, data is extracted in real-time using state-of-the-art signal processing techniques. The fMRI signal is treated as light emission when rendering a patient-specific, high-resolution reference MRI volume obtained at the beginning of the experiment. As a result, the brain glows and emits light from active regions. The low-resolution fMRI signal is thus fused with the reference brain under the current transfer function settings, yielding an effective focus-and-context visualization. The delay from a change in the fMRI signal to the visualization is approximately 2 seconds. The advantage of our method over standard 2D slice-based methods is shown in a user study. We demonstrate our technique through experiments providing interactive visualization to the fMRI operator and also to the test subject in the scanner through a head-mounted display.
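The fusion described in the abstract can be illustrated with a minimal emission-absorption ray-marching sketch. This is not the paper's implementation; the array names, the toy transfer function, and the glow color are assumptions made for the example. The key idea it shows is that the co-registered fMRI activation simply adds an emissive "glow" term to each classified reference-MRI sample during front-to-back compositing.

```python
# Minimal sketch of emission-based fMRI/MRI fusion along a single ray.
# Not the authors' implementation; names and parameters are illustrative.
import numpy as np

def composite_ray(reference_samples, activation_samples, transfer_function,
                  glow_color=np.array([1.0, 0.6, 0.1]), step=1.0):
    """Front-to-back compositing along one ray.

    reference_samples  : (N,) MRI intensities sampled along the ray
    activation_samples : (N,) fMRI activation resampled to the same positions
    transfer_function  : maps an MRI intensity to (rgb, opacity)
    """
    color = np.zeros(3)
    alpha = 0.0
    for mri_value, activation in zip(reference_samples, activation_samples):
        rgb, opacity = transfer_function(mri_value)
        # Treat the fMRI signal as light emission: active regions add radiance
        # on top of the classified reference sample, so the brain "glows" there.
        emission = rgb * opacity + activation * glow_color
        color += (1.0 - alpha) * emission * step
        alpha += (1.0 - alpha) * opacity * step
        if alpha > 0.99:  # early ray termination
            break
    return color, alpha

# Example usage with a toy grayscale transfer function:
tf = lambda v: (np.array([v, v, v]), 0.05 * v)
rgb, a = composite_ray(np.linspace(0.0, 1.0, 64), np.zeros(64), tf)
```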

Place, publisher, year, edition, pages
Goslar, Germany: Eurographics - European Association for Computer Graphics, 2010, pp. 53-60.
Series
Eurographics/IEEE VGTC Symposium on Volume Graphics, ISSN 1727-8376 ; VG10
Keyword [en]
fMRI, Direct volume rendering, Local ambient occlusion, Real-time, Biofeedback
National Category
Medical Image Processing
Identifiers
URN: urn:nbn:se:liu:diva-58060
DOI: 10.2312/VG/VG10/053-060
ISBN: 978-3-905674-23-1 (print)
OAI: oai:DiVA.org:liu-58060
DiVA: diva2:331859
Conference
8th IEEE/EG International Symposium on Volume Graphics, Norrköping, Sweden, 2-3 May, 2010
Projects
CADICS
Available from: 2010-07-27. Created: 2010-07-27. Last updated: 2015-11-04. Bibliographically approved.
In thesis
1. Efficient Methods for Volumetric Illumination
2011 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

Modern imaging modalities can generate three-dimensional datasets with a very high level of detail. Conveying all of this information to the user efficiently requires three-dimensional visualization. To enhance diagnostic capabilities, the methods used must supply the user with fast renderings that are easy to interpret correctly.

It can thus be a challenge to visualize a three-dimensional dataset in a way that allows the user to perceive depth and shape. A number of stereoscopic solutions are available on the market, but in many situations it is more practical and less expensive to use ordinary two-dimensional displays. Incorporating advanced illumination can, however, improve the perception of depth in a volume rendering. Cast shadows provide the user with cues about distance and object hierarchy. Simulating realistic light conditions is complex, though, and it can be difficult to reach interactive frame rates; approximations and clever implementations are consequently required.

This thesis presents efficient methods for computing illumination with the objective of providing the user with strong spatial and shape perception. Two main types of light conditions are considered: a single point light source and omni-directional illumination. Global light transport is efficiently estimated using local piecewise integration, which allows a graceful speed-up compared to brute-force techniques. Ambient light conditions are calculated by integrating the incident light along rays within a local neighborhood around each point in the volume.
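A brute-force sketch of the local ambient occlusion integral described above may help make the idea concrete. This is not the thesis implementation: the thesis uses local piecewise integration and a multiresolution structure to reach interactive rates, whereas the sketch below simply walks short rays in a few fixed directions per voxel. The volume name, direction set, and parameters are assumptions for the example.

```python
# Brute-force sketch of local ambient occlusion: for each voxel, integrate
# transmittance along short rays confined to a local neighborhood and average.
# Illustrative only; far too slow for interactive use on real data.
import numpy as np

def local_ambient_occlusion(opacity_volume, radius=8, n_steps=8):
    """Return an ambient-light volume in [0, 1] (1 = fully lit, 0 = occluded)."""
    # Small fixed set of directions approximating omni-directional incidence.
    directions = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                           [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)
    ambient = np.zeros(opacity_volume.shape)
    dims = opacity_volume.shape
    for idx in np.ndindex(dims):
        light = 0.0
        for d in directions:
            transmittance = 1.0
            for s in range(1, n_steps + 1):
                # Sample along the ray, staying within the local radius.
                p = np.round(np.asarray(idx) + d * (s * radius / n_steps)).astype(int)
                if np.any(p < 0) or np.any(p >= dims):
                    break
                transmittance *= 1.0 - opacity_volume[tuple(p)]
            light += transmittance
        ambient[idx] = light / len(directions)
    return ambient
```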

Furthermore, an approach that allows the user to highlight different tissues using luminous materials is also presented in this thesis. A multiresolution data structure is employed in all the presented methods in order to support evaluation of illumination for large-scale data at interactive frame rates.

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2011. 59 p.
Series
Linköping Studies in Science and Technology. Dissertations, ISSN 0345-7524 ; 1406
National Category
Engineering and Technology
Identifiers
URN: urn:nbn:se:liu:diva-71460
ISBN: 978-91-7393-041-3
Public defence
2011-11-25, Wrannesalen, Center for Medical Image Science and Visualization, Campus US, Linköpings universitet, Linköping, 13:00 (English)
Opponent
Supervisors
Available from: 2011-10-19. Created: 2011-10-19. Last updated: 2015-09-22. Bibliographically approved.
2. Supporting Quantitative Visual Analysis in Medicine and Biology in the Presence of Data Uncertainty
2014 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

The advent of new technologies has led to tremendous increases in the diversity and size of the available data. In the field of medicine, advancements in medical imaging technologies have dramatically improved the quality of the acquired data, for example through higher resolution and a higher signal-to-noise ratio. In addition, the dramatic reduction of acquisition time has enabled studies of organs under function. At the same pace, progress in the fields of biology and bioinformatics has led to stable automatic algorithms for the generation of biological data. As the amount and complexity of the available data increase, there is great demand for efficient analysis and visualization techniques to support quantitative visual analysis of the huge amounts of data that we are facing.

This thesis aims to support quantitative visual analysis in the presence of data uncertainty within the context of medicine and biology. We present several novel analysis techniques and visual representations to achieve this goal. The results cover a wide range of applications, reflecting the interdisciplinary nature of scientific visualization: visualization is not done for its own sake, and advances in visualization enable advances in other fields.

In typical clinical applications or research scenarios, it is common to have data from different modalities. By combining the information from these data sources, we can achieve better quantitative analysis as well as visualization. Nevertheless, there are many challenges along the way, such as co-registration and differences in resolution and signal-to-noise ratio. We propose a novel approach that uses light as an information transporter to address the challenges involved in dealing with multimodal data.

When dealing with dynamic data, it is essential to identify features of interest across the time steps to support quantitative analyses. However, this is a time-consuming process and is prone to inconsistencies and errors. To address this issue, we propose a novel technique that enables automatic tracking of identified features of interest across the time steps of dynamic datasets.
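As a generic illustration of what automatic feature tracking across time steps can look like, the sketch below labels connected components in each time step and associates each feature with the nearest centroid from the previous step. This is a simple centroid-association scheme, not the technique proposed in the thesis; the threshold, the distance cutoff, and the function names are assumptions, and a real method would also handle merges, splits, and one-to-one assignment.

```python
# Generic centroid-based feature tracking across time steps (illustrative only).
import numpy as np
from scipy import ndimage

def track_features(time_steps, threshold=0.5, max_distance=5.0):
    """time_steps: list of 3D arrays. Returns one {feature_id: centroid} dict per
    time step, keeping ids consistent when centroids stay close between steps."""
    tracks, previous, next_id = [], {}, 1
    for volume in time_steps:
        labeled, n = ndimage.label(volume > threshold)
        centroids = ndimage.center_of_mass(volume, labeled, range(1, n + 1))
        current = {}
        for c in centroids:
            c = np.asarray(c)
            # Match to the closest previously tracked feature, if close enough.
            match = min(previous.items(),
                        key=lambda kv: np.linalg.norm(kv[1] - c),
                        default=(None, None))
            if match[0] is not None and np.linalg.norm(match[1] - c) < max_distance:
                current[match[0]] = c
            else:
                current[next_id] = c
                next_id += 1
        tracks.append(current)
        previous = current
    return tracks
```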

Although technological advances improve the accuracy of the acquired data, there are other sources of uncertainty that need to be taken into account. In this thesis, we propose a novel approach that fuses the uncertainty derived from different sophisticated algorithms in order to obtain a new set of outputs with a lower level of uncertainty. In addition, we propose a novel visual representation that not only supports comparative visualization but also conveys the uncertainty in the parameters of a complex system.

Over the past years, we have witnessed rapid growth of the available data in the field of biology. The sequence alignments of the top 20 protein domains and families contain large numbers of sequences, ranging from more than 70,000 to approximately 400,000. Consequently, it is difficult to convey features using traditional representations. In this thesis, we propose a novel representation that facilitates the identification of gross trend patterns and variations in large-scale sequence alignment data.

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2014. 146 p.
Series
Linköping Studies in Science and Technology. Dissertations, ISSN 0345-7524 ; 1569
National Category
Computer Science
Identifiers
URN: urn:nbn:se:liu:diva-103799
DOI: 10.3384/diss.diva-103799
ISBN: 978-91-7519-415-8
Public defence
2014-03-07, Dome, Visualization Center, Kungsgatan 54, Norrköping, 10:00 (English)
Opponent
Supervisors
Note

The ISBN 978-91-7519-514-8 on the title page is incorrect. The correct ISBN is 978-91-7519-415-8.

Available from: 2014-01-28. Created: 2014-01-27. Last updated: 2015-09-22. Bibliographically approved.

Open Access in DiVA

fulltext (4474 kB), 132 downloads
File name: FULLTEXT01.pdf
File size: 4474 kB
Checksum (SHA-512): ac4e842b981664bf8b47be42bf192aaa672e353a1044684e6d1ce7aaa3bbe0eab859a5fb569dae030ebfb9b133360931b359bcb44c981b25ebe6b9ac93f23e5a
Type: fulltext
Mimetype: application/pdf

Other links

Publisher's full text

Authority records
Nguyen, Tan Khoa; Ohlsson, Henrik; Eklund, Anders; Hernell, Frida; Ljung, Patric; Forsell, Camilla; Andersson, Mats; Knutsson, Hans; Ynnerman, Anders

