liu.se: Search publications in DiVA
1 - 30 of 30
  • 1.
    Knutsson, Alex
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Unnebäck, Jakob
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Jönsson, Daniel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Eilertsen, Gabriel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    CDF-Based Importance Sampling and Visualization for Neural Network Training (2023). In: Eurographics Workshop on Visual Computing for Biology and Medicine / [ed] Thomas Höllt and Daniel Jönsson, 2023. Conference paper (Refereed)
    Abstract [en]

    Training a deep neural network is computationally expensive, but achieving the same network performance with less computation is possible if the training data is carefully chosen. However, selecting input samples during training is challenging as their true importance for the optimization is unknown. Furthermore, evaluation of the importance of individual samples must be computationally efficient and unbiased. In this paper, we present a new input data importance sampling strategy for reducing the training time of deep neural networks. We investigate different importance metrics that can be efficiently retrieved as they are available during training, i.e., the training loss and gradient norm. We found that choosing only samples with large loss or gradient norm, which are hard for the network to learn, is not optimal for the network performance. Instead, we introduce an importance sampling strategy that selects samples based on the cumulative distribution function of the loss and gradient norm, thereby making it more likely to choose hard samples while still including easy ones. The behavior of the proposed strategy is first analyzed on a synthetic dataset, and then evaluated in the application of classification of malignant cancer in digital pathology image patches. As pathology images contain many repetitive patterns, there could be significant gains in focusing on features that contribute more strongly to the optimization. Finally, we show how the importance sampling process can be used to gain insights about the input data through visualization of samples that are found most or least useful for the training.

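The CDF-based selection described in the abstract can be illustrated with a small sketch. This is not the authors' code; it is a minimal numpy illustration, assuming only that the selection probability of a sample is proportional to the empirical CDF of its loss, so hard samples are favored while easy ones keep a nonzero chance:

```python
import numpy as np

def cdf_importance_sample(losses, n_draw, rng=None):
    """Draw sample indices with probability proportional to the
    empirical CDF of the per-sample loss."""
    rng = np.random.default_rng() if rng is None else rng
    losses = np.asarray(losses, dtype=float)
    ranks = losses.argsort().argsort() + 1   # CDF rank 1..N of each loss
    p = ranks / ranks.sum()                  # normalize to probabilities
    return rng.choice(losses.size, size=n_draw, replace=True, p=p)

# Toy usage: high-loss samples are drawn more often than low-loss ones.
losses = np.array([0.1, 0.2, 0.5, 2.0, 5.0])
idx = cdf_importance_sample(losses, 10_000, rng=np.random.default_rng(0))
counts = np.bincount(idx, minlength=5)
```

With these ranks the hardest sample is drawn roughly five times as often as the easiest, yet every sample remains reachable — the property the abstract contrasts against pure hard-sample mining.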
  • 2.
    Jönsson, Daniel
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Kronander, Joel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Unger, Jonas
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Schön, Thomas
    Uppsala university, Sweden.
    Wrenninge, Magnus
    Department of Science and Technology, Pixar Animation Studios, 512174 Emeryville, California, United States.
    Direct Transmittance Estimation in Heterogeneous Participating Media Using Approximated Taylor Expansions (2022). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 28, no. 7, pp. 2602-2614. Journal article (Refereed)
    Abstract [en]

    Evaluating the transmittance between two points along a ray is a key component in solving the light transport through heterogeneous participating media and entails computing an intractable exponential of the integrated medium's extinction coefficient. While algorithms for estimating this transmittance exist, there is a lack of theoretical knowledge about their behaviour, which also prevents new theoretically sound algorithms from being developed. For this purpose, we introduce a new class of unbiased transmittance estimators based on random sampling or truncation of a Taylor expansion of the exponential function. In contrast to classical tracking algorithms, these estimators are non-analogous to the physical light transport process and directly sample the underlying extinction function without performing incremental advancement. We present several versions of the new class of estimators, based on either importance sampling or Russian roulette to provide finite unbiased estimators of the infinite Taylor series expansion. We also show that the well-known ratio tracking algorithm can be seen as a special case of the new class of estimators. Lastly, we conduct performance evaluations on both the central processing unit (CPU) and the graphics processing unit (GPU), and the results demonstrate that the new algorithms outperform traditional algorithms for heterogeneous media.

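The Russian-roulette variant of such a Taylor-series estimator can be sketched as follows. This is an illustrative numpy implementation, not the paper's code; `sigma`, the ray interval `[a, b]`, and the continuation probability `q` are assumptions. Each series term uses an independent one-sample estimate of the optical depth, and Russian roulette truncates the infinite series without introducing bias:

```python
import numpy as np

def transmittance_rr(sigma, a, b, q=0.9, rng=None):
    """Unbiased single-sample estimate of exp(-integral of sigma over [a,b])
    via Russian roulette on the Taylor series of the exponential."""
    rng = np.random.default_rng() if rng is None else rng
    est, weight, k = 1.0, 1.0, 1
    while rng.random() < q:              # continue to the next series term?
        t = rng.uniform(a, b)            # sample the extinction once
        tau_hat = (b - a) * sigma(t)     # unbiased estimate of optical depth
        weight *= -tau_hat / (k * q)     # next Taylor term, RR-compensated
        est += weight
        k += 1
    return est

# Toy medium: sigma(t) = t on [0, 1], so tau = 0.5 and T = exp(-0.5).
rng = np.random.default_rng(0)
mean = np.mean([transmittance_rr(lambda t: t, 0.0, 1.0, rng=rng)
                for _ in range(200_000)])
```

Averaged over many runs the estimate converges to exp(-0.5) ≈ 0.6065, since each surviving term k contributes (-tau)^k / k! in expectation.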
  • 3.
    Rasheed, Farhan
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Jönsson, Daniel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten. Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV.
    Nilsson, Emma
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Masood, Talha Bin
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Hotz, Ingrid
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Subject-Specific Brain Activity Analysis in fMRI Data Using Merge Trees (2022). In: 2022 IEEE Workshop on Topological Data Analysis and Visualization (TopoInVis 2022), IEEE, 2022, pp. 113-123. Conference paper (Refereed)
    Abstract [en]

    We present a method for detecting patterns in time-varying functional magnetic resonance imaging (fMRI) data based on topological analysis. The oxygenated blood flow measured by fMRI is widely used as an indicator of brain activity. The signal is, however, prone to noise from various sources. Random brain activity, physiological noise, and noise from the scanner can reach a strength comparable to the signal itself. Thus, extracting the underlying signal is a challenging process typically approached by applying statistical methods. The goal of this work is to investigate the possibilities of recovering information from the signal using topological feature vectors directly based on the raw signal without medical domain priors. We utilize merge trees to define a robust feature vector capturing key features within a time step of fMRI data. We demonstrate how such a concise feature vector representation can be utilized for exploring the temporal development of brain activations, connectivity between these activations, and their relation to cognitive tasks.
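The merge-tree sweep behind such feature vectors can be illustrated on a 1D signal. This is a minimal union-find sketch of superlevel-set persistence (peaks are born at their height and die when merging into a higher peak), not the paper's implementation:

```python
import numpy as np

def superlevel_persistence(f):
    """0-dimensional persistence pairs of the superlevel-set filtration
    of a 1D signal, computed by a high-to-low merge-tree sweep."""
    f = np.asarray(f, dtype=float)
    parent = {}

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    pairs = []
    for i in sorted(range(f.size), key=lambda i: -f[i]):
        parent[i] = i
        roots = {find(j) for j in (i - 1, i + 1) if j in parent}
        for r in sorted(roots, key=lambda r: f[r]):  # weaker peaks die first
            r, ri = find(r), find(i)
            if r != ri:
                lo, hi = sorted((r, ri), key=lambda x: f[x])
                if f[lo] > f[i]:                 # skip trivial (b == d) pairs
                    pairs.append((f[lo], f[i]))  # (birth, death)
                parent[lo] = hi
    return pairs  # the global maximum never dies and stays unpaired

# Signal with peaks of heights 3 and 5 that merge at height 1.
sig = [0, 3, 1, 5, 0]
pairs = superlevel_persistence(sig)
```

The sorted persistence values (birth minus death) of such pairs could then serve as a concise per-time-step feature vector of the kind the abstract describes.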

  • 4.
    Baeuerle, A.
    et al.
    Ulm Univ, Germany.
    van Onzenoodt, C.
    Ulm Univ, Germany.
    der Kinderen, S.
    Ulm Univ, Germany.
    Johansson Westberg, Jimmy
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Jönsson, Daniel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten. Ulm Univ, Germany.
    Ropinski, T.
    Ulm Univ, Germany.
    Where did my Lines go? Visualizing Missing Data in Parallel Coordinates (2022). In: Computer Graphics Forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 41, no. 3, pp. 235-246. Journal article (Refereed)
    Abstract [en]

    We evaluate visualization concepts to represent missing values in parallel coordinates. We focus on the trade-off between the ability to perceive missing values and the concepts' impact on common tasks. For this purpose, we identified three missing value representation concepts: removing line segments where values are missing, adding a separate, horizontal axis onto which missing values are projected, and using imputed values as a replacement for missing values. For the missing values axis and imputed values concepts, we additionally add downplay and highlight variations. We performed a crowd-sourced, quantitative user study with 732 participants comparing the concepts and their variations using five real-world datasets. Based on our findings, we provide suggestions regarding which visual encoding to employ depending on the task at focus.

  • 5.
    Ynnerman, Anders
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Ljung, Patric
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Persson, Anders
    Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV.
    Jönsson, Daniel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Multi-Touch Surfaces and Patient-Specific Data (2021). In: Digital Anatomy: Applications of Virtual, Mixed and Augmented Reality / [ed] Jean-François Uhl, Joaquim Jorge, Daniel Simões Lopes, Pedro F. Campos, Springer, 2021, 1, pp. 223-242. Book chapter (Refereed)
    Abstract [en]

    While the usefulness of 3D visualizations has been shown for a range of clinical applications such as treatment planning, they have had difficulty being adopted into widespread clinical practice. This chapter describes how multi-touch surfaces with patient-specific data have contributed to breaking this barrier, paving the way for adoption into clinical practice while, at the same time, finding widespread use in educational settings and in communication of science to the general public. The key element identified for this adoption is the string of steps found in the full imaging chain, which is described as an introduction to the topic in this chapter. The emphasis of the chapter is, however, on visualization aspects, e.g., intuitive interaction with patient-specific data captured with the latest high-speed and high-quality imaging modalities. A necessary starting point for this discussion is the foundations of and state-of-the-art in volumetric rendering, which form the basis for the underlying theory part of the chapter. The chapter presents two use cases: one focusing on the use of multi-touch in medical education and the other focusing on the use of touch surfaces at public venues, such as science centers and museums.

  • 6.
    Eilertsen, Gabriel
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Jönsson, Daniel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Ropinski, Timo
    Institute of Media Informatics, Ulm University, Germany.
    Unger, Jonas
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Ynnerman, Anders
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Classifying the classifier: dissecting the weight space of neural networks (2020). In: Proceedings of the 24th European Conference on Artificial Intelligence (ECAI 2020) / [ed] Giuseppe De Giacomo, Alejandro Catala, Bistra Dilkina, Michela Milano, Senén Barro, Alberto Bugarín, Jérôme Lang, IOS Press, 2020, Vol. 325, pp. 1119-1126, article id FAIA200209. Conference paper (Refereed)
    Abstract [en]

    This paper presents an empirical study on the weights of neural networks, where we interpret each model as a point in a high-dimensional space – the neural weight space. To explore the complex structure of this space, we sample from a diverse selection of training variations (dataset, optimization procedure, architecture, etc.) of neural network classifiers, and train a large number of models to represent the weight space. Then, we use a machine learning approach for analyzing and extracting information from this space. Most centrally, we train a number of novel deep meta-classifiers with the objective of classifying different properties of the training setup by identifying their footprints in the weight space. Thus, the meta-classifiers probe for patterns induced by hyper-parameters, so that we can quantify how much, where, and when these are encoded through the optimization process. This provides a novel and complementary view for explainable AI, and we show how meta-classifiers can reveal a great deal of information about the training setup and optimization, by only considering a small subset of randomly selected consecutive weights. To promote further research on the weight space, we release the neural weight space (NWS) dataset – a collection of 320K weight snapshots from 16K individually trained deep neural networks.
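The meta-classifier idea can be shown in miniature. This is a toy numpy illustration, not the paper's setup or the NWS dataset: linear models trained by gradient descent stand in for networks, weight decay stands in for the hyper-parameter to recover, and a nearest-centroid rule on a small subset of consecutive weights stands in for the deep meta-classifier:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_linear(weight_decay, d=20, n=100, lr=0.1, steps=200):
    """Tiny stand-in for a trained network: linear regression fit by
    gradient descent; weight_decay is the hyper-parameter of interest."""
    X = rng.normal(size=(n, d))
    y = X @ rng.normal(size=d)
    w = rng.normal(size=d)
    for _ in range(steps):
        w -= lr * (X.T @ (X @ w - y) / n + weight_decay * w)
    return w

# A toy 'weight space': 20 trained models per hyper-parameter setting.
weights = np.array([train_linear(0.0) for _ in range(20)] +
                   [train_linear(1.0) for _ in range(20)])
labels = np.array([0] * 20 + [1] * 20)

# Meta-classifier: nearest centroid on the mean |w| of a small subset of
# consecutive weights (coordinates 5..14). Decay leaves a norm footprint.
feat = np.abs(weights[:, 5:15]).mean(axis=1)
centroids = np.array([feat[labels == c].mean() for c in (0, 1)])
pred = np.abs(feat[:, None] - centroids[None, :]).argmin(axis=1)
acc = (pred == labels).mean()
```

Even this crude probe separates the two settings well, because regularization shrinks the final weights — a simple instance of a hyper-parameter leaving a recoverable footprint in weight space.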

  • 7.
    Jönsson, Daniel
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Steneteg, Peter
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Sundén, Erik
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Englund, Rickard
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Kottravel, Sathish
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Falk, Martin
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Ynnerman, Anders
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Hotz, Ingrid
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Ropinski, Timo
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Inviwo - A Visualization System with Usage Abstraction Levels (2020). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, Vol. 26, no. 11, pp. 3241-3254. Journal article (Refereed)
    Abstract [en]

    The complexity of today's visualization applications demands specific visualization systems tailored for the development of these applications. Frequently, such systems utilize levels of abstraction to improve the application development process, for insta

  • 8.
    Jankowai, Jochen
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Medicinska fakulteten.
    Skånberg, Robin
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Jönsson, Daniel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Ynnerman, Anders
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Hotz, Ingrid
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Tensor volume exploration using attribute space representatives (2020). Conference paper (Refereed)
    Abstract [en]

    While volume rendering for scalar fields has been advanced into a powerful visualisation method, similar volumetric representations for tensor fields are still rare. The complexity of the data challenges not only the rendering but also the design of the transfer function. In this paper we propose an interface using glyph widgets to design a transfer function for the rendering of tensor data sets. The transfer function (TF) thereby controls a volume rendering that represents sought-after tensor features and a texture that conveys directional information. The basis of the design interface is a two-dimensional projection of the attribute space. Characteristic representatives in the form of glyphs support an intuitive navigation through the attribute space. We provide three different options to select the representatives: automatic selection based on attribute space clustering, uniform sampling of the attribute space, or manually selected representatives. In contrast to glyphs placed into the 3D volume, we use glyphs with complex geometry as widgets to control the shape and extent of the representatives. In the final rendering the glyphs with their assigned colors play a similar role to a legend in an atlas-like representation. The method provides an overview of the tensor field in the 3D volume at the same time as it allows the user to explore the tensor field in an attribute space. We demonstrate the flexibility of our approach on tensor fields for selected data sets with very different characteristics.

  • 9.
    Jönsson, Daniel
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten. Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV.
    Eilertsen, Gabriel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Shi, Hezi
    Nanyang Technological University, Institute for Media Innovation, Singapore.
    Jianmin, Zheng
    Nanyang Technological University, Institute for Media Innovation, Singapore.
    Ynnerman, Anders
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten. Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV.
    Unger, Jonas
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Visual Analysis of the Impact of Neural Network Hyper-Parameters (2020). In: Machine Learning Methods in Visualisation for Big Data 2020 / [ed] Archambault, Daniel, Nabney, Ian, Peltonen, Jaakko, Eurographics - European Association for Computer Graphics, 2020. Conference paper (Refereed)
    Abstract [en]

    We present an analysis of the impact of hyper-parameters for an ensemble of neural networks using tailored visualization techniques to understand the complicated relationship between hyper-parameters and model performance. The high-dimensional error surface spanned by the wide range of hyper-parameters used to specify and optimize neural networks is difficult to characterize - it is non-convex and discontinuous, and there could be complex local dependencies between hyper-parameters. To explore these dependencies, we make use of a large number of sampled relations between hyper-parameters and end performance, retrieved from thousands of individually trained convolutional neural network classifiers. We use a structured selection of visualization techniques to analyze the impact of different combinations of hyper-parameters. The results reveal how complicated dependencies between hyper-parameters influence the end performance, demonstrating how the complete picture painted by considering a large number of trainings simultaneously can aid in understanding the impact of hyper-parameter combinations.

  • 10.
    Jönsson, Daniel
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Bergström, Albin
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Forsell, Camilla
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Simon, Rozalyn
    Linköpings universitet, Institutionen för hälsa, medicin och vård, Avdelningen för diagnostik och specialistmedicin. Linköpings universitet, Medicinska fakulteten.
    Engström, Maria
    Linköpings universitet, Institutionen för hälsa, medicin och vård, Avdelningen för diagnostik och specialistmedicin. Linköpings universitet, Medicinska fakulteten.
    Walter, Susanna
    Linköpings universitet, Institutionen för biomedicinska och kliniska vetenskaper, Avdelningen för inflammation och infektion. Linköpings universitet, Medicinska fakulteten. Region Östergötland, Centrum för kirurgi, ortopedi och cancervård, Mag- tarmmedicinska kliniken.
    Ynnerman, Anders
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Hotz, Ingrid
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    VisualNeuro: A Hypothesis Formation and Reasoning Application for Multi-Variate Brain Cohort Study Data (2020). In: Computer Graphics Forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 39, no. 6, pp. 392-407. Journal article (Refereed)
    Abstract [en]

    We present an application, and its development process, for interactive visual analysis of brain imaging data and clinical measurements. The application targets neuroscientists interested in understanding the correlations between active brain regions and physiological or psychological factors. The application has been developed in a participatory design process and has subsequently been released as the free software VisualNeuro. From initial observations of the neuroscientists' workflow, we concluded that while existing tools provide powerful analysis options, they lack effective interactive exploration, requiring the use of many tools side by side. Consequently, our application has been designed to simplify the workflow combining statistical analysis with interactive visual exploration. The resulting environment comprises parallel coordinates for effective overview and selection, Welch's t-test to filter out brain regions with statistically significant differences, and multiple visualizations for comparison between brain regions and clinical parameters. These exploration concepts enable neuroscientists to interactively explore the complex bidirectional interplay between clinical and brain measurements and easily compare different patient groups. A qualitative user study has been performed with three neuroscientists from different domains. The study shows that the developed environment supports simultaneous analysis of more parameters, provides rapid pathways to insights and is an effective tool for hypothesis formation.

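The Welch's t-test filtering step can be sketched with synthetic data. This is an illustrative numpy version, not VisualNeuro's code; the cohort sizes, region count, and the fixed |t| threshold (standing in for a p-value cut from the t distribution) are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic cohort: activation scores for 50 brain regions in two subject
# groups; only regions 0-4 carry a real group difference.
group_a = rng.normal(0.0, 1.0, size=(30, 50))
group_b = rng.normal(0.0, 1.2, size=(25, 50))
group_b[:, :5] += 1.5

def welch_t(a, b):
    """Per-region Welch t statistic (unequal variances allowed)."""
    v1, v2 = a.var(axis=0, ddof=1), b.var(axis=0, ddof=1)
    return (a.mean(axis=0) - b.mean(axis=0)) / np.sqrt(v1 / len(a) + v2 / len(b))

t = welch_t(group_a, group_b)
significant = np.flatnonzero(np.abs(t) > 3.0)  # keep clearly differing regions
```

In the application described above, only regions surviving such a filter would be passed on to the correlation visualizations, which keeps the exploration focused on statistically meaningful group differences.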
  • 11.
    Jönsson, Daniel
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Bergström, Albin
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Forsell, Camilla
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Simon, Rozalyn
    Linköpings universitet, Institutionen för medicin och hälsa, Avdelningen för radiologiska vetenskaper. Linköpings universitet, Medicinska fakulteten.
    Engström, Maria
    Linköpings universitet, Institutionen för medicin och hälsa, Avdelningen för radiologiska vetenskaper. Linköpings universitet, Medicinska fakulteten.
    Ynnerman, Anders
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Hotz, Ingrid
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    A Visual Environment for Hypothesis Formation and Reasoning in Studies with fMRI and Multivariate Clinical Data (2019). In: Eurographics Workshop on Visual Computing for Biology and Medicine, 2019. Conference paper (Refereed)
    Abstract [en]

    We present an interactive visual environment for linked analysis of brain imaging and clinical measurements. The environment is developed in an iterative participatory design process involving neuroscientists investigating the causes of brain-related complex diseases. The hypothesis formation process about correlations between active brain regions and physiological or psychological factors in studies with hundreds of subjects is a central part of the investigation. Observing the reasoning patterns during hypothesis formation, we concluded that while existing tools provide powerful analysis options, they lack effective interactive exploration, thus limiting the scientific scope and preventing extraction of knowledge from available data. Based on these observations, we designed methods that support neuroscientists by integrating their existing statistical analysis of multivariate subject data with interactive visual exploration to enable them to better understand differences between patient groups and the complex bidirectional interplay between clinical measurements and the brain. These exploration concepts enable neuroscientists, for the first time during their investigations, to interactively move between and reason about questions such as 'which clinical measurements are correlated with a specific brain region?' or 'are there differences in brain activity between depressed young and old subjects?'. The environment uses parallel coordinates for effective overview and selection of subject groups, Welch's t-test to filter out brain regions with statistically significant differences, and multiple visualizations of Pearson correlations between brain regions and clinical parameters to facilitate correlation analysis. A qualitative user study was performed with three neuroscientists from different domains. The study shows that the developed environment supports simultaneous analysis of more parameters, provides rapid pathways to insights, and is an effective support tool for hypothesis formation.

  • 12.
    Jönsson, Daniel
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Bergström, Albin
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Algström, Isac
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Simon, Rozalyn
    Linköpings universitet, Institutionen för medicin och hälsa, Avdelningen för radiologiska vetenskaper. Linköpings universitet, Medicinska fakulteten.
    Engström, Maria
    Linköpings universitet, Institutionen för medicin och hälsa, Avdelningen för radiologiska vetenskaper. Linköpings universitet, Medicinska fakulteten.
    Walter, Susanna
    Linköpings universitet, Institutionen för klinisk och experimentell medicin, Avdelningen för neuro- och inflammationsvetenskap. Linköpings universitet, Medicinska fakulteten. Region Östergötland, Hjärt- och Medicincentrum, Magtarmmedicinska kliniken.
    Hotz, Ingrid
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Visual analysis for understanding irritable bowel syndrome (2019). In: Biomedical Visualisation / [ed] Paul Rea, Cham: Springer, 2019, pp. 111-122. Book chapter (Refereed)
    Abstract [en]

    The cause of irritable bowel syndrome (IBS), a chronic disorder characterized by abdominal pain and disturbed bowel habits, is largely unknown. It is believed to be related to physical properties in the gut, central mechanisms in the brain, psychological factors, or a combination of these. To understand the relationships within the gut-brain axis with respect to IBS, large numbers of measurements ranging from stool samples to functional magnetic resonance imaging are collected from patients with IBS and healthy controls. As such, IBS is a typical example in medical research where investigation turns into a big data analysis challenge. In this chapter we demonstrate the power of interactive visual data analysis and exploration to generate an environment for scientific reasoning and hypothesis formulation for data from multiple sources with different character. Three case studies are presented to show the utility of the presented work.

  • 13.
    Steneteg, Peter
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Jönsson, Daniel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten. Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV.
    Falk, Martin
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Hotz, Ingrid
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Volume Raycasting Sampling Revisited (2019). Conference paper (peer reviewed)
    Abstract [en]

    We investigate the effects of practical sample placement strategies when solving the volume rendering integral for interactive volume raycasting with fixed step lengths for each ray. Different sample placements have been used in previous work, but they have not been compared with respect to their correctness or visual quality. In this work, the different sampling strategies are presented visually and practical implementation details are provided using algorithmic descriptions of each strategy. A thorough analysis based on comparisons with analytic solutions and real-world data shows that visual artifacts, especially at volume borders, can appear if samples are not placed correctly. Our analysis and comparison result in a sample placement strategy that can easily be integrated into existing implementations, has no impact on performance, and decreases visual artifacts in the rendered image compared to other fixed-step-size sampling strategies.
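The fixed-step compositing the abstract discusses can be illustrated with a minimal sketch. This is not the paper's implementation; function names, the opacity-correction reference length, and the two placement options ("start" vs. "center" of each step interval) are illustrative assumptions.

```python
# Hypothetical sketch: front-to-back emission-absorption compositing along
# one ray with a fixed step length, comparing two common sample placements.
import numpy as np

def composite(volume_fn, t0, t1, step, placement="center"):
    """Numerically integrate the volume rendering integral with fixed steps.

    placement="start"  samples at the beginning of each step interval;
    placement="center" samples at the interval midpoint (midpoint rule).
    """
    color, alpha = 0.0, 0.0
    n = int(np.ceil((t1 - t0) / step))
    for i in range(n):
        t = t0 + i * step + (0.5 * step if placement == "center" else 0.0)
        c, a = volume_fn(min(t, t1))           # emission and opacity at the sample
        a = 1.0 - (1.0 - a) ** (step / 1.0)    # opacity correction (reference length 1.0)
        color += (1.0 - alpha) * a * c         # front-to-back compositing
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                       # early ray termination
            break
    return color, alpha

# Homogeneous white medium with opacity 0.1 per unit length over [0, 1]:
col, a = composite(lambda t: (1.0, 0.1), 0.0, 1.0, 0.01)
```

For a homogeneous medium both placements agree; the paper's point is that they diverge at volume borders, where the chosen placement decides whether samples fall inside or outside the data.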

  • 14.
    Skånberg, Robin
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    König, Carolin
    Division of Theoretical Chemistry and Biology, KTH Royal Institute of Technology, Sweden.
    Norman, Patrick
    Division of Theoretical Chemistry and Biology, KTH Royal Institute of Technology, Sweden.
    Linares, Mathieu
    Division of Theoretical Chemistry and Biology, KTH Royal Institute of Technology, Sweden.
    Jönsson, Daniel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Hotz, Ingrid
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Ynnerman, Anders
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    VIA-MD: Visual Interactive Analysis of Molecular Dynamics (2018). In: Workshop on Molecular Graphics and Visual Analysis of Molecular Data, Eurographics - European Association for Computer Graphics, 2018. Conference paper (peer reviewed)
    Abstract [en]

    We present a visual exploration environment tailored for large-scale spatio-temporal molecular dynamics simulation data. The environment is referred to as VIA-MD (visual interactive analysis of molecular dynamics) and has been developed in a participatory design process with domain experts on molecular dynamics simulations of complex molecular systems. A key feature of our approach is the support for linked interactive 3D exploration of geometry and statistical analysis using dynamic temporal windowing and animation. Based on semantic level descriptions and hierarchical aggregation of molecular properties we enable interactive filtering, which enables the user to effectively find spatial, temporal and statistical patterns. The VIA-MD environment provides an unprecedented tool for analysis of complex microscopic interactions hidden in large data volumes. We demonstrate the utility of the VIA-MD environment with four use cases. The first two deal with simulation of amyloid plaque associated with development of Alzheimer's, and we study an aqueous solution of 100 probes and an amyloid fibril. The identification of interaction "hotspots" is achieved with the use of combined filter parameters connected with probe molecular planarity and probe-fibril interaction energetics. The third and fourth examples show the wide applicability of the environment by applying it to analysis of molecular properties in material design.
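The combined-filter idea used to find interaction "hotspots" can be sketched as a conjunction of per-property predicates over simulation frames. The field names and thresholds below are purely illustrative, not taken from VIA-MD.

```python
# Illustrative sketch: select frames where a probe is both nearly planar
# and strongly bound, mimicking the combined filter parameters described
# above. Property names and threshold values are assumptions.
frames = [
    {"planarity": 0.95, "energy": -12.0},   # planar and strongly interacting
    {"planarity": 0.40, "energy": -15.0},   # strongly interacting but bent
    {"planarity": 0.90, "energy": -3.0},    # planar but weakly interacting
]

hotspots = [f for f in frames
            if f["planarity"] > 0.8 and f["energy"] < -10.0]
```

Interactive filtering then amounts to re-evaluating such predicates as the user drags the thresholds, with the 3D view and statistics linked to the surviving frames.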

  • 15.
    Jönsson, Daniel
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten. Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV.
    Ynnerman, Anders
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten. Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV.
    Correlated Photon Mapping for Interactive Global Illumination of Time-Varying Volumetric Data (2017). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 23, no. 1, pp. 901-910. Article in journal (peer reviewed)
    Abstract [en]

    We present a method for interactive global illumination of both static and time-varying volumetric data based on reduction of the overhead associated with re-computation of photon maps. Our method uses the identification of photon traces invariant to changes of visual parameters such as the transfer function (TF), or data changes between time-steps in a 4D volume. This lets us operate on a variant subset of the entire photon distribution. The amount of computation required in the two stages of the photon mapping process, namely tracing and gathering, can thus be reduced to the subset that is affected by a data or visual parameter change. We rely on two different types of information from the original data to identify the regions that have changed. A low resolution uniform grid containing the minimum and maximum data values of the original data is derived for each time step. Similarly, for two consecutive time-steps, a low resolution grid containing the difference between the overlapping data is used. We show that this compact metadata can be combined with the transfer function to identify the regions that have changed. Each photon traverses the low-resolution grid to identify whether it can be directly transferred to the next photon distribution state or needs to be recomputed. An efficient representation of the photon distribution is presented, leading to an order-of-magnitude performance improvement in the raycasting step. The utility of the method is demonstrated in several examples that show visual fidelity as well as performance. The examples show that visual quality can be retained when the fraction of retraced photons is as low as 40%-50%.
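The min/max-grid culling described above can be sketched as follows: a cell's value range is checked against the edited part of the transfer function, and only overlapping cells need their photons retraced. This is a simplified sketch under assumed names and a normalized [0, 1] data range, not the paper's code.

```python
# Illustrative sketch: decide whether a transfer-function edit can affect
# a grid cell, using the cell's precomputed min/max data values.
import numpy as np

def tf_changed_in_range(tf_old, tf_new, vmin, vmax, n_bins=256):
    """True if the TF differs anywhere inside the cell's value range [vmin, vmax]."""
    lo = int(vmin * (n_bins - 1))
    hi = int(vmax * (n_bins - 1)) + 1
    return not np.allclose(tf_old[lo:hi], tf_new[lo:hi])

# Edit the TF only in the upper value range.
tf_old = np.zeros(256)
tf_new = tf_old.copy()
tf_new[200:220] = 1.0

# A cell whose min/max range misses the edit keeps its photons;
# a cell overlapping the edit must be retraced.
assert not tf_changed_in_range(tf_old, tf_new, 0.0, 0.5)
assert tf_changed_in_range(tf_old, tf_new, 0.7, 0.95)
```

The same test applied per photon trace, cell by cell along the trace, yields the invariant subset the abstract refers to.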

  • 16.
    Jönsson, Daniel
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten. Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV.
    Sundén, Erik
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Läthén, Gunnar
    Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV. Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik.
    W. Hachette, Isabelle
    Method and system for volume rendering of medical images (2017). Patent (Other (popular science, debate, etc.))
  • 17.
    Jönsson, Daniel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Enhancing Salient Features in Volumetric Data Using Illumination and Transfer Functions (2016). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The visualization of volume data is a fundamental component in the medical domain. Volume data is used in the clinical work-flow to diagnose patients and is therefore of utmost importance. The amount of data is rapidly increasing as sensors, such as computed tomography scanners, become capable of measuring more details and gathering more data over time. Unfortunately, the increasing amount of data makes it computationally challenging to interactively apply high quality methods to increase shape and depth perception. Furthermore, methods for exploring volume data have mostly been designed for experts, which prohibits novice users from exploring volume data. This thesis aims to address these challenges by introducing efficient methods for enhancing salient features through high quality illumination as well as methods for intuitive volume data exploration.

    Humans interpret the world around them by observing how light interacts with objects. Shadows enable us to better determine distances, while shifts in color enable us to better distinguish objects and identify their shape. These concepts are also applicable to computer generated content. The perception in volume data visualization can therefore be improved by simulating real-world light interaction. This thesis presents efficient methods that are capable of interactively simulating realistic light propagation in volume data. In particular, this work shows how a multi-resolution grid can be used to encode the attenuation of light from all directions using spherical harmonics and thereby enable advanced interactive dynamic light configurations. Two methods are also presented that allow photon mapping calculations to be focused on visually changing areas. The results demonstrate that photon mapping can be used in interactive volume visualization for both static and time-varying volume data.

    Efficient and intuitive exploration of volume data requires methods that are easy to use and reflect the objects that were measured. A value collected by a sensor commonly represents the materials existing within a small neighborhood around a location. Recreating the original materials is difficult since the value represents a mixture of them. This is referred to as the partial-volume problem. A method is presented that derives knowledge from the user in order to reconstruct the original materials in a way that is more in line with what the user would expect. Sharp boundaries are visualized where the certainty is high, while uncertain areas are visualized with fuzzy boundaries. The volume exploration process of mapping data values to optical properties through the transfer function has traditionally been complex and performed by expert users. A study at a science center showed that visitors favor the presented dynamic gallery method over the most commonly used transfer function editor.

    List of papers
    1. A Survey of Volumetric Illumination Techniques for Interactive Volume Rendering
    2014 (English). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 33, no. 1, pp. 27-51. Article in journal (peer reviewed). Published
    Abstract [en]

    Interactive volume rendering in its standard formulation has become an increasingly important tool in many application domains. In recent years several advanced volumetric illumination techniques to be used in interactive scenarios have been proposed. These techniques claim to have perceptual benefits as well as being capable of producing more realistic volume rendered images. Naturally, they cover a wide spectrum of illumination effects, including varying shading and scattering effects. In this survey, we review and classify the existing techniques for advanced volumetric illumination. The classification will be conducted based on their technical realization, their performance behaviour as well as their perceptual capabilities. Based on the limitations revealed in this review, we will define future challenges in the area of interactive advanced volumetric illumination.

    Place, publisher, year, edition, pages
    Wiley, 2014
    Keywords
    volume rendering; rendering; volume visualization; visualization; illumination rendering; rendering
    National subject category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-105757 (URN)10.1111/cgf.12252 (DOI)000331694100004 ()
    Available from: 2014-04-07 Created: 2014-04-04 Last updated: 2017-12-05
    2. Efficient Visibility Encoding for Dynamic Illumination in Direct Volume Rendering
    2012 (English). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 18, no. 3, pp. 447-462. Article in journal (peer reviewed). Published
    Abstract [en]

    We present an algorithm that enables real-time dynamic shading in direct volume rendering using general lighting, including directional lights, point lights and environment maps. Real-time performance is achieved by encoding local and global volumetric visibility using spherical harmonic (SH) basis functions stored in an efficient multi-resolution grid over the extent of the volume. Our method enables high frequency shadows in the spatial domain, but is limited to a low frequency approximation of visibility and illumination in the angular domain. In a first pass, Level Of Detail (LOD) selection in the grid is based on the current transfer function setting. This enables rapid on-line computation and SH projection of the local spherical distribution of visibility information. Using a piecewise integration of the SH coefficients over the local regions, the global visibility within the volume is then computed. By representing the light sources using their SH projections, the integral over lighting, visibility and isotropic phase functions can be efficiently computed during rendering. The utility of our method is demonstrated in several examples showing the generality and interactive performance of the approach.
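The payoff of projecting lighting, visibility and phase function into the same SH basis is that the angular integral at each shading point collapses to a dot product of coefficient vectors. A minimal sketch of that identity (function names are illustrative; the band-0 check uses only the constant basis function):

```python
# Sketch: with both the lighting L and the visibility-weighted transfer T
# projected into the same orthonormal real SH basis, orthonormality
# (the integral of y_i * y_j over the sphere equals delta_ij) reduces the
# shading integral over all directions to a coefficient dot product.
import numpy as np

def sh_integral(light_coeffs, transfer_coeffs):
    return float(np.dot(light_coeffs, transfer_coeffs))

# Band-0 sanity check: Y00 = 1/(2*sqrt(pi)), so the constant function 1
# projects to the single coefficient 2*sqrt(pi), and the integral of 1*1
# over the sphere is 4*pi.
Y00 = 0.5 / np.sqrt(np.pi)
c = np.array([4.0 * np.pi * Y00])       # SH projection of the constant 1
assert abs(sh_integral(c, c) - 4.0 * np.pi) < 1e-9
```

This is why the method pays for SH projection once per transfer-function change and then shades cheaply per frame, at the cost of the low-frequency angular approximation the abstract mentions.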

    Place, publisher, year, edition, pages
    IEEE, 2012
    Keywords
    Volumetric Illumination, Precomputed Radiance Transfer, Volume Rendering
    National subject category
    Other Computer and Information Science
    Identifiers
    urn:nbn:se:liu:diva-66839 (URN)10.1109/TVCG.2011.35 (DOI)000299281700010 ()
    Projects
    CADICS, MOVIII
    Note
    ©2011 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE. Joel Kronander, Daniel Jönsson, Joakim Löw, Patric Ljung, Anders Ynnerman and Jonas Unger, Efficient Visibility Encoding for Dynamic Illumination in Direct Volume Rendering, 2011, IEEE Transactions on Visualization and Computer Graphics. http://dx.doi.org/10.1109/TVCG.2011.35
    Available from: 2011-03-24 Created: 2011-03-21 Last updated: 2018-01-12 Bibliographically reviewed
    3. Historygrams: Enabling Interactive Global Illumination in Direct Volume Rendering using Photon Mapping
    2012 (English). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 18, no. 12, pp. 2364-2371. Article in journal (peer reviewed). Published
    Abstract [en]

    In this paper, we enable interactive volumetric global illumination by extending photon mapping techniques to handle interactive transfer function (TF) and material editing in the context of volume rendering. We propose novel algorithms and data structures for finding and evaluating parts of a scene affected by these parameter changes, and thus support efficient updates of the photon map. In direct volume rendering (DVR) the ability to explore volume data using parameter changes, such as editable TFs, is of key importance. Advanced global illumination techniques are in most cases computationally too expensive, as they prevent the desired interactivity. Our technique decreases the amount of computation caused by parameter changes by introducing Historygrams, which allow us to efficiently reuse previously computed photon media interactions. Along the viewing rays, we utilize properties of the light transport equations to subdivide a view-ray into segments and independently update them when invalid. Unlike segments of a view-ray, photon scattering events within the volumetric medium need to be sequentially updated. Using our Historygram approach, we can identify the first invalid photon interaction caused by a property change, and thus reuse all valid photon interactions. Combining these two novel concepts supports interactive editing of parameters when using volumetric photon mapping in the context of DVR. As a consequence, we can handle arbitrarily shaped and positioned light sources, arbitrary phase functions, bidirectional reflectance distribution functions and multiple scattering, which has previously not been possible in interactive DVR.
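The sequential-reuse idea for photon paths can be sketched in a few lines: since scattering events depend on their predecessors, a path stays valid only up to the first interaction touched by the edit, and simulation restarts from there. Names and the predicate below are illustrative assumptions, not the paper's data structures.

```python
# Illustrative sketch: a photon path is a sequence of interaction records;
# after a parameter change, the prefix before the first affected
# interaction can be reused, and only the suffix is re-simulated.
def first_invalid(path_values, changed):
    """path_values: data value sampled at each scattering event.
    changed: predicate telling whether a value falls in the edited range."""
    for i, v in enumerate(path_values):
        if changed(v):
            return i              # reuse interactions [0, i), retrace from i
    return len(path_values)       # the whole path can be reused

path = [0.1, 0.3, 0.8, 0.4]
changed = lambda v: 0.7 <= v <= 0.9   # hypothetical edited TF value range
assert first_invalid(path, changed) == 2
```

View-ray segments, by contrast, are independent of each other, which is why the paper can update them individually rather than prefix-wise.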

    Place, publisher, year, edition, pages
    Institute of Electrical and Electronics Engineers (IEEE), 2012
    Keywords
    Volume rendering, photon mapping, global illumination, participating media
    National subject category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-86634 (URN)10.1109/TVCG.2012.232 (DOI)000310143100040 ()
    Projects
    CADICS, CMIV
    Note

    Funding agencies: Excellence Center at Linköping and Lund in Information Technology (ELLIIT); Swedish e-Science Research Centre (SeRC)

    Available from: 2012-12-20 Created: 2012-12-20 Last updated: 2017-12-06
    4. Correlated Photon Mapping for Interactive Global Illumination of Time-Varying Volumetric Data
    2017 (English). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 23, no. 1, pp. 901-910. Article in journal (peer reviewed). Published
    Abstract [en]

    We present a method for interactive global illumination of both static and time-varying volumetric data based on reduction of the overhead associated with re-computation of photon maps. Our method uses the identification of photon traces invariant to changes of visual parameters such as the transfer function (TF), or data changes between time-steps in a 4D volume. This lets us operate on a variant subset of the entire photon distribution. The amount of computation required in the two stages of the photon mapping process, namely tracing and gathering, can thus be reduced to the subset that is affected by a data or visual parameter change. We rely on two different types of information from the original data to identify the regions that have changed. A low resolution uniform grid containing the minimum and maximum data values of the original data is derived for each time step. Similarly, for two consecutive time-steps, a low resolution grid containing the difference between the overlapping data is used. We show that this compact metadata can be combined with the transfer function to identify the regions that have changed. Each photon traverses the low-resolution grid to identify whether it can be directly transferred to the next photon distribution state or needs to be recomputed. An efficient representation of the photon distribution is presented, leading to an order-of-magnitude performance improvement in the raycasting step. The utility of the method is demonstrated in several examples that show visual fidelity as well as performance. The examples show that visual quality can be retained when the fraction of retraced photons is as low as 40%-50%.

    Place, publisher, year, edition, pages
    Institute of Electrical and Electronics Engineers (IEEE), 2017
    Keywords
    Volume rendering, photon mapping, global illumination, participating media
    National subject category
    Media Engineering
    Identifiers
    urn:nbn:se:liu:diva-131022 (URN)10.1109/TVCG.2016.2598430 (DOI)000395537600093 ()27514045 (PubMedID)2-s2.0-84999158356 (Scopus ID)
    Projects
    SeRC, CMIV
    Note

    Funding agencies: Swedish e-Science Research Centre (SeRC); Swedish Research Council (VR) grant 2016-05462; Knut and Alice Wallenberg Foundation (KAW) grant 2016-0076

    Available from: 2016-09-05 Created: 2016-09-05 Last updated: 2017-04-20 Bibliographically reviewed
    5. Boundary Aware Reconstruction of Scalar Fields
    2014 (English). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 20, no. 12, pp. 2447-2455. Article in journal (peer reviewed). Published
    Abstract [en]

    In visualization, data reconstruction and its classification together play a crucial role. In this paper we propose a novel approach that improves classification of different materials and their boundaries by combining information from the classifiers at the reconstruction stage. Our approach estimates the targeted materials’ local support before performing multiple material-specific reconstructions that prevent much of the misclassification traditionally associated with transitional regions and transfer function (TF) design. With respect to previously published methods our approach offers a number of improvements and advantages. For one, it does not rely on TFs acting on derivative expressions; it is therefore less sensitive to noisy data, and the classification of a single material does not depend on specialized TF widgets or specifying regions in a multidimensional TF. Additionally, improved classification is attained without increasing TF dimensionality, which promotes scalability to multivariate data. These aspects are also key in maintaining low interaction complexity. The results are simple-to-achieve visualizations that better comply with the user’s understanding of discrete features within the studied object.

    Place, publisher, year, edition, pages
    IEEE Press, 2014
    National subject category
    Computer and Information Sciences; Computer Sciences
    Identifiers
    urn:nbn:se:liu:diva-110227 (URN)10.1109/TVCG.2014.2346351 (DOI)000344991700090 ()
    Available from: 2014-09-04 Created: 2014-09-04 Last updated: 2018-01-11 Bibliographically reviewed
    6. Intuitive Exploration of Volumetric Data Using Dynamic Galleries
    2016 (English). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 22, no. 1, pp. 896-905. Article in journal (peer reviewed). Published
    Abstract [en]

    In this work we present a volume exploration method designed to be used by novice users and visitors to science centers and museums. The volumetric digitalization of artifacts in museums is of rapidly increasing interest as enhanced user experience through interactive data visualization can be achieved. This is, however, a challenging task since the vast majority of visitors are not familiar with the concepts commonly used in data exploration, such as mapping of visual properties from values in the data domain using transfer functions. Interacting in the data domain is an effective way to filter away undesired information but it is difficult to predict where the values lie in the spatial domain. In this work we make extensive use of dynamic previews instantly generated as the user explores the data domain. The previews allow the user to predict what effect changes in the data domain will have on the rendered image without being aware that visual parameters are set in the data domain. Each preview represents a subrange of the data domain where overview and details are given on demand through zooming and panning. The method has been designed with touch interfaces as the target platform for interaction. We provide a qualitative evaluation performed with visitors to a science center to show the utility of the approach.

    Place, publisher, year, edition, pages
    IEEE COMPUTER SOC, 2016
    Keywords
    Transfer function; scalar fields; volume rendering; touch interaction; visualization; user interfaces
    National subject category
    Electrical Engineering, Electronic Engineering, Information Engineering
    Identifiers
    urn:nbn:se:liu:diva-123054 (URN)10.1109/TVCG.2015.2467294 (DOI)000364043400095 ()26390481 (PubMedID)
    Note

    Funding agencies: Swedish Research Council, VR [2011-5816]; Excellence Center at Linköping and Lund in Information Technology (ELLIIT); Linnaeus Environment CADICS; Swedish e-Science Research Centre (SeRC)

    Available from: 2015-12-04 Created: 2015-12-03 Last updated: 2017-12-01
  • 18.
    Jönsson, Daniel
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Falk, Martin
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Ynnerman, Anders
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten. Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV.
    Intuitive Exploration of Volumetric Data Using Dynamic Galleries (2016). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 22, no. 1, pp. 896-905. Article in journal (peer reviewed)
    Abstract [en]

    In this work we present a volume exploration method designed to be used by novice users and visitors to science centers and museums. The volumetric digitalization of artifacts in museums is of rapidly increasing interest as enhanced user experience through interactive data visualization can be achieved. This is, however, a challenging task since the vast majority of visitors are not familiar with the concepts commonly used in data exploration, such as mapping of visual properties from values in the data domain using transfer functions. Interacting in the data domain is an effective way to filter away undesired information but it is difficult to predict where the values lie in the spatial domain. In this work we make extensive use of dynamic previews instantly generated as the user explores the data domain. The previews allow the user to predict what effect changes in the data domain will have on the rendered image without being aware that visual parameters are set in the data domain. Each preview represents a subrange of the data domain where overview and details are given on demand through zooming and panning. The method has been designed with touch interfaces as the target platform for interaction. We provide a qualitative evaluation performed with visitors to a science center to show the utility of the approach.
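The gallery layout described above, where each preview covers a subrange of the data domain and zooming narrows the window, can be sketched as a simple range partition. Function name and equal-width split are assumptions for illustration.

```python
# Illustrative sketch: split the current data-domain window [lo, hi] into
# n equal subranges, one rendered preview per gallery cell. Zooming in
# simply calls this again with a narrower window.
def gallery_subranges(lo, hi, n):
    w = (hi - lo) / n
    return [(lo + i * w, lo + (i + 1) * w) for i in range(n)]

cells = gallery_subranges(0.0, 1.0, 4)
```

Each cell would then be rendered with a transfer function restricted to its subrange, so the visitor picks by appearance instead of editing the TF directly.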

  • 19.
    Sundén, Erik
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Steneteg, Peter
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Kottravel, Sathish
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Jönsson, Daniel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Englund, Rickard
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Falk, Martin
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Ropinski, Timo
    University of Ulm, Germany.
    Inviwo - An Extensible, Multi-Purpose Visualization Framework (2015). In: 2015 IEEE Scientific Visualization Conference (SciVis), IEEE, 2015, pp. 163-164. Conference paper (peer reviewed)
    Abstract [en]

    To enable visualization research impacting other scientific domains, the availability of easy-to-use visualization frameworks is essential. Nevertheless, an easy-to-use system also has to be adapted to the capabilities of modern hardware architectures, as only this allows for realizing interactive visualizations. With this trade-off in mind, we have designed and realized the cross-platform Inviwo (Interactive Visualization Workshop) visualization framework, which supports both interactive visualization research and efficient visualization application development and deployment. In this poster we give an overview of the architecture behind Inviwo, and show how its design enables us and other researchers to realize their visualization ideas efficiently. Inviwo consists of a modern and lightweight, graphics-independent core, which is extended by optional modules that encapsulate visualization algorithms, well-known utility libraries and commonly used parallel-processing APIs (such as OpenGL and OpenCL). The core provides a simple structure for creating bridges between the different modules, handling data transfer across architectures and devices with an easy-to-use scene graph and minimal programming effort. By building the base structures in a modern way while providing intuitive methods for extending the functionality and creating modules based on other modules, we hope that Inviwo can help the visualization community to perform research through rapid prototyping and its GUI, while allowing users to take advantage of the results implemented in the system in any way they desire later on. Inviwo is publicly available at www.inviwo.org, and can be used freely by anyone under a permissive free software license (Simplified BSD).
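The bridging-between-modules idea is essentially a data-flow network: processors expose ports, and connected ports pull data through the graph on evaluation. The toy sketch below only illustrates that concept; class and method names are invented here, and Inviwo's real C++ API differs.

```python
# Minimal, hypothetical data-flow sketch: each processor wraps a function
# and pulls results from its connected upstream processors on demand.
class Processor:
    def __init__(self, fn):
        self.fn = fn
        self.inputs = []

    def connect(self, upstream):
        self.inputs.append(upstream)

    def evaluate(self):
        # Recursively evaluate upstream processors, then apply this one.
        return self.fn(*[p.evaluate() for p in self.inputs])

source = Processor(lambda: [1, 2, 3])               # e.g. a data source
filt = Processor(lambda xs: [x * 2 for x in xs])    # e.g. a processing step
filt.connect(source)
result = filt.evaluate()
```

In a real framework the edges would also carry type information and dirty flags so only changed parts of the network re-evaluate.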

  • 20.
    Jönsson, Daniel
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Sundén, Erik
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Ynnerman, Anders
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Ropinski, Timo
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    A Survey of Volumetric Illumination Techniques for Interactive Volume Rendering (2014). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 33, no. 1, pp. 27-51. Article in journal (peer reviewed)
    Abstract [en]

    Interactive volume rendering in its standard formulation has become an increasingly important tool in many application domains. In recent years several advanced volumetric illumination techniques to be used in interactive scenarios have been proposed. These techniques claim to have perceptual benefits as well as being capable of producing more realistic volume rendered images. Naturally, they cover a wide spectrum of illumination effects, including varying shading and scattering effects. In this survey, we review and classify the existing techniques for advanced volumetric illumination. The classification will be conducted based on their technical realization, their performance behaviour as well as their perceptual capabilities. Based on the limitations revealed in this review, we will define future challenges in the area of interactive advanced volumetric illumination.

    Download full text (pdf)
  • 21.
    Lindholm, Stefan
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Jönsson, Daniel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Hansen, Charles
    School of Computing, University of Utah, USA.
    Ynnerman, Anders
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV. Linköpings universitet, Tekniska högskolan.
    Boundary Aware Reconstruction of Scalar Fields (2014). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 20, no. 12, pp. 2447-2455. Article in journal (Refereed)
    Abstract [en]

    In visualization, data reconstruction and its classification together play a crucial role. In this paper we propose a novel approach that improves classification of different materials and their boundaries by combining information from the classifiers at the reconstruction stage. Our approach estimates the targeted materials’ local support before performing multiple material-specific reconstructions that prevent much of the misclassification traditionally associated with transitional regions and transfer function (TF) design. With respect to previously published methods our approach offers a number of improvements and advantages. For one, it does not rely on TFs acting on derivative expressions, and is therefore less sensitive to noisy data; moreover, the classification of a single material does not depend on specialized TF widgets or on specifying regions in a multidimensional TF. Additionally, improved classification is attained without increasing TF dimensionality, which promotes scalability to multivariate data. These aspects are also key in maintaining low interaction complexity. The results are simple-to-achieve visualizations that better comply with the user’s understanding of discrete features within the studied object.
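The core problem of transitional regions can be made concrete with a deliberately simplified sketch (assumptions of ours, not the paper's algorithm): plain interpolation across a sharp material boundary fabricates intermediate values belonging to neither material, while a reconstruction guided by an estimate of each material's local support does not.

```python
# Two reconstructions at parameter t in [0,1] between a sample of material A
# (value a) and material B (value b) across a sharp boundary.

def naive_reconstruct(a, b, t):
    # Plain linear interpolation: produces in-between values at the boundary
    # that a transfer function may misclassify as a third material.
    return (1.0 - t) * a + t * b

def boundary_aware(a, b, t, support_a):
    # support_a: estimated local support of material A in [0,1] (a stand-in
    # for the paper's support estimation). Each side is reconstructed from
    # its own material's samples, so no fabricated intermediate appears.
    return a if t < support_a else b
```

With air (0) next to bone (100), the naive midpoint value 50 looks like soft tissue; the boundary-aware version only ever yields 0 or 100.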

  • 22.
    Parulek, Julius
    et al.
    University of Bergen, Norway.
    Jönsson, Daniel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Ropinski, Timo
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Bruckner, Stefan
    University of Bergen, Norway.
    Ynnerman, Anders
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan. Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV.
    Viola, Ivan
    University of Bergen, Norway; Vienna University of Technology, Austria.
    Continuous Levels-of-Detail and Visual Abstraction for Seamless Molecular Visualization (2014). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 33, no. 6, pp. 276-287. Article in journal (Refereed)
    Abstract [en]

    Molecular visualization is often challenged by the rendering of large molecular structures in real time. We introduce a novel approach that enables us to show even large protein complexes. Our method is based on the level-of-detail concept, where we exploit three different abstractions combined in one visualization. Firstly, molecular surface abstraction exploits three different surfaces, solvent-excluded surface (SES), Gaussian kernels and van der Waals spheres, combined into one surface by linear interpolation. Secondly, we introduce three shading abstraction levels and a method for creating seamless transitions between these representations. The SES representation with full shading and added contours stands in focus, while on the other side a sphere representation of a cluster of atoms with constant shading and without contours provides the context. Thirdly, we propose a hierarchical abstraction based on a set of clusters formed on molecular atoms. All three abstraction models are driven by one importance function classifying the scene into the near-, mid- and far-field. Moreover, we introduce a methodology to render the entire molecule directly using the A-buffer technique, which further improves the performance. The rendering performance is evaluated on a series of molecules of varying atom counts.
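The continuous, importance-driven blending between representations can be sketched as follows (an illustrative simplification with invented thresholds, not the paper's actual importance function):

```python
# One scalar importance value (1 = near field, 0 = far field) drives a
# continuous mix between a detailed representation (e.g. SES) and a coarse
# one (e.g. clustered spheres), giving seamless level-of-detail transitions.

def smoothstep(e0, e1, x):
    t = max(0.0, min(1.0, (x - e0) / (e1 - e0)))
    return t * t * (3.0 - 2.0 * t)          # smooth ramp from 0 to 1

def blend_representations(detail_value, coarse_value, importance):
    # Thresholds 0.2 / 0.8 are assumed for illustration: below 0.2 the
    # coarse form dominates (far field), above 0.8 the detailed form does.
    w = smoothstep(0.2, 0.8, importance)
    return w * detail_value + (1.0 - w) * coarse_value
```

At importance 0 or 1 the blend returns one pure representation; in the mid-field it interpolates, so no popping occurs at abstraction changes.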

    Download full text (pdf)
  • 23.
    Sundén, Erik
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Bock, Alexander
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Jönsson, Daniel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Ynnerman, Anders
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan. Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV.
    Ropinski, Timo
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Interaction Techniques as a Communication Channel when Presenting 3D Visualizations (2014). Conference paper (Refereed)
    Abstract [en]

    In this position paper we discuss the usage of various interaction technologies, focusing on presentations of 3D visualizations that involve a presenter and an audience. While an interaction technique is commonly evaluated from a user perspective, we want to shift the focus from a sole analysis of the naturalness and ease-of-use for the user to how expressive and understandable the interaction technique is when witnessed by the audience. The interaction process itself can be considered a communication channel, and a more expressive interaction technique might make it easier for the audience to comprehend the presentation. Thus, while some natural interaction techniques for interactive visualization are easy to perform by the presenter, they may be less beneficial when interacting with the visualization in front of (and for) an audience. Our observations indicate that the suitability of an interaction technique as a communication channel is highly dependent on the setting in which the interaction takes place. Therefore, we analyze different presentation scenarios in an exemplary fashion and discuss how beneficial and comprehensible the involved techniques are for the audience. We argue that interaction techniques complement the visualization in an interactive presentation scenario as they also serve as an important communication channel, and should therefore be observed from an audience perspective rather than exclusively a user perspective.

    Download full text (pdf)
  • 24.
    Kronander, Joel
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Dahlin, Johan
    Linköpings universitet, Institutionen för systemteknik, Reglerteknik. Linköpings universitet, Tekniska högskolan.
    Jönsson, Daniel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Kok, Manon
    Linköpings universitet, Institutionen för systemteknik, Reglerteknik. Linköpings universitet, Tekniska högskolan.
    Schön, Thomas
    Linköpings universitet, Institutionen för systemteknik, Reglerteknik. Linköpings universitet, Tekniska högskolan. Uppsala Universitet.
    Unger, Jonas
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Real-time video based lighting using GPU raytracing (2014). In: Proceedings of the 22nd European Signal Processing Conference (EUSIPCO), 2014, IEEE Signal Processing Society, 2014. Conference paper (Refereed)
    Abstract [en]

    The recent introduction of HDR video cameras has enabled the development of image based lighting techniques for rendering virtual objects illuminated with temporally varying real world illumination. A key challenge in this context is that rendering realistic objects illuminated with video environment maps is computationally demanding. In this work, we present a GPU based rendering system built on the NVIDIA OptiX framework, enabling real time raytracing of scenes illuminated with video environment maps. For this purpose, we explore and compare several Monte Carlo sampling approaches, including bidirectional importance sampling, multiple importance sampling and sequential Monte Carlo samplers. While previous work has focused on synthetic data and overly simple environment map sequences, we have collected a set of real world dynamic environment map sequences using a state-of-the-art HDR video camera for evaluation and comparisons.
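The common ingredient of the sampling approaches the abstract compares is drawing environment-map samples proportional to luminance. A minimal sketch of that building block (our simplification, not the paper's implementation; a real system samples 2D directions with solid-angle weighting rather than flat pixel indices):

```python
import random
import bisect

# Luminance-proportional importance sampling of an environment map,
# flattened to a 1D list of pixel luminances for clarity: bright pixels are
# chosen more often, and each sample returns the probability needed for
# unbiased Monte Carlo weighting (estimate = f(sample) / pdf).
def build_sampler(luminances):
    cdf, total = [], 0.0
    for lum in luminances:
        total += lum
        cdf.append(total)          # running sum = unnormalized CDF

    def sample():
        u = random.random() * total
        i = bisect.bisect_left(cdf, u)      # invert the CDF
        return i, luminances[i] / total     # pixel index, discrete pdf
    return sample
```

For a video environment map, the CDF is rebuilt (or incrementally updated) every frame as the captured illumination changes.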

    Download full text (pdf)
  • 25.
    Etiene, Tiago
    et al.
    University of Utah, UT 84112 USA .
    Jönsson, Daniel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Ropinski, Timo
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Scheidegger, Carlos
    AT&T Labs Research, NJ 07932, USA.
    Comba, Joao L. D.
    Federal University of Rio Grande do Sul, Brazil.
    Nonato, Luis Gustavo
    University of Sao Paulo, Brazil .
    Kirby, Robert M.
    University of Utah, UT 84112 USA .
    Ynnerman, Anders
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Silva, Claudio T.
    NYU, NY 11201 USA .
    Verifying Volume Rendering Using Discretization Error Analysis (2014). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 20, no. 1, pp. 140-154. Article in journal (Refereed)
    Abstract [en]

    We propose an approach for verification of volume rendering correctness based on an analysis of the volume rendering integral, the basis of most direct volume rendering (DVR) algorithms. With respect to the most common discretization of this continuous model (Riemann summation), we make assumptions about the impact of parameter changes on the rendered results and derive convergence curves describing the expected behavior. Specifically, we progressively refine the number of samples along the ray, the grid size, and the pixel size, and evaluate how the errors observed during refinement compare against the expected approximation errors. We derive the theoretical foundations of our verification approach, explain how to realize it in practice, and discuss its limitations. We also report the errors identified by our approach when applied to two publicly available volume rendering packages.
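The verification idea can be reproduced in miniature (a sketch under simplifying assumptions, not the paper's test harness): discretize the volume rendering integral with a Riemann sum, refine the step size, and check that the observed error follows the expected first-order convergence. Constant color c and extinction sigma give the closed-form reference I = c * (1 - exp(-sigma * D)).

```python
import math

# First-order Riemann discretization of the volume rendering integral for a
# homogeneous medium: opacity per step is linearized as sigma * h, which is
# the source of the O(h) discretization error being verified.
def riemann_vri(c, sigma, depth, n):
    h = depth / n
    transparency, intensity = 1.0, 0.0
    for _ in range(n):
        intensity += transparency * c * sigma * h
        transparency *= 1.0 - sigma * h
    return intensity

def error(n, c=1.0, sigma=0.5, depth=2.0):
    exact = c * (1.0 - math.exp(-sigma * depth))   # analytic reference
    return abs(riemann_vri(c, sigma, depth, n) - exact)
```

Doubling the sample count should roughly halve the error; a deviation from that curve is exactly the kind of symptom the verification approach flags.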

    Download full text (pdf)
  • 26.
    Lindholm, Stefan
    et al.
    Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV. Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Jönsson, Daniel
    Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV. Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Knutsson, Hans
    Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV. Linköpings universitet, Institutionen för medicinsk teknik, Medicinsk informatik. Linköpings universitet, Tekniska högskolan.
    Ynnerman, Anders
    Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV. Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Towards Data Centric Sampling for Volume Rendering (2013). In: SIGRAD 2013 / [ed] T. Ropinski and J. Unger, Linköping University Electronic Press, 2013, pp. 55-60. Conference paper (Refereed)
    Abstract [en]

    We present a new method for sampling the volume rendering integral in volume raycasting where samples are correlated based on transfer function content and data set values. This has two major advantages. First, visual artifacts stemming from structured noise, such as wood grain, can be reduced. Second, we will show that the volume data no longer needs to be available during the rendering phase; a surface representation is used instead, which opens up ample opportunities for rendering of large data. We will show that the proposed sampling method gives higher quality renderings with fewer samples compared to regular sampling in the spatial domain.

  • 27.
    Kronander, Joel
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Jönsson, Daniel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Löw, Joakim
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Ljung, Patric
    Siemens.
    Ynnerman, Anders
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Unger, Jonas
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Efficient Visibility Encoding for Dynamic Illumination in Direct Volume Rendering (2012). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 18, no. 3, pp. 447-462. Article in journal (Refereed)
    Abstract [en]

    We present an algorithm that enables real-time dynamic shading in direct volume rendering using general lighting, including directional lights, point lights and environment maps. Real-time performance is achieved by encoding local and global volumetric visibility using spherical harmonic (SH) basis functions stored in an efficient multi-resolution grid over the extent of the volume. Our method enables high frequency shadows in the spatial domain, but is limited to a low frequency approximation of visibility and illumination in the angular domain. In a first pass, Level Of Detail (LOD) selection in the grid is based on the current transfer function setting. This enables rapid on-line computation and SH projection of the local spherical distribution of visibility information. Using a piecewise integration of the SH coefficients over the local regions, the global visibility within the volume is then computed. By representing the light sources using their SH projections, the integral over lighting, visibility and isotropic phase functions can be efficiently computed during rendering. The utility of our method is demonstrated in several examples showing the generality and interactive performance of the approach.
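The key property that makes the SH encoding efficient is orthonormality: once visibility and lighting are projected onto the same band-limited basis, their product integral over the sphere collapses to a dot product of coefficient vectors. A small sketch of that mechanism (our illustration with a two-band basis and Monte Carlo projection, not the paper's multi-resolution implementation):

```python
import math
import random

# Real SH basis for bands l = 0, 1 evaluated at a unit direction (x, y, z).
def sh_basis(x, y, z):
    return [0.282095, 0.488603 * y, 0.488603 * z, 0.488603 * x]

def uniform_sphere(rng):
    # Uniform direction on the unit sphere.
    z = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return r * math.cos(phi), r * math.sin(phi), z

def project(fn, n=20000, seed=0):
    # Monte Carlo projection of a spherical function onto the SH basis.
    rng = random.Random(seed)
    coeffs = [0.0, 0.0, 0.0, 0.0]
    for _ in range(n):
        d = uniform_sphere(rng)
        w = fn(*d) * 4.0 * math.pi / n        # quadrature weight
        for i, b in enumerate(sh_basis(*d)):
            coeffs[i] += w * b
    return coeffs

def integrate_product(c_a, c_b):
    # Orthonormality: integral of f*g over the sphere = dot(coeffs).
    return sum(a * b for a, b in zip(c_a, c_b))
```

For a constant function the product integral recovers the sphere's solid angle 4*pi; for a clamped cosine against constant light it recovers pi, the classic irradiance result, up to Monte Carlo and band-limiting error.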

    Download full text (pdf)
  • 28.
    Jönsson, Daniel
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Ganestam, Per
    Lunds Universitet, Institutionen för Datavetenskap.
    Doggett, Michael
    Lunds Universitet, Institutionen för Datavetenskap.
    Ynnerman, Anders
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Ropinski, Timo
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Explicit Cache Management for Volume Ray-Casting on Parallel Architectures (2012). In: Eurographics Symposium on Parallel Graphics and Visualization (2012), Eurographics - European Association for Computer Graphics, 2012, pp. 31-40. Conference paper (Other academic)
    Abstract [en]

    A major challenge when designing general purpose graphics hardware is to allow efficient access to texture data. Although different rendering paradigms vary with respect to their data access patterns, there is no flexibility when it comes to the data caching provided by the graphics architecture. In this paper we focus on volume ray-casting, and show the benefits of algorithm-aware data caching. Our Marching Caches method exploits inter-ray coherence and thus utilizes the memory layout of the highly parallel processors by allowing them to share data through a cache which marches along with the ray front. By exploiting Marching Caches we can apply higher-order reconstruction and enhancement filters to generate more accurate and enriched renderings with improved rendering performance. We have tested our Marching Caches with seven different filters, e.g., Catmull-Rom, B-spline, and ambient occlusion projection, and show that a speed-up of four times can be achieved compared to using the caching implicitly provided by the graphics hardware, and that the memory bandwidth to global memory can be reduced by orders of magnitude. Throughout the paper, we introduce the Marching Cache concept, provide implementation details and discuss the performance and memory bandwidth impact when using different filters.
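The bandwidth saving from inter-ray coherence can be quantified with a toy 1D model (our simplification, not the paper's GPU implementation): adjacent rays marching in lockstep have overlapping filter footprints, so a shared per-tile cache turns many redundant global-memory reads into one.

```python
# Count global-memory fetches for one marching step of a tile of rays.
# Each ray reads a filter footprint of 2*footprint + 1 voxels around its
# current position; with a shared cache, a voxel already fetched by a
# neighboring ray in the tile is reused instead of re-read.
def global_fetches(ray_positions, footprint, use_shared_cache):
    fetched, count = set(), 0
    for p in ray_positions:
        for offset in range(-footprint, footprint + 1):
            voxel = p + offset
            if use_shared_cache and voxel in fetched:
                continue                  # served from the tile's cache
            fetched.add(voxel)
            count += 1
    return count
```

For 16 adjacent rays and a 5-voxel footprint, the uncached count is 16 * 5 = 80 fetches, while the shared cache needs only the 20 distinct voxels the tile covers; wider higher-order filters widen this gap further.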

  • 29.
    Jönsson, Daniel
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Kronander, Joel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Ropinski, Timo
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Ynnerman, Anders
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Historygrams: Enabling Interactive Global Illumination in Direct Volume Rendering using Photon Mapping (2012). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 18, no. 12, pp. 2364-2371. Article in journal (Refereed)
    Abstract [en]

    In this paper, we enable interactive volumetric global illumination by extending photon mapping techniques to handle interactive transfer function (TF) and material editing in the context of volume rendering. We propose novel algorithms and data structures for finding and evaluating the parts of a scene affected by these parameter changes, and thus support efficient updates of the photon map. In direct volume rendering (DVR) the ability to explore volume data using parameter changes, such as editable TFs, is of key importance. Advanced global illumination techniques are in most cases computationally too expensive, as they prevent the desired interactivity. Our technique decreases the amount of computation caused by parameter changes by introducing Historygrams, which allow us to efficiently reuse previously computed photon media interactions. Along the viewing rays, we utilize properties of the light transport equations to subdivide a view-ray into segments and independently update them when invalid. Unlike segments of a view-ray, photon scattering events within the volumetric medium need to be sequentially updated. Using our Historygram approach, we can identify the first invalid photon interaction caused by a property change, and thus reuse all valid photon interactions. Combining these two novel concepts supports interactive editing of parameters when using volumetric photon mapping in the context of DVR. As a consequence, we can handle arbitrarily shaped and positioned light sources, arbitrary phase functions, bidirectional reflectance distribution functions and multiple scattering, which has previously not been possible in interactive DVR.
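The sequential-reuse idea for photon paths can be sketched in a few lines (a hedged simplification of ours: the paper's Historygrams are a more compact encoding, but the prefix-reuse logic is the same): store the data values each photon interacted with, and after a TF edit recompute only from the first interaction that saw an edited value onward.

```python
# Each photon path stores the data values at its interaction points.
# After a transfer-function edit over [lo, hi], interactions before the
# first one that touched an edited value remain valid and are reused.
def first_invalid(path_values, changed_range):
    lo, hi = changed_range
    for i, v in enumerate(path_values):
        if lo <= v <= hi:           # this interaction saw edited values
            return i
    return len(path_values)         # whole path is still valid

def recompute_work(paths, changed_range):
    # Fraction of photon interactions that must be recomputed.
    total = sum(len(p) for p in paths)
    redo = sum(len(p) - first_invalid(p, changed_range) for p in paths)
    return redo / total
```

A narrow TF edit then invalidates only the suffixes of the few paths that cross the edited value range, which is what makes interactive editing feasible.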

  • 30.
    Jönsson, Daniel
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Sundén, Erik
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Ynnerman, Anders
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Ropinski, Timo
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    State of The Art Report on Interactive Volume Rendering with Volumetric Illumination (2012). In: Eurographics 2012 - State of the Art Reports / [ed] Marie-Paule Cani and Fabio Ganovelli, Eurographics - European Association for Computer Graphics, 2012, pp. 53-74. Conference paper (Other academic)
    Abstract [en]

    Interactive volume rendering in its standard formulation has become an increasingly important tool in many application domains. In recent years several advanced volumetric illumination techniques for use in interactive scenarios have been proposed. These techniques claim perceptual benefits as well as the ability to produce more realistic volume-rendered images. Naturally, they cover a wide spectrum of illumination effects, including varying shadowing and scattering effects. In this article, we review and classify the existing techniques for advanced volumetric illumination. The classification is conducted based on their technical realization, their performance behavior and their perceptual capabilities. Based on the limitations revealed in this review, we define future challenges in the area of interactive advanced volumetric illumination.
