Search for publications in DiVA (liu.se)
1 - 48 of 48
  • 1.
    Auer, Cornelia
    et al.
    Zuse Institut Berlin, Germany.
    Hotz, Ingrid
    Zuse Institut Berlin, Germany.
    Complete Tensor Field Topology on 2D Triangulated Manifolds embedded in 3D (2011). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 30, no 3, p. 831-840. Article in journal (Refereed)
    Abstract [en]

    This paper is concerned with the extraction of the surface topology of tensor fields on 2D triangulated manifolds embedded in 3D. In scientific visualization, topology is a meaningful instrument to get a hold on the structure of a given dataset. Due to the discontinuity of tensor fields on a piecewise planar domain, standard topology extraction methods result in an incomplete topological skeleton. In particular with regard to the high computational costs of the extraction, this is not satisfactory. This paper provides a method for topology extraction of tensor fields that leads to complete results. The core idea is to include the locations of discontinuity into the topological analysis. For this purpose, the model of continuous transition bridges is introduced, which allows the entire topology to be captured on the discontinuous field. The proposed method is applied to piecewise linear three-dimensional tensor fields defined on the vertices of the triangulation and to piecewise constant two- or three-dimensional tensor fields given per triangle, e.g. rate-of-strain tensors of piecewise linear flow fields.

  • 2.
    Axelsson, Emil
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Costa, Jonathas
    NYU, NY 10003 USA.
    Silva, Claudio
    NYU, NY 10003 USA.
    Emmart, Carter
    Amer Museum Nat Hist, NY 10024 USA.
    Bock, Alexander
    Linköping University, Department of Science and Technology. Linköping University, Faculty of Science & Engineering.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Dynamic Scene Graph: Enabling Scaling, Positioning, and Navigation in the Universe (2017). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 36, no 3, p. 459-468. Article in journal (Refereed)
    Abstract [en]

    In this work, we address the challenge of seamlessly visualizing astronomical data exhibiting huge scale differences in distance, size, and resolution. One of the difficulties is accurate, fast, and dynamic positioning and navigation to enable scaling over orders of magnitude, far beyond the precision of floating point arithmetic. To this end we propose a method that utilizes a dynamically assigned frame of reference to provide the highest possible numerical precision for all salient objects in a scene graph. This makes it possible to smoothly navigate and interactively render, for example, surface structures on Mars and the Milky Way simultaneously. Our work is based on an analysis of tracking and quantification of the propagation of precision errors through the computer graphics pipeline using interval arithmetic. Furthermore, we identify sources of precision degradation, leading to incorrect object positions in screen-space and z-fighting. Our proposed method operates without near and far planes while maintaining high depth precision through the use of floating point depth buffers. By providing interoperability with order-independent transparency algorithms, direct volume rendering, and stereoscopy, our approach is well suited for scientific visualization. We provide the mathematical background, a thorough description of the method, and a reference implementation.
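
    The core idea of the dynamically assigned frame of reference can be illustrated with a minimal sketch (an assumption-laden simplification, not the paper's implementation): positions are stored in double precision relative to the scene-graph node closest to the camera, so the values handed to the 32-bit graphics pipeline stay small and keep their precision. Node names, scales and distances below are hypothetical.

        import numpy as np

        # Hypothetical absolute positions in metres, stored as float64.
        objects = {
            "mars_lander": np.array([2.279e11 + 12.5, 4.0, -7.2]),
            "milky_way_star": np.array([2.46e20, 1.1e19, -3.4e18]),
        }
        camera = np.array([2.279e11, 0.0, 0.0])  # camera hovering near Mars

        def closest_reference(camera, candidates):
            """Pick the candidate node closest to the camera as the frame of reference."""
            return min(candidates, key=lambda name: np.linalg.norm(candidates[name] - camera))

        # Hypothetical reference nodes (e.g. planet barycentres, galactic centre).
        reference_nodes = {"mars": np.array([2.279e11, 0.0, 0.0]),
                           "galactic_centre": np.array([2.46e20, 0.0, 0.0])}
        ref = closest_reference(camera, reference_nodes)
        origin = reference_nodes[ref]

        for name, pos in objects.items():
            relative = pos - origin                # small numbers for nearby salient objects
            gpu_pos = relative.astype(np.float32)  # what would be uploaded to the GPU
            print(name, "relative to", ref, gpu_pos,
                  "float32 error:", np.abs(gpu_pos - relative).max())

    Running this shows that coordinates of objects near the current reference node survive the cast to float32 essentially unchanged, while far-away objects accumulate large errors, which is the degradation the paper's dynamic reassignment avoids.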

  • 3.
    Baeuerle, A.
    et al.
    Ulm Univ, Germany.
    van Onzenoodt, C.
    Ulm Univ, Germany.
    der Kinderen, S.
    Ulm Univ, Germany.
    Johansson Westberg, Jimmy
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Jönsson, Daniel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Ulm Univ, Germany.
    Ropinski, T.
    Ulm Univ, Germany.
    Where did my Lines go? Visualizing Missing Data in Parallel Coordinates (2022). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 41, no 3, p. 235-246. Article in journal (Refereed)
    Abstract [en]

    We evaluate visualization concepts to represent missing values in parallel coordinates. We focus on the trade-off between the ability to perceive missing values and the concepts' impact on common tasks. For this purpose, we identified three missing value representation concepts: removing line segments where values are missing, adding a separate, horizontal axis onto which missing values are projected, and using imputed values as a replacement for missing values. For the missing-values axis and imputed-values concepts, we additionally add downplay and highlight variations. We performed a crowd-sourced, quantitative user study with 732 participants comparing the concepts and their variations using five real-world datasets. Based on our findings, we provide suggestions regarding which visual encoding to employ depending on the task in focus.
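
    The three representation concepts can be mimicked with a few lines of plotting code (a sketch on made-up data, not the study's stimuli): NaNs either break the polylines, are projected onto an extra "missing" level below the axes, or are replaced by the column mean.

        import numpy as np
        import matplotlib.pyplot as plt

        rng = np.random.default_rng(0)
        data = rng.random((20, 4))
        data[rng.random(data.shape) < 0.15] = np.nan   # inject missing values

        def plot_pc(ax, values, title):
            for row in values:
                ax.plot(range(values.shape[1]), row, color="tab:blue", alpha=0.5)
            ax.set_title(title)
            ax.set_xticks(range(values.shape[1]))

        fig, axes = plt.subplots(1, 3, figsize=(12, 3), sharey=True)
        plot_pc(axes[0], data, "remove segments")            # matplotlib skips NaNs -> gaps
        missing_axis = np.where(np.isnan(data), -0.2, data)  # project NaNs to a level below the axes
        plot_pc(axes[1], missing_axis, "missing-values axis")
        imputed = np.where(np.isnan(data), np.nanmean(data, axis=0), data)
        plot_pc(axes[2], imputed, "imputed values")
        plt.show()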

  • 4.
    Besancon, Lonni
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Semmo, Amir
    Univ Potsdam, Germany.
    Biau, David
    AP HP, France.
    Frachet, Bruno
    AP HP, France.
    Pineau, Virginie
    Inst Curie, France.
    Sariali, El Hadi
    AP HP, France.
    Soubeyrand, Marc
    AP HP, France.
    Taouachi, Rabah
    Inst Curie, France.
    Isenberg, Tobias
    INRIA, France.
    Dragicevic, Pierre
    INRIA, France.
    Reducing Affective Responses to Surgical Images and Videos Through Stylization (2020). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 39, no 1, p. 462-483. Article in journal (Refereed)
    Abstract [en]

    We present the first empirical study on using colour manipulation and stylization to make surgery images/videos more palatable. While aversion to such material is natural, it limits many people's ability to satisfy their curiosity, educate themselves and make informed decisions. We selected a diverse set of image processing techniques to test them both on surgeons and lay people. While colour manipulation techniques and many artistic methods were found unusable by surgeons, edge-preserving image smoothing yielded good results both for preserving information (as judged by surgeons) and reducing repulsiveness (as judged by lay people). We then conducted a second set of interviews with surgeons to assess whether these methods could also be used on videos and to derive good default parameters for information preservation. We provide extensive supplemental material.
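
    Edge-preserving smoothing of the kind the study found most usable can be approximated with an off-the-shelf bilateral filter (a rough stand-in for illustration, not the authors' exact pipeline; file names are placeholders).

        import cv2

        # Placeholder input; any surgical still frame would do.
        image = cv2.imread("surgery_frame.png")

        # Bilateral filtering smooths homogeneous (e.g. bloody) regions while keeping
        # instrument and tissue edges. d is the neighbourhood diameter; the two sigmas
        # control colour and spatial falloff.
        softened = cv2.bilateralFilter(image, d=9, sigmaColor=75, sigmaSpace=75)

        # Repeated application exaggerates the abstraction, trading detail for comfort.
        for _ in range(3):
            softened = cv2.bilateralFilter(softened, d=9, sigmaColor=75, sigmaSpace=75)

        cv2.imwrite("surgery_frame_softened.png", softened)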

  • 5.
    Besancon, Lonni
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Univ Paris Saclay, France.
    Sereno, Mickael
    Inria, France; Univ Paris Saclay, France.
    Yu, Lingyun
    Univ Groningen, Netherlands.
    Ammi, Mehdi
    Univ Paris 08, France.
    Isenberg, Tobias
    Inria, France.
    Hybrid Touch/Tangible Spatial 3D Data Selection (2019). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 38, no 3, p. 553-567. Article in journal (Refereed)
    Abstract [en]

    We discuss spatial selection techniques for three-dimensional datasets. Such 3D spatial selection is fundamental to exploratory data analysis. While 2D selection is efficient for datasets with explicit shapes and structures, it is less efficient for data without such properties. We first propose a new taxonomy of 3D selection techniques, focusing on the amount of control the user has to define the selection volume. We then describe the 3D spatial selection technique Tangible Brush, which gives manual control over the final selection volume. It combines 2D touch with 6-DOF 3D tangible input to allow users to perform 3D selections in volumetric data. We use touch input to draw a 2D lasso, extruding it to a 3D selection volume based on the motion of a tangible, spatially-aware tablet. We describe our approach and present its quantitative and qualitative comparison to state-of-the-art structure-dependent selection. Our results show that, in addition to being dataset-independent, Tangible Brush is more accurate than existing dataset-dependent techniques, thus providing a trade-off between precision and effort.
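
    The geometric core of such a selection, testing whether data points fall inside a 2D lasso extruded from the tablet plane, can be sketched as follows (a simplified model with a static tablet pose and a straight extrusion; all names are illustrative, not the paper's code).

        import numpy as np
        from matplotlib.path import Path

        def select_points(points, lasso_2d, plane_origin, plane_u, plane_v, depth):
            """Boolean mask of points inside the lasso extruded along the plane normal.

            points:   (N, 3) data positions
            lasso_2d: (M, 2) lasso polygon drawn on the tablet, in plane coordinates
            plane_u, plane_v: orthonormal in-plane axes of the tablet, normal = u x v
            depth:    how far the lasso is extruded along the normal
            """
            normal = np.cross(plane_u, plane_v)
            rel = points - plane_origin
            u = rel @ plane_u              # 2D coordinates in the tablet plane
            v = rel @ plane_v
            w = rel @ normal               # signed distance along the extrusion direction
            inside_2d = Path(lasso_2d).contains_points(np.column_stack([u, v]))
            return inside_2d & (w >= 0) & (w <= depth)

        # Tiny example with a square lasso and random points.
        rng = np.random.default_rng(1)
        pts = rng.uniform(-1, 1, size=(1000, 3))
        lasso = np.array([[-0.3, -0.3], [0.3, -0.3], [0.3, 0.3], [-0.3, 0.3]])
        mask = select_points(pts, lasso, plane_origin=np.zeros(3),
                             plane_u=np.array([1.0, 0, 0]), plane_v=np.array([0, 1.0, 0]),
                             depth=0.5)
        print(mask.sum(), "of", len(pts), "points selected")

    In the actual technique the extrusion follows the recorded motion of the tracked tablet rather than a fixed normal, which this sketch does not model.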

  • 6.
    Besançon, Lonni
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Monash Univ, Australia.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Keefe, Daniel F.
    Univ Minnesota, MN USA.
    Yu, Lingyun
    Xian Jiaotong Liverpool Univ, Peoples R China.
    Isenberg, Tobias
    Univ Paris Saclay, France.
    The State of the Art of Spatial Interfaces for 3D Visualization (2021). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 40, no 1, p. 293-326. Article in journal (Refereed)
    Abstract [en]

    We survey the state of the art of spatial interfaces for 3D visualization. Interaction techniques are crucial to data visualization processes and the visualization research community has been calling for more research on interaction for years. Yet, research papers focusing on interaction techniques, in particular for 3D visualization purposes, are not always published in visualization venues, sometimes making it challenging to synthesize the latest interaction and visualization results. We therefore introduce a taxonomy of interaction techniques for 3D visualization. The taxonomy is organized along two axes: the primary source of input on the one hand and the visualization task they support on the other hand. Surveying the state of the art allows us to highlight specific challenges and missed opportunities for research in 3D visualization. In particular, we call for additional research in: (1) controlling 3D visualization widgets to help scientists better understand their data, (2) 3D interaction techniques for dissemination, which are under-explored yet show great promise for helping museums and science centers in their mission to share recent knowledge, and (3) developing new measures that move beyond traditional time and error metrics for evaluating visualizations that include spatial interaction.

  • 7.
    Bock, Alexander
    et al.
    Linköping University, Department of Science and Technology. Linköping University, Faculty of Science & Engineering.
    Svensson, Åsa
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Kleiner, Alexander
    iRobot, CA USA.
    Lundberg, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Ulm University, Germany.
    A Visualization-Based Analysis System for Urban Search & Rescue Mission Planning Support (2017). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 36, no 6, p. 148-159. Article in journal (Refereed)
    Abstract [en]

    We propose a visualization system for incident commanders (ICs) in urban search-and-rescue scenarios that supports path planning in post-disaster structures. Utilizing point cloud data acquired from unmanned robots, we provide methods for the assessment of automatically generated paths. As data uncertainty and a priori unknown information make fully automated systems impractical, we present the IC with a set of viable access paths, based on varying risk factors, in a 3D environment combined with visual analysis tools enabling informed decision making and trade-offs. Based on these decisions, a responder is guided along the path by the IC, who can interactively annotate and reevaluate the acquired point cloud and generated paths to react to the dynamics of the situation. We describe visualization design considerations for our system and decision support systems in general, technical realizations of the visualization components, and discuss the results of two qualitative expert evaluations: one online study with nine search-and-rescue experts and an eye-tracking study in which four experts used the system on an application case.
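
    A minimal version of risk-aware path generation on a grid derived from such point-cloud data could look like the Dijkstra variant below, where each cell carries a traversal cost blending distance and an estimated risk factor (grid, weights and costs are invented for illustration; the paper's path model is richer).

        import heapq
        import numpy as np

        def risk_aware_path(cost, start, goal, risk_weight=5.0):
            """Dijkstra on a 2D grid; cost[r, c] in [0, 1] is the estimated risk of a cell,
            np.inf marks blocked cells. Returns total cost and the path as a list of cells."""
            rows, cols = cost.shape
            dist = {start: 0.0}
            prev = {}
            queue = [(0.0, start)]
            while queue:
                d, cell = heapq.heappop(queue)
                if cell == goal:
                    break
                if d > dist.get(cell, np.inf):
                    continue
                r, c = cell
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols and np.isfinite(cost[nr, nc]):
                        nd = d + 1.0 + risk_weight * cost[nr, nc]   # distance + weighted risk
                        if nd < dist.get((nr, nc), np.inf):
                            dist[(nr, nc)] = nd
                            prev[(nr, nc)] = cell
                            heapq.heappush(queue, (nd, (nr, nc)))
            path, cell = [], goal
            while cell in prev:
                path.append(cell)
                cell = prev[cell]
            return dist.get(goal, np.inf), [start] + path[::-1]

        # Toy 5x5 risk map with a blocked wall; varying risk_weight yields the
        # alternative candidate paths an incident commander would compare.
        risk = np.zeros((5, 5))
        risk[2, :4] = np.inf
        risk[1, 3] = 0.9
        print(risk_aware_path(risk, (0, 0), (4, 4)))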

  • 8.
    Bujack, Roxana
    et al.
    Los Alamos Natl Lab, NM 87545 USA.
    Yan, Lin
    Univ Utah, UT 84112 USA.
    Hotz, Ingrid
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Garth, Christoph
    Univ Kaiserslautern, Germany.
    Wang, Bei
    Univ Utah, UT 84112 USA.
    State of the Art in Time-Dependent Flow Topology: Interpreting Physical Meaningfulness Through Mathematical Properties (2020). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 39, no 3, p. 811-835. Article in journal (Refereed)
    Abstract [en]

    We present a state-of-the-art report on time-dependent flow topology. We survey representative papers in visualization and provide a taxonomy of existing approaches that generalize flow topology from time-independent to time-dependent settings. The approaches are classified based upon four categories: tracking of steady topology, reference frame adaption, pathline classification or clustering, and generalization of critical points. Our unique contributions include introducing a set of desirable mathematical properties to interpret physical meaningfulness for time-dependent flow visualization, inferring mathematical properties associated with selective research papers, and utilizing such properties for classification. The five most important properties identified in the existing literature include coincidence with the steady case, induction of a partition within the domain, Lagrangian invariance, objectivity, and Galilean invariance.

  • 9.
    Chatzimparmpas, A.
    et al.
    Linnaeus Univ, Sweden.
    Paulovich, F. V
    Eindhoven Univ Technol, Netherlands.
    Kerren, Andreas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Linnaeus Univ, Sweden.
    HardVis: Visual Analytics to Handle Instance Hardness Using Undersampling and Oversampling Techniques (2023). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 42, no 1, p. 135-154. Article in journal (Refereed)
    Abstract [en]

    Despite the tremendous advances in machine learning (ML), training with imbalanced data still poses challenges in many real-world applications. Among a series of diverse techniques to solve this problem, sampling algorithms are regarded as an efficient solution. However, the problem is more fundamental, with many works emphasizing the importance of instance hardness. This issue refers to the significance of managing unsafe or potentially noisy instances that are more likely to be misclassified and serve as the root cause of poor classification performance. This paper introduces HardVis, a visual analytics system designed to handle instance hardness mainly in imbalanced classification scenarios. Our proposed system assists users in visually comparing different distributions of data types, selecting types of instances based on local characteristics that will later be affected by the active sampling method, and validating which suggestions from undersampling or oversampling techniques are beneficial for the ML model. Additionally, rather than uniformly undersampling/oversampling a specific class, we allow users to find and sample easy- and difficult-to-classify training instances from all classes. Users can explore subsets of data from different perspectives to decide all those parameters, while HardVis keeps track of their steps and evaluates the model's predictive performance on a separate test set. The end result is a well-balanced data set that boosts the predictive power of the ML model. The efficacy and effectiveness of HardVis are demonstrated with a hypothetical usage scenario and a use case. Finally, we also look at how useful our system is based on feedback we received from ML experts.
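
    Outside the visual-analytics front end, the underlying notions of instance hardness and selective undersampling can be approximated with a k-nearest-neighbour heuristic like the one below (a generic sketch on synthetic data, not the HardVis algorithm or its parameter choices).

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.neighbors import NearestNeighbors

        X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)

        # Instance hardness heuristic: fraction of the k nearest neighbours that carry
        # a different label ("unsafe"/borderline instances score high).
        k = 7
        nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
        _, idx = nn.kneighbors(X)
        hardness = (y[idx[:, 1:]] != y[:, None]).mean(axis=1)

        # Selective undersampling: drop the hardest majority-class instances instead
        # of removing majority samples uniformly at random.
        majority = np.flatnonzero(y == 0)
        n_remove = len(majority) - (y == 1).sum()
        drop = majority[np.argsort(hardness[majority])[-n_remove:]]
        keep = np.setdiff1d(np.arange(len(y)), drop)
        X_bal, y_bal = X[keep], y[keep]
        print("class counts after sampling:", np.bincount(y_bal))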

  • 10.
    Chatzimparmpas, Angelos
    et al.
    Linnaeus University, Department of Computer Science and Media Technology (DM).
    Martins, Rafael Messias
    Linnaeus University, Department of Computer Science and Media Technology (DM).
    Jusufi, Ilir
    Linnaeus University, Department of Computer Science and Media Technology (DM).
    Kucher, Kostiantyn
    Linnaeus University, Department of Computer Science and Media Technology (DM).
    Rossi, Fabrice
    Université Paris Dauphine, France.
    Kerren, Andreas
    Linnaeus University, Department of Computer Science and Media Technology (DM).
    The State of the Art in Enhancing Trust in Machine Learning Models with the Use of Visualizations (2020). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 39, no 3, p. 713-756. Article in journal (Refereed)
    Abstract [en]

    Machine learning (ML) models are nowadays used in complex applications in various domains such as medicine, bioinformatics, and other sciences. Due to their black box nature, however, it may sometimes be hard to understand and trust the results they provide. This has increased the demand for reliable visualization tools related to enhancing trust in ML models, which has become a prominent topic of research in the visualization community over the past decades. To provide an overview and present the frontiers of current research on the topic, we present a State-of-the-Art Report (STAR) on enhancing trust in ML models with the use of interactive visualization. We define and describe the background of the topic, introduce a categorization for visualization techniques that aim to accomplish this goal, and discuss insights and opportunities for future research directions. Among our contributions is a categorization of trust against different facets of interactive ML, expanded and improved from previous research. Our results are investigated from different analytical perspectives: (a) providing a statistical overview, (b) summarizing key findings, (c) performing topic analyses, and (d) exploring the data sets used in the individual papers, all with the support of an interactive web-based survey browser. We intend this survey to be beneficial for visualization researchers whose interests involve making ML models more trustworthy, as well as researchers and practitioners from other disciplines in their search for effective visualization techniques suitable for solving their tasks with confidence and conveying meaning to their data.

  • 11.
    Chatzimparmpas, Angelos
    et al.
    Linnaeus University, Sweden.
    Martins, Rafael Messias
    Linnaeus University, Sweden.
    Kucher, Kostiantyn
    Linnaeus University, Sweden.
    Kerren, Andreas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Linnaeus University, Sweden.
    VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization (2021). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 40, no 3, p. 201-214. Article in journal (Refereed)
    Abstract [en]

    During the training phase of machine learning (ML) models, it is usually necessary to configure several hyperparameters. This process is computationally intensive and requires an extensive search to infer the best hyperparameter set for the given problem. The challenge is exacerbated by the fact that most ML models are complex internally, and training involves trial-and-error processes that could remarkably affect the predictive result. Moreover, each hyperparameter of an ML algorithm is potentially intertwined with the others, and changing it might result in unforeseeable impacts on the remaining hyperparameters. Evolutionary optimization is a promising method to try and address those issues. According to this method, performant models are stored, while the remainder are improved through crossover and mutation processes inspired by genetic algorithms. We present VisEvol, a visual analytics tool that supports interactive exploration of hyperparameters and intervention in this evolutionary procedure. In summary, our proposed tool helps the user to generate new models through evolution and eventually explore powerful hyperparameter combinations in diverse regions of the extensive hyperparameter space. The outcome is a voting ensemble (with equal rights) that boosts the final predictive performance. The utility and applicability of VisEvol are demonstrated with two use cases and interviews with ML experts who evaluated the effectiveness of the tool.
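
    Stripped of the visual-analytics layer, the evolutionary search that VisEvol lets users steer boils down to a loop like the sketch below (a generic genetic search over a random-forest hyperparameter space; the ranges, rates and dataset are arbitrary choices, not the paper's).

        import random
        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        X, y = load_breast_cancer(return_X_y=True)
        random.seed(0)

        def random_config():
            return {"n_estimators": random.randint(20, 200),
                    "max_depth": random.choice([None, 3, 5, 10, 20]),
                    "min_samples_split": random.randint(2, 10)}

        def fitness(cfg):
            model = RandomForestClassifier(random_state=0, **cfg)
            return cross_val_score(model, X, y, cv=3).mean()

        population = [random_config() for _ in range(12)]
        for generation in range(5):
            scored = sorted(population, key=fitness, reverse=True)
            survivors = scored[:6]                      # keep the performant models
            children = []
            while len(children) < 6:
                a, b = random.sample(survivors, 2)
                child = {k: random.choice([a[k], b[k]]) for k in a}      # crossover
                if random.random() < 0.3:                                # mutation
                    child = random_config()
                children.append(child)
            population = survivors + children

        best = max(population, key=fitness)
        print("best hyperparameters:", best, "cv accuracy: %.3f" % fitness(best))

    The paper's final step, combining surviving models into an equal-weight voting ensemble, could be approximated with scikit-learn's VotingClassifier on top of such a loop.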

  • 12.
    Eilertsen, Gabriel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Mantiuk, R. K.
    University of Cambridge, England.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    A comparative review of tone-mapping algorithms for high dynamic range video (2017). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 36, no 2, p. 565-592. Article in journal (Refereed)
    Abstract [en]

    Tone-mapping constitutes a key component within the field of high dynamic range (HDR) imaging. Its importance is manifested in the vast amount of tone-mapping methods that can be found in the literature, which are the result of an active development in the area for more than two decades. Although these can accommodate most requirements for display of HDR images, new challenges arose with the advent of HDR video, calling for additional considerations in the design of tone-mapping operators (TMOs). Today, a range of TMOs exist that do support video material. We are now reaching a point where most camera captured HDR videos can be prepared in high quality without visible artifacts, for the constraints of a standard display device. In this report, we set out to summarize and categorize the research in tone-mapping as of today, distilling the most important trends and characteristics of the tone reproduction pipeline. While this gives a wide overview over the area, we then specifically focus on tone-mapping of HDR video and the problems this medium entails. First, we formulate the major challenges a video TMO needs to address. Then, we provide a description and categorization of each of the existing video TMOs. Finally, by constructing a set of quantitative measures, we evaluate the performance of a number of the operators, in order to give a hint on which can be expected to render the least amount of artifacts. This serves as a comprehensive reference, categorization and comparative assessment of the state-of-the-art in tone-mapping for HDR video.
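
    As background, a TMO in its simplest global form is just a compressive curve applied to HDR luminance; the sketch below shows a textbook Reinhard-style operator applied per frame, not one of the surveyed operators. The video TMOs discussed in the report additionally filter such statistics over time to avoid flicker.

        import numpy as np

        def tonemap_frame(hdr, key=0.18, eps=1e-6):
            """Reinhard-style global operator: scale by the log-average luminance,
            then compress with L/(1+L). hdr is a float32 HxWx3 array in linear RGB."""
            lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
            log_avg = np.exp(np.mean(np.log(lum + eps)))
            scaled = key / log_avg * lum
            compressed = scaled / (1.0 + scaled)
            ldr = hdr * (compressed / (lum + eps))[..., None]
            return np.clip(ldr, 0.0, 1.0) ** (1.0 / 2.2)   # gamma for display

        # For video, a temporally filtered log-average (e.g. an exponential moving
        # average over frames) would replace the per-frame statistic to avoid flicker.
        frame = np.random.rand(4, 4, 3).astype(np.float32) * 100.0   # fake HDR frame
        print(tonemap_frame(frame).max())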

  • 13.
    Eilertsen, Gabriel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Wanat, Robert
    Bangor University, Wales .
    Mantiuk, Rafal K.
    Bangor University, Wales .
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Evaluation of Tone Mapping Operators for HDR-Video (2013). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 32, no 7, p. 275-284. Article in journal (Refereed)
    Abstract [en]

    Eleven tone-mapping operators intended for video processing are analyzed and evaluated with camera-captured and computer-generated high-dynamic-range content. After optimizing the parameters of the operators in a formal experiment, we inspect and rate the artifacts (flickering, ghosting, temporal color consistency) and color rendition problems (brightness, contrast and color saturation) they produce. This allows us to identify major problems and challenges that video tone-mapping needs to address. Then, we compare the tone-mapping results in a pair-wise comparison experiment to identify the operators that, on average, can be expected to perform better than the others and to assess the magnitude of differences between the best performing operators.

  • 14.
    Engelke, Wito
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Lawonn, Kai
    Department of Simulation and Graphics, University of Magdeburg, Germany / Institute for Computational Visualistics, University of Koblenz‐Landau, Germany.
    Preim, Bernhard
    Department of Simulation and Graphics, University of Magdeburg, Germany.
    Hotz, Ingrid
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Autonomous Particles for Interactive Flow Visualization (2019). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, no 1, p. 248-259. Article in journal (Refereed)
    Abstract [en]

    We present an interactive approach to analyse flow fields using a new type of particle system, which is composed of autonomous particles exploring the flow. While particles provide a very intuitive way to visualize flows, it is a challenge to capture the important features with such systems. Particles tend to cluster in regions of low velocity and regions of interest are often sparsely populated. To overcome these disadvantages, we propose an automatic adaption of the particle density with respect to local importance measures. These measures are user defined and the system's sensitivity to them can be adjusted interactively. Together with the particle history, these measures define a probability for particles to multiply or die, respectively. There is no communication between the particles and no neighbourhood information has to be maintained. Thus, the particles can be handled in parallel and support a real‐time investigation of flow fields. To enhance the visualization, the particles' properties and selected field measures are also used to specify the system's rendering parameters, such as colour and size. We demonstrate the effectiveness of our approach on different simulated vector fields from technical and medical applications.
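
    The multiply-or-die mechanism can be condensed into a short toy loop (a 2D sketch with an invented swirl field and an invented importance measure; the paper's importance measures, particle history term and GPU handling are not modelled).

        import numpy as np

        rng = np.random.default_rng(0)

        def velocity(p):                       # toy steady vector field (swirl around the origin)
            x, y = p[:, 0], p[:, 1]
            return np.column_stack([-y, x])

        def importance(p):                     # invented importance: proximity to the vortex core
            return 1.0 / (1.0 + np.linalg.norm(p, axis=1))

        particles = rng.uniform(-2, 2, size=(500, 2))
        dt, sensitivity = 0.05, 0.5
        for step in range(200):
            particles = particles + dt * velocity(particles)       # advection
            imp = importance(particles)
            # Each particle independently multiplies or dies with a probability tied to
            # the importance measure -- no communication or neighbourhood queries needed.
            spawn = rng.random(len(particles)) < sensitivity * dt * imp
            die = rng.random(len(particles)) < sensitivity * dt * (1.0 - imp)
            children = particles[spawn] + rng.normal(scale=0.02, size=(spawn.sum(), 2))
            particles = np.concatenate([particles[~die], children])
        print("particles after simulation:", len(particles))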

  • 15.
    Englund, Rickard
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Quantitative and Qualitative Analysis of the Perception of Semi-Transparent Structures in Direct Volume Rendering (2018). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 37, no 6, p. 174-187. Article in journal (Refereed)
    Abstract [en]

    Direct Volume Rendering (DVR) provides the possibility to visualize volumetric data sets as they occur in many scientific disciplines. With DVR, semi-transparency is facilitated to convey the complexity of the data. Unfortunately, semi-transparency introduces challenges in spatial comprehension of the data, as the ambiguities inherent to semi-transparent representations affect spatial comprehension. Accordingly, many techniques have been introduced to enhance the spatial comprehension of DVR images. In this paper, we present our findings obtained from two evaluations investigating the perception of semi-transparent structures from volume rendered images. We have conducted a user evaluation in which we have compared standard DVR with five techniques previously proposed to enhance the spatial comprehension of DVR images. In this study, we investigated the perceptual performance of these techniques and have compared them against each other in a large-scale quantitative user study with 300 participants. Each participant completed micro-tasks designed such that the aggregated feedback gives insight on how well these techniques aid the user to perceive depth and shape of objects. To further clarify the findings, we conducted a qualitative evaluation in which we interviewed three experienced visualization researchers, in order to find out if we can identify the benefits and shortcomings of the individual techniques.

  • 16.
    Falk, Martin
    et al.
    Visualization Research Center (VISUS), University of Stuttgart, Germany.
    Krone, Michael
    Visualization Research Center (VISUS), University of Stuttgart, Germany.
    Ertl, Thomas
    Visualization Research Center (VISUS), University of Stuttgart, Germany.
    Atomistic Visualization of Mesoscopic Whole-Cell Simulations Using Ray-Casted Instancing (2013). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 32, no 8, p. 195-206. Article in journal (Refereed)
    Abstract [en]

    Molecular visualization is an important tool for analysing the results of biochemical simulations. With modern GPU ray casting approaches, it is only possible to render several million atoms interactively unless advanced acceleration methods are employed. Whole-cell simulations consist of at least several billion atoms even for simplified cell models. However, many instances of only a few different proteins occur in the intracellular environment, which can be exploited to fit the data into the graphics memory. For each protein species, one model is stored and rendered once per instance. The proposed method exploits recent algorithmic advances for particle rendering and the repetitive nature of intracellular proteins to visualize dynamic results from mesoscopic simulations of cellular transport processes. We present two out-of-core optimizations for the interactive visualization of data sets composed of billions of atoms as well as details on the data preparation and the employed rendering techniques. Furthermore, we apply advanced shading methods to improve the image quality, including methods to enhance depth and shape perception, besides non-photorealistic rendering methods. We also show that the method can be used to render scenes that are composed of triangulated instances, not only implicit surfaces.

  • 17.
    Hajisharif, Saghi
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Miandji, Ehsan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Inria Rennes.
    Guillemot, Christine
    Inria Rennes.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Single Sensor Compressive Light Field Video Camera (2020). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 39, no 2, p. 463-474. Article in journal (Refereed)
    Abstract [en]

    This paper presents a novel compressed sensing (CS) algorithm and camera design for light field video capture using a single-sensor consumer camera module. Unlike microlens light field cameras, which sacrifice spatial resolution to obtain angular information, our CS approach is designed for capturing light field videos with high angular, spatial, and temporal resolution. The compressive measurements required by CS are obtained using a random color-coded mask placed between the sensor and aperture planes. The convolution of the incoming light rays from different angles with the mask results in a single image on the sensor; hence, achieving a significant reduction in the required bandwidth for capturing light field videos. We propose to change the random pattern on the spectral mask between each consecutive frame in a video sequence and to extract spatio-angular-spectral-temporal 6D patches. Our CS reconstruction algorithm for light field videos recovers each frame while taking into account the neighboring frames to achieve significantly higher reconstruction quality with reduced temporal incoherencies, as compared with previous methods. Moreover, a thorough analysis of various sensing models for compressive light field video acquisition is conducted to highlight the advantages of our method. The results show a clear advantage of our method for monochrome sensors, as well as sensors with color filter arrays.
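
    The sensing model is compact enough to write down directly: each sensor pixel integrates the angular samples of the light field after modulation by the colour-coded mask, and the effective pattern changes per frame. The sketch below covers only this forward (measurement) model on random data, with invented dimensions; the paper's contribution is the sparse reconstruction that inverts it.

        import numpy as np

        rng = np.random.default_rng(0)
        H, W, A, T = 64, 64, 9, 4          # spatial size, angular samples, frames

        # Synthetic light field video: T frames, A angular views, HxW pixels, RGB.
        light_field = rng.random((T, A, H, W, 3)).astype(np.float32)

        measurements = []
        for t in range(T):
            # Random colour-coded mask, re-randomised for every frame; its effective
            # pattern differs per angular direction because of the mask/sensor offset.
            mask = rng.random((A, H, W, 3)).astype(np.float32)
            # Each sensor pixel sums the mask-modulated contributions over all angles,
            # i.e. one coded 2D image per frame instead of A full views.
            measurements.append((mask * light_field[t]).sum(axis=0))

        coded_video = np.stack(measurements)          # shape (T, H, W, 3)
        print(coded_video.shape)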

  • 18.
    Hajisharif, Saghi
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Miandji, Ehsan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Larsson, Per
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Tran, Kiet
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Light Field Video Compression and Real Time Rendering (2019). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 38, p. 265-276. Article in journal (Refereed)
    Abstract [en]

    Light field imaging is rapidly becoming an established method for generating flexible image based description of scene appearances. Compared to classical 2D imaging techniques, the angular information included in light fields enables effects such as post‐capture refocusing and the exploration of the scene from different vantage points. In this paper, we describe a novel GPU pipeline for compression and real‐time rendering of light field videos with full parallax. To achieve this, we employ a dictionary learning approach and train an ensemble of dictionaries capable of efficiently representing light field video data using highly sparse coefficient sets. A novel, key element in our representation is that we simultaneously compress both image data (pixel colors) and the auxiliary information (depth, disparity, or optical flow) required for view interpolation. During playback, the coefficients are streamed to the GPU where the light field and the auxiliary information are reconstructed using the dictionary ensemble and view interpolation is performed. In order to realize the pipeline we present several technical contributions including a denoising scheme enhancing the sparsity in the dataset which enables higher compression ratios, and a novel pruning strategy which reduces the size of the dictionary ensemble and leads to significant reductions in computational complexity during the encoding of a light field. Our approach is independent of the light field parameterization and can be used with data from any light field video capture system. To demonstrate the usefulness of our pipeline, we utilize various publicly available light field video datasets and discuss the medical application of documenting heart surgery.

  • 19.
    Hergl, Chiara
    et al.
    Univ Leipzig, Germany.
    Blecha, Christian
    Univ Leipzig, Germany.
    Kretzschmar, Vanessa
    Univ Leipzig, Germany.
    Raith, Felix
    Univ Leipzig, Germany.
    Gunther, Fabian
    TU Dortmund Univ, Germany.
    Stommel, Markus
    Leibniz Inst Polymer Res Dresden, Germany.
    Jankowai, Jochen
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Hotz, Ingrid
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Nagel, Thomas
    Tech Univ Bergakad Freiberg, Germany.
    Scheuermann, Gerik
    Univ Leipzig, Germany.
    Visualization of Tensor Fields in Mechanics (2021). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 40, no 6, p. 135-161. Article in journal (Refereed)
    Abstract [en]

    Tensors are used to describe complex physical processes in many applications. Examples include the distribution of stresses in technical materials, acting forces during seismic events, or remodeling of biological tissues. While tensors encode such complex information mathematically precisely, the semantic interpretation of a tensor is challenging. Visualization can be beneficial here and is frequently used by domain experts. Typical strategies include the use of glyphs, color plots, lines, and isosurfaces. However, data complexity is nowadays accompanied by the sheer amount of data produced by large-scale simulations, which adds another level of obstruction between user and data. Given the limitations of traditional methods, and the extra cognitive effort of simple methods, more advanced tensor field visualization approaches have been the focus of this work. This survey aims to provide an overview of recent research results with a strong application-oriented focus, targeting applications based on continuum mechanics, namely the fields of structural, bio-, and geomechanics. As such, the survey is complementing and extending previously published surveys. Its utility is twofold: (i) it serves as a basis for the visualization community to get an overview of recent visualization techniques; (ii) it emphasizes and explains the necessity for further research for visualizations in this context.

  • 20.
    Hermosilla, P.
    et al.
    Ulm Univ, Germany.
    Maisch, S.
    Ulm Univ, Germany.
    Ritschel, T.
    UCL, England.
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Ulm Univ, Germany.
    Deep-learning the Latent Space of Light Transport (2019). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 38, no 4, p. 207-217. Article in journal (Refereed)
    Abstract [en]

    We suggest a method to directly deep-learn light transport, i.e., the mapping from a 3D geometry-illumination-material configuration to a shaded 2D image. While many previous learning methods have employed 2D convolutional neural networks applied to images, we show for the first time that light transport can be learned directly in 3D. The benefit of 3D over 2D is that the former can also correctly capture illumination effects related to occluded and/or semi-transparent geometry. To learn 3D light transport, we represent the 3D scene as an unstructured 3D point cloud, which is later, during rendering, projected to the 2D output image. Thus, we suggest a two-stage operator comprising a 3D network that first transforms the point cloud into a latent representation, which is later on projected to the 2D output image using a dedicated 3D-2D network in a second step. We will show that our approach results in improved quality in terms of temporal coherence while retaining most of the computational efficiency of common 2D methods. As a consequence, the proposed two-stage operator serves as a valuable extension to modern deferred shading approaches.

  • 21.
    Huang, Zeyang
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Witschard, Daniel
    Department of Computer Science and Media Technology, Linnaeus University, Sweden.
    Kucher, Kostiantyn
    Linköping University, Faculty of Science & Engineering. Linköping University, Department of Science and Technology, Media and Information Technology.
    Kerren, Andreas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Department of Computer Science and Media Technology, Linnaeus University, Sweden.
    VA + Embeddings STAR: A State-of-the-Art Report on the Use of Embeddings in Visual Analytics (2023). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 42, no 3, p. 539-571. Article in journal (Refereed)
    Abstract [en]

    Over the past years, an increasing number of publications in information visualization, especially within the field of visual analytics, have mentioned the term “embedding” when describing the computational approach. Within this context, embeddings are usually (relatively) low-dimensional, distributed representations of various data types (such as texts or graphs), and since they have proven to be extremely useful for a variety of data analysis tasks across various disciplines and fields, they have become widely used. Existing visualization approaches aim to either support exploration and interpretation of the embedding space through visual representation and interaction, or aim to use embeddings as part of the computational pipeline for addressing downstream analytical tasks. To the best of our knowledge, this is the first survey that takes a detailed look at embedding methods through the lens of visual analytics, and the purpose of our survey article is to provide a systematic overview of the state of the art within the emerging field of embedding visualization. We design a categorization scheme for our approach, analyze the current research frontier based on peer-reviewed publications, and discuss existing trends, challenges, and potential research directions for using embeddings in the context of visual analytics. Furthermore, we provide an interactive survey browser for the collected and categorized survey data, which currently includes 122 entries that appeared between 2007 and 2023.

  • 22.
    Jankowai, Jochen
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Wang, Bei
    Univ Utah, UT 84112 USA.
    Hotz, Ingrid
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Robust Extraction and Simplification of 2D Symmetric Tensor Field Topology (2019). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 38, no 3, p. 337-349. Article in journal (Refereed)
    Abstract [en]

    In this work, we propose a controlled simplification strategy for degenerated points in symmetric 2D tensor fields that is based on the topological notion of robustness. Robustness measures the structural stability of the degenerate points with respect to variation in the underlying field. We consider an entire pipeline for generating a hierarchical set of degenerate points based on their robustness values. Such a pipeline includes the following steps: the stable extraction and classification of degenerate points using an edge labeling algorithm, the computation and assignment of robustness values to the degenerate points, and the construction of a simplification hierarchy. We also discuss the challenges that arise from the discretization and interpolation of real world data.

  • 23.
    Johansson, Jimmy
    et al.
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Cooper, Matthew
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    A screen space quality method for data abstraction (2008). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 27, no 3, p. 1039-1046. Article in journal (Refereed)
    Abstract [en]

    The rendering of large data sets can result in cluttered displays and non-interactive update rates, leading to time consuming analyses. A straightforward solution is to reduce the number of items, thereby producing an abstraction of the data set. For the visual analysis to remain accurate, the graphical representation of the abstraction must preserve the significant features present in the original data. This paper presents a screen space quality method, based on distance transforms, that measures the visual quality of a data abstraction. This screen space measure is shown to better capture significant visual structures in data, compared with data space measures. The presented method is implemented on the GPU, allowing interactive creation of high quality graphical representations of multivariate data sets containing tens of thousands of items. © 2008 The Eurographics Association and Blackwell Publishing Ltd.
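
    The measure can be prototyped from two rendered images: rasterise the full data set and the abstraction to binary coverage masks, take the Euclidean distance transform of each, and compare. The sketch below, using scipy on synthetic scatter masks, conveys the idea rather than the paper's exact formulation or its GPU implementation.

        import numpy as np
        from scipy.ndimage import distance_transform_edt

        rng = np.random.default_rng(0)

        def coverage_mask(points, size=256):
            """Rasterise 2D points in [0,1]^2 into a binary screen-space mask."""
            mask = np.zeros((size, size), dtype=bool)
            ij = np.clip((points * (size - 1)).astype(int), 0, size - 1)
            mask[ij[:, 1], ij[:, 0]] = True
            return mask

        original = rng.random((20000, 2))
        abstraction = original[rng.choice(len(original), size=2000, replace=False)]

        m_full, m_abs = coverage_mask(original), coverage_mask(abstraction)
        # Distance to the nearest rendered item, per screen pixel.
        d_full = distance_transform_edt(~m_full)
        d_abs = distance_transform_edt(~m_abs)
        # Screen-space quality: how much the abstraction's empty-space structure
        # deviates from the original rendering (lower is better).
        quality_error = np.abs(d_full - d_abs).mean()
        print("mean distance-transform deviation (pixels):", round(float(quality_error), 2))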

  • 24.
    Jönsson, Daniel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Bergström, Albin
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Forsell, Camilla
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Simon, Rozalyn
    Linköping University, Department of Health, Medicine and Caring Sciences, Division of Diagnostics and Specialist Medicine. Linköping University, Faculty of Medicine and Health Sciences.
    Engström, Maria
    Linköping University, Department of Health, Medicine and Caring Sciences, Division of Diagnostics and Specialist Medicine. Linköping University, Faculty of Medicine and Health Sciences.
    Walter, Susanna
    Linköping University, Department of Biomedical and Clinical Sciences, Division of Inflammation and Infection. Linköping University, Faculty of Medicine and Health Sciences. Region Östergötland, Center for Surgery, Orthopaedics and Cancer Treatment, Mag-tarmmedicinska kliniken (Gastroenterology Clinic).
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Hotz, Ingrid
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    VisualNeuro: A Hypothesis Formation and Reasoning Application for Multi-Variate Brain Cohort Study Data (2020). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 39, no 6, p. 392-407. Article in journal (Refereed)
    Abstract [en]

    We present an application, and its development process, for interactive visual analysis of brain imaging data and clinical measurements. The application targets neuroscientists interested in understanding the correlations between active brain regions and physiological or psychological factors. The application has been developed in a participatory design process and has subsequently been released as the free software VisualNeuro. From initial observations of the neuroscientists' workflow, we concluded that while existing tools provide powerful analysis options, they lack effective interactive exploration, requiring the use of many tools side by side. Consequently, our application has been designed to simplify the workflow, combining statistical analysis with interactive visual exploration. The resulting environment comprises parallel coordinates for effective overview and selection, Welch's t-test to filter out brain regions with statistically significant differences, and multiple visualizations for comparison between brain regions and clinical parameters. These exploration concepts enable neuroscientists to interactively explore the complex bidirectional interplay between clinical and brain measurements and easily compare different patient groups. A qualitative user study has been performed with three neuroscientists from different domains. The study shows that the developed environment supports simultaneous analysis of more parameters, provides rapid pathways to insights and is an effective tool for hypothesis formation.
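
    The statistical filter at the heart of this workflow is a plain Welch's t-test per brain region between two user-selected groups, as sketched below with scipy on made-up data (region names, group sizes and the threshold are placeholders, not from the paper).

        import numpy as np
        from scipy.stats import ttest_ind

        rng = np.random.default_rng(0)
        regions = ["amygdala", "insula", "thalamus", "precuneus"]     # placeholder names

        # Hypothetical per-subject activation values for two patient groups.
        group_a = {r: rng.normal(0.0, 1.0, size=40) for r in regions}
        group_b = {r: rng.normal(0.3 if r == "insula" else 0.0, 1.0, size=35) for r in regions}

        significant = []
        for r in regions:
            # equal_var=False turns this into Welch's t-test (no equal-variance assumption).
            t, p = ttest_ind(group_a[r], group_b[r], equal_var=False)
            if p < 0.05:
                significant.append((r, round(float(p), 4)))

        print("regions kept for visual exploration:", significant)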

  • 25.
    Jönsson, Daniel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Sundén, Erik
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    A Survey of Volumetric Illumination Techniques for Interactive Volume Rendering (2014). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 33, no 1, p. 27-51. Article in journal (Refereed)
    Abstract [en]

    Interactive volume rendering in its standard formulation has become an increasingly important tool in many application domains. In recent years several advanced volumetric illumination techniques to be used in interactive scenarios have been proposed. These techniques claim to have perceptual benefits as well as being capable of producing more realistic volume rendered images. Naturally, they cover a wide spectrum of illumination effects, including varying shading and scattering effects. In this survey, we review and classify the existing techniques for advanced volumetric illumination. The classification will be conducted based on their technical realization, their performance behaviour as well as their perceptual capabilities. Based on the limitations revealed in this review, we will define future challenges in the area of interactive advanced volumetric illumination.
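
    As background for the classification, the standard formulation being extended is emission-absorption ray casting, front-to-back composited along each view ray; the surveyed techniques add scattering and shadowing terms on top of this loop. Below is a bare-bones single-ray version with an invented transfer function and volume, purely for orientation.

        import numpy as np

        rng = np.random.default_rng(0)
        volume = rng.random((64, 64, 64)).astype(np.float32)     # scalar field in [0, 1]

        def transfer(sample):
            """Invented transfer function: colour ramp plus density-dependent opacity."""
            color = np.array([sample, 0.5 * sample, 1.0 - sample])
            alpha = sample ** 2
            return color, alpha

        def raycast(origin, direction, step=0.5, n_steps=200):
            color = np.zeros(3)
            alpha = 0.0
            pos = np.array(origin, dtype=float)
            for _ in range(n_steps):
                idx = tuple(np.clip(pos.astype(int), 0, 63))
                c, a = transfer(volume[idx])
                a *= step                                   # opacity correction for step size
                # Front-to-back compositing; advanced volumetric illumination would
                # modulate c here with shadowing/scattering terms.
                color += (1.0 - alpha) * a * c
                alpha += (1.0 - alpha) * a
                if alpha > 0.99:                            # early ray termination
                    break
                pos += step * np.array(direction)
            return color, alpha

        print(raycast(origin=(0, 32, 32), direction=(1, 0, 0)))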

  • 26.
    Kottravel, Sathish
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. SeRC, Sweden.
    Falk, Martin
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. SeRC, Sweden.
    Masood, Talha Bin
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. SeRC, Sweden.
    Linares, Mathieu
    Linköping University, Department of Science and Technology, Laboratory of Organic Electronics. Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. SeRC, Sweden.
    Hotz, Ingrid
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. SeRC, Sweden.
    Visual Analysis of Charge Flow Networks for Complex Morphologies (2019). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 38, no 3, p. 479-489. Article in journal (Refereed)
    Abstract [en]

    In the field of organic electronics, understanding complex material morphologies and their role in efficient charge transport in solar cells is extremely important. Related processes are studied using the Ising model and Kinetic Monte Carlo simulations resulting in large ensembles of stochastic trajectories. Naive visualization of these trajectories, individually or as a whole, does not lead to new knowledge discovery through exploration. In this paper, we present novel visualization and exploration methods to analyze this complex dynamic data, which provide succinct and meaningful abstractions leading to scientific insights. We propose a morphology abstraction yielding a network composed of material pockets and the interfaces, which serves as backbone for the visualization of the charge diffusion. The trajectory network is created using a novel way of implicitly attracting the trajectories to the skeleton of the morphology relying on a relaxation process. Each individual trajectory is then represented as a connected sequence of nodes in the skeleton. The final network summarizes all of these sequences in a single aggregated network. We apply our method to three different morphologies and demonstrate its suitability for exploring this kind of data.
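
    Once every trajectory has been mapped to a sequence of skeleton nodes, the aggregation into a single network is essentially edge counting, as in the sketch below (node labels are invented; the non-trivial step in the paper, attracting trajectories to the morphology skeleton via relaxation, is not shown).

        from collections import Counter

        # Hypothetical trajectories, already expressed as node sequences on the
        # morphology skeleton (pockets P*, interfaces I*).
        trajectories = [
            ["P1", "I12", "P2", "I23", "P3"],
            ["P1", "I12", "P2", "I12", "P1"],
            ["P2", "I23", "P3", "I34", "P4"],
        ]

        edge_counts = Counter()
        for trajectory in trajectories:
            for a, b in zip(trajectory, trajectory[1:]):
                edge_counts[(a, b)] += 1          # directed transition between skeleton nodes

        # The aggregated network: edges weighted by how many charge-carrier hops used them.
        for (a, b), weight in sorted(edge_counts.items()):
            print(f"{a} -> {b}: {weight}")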

  • 27.
    Kozlikova, B.
    et al.
    Masaryk University, Czech Republic.
    Krone, M.
    University of Stuttgart, Germany.
    Falk, Martin
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Lindow, N.
    ZIB, Germany.
    Baaden, M.
    CNRS, France.
    Baum, D.
    ZIB, Germany.
    Viola, I.
    University of Bergen, Norway; TU Wien, Austria.
    Parulek, J.
    University of Bergen, Norway.
    Hege, H-C.
    ZIB, Germany.
    Visualization of Biomolecular Structures: State of the Art Revisited (2017). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 36, no 8, p. 178-204. Article in journal (Refereed)
    Abstract [en]

    Structural properties of molecules are of primary concern in many fields. This report provides a comprehensive overview on techniques that have been developed in the fields of molecular graphics and visualization with a focus on applications in structural biology. The field heavily relies on computerized geometric and visual representations of three-dimensional, complex, large and time-varying molecular structures. The report presents a taxonomy that demonstrates which areas of molecular visualization have already been extensively investigated and where the field is currently heading. It discusses visualizations for molecular structures, strategies for efficient display regarding image quality and frame rate, covers different aspects of level of detail and reviews visualizations illustrating the dynamic aspects of molecular simulation data. The survey concludes with an outlook on promising and important research topics to foster further success in the development of tools that help to reveal molecular secrets.

    Download full text (pdf)
    fulltext
  • 28.
    Kratz, Andrea
    et al.
    Zuse Institute Berlin, Germany.
    Auer, Cornelia
    Zuse Institute Berlin, Germany.
    Stommel, Markus
    Technical University Dortmund, Germany.
    Hotz, Ingrid
    Zuse Institute Berlin, Germany.
    Visualization and Analysis of Second-Order Tensors: Moving Beyond the Symmetric Positive-Definite Case: State of the Art Report. 2013. In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 32, no 1, p. 49-74. Article in journal (Refereed)
    Abstract [en]

    Tensors provide a powerful language to describe physical phenomena. Consequently, they have a long tradition in physics and appear in various application areas, either as the final result of simulations or as an intermediate product. Due to their complexity, tensors are hard to interpret. This motivates the development of well-conceived visualization methods. As a sub-branch of scientific visualization, tensor field visualization has been especially pushed forward by diffusion tensor imaging. In this review, we focus on second-order tensors that are not diffusion tensors. Until now, these tensors, which might be neither positive-definite nor symmetric, have been under-represented in visualization, and existing visualization tools are often not appropriate for them. Hence, we discuss the strengths and limitations of existing methods when dealing with such tensors as well as challenges introduced by them. The goal of this paper is to reveal the importance of the field and to encourage the development of new visualization methods for tensors from various application fields.

  • 29.
    Kronander, Joel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Banterle, Francesco
    Visual Computing Lab, ISTI-CNR, Italy.
    Gardner, Andrew
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Miandji, Ehsan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Photorealistic rendering of mixed reality scenes. 2015. In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 34, no 2, p. 643-665. Article in journal (Refereed)
    Abstract [en]

    Photo-realistic rendering of virtual objects into real scenes is one of the most important research problems in computer graphics. Methods for capture and rendering of mixed reality scenes are driven by a large number of applications, ranging from augmented reality to visual effects and product visualization. Recent developments in computer graphics, computer vision, and imaging technology have enabled a wide range of new mixed reality techniques including methods of advanced image based lighting, capturing spatially varying lighting conditions, and algorithms for seamlessly rendering virtual objects directly into photographs without explicit measurements of the scene lighting. This report gives an overview of the state-of-the-art in this field, and presents a categorization and comparison of current methods. Our in-depth survey provides a tool for understanding the advantages and disadvantages of each method, and gives an overview of which technique is best suited to a specific problem.

    Download full text (pdf)
    Photorealistic rendering of mixed reality scenes
  • 30.
    Krone, Michael
    et al.
    Visualization Research Center (VISUS), University of Stuttgart, Germany.
    Falk, Martin
    Visualization Research Center (VISUS), University of Stuttgart, Germany.
    Rehm, Sascha
    Institute for Technical Biochemistry (ITB), University of Stuttgart, Germany.
    Pleiss, Jürgen
    Institute for Technical Biochemistry (ITB), University of Stuttgart, Germany.
    Ertl, Thomas
    Visualization Research Center (VISUS), University of Stuttgart, Germany.
    Interactive Exploration of Protein Cavities. 2011. In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 30, no 3, p. 673-682. Article in journal (Refereed)
    Abstract [en]

    We present a novel application for the interactive exploration of cavities within proteins in dynamic data sets. Inside a protein, cavities can often be found close to the active center. Therefore, when analyzing a molecular dynamics simulation trajectory, it is of great interest to find these cavities and determine if such a cavity opens up to the environment, making the binding site accessible to the surrounding substrate. Our user-driven approach enables expert users to select a certain cavity and track its evolution over time. The user is supported by different visualizations of the extracted cavity to facilitate the analysis. The boundary of the protein and its cavities is obtained by means of volume ray casting, where the volume is computed in real time for each frame, therefore allowing the examination of time-dependent data sets. A fast, partial segmentation of the volume is applied to obtain the selected cavity and trace it over time. Domain experts found our method useful when they applied it to two example trajectories of lipases from Rhizomucor miehei and Candida antarctica. In both data sets, cavities near the active center were easily identified and tracked over time until they reached the surface and formed an open substrate channel.
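
    The partial segmentation mentioned above can be pictured as seeded region growing. The sketch below is a simplified CPU stand-in rather than the paper's real-time GPU approach; it assumes a boolean voxel grid marking space outside the molecular surface and a seed list taken from the cavity in the previous frame, and all names are illustrative.

        import numpy as np
        from collections import deque

        def track_cavity(empty, seeds):
            """Flood-fill one cavity in a boolean volume, starting from last frame's voxels.

            empty: 3D bool array, True where a voxel lies outside the molecular surface.
            seeds: iterable of (x, y, z) voxels that belonged to the cavity previously.
            Returns a bool mask of the connected empty region reachable from the seeds.
            """
            mask = np.zeros_like(empty, dtype=bool)
            queue = deque(s for s in seeds if empty[s])
            for s in queue:
                mask[s] = True
            offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
            while queue:
                x, y, z = queue.popleft()
                for dx, dy, dz in offsets:
                    n = (x + dx, y + dy, z + dz)
                    if all(0 <= n[i] < empty.shape[i] for i in range(3)) and empty[n] and not mask[n]:
                        mask[n] = True
                        queue.append(n)
            return mask

        # toy volume: a 20^3 block of empty space with a solid wall at x == 10
        vol = np.ones((20, 20, 20), dtype=bool)
        vol[10, :, :] = False
        cavity = track_cavity(vol, seeds=[(2, 5, 5)])
        print("cavity voxels:", int(cavity.sum()))  # fills only the x < 10 half (4000 voxels)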

  • 31.
    Kucher, Kostiantyn
    et al.
    Linnéuniversitetet, Institutionen för datavetenskap (DV).
    Paradis, Carita
    Lund University, Sweden.
    Kerren, Andreas
    Linnéuniversitetet, Institutionen för datavetenskap (DV).
    The State of the Art in Sentiment Visualization. 2018. In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 37, no 1, p. 71-96, article id CGF13217. Article in journal (Refereed)
    Abstract [en]

    Visualization of sentiments and opinions extracted from or annotated in texts has become a prominent topic of research over the last decade. From basic pie and bar charts used to illustrate customer reviews to extensive visual analytics systems involving novel representations, sentiment visualization techniques have evolved to deal with complex multidimensional data sets, including temporal, relational, and geospatial aspects. This contribution presents a survey of sentiment visualization techniques based on a detailed categorization. We describe the background of sentiment analysis, introduce a categorization for sentiment visualization techniques that includes 7 groups with 35 categories in total, and discuss 132 techniques from peer-reviewed publications together with an interactive web-based survey browser. Finally, we discuss insights and opportunities for further research in sentiment visualization. We expect this survey to be useful for visualization researchers whose interests include sentiment or other aspects of text data as well as researchers and practitioners from other disciplines in search of efficient visualization techniques applicable to their tasks and data. 

  • 32.
    Lan, Fangfei
    et al.
    Univ Utah, UT 84112 USA.
    Young, Michael
    Univ Utah, UT 84112 USA.
    Anderson, Lauren
    Carnegie Inst Sci, DC 20005 USA.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Bock, Alexander
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Univ Utah, UT 84112 USA.
    Borkin, Michelle A.
    Northeastern Univ, MA 02115 USA.
    Forbes, Angus G.
    Univ Calif Santa Cruz, CA 95064 USA.
    Kollmeier, Juna A.
    Carnegie Inst Sci, DC 20005 USA.
    Wang, Bei
    Univ Utah, UT 84112 USA; Univ Utah, UT 84112 USA.
    Visualization in Astrophysics: Developing New Methods, Discovering Our Universe, and Educating the Earth. 2021. In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 40, no 3, p. 635-663. Article in journal (Refereed)
    Abstract [en]

    We present a state-of-the-art report on visualization in astrophysics. We survey representative papers from both astrophysics and visualization and provide a taxonomy of existing approaches based on data analysis tasks. The approaches are classified based on five categories: data wrangling, data exploration, feature identification, object reconstruction, as well as education and outreach. Our unique contribution is to combine the diverse viewpoints from both astronomers and visualization experts to identify challenges and opportunities for visualization in astrophysics. The main goal is to provide a reference point to bring modern data analysis and visualization techniques to the rich datasets in astrophysics.

    Download full text (pdf)
    fulltext
  • 33.
    Lindholm, Stefan
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Falk, Martin
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Sundén, Erik
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Bock, Alexander
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, The Institute of Technology.
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Hybrid Data Visualization Based On Depth Complexity Histogram Analysis. 2015. In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 34, no 1, p. 74-85. Article in journal (Refereed)
    Abstract [en]

    In many cases, only the combination of geometric and volumetric data sets is able to describe a single phenomenon under observation when visualizing large and complex data. When semi-transparent geometry is present, correct rendering results require sorting of transparent structures. Additional complexity is introduced as the contributions from volumetric data have to be partitioned according to the geometric objects in the scene. The A-buffer, an enhanced framebuffer with additional per-pixel information, has previously been introduced to deal with the complexity caused by transparent objects. In this paper, we present an optimized rendering algorithm for hybrid volume-geometry data based on the A-buffer concept. We propose two novel components for modern GPUs that tailor memory utilization to the depth complexity of individual pixels. The proposed components are compatible with modern A-buffer implementations and yield performance gains of up to eight times compared to existing approaches through reduced allocation and reuse of fast cache memory. We demonstrate the applicability of our approach and its performance with several examples from molecular biology, space weather, and medical visualization containing both volumetric data and geometric structures.
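
    For readers unfamiliar with the A-buffer concept the abstract builds on, the sketch below models it on the CPU: per-pixel fragment lists resolved by depth and composited front to back. It only illustrates the underlying data structure, not the proposed GPU components or their memory tailoring, and the names used are assumptions.

        from collections import defaultdict

        def composite_a_buffer(fragments):
            """fragments: iterable of (pixel, depth, (r, g, b, a)) tuples in arbitrary order."""
            per_pixel = defaultdict(list)
            for pixel, depth, rgba in fragments:
                per_pixel[pixel].append((depth, rgba))
            image = {}
            for pixel, frags in per_pixel.items():
                frags.sort()                      # resolve visibility by depth, nearest first
                out_rgb, out_a = (0.0, 0.0, 0.0), 0.0
                for _, (r, g, b, a) in frags:     # front-to-back "over" compositing
                    w = (1.0 - out_a) * a
                    out_rgb = tuple(c + w * s for c, s in zip(out_rgb, (r, g, b)))
                    out_a += w
                    if out_a > 0.999:             # early termination once the pixel is opaque
                        break
                image[pixel] = (*out_rgb, out_a)
            return image

        # one pixel receiving a semi-transparent green fragment in front of an opaque red one
        frags = [((0, 0), 0.7, (1.0, 0.0, 0.0, 1.0)), ((0, 0), 0.3, (0.0, 1.0, 0.0, 0.5))]
        print(composite_a_buffer(frags))          # green over red: (0.5, 0.5, 0.0, 1.0)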

    Download full text (pdf)
    fulltext
  • 34.
    Lindholm, Stefan
    et al.
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Ljung, Patric
    Siemens Corporate Research.
    Hadwiger, Markus
    VRVis Research Center.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Fused Multi-Volume DVR using Binary Space Partitioning. 2009. In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 28, no 3, p. 847-854. Article in journal (Refereed)
    Abstract [en]

    Multiple-volume visualization is a growing field in medical imaging providing simultaneous exploration of volumes acquired from varying modalities. However, high complexity results in an increased strain on performance compared to single volume rendering, as scenes may consist of volumes with arbitrary orientations and rendering is performed with varying sample densities. Expensive image-order techniques such as depth peeling have previously been used to perform the necessary calculations. In this work we present a view-independent, region-based scene description for multi-volume pipelines. Using Binary Space Partitioning we are able to create a simple interface providing all required information for advanced multi-volume renderings while introducing a minimal overhead for scenes with few volumes. The modularity of our solution is demonstrated by the use of visual development, and performance is documented with benchmarks and real-time simulations.
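
    As a schematic of the kind of view-independent, region-based scene description referred to above, the sketch below shows a binary space partitioning tree whose leaves list the volumes covering a region; the node layout and the example plane are assumptions made for illustration, not the paper's construction algorithm.

        class BSPNode:
            """Internal node with a splitting plane, or a leaf listing volume ids."""
            def __init__(self, plane=None, front=None, back=None, volumes=()):
                self.plane, self.front, self.back, self.volumes = plane, front, back, tuple(volumes)

        def locate(node, p):
            """Return the ids of the volumes whose region contains point p (a 3-tuple)."""
            while node.plane is not None:
                normal, offset = node.plane       # plane stored as (normal, offset): normal . p = offset
                side = sum(n * c for n, c in zip(normal, p)) - offset
                node = node.front if side >= 0.0 else node.back
            return node.volumes

        # two volumes: A everywhere, B only in the half-space x >= 1
        leaf_a = BSPNode(volumes=("A",))
        leaf_ab = BSPNode(volumes=("A", "B"))
        root = BSPNode(plane=((1.0, 0.0, 0.0), 1.0), front=leaf_ab, back=leaf_a)
        print(locate(root, (0.5, 0.0, 0.0)))      # ('A',)
        print(locate(root, (1.5, 0.0, 0.0)))      # ('A', 'B')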

  • 35.
    Ljung, Patric
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Krueger, Jens
    University of Duisburg Essen, Germany; University of Utah, UT 84112 USA.
    Groeller, Eduard
    TU Wien, Austria; University of Bergen, Norway.
    Hadwiger, Markus
    King Abdullah University of Science and Technology, Saudi Arabia.
    Hansen, Charles D.
    University of Utah, UT 84112 USA.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    State of the Art in Transfer Functions for Direct Volume Rendering. 2016. In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 35, no 3, p. 669-691. Article in journal (Refereed)
    Abstract [en]

    A central topic in scientific visualization is the transfer function (TF) for volume rendering. The TF serves a fundamental role in translating scalar and multivariate data into color and opacity to express and reveal the relevant features present in the data studied. Beyond this core functionality, TFs also serve as a tool for encoding and utilizing domain knowledge and as an expression for visual design of material appearances. TFs also enable interactive volumetric exploration of complex data. The purpose of this state-of-the-art report (STAR) is to provide an overview of research into the various aspects of TFs, which lead to interpretation of the underlying data through the use of meaningful visual representations. The STAR classifies TF research into the following aspects: dimensionality, derived attributes, aggregated attributes, rendering aspects, automation, and user interfaces. The STAR concludes with some interesting research challenges that form the basis of an agenda for the development of next generation TF tools and methodologies.
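
    The core TF mechanism, mapping scalar values to color and opacity, can be sketched as a piecewise-linear lookup over a handful of control points. The control points below are arbitrary placeholders rather than a recommended preset, and the function name is illustrative.

        import numpy as np

        def make_transfer_function(control_points):
            """Build a 1D transfer function from (scalar, r, g, b, a) control points.

            Returns a callable mapping an array of scalar values to RGBA tuples by
            piecewise-linear interpolation between the sorted control points.
            """
            pts = sorted(control_points)
            xs = np.array([p[0] for p in pts], dtype=float)
            rgba = np.array([p[1:] for p in pts], dtype=float)

            def tf(values):
                values = np.asarray(values, dtype=float)
                return np.stack([np.interp(values, xs, rgba[:, c]) for c in range(4)], axis=-1)

            return tf

        # hypothetical TF: low values transparent, mid values reddish and translucent, high values opaque white
        tf = make_transfer_function([(0.0, 0, 0, 0, 0.0), (0.4, 0.8, 0.3, 0.2, 0.1), (0.8, 1, 1, 0.9, 0.9)])
        print(tf([0.2, 0.6]))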

    Download full text (pdf)
    fulltext
  • 36.
    Maisch, Sebastian
    et al.
    Ulm Univ, Germany.
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Ulm Univ, Germany.
    Interactive Subsurface Scattering for Materials With High Scattering Distances. 2020. In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 39, no 6, p. 465-479. Article in journal (Refereed)
    Abstract [en]

    Existing algorithms for rendering subsurface scattering in real time cannot deal well with scattering over longer distances. Kernels for image space algorithms become very large in these circumstances and separation does not work anymore, while geometry-based algorithms cannot preserve details very well. We present a novel approach that deals with all these downsides. While for lower scattering distances, the advantages of geometry-based methods are small, this is not the case anymore for high scattering distances (as we will show). Our proposed method takes advantage of the highly detailed results of image space algorithms and combines them with a geometry-based method to add the essential scattering from sources not included in image space. Our algorithm does not require pre-computation based on the scene's geometry; it can be applied to static and animated objects directly. Our method is able to provide results that come close to ray-traced images, which we show in direct comparisons with images generated by PBRT. We compare our results to state-of-the-art techniques that are applicable in these scenarios and show that we provide superior image quality while maintaining interactive rendering times.

    Download full text (pdf)
    fulltext
  • 37.
    Markuš, Nenad
    et al.
    University of Zagreb, Faculty of Electrical Engineering and Computing, Zagreb, Croatia.
    Fratarcangeli, Marco
    Chalmers University of Technology, Dept. of Applied Information Technology, Göteborg, Sweden.
    Pandžić, Igor
    University of Zagreb, Faculty of Electrical Engineering and Computing, Zagreb, Croatia.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Fast Rendering of Image Mosaics and ASCII Art. 2015. In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 34, no 6, p. 251-261. Article in journal (Refereed)
    Abstract [en]

    An image mosaic is an assembly of a large number of small images, usually called tiles, taken from a specific dictionary/codebook. When viewed as a whole, the appearance of a single large image emerges, i.e. each tile approximates a small block of pixels. ASCII art is a related (and older) graphic design technique for producing images from printable characters. Although automatic procedures for both of these visualization schemes have been studied in the past, some are computationally heavy and cannot offer real-time and interactive performance. We propose an algorithm able to reproduce the quality of existing non-photorealistic rendering techniques, in particular ASCII art and image mosaics, obtaining large performance speed-ups. The basic idea is to partition the input image into a rectangular grid and use a decision tree to assign a tile from a pre-determined codebook to each cell. Our implementation can process video streams from webcams in real time and it is suitable for modestly equipped devices. We evaluate our technique by generating the renderings of a variety of images and videos, with good results. The source code of our engine is publicly available.
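
    The grid-partition-plus-codebook structure can be illustrated with a much simpler stand-in than the trained decision trees used in the paper: the sketch below assigns each cell a glyph from an assumed character ramp based on its mean brightness, which conveys the structure but not the speed or quality of the proposed method.

        import numpy as np

        RAMP = " .:-=+*#%@"   # hypothetical 10-glyph codebook, from sparse to dense

        def ascii_mosaic(gray, cell=(8, 4)):
            """gray: 2D array of intensities in [0, 1]; cell: (height, width) of one character cell."""
            h, w = gray.shape
            rows = []
            for y in range(0, h - cell[0] + 1, cell[0]):
                line = []
                for x in range(0, w - cell[1] + 1, cell[1]):
                    mean = gray[y:y + cell[0], x:x + cell[1]].mean()
                    # bright cell -> sparse glyph, assuming dark text on a light background
                    line.append(RAMP[int((1.0 - mean) * (len(RAMP) - 1))])
                rows.append("".join(line))
            return "\n".join(rows)

        # smoke test on a horizontal gradient
        print(ascii_mosaic(np.tile(np.linspace(0.0, 1.0, 64), (32, 1))))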

  • 38.
    Museth, Ken
    et al.
    Linköping University, Department of Science and Technology, Digital Media. Linköping University, The Institute of Technology.
    Breen, D.E.
    Drexel University.
    Whitaker, R.T.
    University of Utah.
    Mauch, S.
    California Institute of Technology.
    Johnson, D.
    University of Utah.
    Algorithms for interactive editing of level set models. 2005. In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 24, no 4, p. 821-841. Article in journal (Refereed)
    Abstract [en]

    Level set models combine a low-level volumetric representation, the mathematics of deformable implicit surfaces and powerful, robust numerical techniques to produce a novel approach to shape design. While these models offer many benefits, their large-scale representation and numerical requirements create significant challenges when developing an interactive system. This paper describes the collection of techniques and algorithms (some new, some pre-existing) needed to overcome these challenges and to create an interactive editing system for this new type of geometric model. We summarize the algorithms for producing level set input models and, more importantly, for localizing/minimizing computation during the editing process. These algorithms include distance calculations, scan conversion, closest point determination, fast marching methods, bounding box creation, fast and incremental mesh extraction, numerical integration and narrow band techniques. Together these algorithms provide the capabilities required for interactive editing of level set models. © The Eurographics Association and Blackwell Publishing Ltd 2005.
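
    One of the listed ingredients, the narrow band restriction, is easy to picture: during an edit only voxels close to the zero level set of the signed distance field need to be updated. The sketch below builds a toy distance field for a sphere and marks that band; the grid size and the three-voxel bandwidth are arbitrary choices, not values from the paper.

        import numpy as np

        def sphere_sdf(shape, center, radius):
            """Signed distance field of a sphere sampled on a regular grid (negative inside)."""
            grid = np.indices(shape).astype(float)
            dist = np.sqrt(sum((grid[i] - center[i]) ** 2 for i in range(3)))
            return dist - radius

        phi = sphere_sdf((64, 64, 64), center=(32, 32, 32), radius=20.0)
        band = np.abs(phi) < 3.0   # narrow band: the only voxels an editing step needs to touch
        print("active voxels:", int(band.sum()), "of", phi.size)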

  • 39.
    Parulek, Julius
    et al.
    University of Bergen, Norway.
    Jönsson, Daniel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Bruckner, Stefan
    University of Bergen, Norway.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Viola, Ivan
    University of Bergen, Norway; Vienna University of Technology, Austria.
    Continuous Levels-of-Detail and Visual Abstraction for Seamless Molecular Visualization. 2014. In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 33, no 6, p. 276-287. Article in journal (Refereed)
    Abstract [en]

    Molecular visualization is often challenged by the rendering of large molecular structures in real time. We introduce a novel approach that enables us to show even large protein complexes. Our method is based on the level-of-detail concept, where we exploit three different abstractions combined in one visualization. Firstly, molecular surface abstraction exploits three different surfaces, solvent-excluded surface (SES), Gaussian kernels and van der Waals spheres, combined as one surface by linear interpolation. Secondly, we introduce three shading abstraction levels and a method for creating seamless transitions between these representations. The SES representation with full shading and added contours stands in focus, while on the other side a sphere representation of a cluster of atoms with constant shading and without contours provides the context. Thirdly, we propose a hierarchical abstraction based on a set of clusters formed on molecular atoms. All three abstraction models are driven by one importance function classifying the scene into the near-, mid- and far-field. Moreover, we introduce a methodology to render the entire molecule directly using the A-buffer technique, which further improves the performance. The rendering performance is evaluated on a series of molecules of varying atom counts.

    Download full text (pdf)
    fulltext
  • 40.
    Sereno, Mickael
    et al.
    Université Paris-Saclay, CNRS, Inria, LISN, France.
    Gosset, Stéphane
    Université Paris-Saclay, CNRS, Inria, LISN, France.
    Besançon, Lonni
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Isenberg, Tobias
    Université Paris-Saclay, CNRS, Inria, LISN, France.
    Hybrid Touch/Tangible Spatial Selection in Augmented Reality. 2022. In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 41, no 3, p. 403-415. Article in journal (Refereed)
    Abstract [en]

    We study tangible touch tablets combined with Augmented Reality Head-Mounted Displays (AR-HMDs) to perform spatial 3D selections. We are primarily interested in the exploration of 3D unstructured datasets such as point clouds or volumetric datasets. AR-HMDs immerse users by showing datasets stereoscopically, and tablets provide a set of 2D exploration tools. Because AR-HMDs merge the visualization, interaction, and the users' physical spaces, users can also use the tablets as tangible objects in their 3D space. Nonetheless, the tablets' touch displays provide their own visualization and interaction spaces, separated from those of the AR-HMD. This raises several research questions compared to traditional setups. In this paper, we theorize, discuss, and study different available mappings for manual spatial selections using a tangible tablet within an AR-HMD space. We then study the use of this tablet within a 3D AR environment, compared to its use with a 2D external screen.

  • 41.
    Sidwall Thygesen, Signe
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Masood, Talha Bin
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Linares, Mathieu
    Linköping University, Department of Science and Technology, Laboratory of Organic Electronics. Linköping University, Faculty of Science & Engineering.
    Natarajan, Vijay
    Indian Inst Sci, India.
    Hotz, Ingrid
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Level of Detail Exploration of Electronic Transition Ensembles using Hierarchical Clustering. 2022. In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 41, no 3, p. 333-344. Article in journal (Refereed)
    Abstract [en]

    We present a pipeline for the interactive visual analysis and exploration of molecular electronic transition ensembles. Each ensemble member is specified by a molecular configuration, the charge transfer between two molecular states, and a set of physical properties. The pipeline is targeted towards theoretical chemists, supporting them in comparing and characterizing electronic transitions by combining automatic and interactive visual analysis. A quantitative feature vector characterizing the electron charge transfer serves as the basis for hierarchical clustering as well as for the visual representations. The interface for the visual exploration consists of four components. A dendrogram provides an overview of the ensemble. It is augmented with a level of detail glyph for each cluster. A scatterplot using dimensionality reduction provides a second visualization, highlighting ensemble outliers. Parallel coordinates show the correlation with physical parameters. A spatial representation of selected ensemble members supports an in-depth inspection of transitions in a form that is familiar to chemists. All views are linked and can be used to filter and select ensemble members. The usefulness of the pipeline is shown in three different case studies.
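
    The clustering stage of such a pipeline can be sketched with standard tools: hierarchical (agglomerative) clustering of per-member feature vectors, whose linkage matrix can also drive a dendrogram view. The eight-dimensional feature vectors below are random placeholders, not the paper's charge-transfer descriptor, and the choice of Ward linkage is an assumption.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster

        # hypothetical ensemble: 20 members, each described by an 8-dimensional feature vector
        rng = np.random.default_rng(0)
        features = np.vstack([rng.normal(loc=m, scale=0.3, size=(10, 8)) for m in (0.0, 2.0)])

        Z = linkage(features, method="ward")             # agglomerative clustering of the members
        labels = fcluster(Z, t=2, criterion="maxclust")  # cut the dendrogram into two clusters
        print(labels)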

    Download full text (pdf)
    fulltext
  • 42.
    Tsirikoglou, Apostolia
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Eilertsen, Gabriel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    A Survey of Image Synthesis Methods for Visual Machine Learning. 2020. In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 39, no 6, p. 426-451. Article in journal (Refereed)
    Abstract [en]

    Image synthesis designed for machine learning applications provides the means to efficiently generate large quantities of training data while controlling the generation process to provide the best distribution and content variety. With the demands of deep learning applications, synthetic data have the potential of becoming a vital component in the training pipeline. Over the last decade, a wide variety of training data generation methods has been demonstrated. The potential of future development calls to bring these together for comparison and categorization. This survey provides a comprehensive list of the existing image synthesis methods for visual machine learning. These are categorized in the context of image generation, using a taxonomy based on modelling and rendering, while a classification is also made concerning the computer vision applications in which they are used. We focus on the computer graphics aspects of the methods, to promote future image generation for machine learning. Finally, each method is assessed in terms of quality and reported performance, providing a hint of its expected learning potential. The report serves as a comprehensive reference, targeting both the application and the data development sides. A list of all methods and papers reviewed herein can be found at https://computergraphics.on.liu.se/image_synthesis_methods_for_visual_machine_learning/.

    Download full text (pdf)
    fulltext
  • 43.
    Unger, Jonas
    et al.
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Gustavson, Stefan
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Larsson, Per
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Free Form Incident Light Fields. 2008. In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 27, no 4, p. 1293-1301. Article in journal (Refereed)
    Abstract [en]

    This paper presents methods for photo-realistic rendering using strongly spatially variant illumination captured from real scenes. The illumination is captured along arbitrary paths in space using a high dynamic range (HDR) video camera system with position tracking. Light samples are rearranged into 4D incident light fields (ILF) suitable for direct use as illumination in renderings. Analysis of the captured data allows for estimation of the shape, position and spatial and angular properties of light sources in the scene. The estimated light sources can be extracted from the large 4D data set and handled separately to render scenes more efficiently and with higher quality. The ILF lighting can also be edited for detailed artistic control.
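
    The rearrangement of tracked HDR samples into a 4D incident light field can be pictured as binning by two spatial and two angular coordinates. The parametrization, resolutions and names below are simplifying assumptions for illustration, not the paper's representation.

        import numpy as np

        def build_ilf(samples, pos_bins=32, dir_bins=16):
            """Average radiance samples into a regular 4D incident light field grid.

            samples: iterable of (x, y, theta, phi, radiance) rows with the spatial and
            angular coordinates normalized to [0, 1].
            """
            def cell(v, bins):
                return min(int(v * bins), bins - 1)   # clamp v == 1.0 into the last bin

            ilf = np.zeros((pos_bins, pos_bins, dir_bins, dir_bins))
            count = np.zeros_like(ilf)
            for x, y, theta, phi, radiance in samples:
                idx = (cell(x, pos_bins), cell(y, pos_bins), cell(theta, dir_bins), cell(phi, dir_bins))
                ilf[idx] += radiance
                count[idx] += 1
            return ilf / np.maximum(count, 1)         # mean radiance per cell; empty cells stay zero

        # two hypothetical samples falling into the same 4D cell
        demo = [(0.1, 0.2, 0.5, 0.5, 3.0), (0.1, 0.2, 0.5, 0.5, 1.0)]
        print(build_ilf(demo)[3, 6, 8, 8])            # averaged radiance 2.0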

    Download full text (pdf)
    preprint
  • 44.
    Wang, Rui
    et al.
    University of Massachusetts.
    Åkerlund, Oskar
    Linköping University, Department of Computer and Information Science. Linköping University, The Institute of Technology.
    Bidirectional Importance Sampling for Unstructured Direct Illumination. 2009. In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 28, no 2, p. 269-278. Article in journal (Refereed)
    Abstract [en]

    Recent research in bidirectional importance sampling has focused primarily on structured illumination sources such as distant environment maps, while unstructured illumination has received little attention. In this paper, we present a method for bidirectional importance sampling of unstructured illumination, allowing us to use the same method for sampling both distant as well as local/indirect sources. Building upon recent work in [WFA*05], we model complex illumination as a large set of point lights. The subsequent sampling process draws samples only from this point set. We start by constructing a piecewise constant approximation for the lighting using an illumination cut [CPWAP08]. We show that this cut can be used directly for illumination importance sampling. We then use BRDF importance sampling followed by sample counting to update the cut, resulting in a bidirectional distribution that closely approximates the product of the illumination and BRDF. Drawing visibility samples from this new distribution significantly reduces the sampling variance. As a main advance over previous work, our method allows for unstructured sources, including arbitrary local direct lighting and one-bounce of indirect lighting.
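
    The final step, drawing visibility samples from a distribution that approximates the product of illumination and BRDF over the point lights, can be sketched as below. The cut construction and sample-counting update from the paper are not reproduced, and the light intensities and BRDF weights are made-up numbers.

        import numpy as np

        def sample_lights(intensity, brdf_weight, n_samples, rng=np.random.default_rng()):
            """Draw point-light indices with probability proportional to intensity * brdf_weight.

            Returns (indices, pdf values) so an estimator can divide each sampled light's
            contribution by its selection probability.
            """
            weights = intensity * brdf_weight
            pdf = weights / weights.sum()
            idx = rng.choice(len(pdf), size=n_samples, p=pdf)
            return idx, pdf[idx]

        # toy product distribution over five hypothetical point lights
        intensity = np.array([5.0, 1.0, 0.2, 3.0, 0.1])
        brdf = np.array([0.1, 0.9, 0.5, 0.4, 0.05])
        idx, p = sample_lights(intensity, brdf, n_samples=8)
        print(idx, p)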

  • 45.
    Wang, Xiyao
    et al.
    INRIA, France; Univ Paris Saclay, France.
    Besancon, Lonni
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Ammi, Mehdi
    Univ Paris 08, France.
    Isenberg, Tobias
    INRIA, France.
    Augmenting Tactile 3D Data Navigation With Pressure Sensing. 2019. In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 38, no 3, p. 635-647. Article in journal (Refereed)
    Abstract [en]

    We present a pressure-augmented tactile 3D data navigation technique, specifically designed for small devices, motivated by the need to support interactive visualization beyond traditional workstations. While touch input has been studied extensively on large screens, current techniques do not scale to small and portable devices. We use phone-based pressure sensing with a binary mapping to separate interaction degrees of freedom (DOF) and thus allow users to easily select different manipulation schemes (e.g., users first perform only rotation and then use a simple pressure input to switch to translation). We compare our technique to traditional 3D-RST (rotation, scaling, translation) using a docking task in a controlled experiment. The results show that our technique increases the accuracy of interaction, with limited impact on speed. We discuss the implications for 3D interaction design and verify that our results extend to older devices with pseudo pressure and are valid in realistic phone usage scenarios.
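
    The binary pressure mapping amounts to a simple mode switch. The sketch below assumes a normalized pressure reading and an illustrative threshold, not the calibrated values or the full DOF separation scheme from the study.

        def manipulation_mode(pressure, threshold=0.5):
            """Binary mapping from a normalized pressure reading in [0, 1] to a manipulation scheme.

            Below the threshold, touch gestures drive rotation; at or above it, they drive translation.
            The threshold value is an arbitrary placeholder.
            """
            return "translate" if pressure >= threshold else "rotate"

        for p in (0.1, 0.49, 0.51, 0.9):
            print(p, "->", manipulation_mode(p))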

  • 46.
    Yan, Lin
    et al.
    Scientific Computing and Imaging Institute, University of Utah, USA.
    Masood, Talha Bin
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Sridharamurthy, Raghavendra
    Department of Computer Science and Automation, Indian Institute of Science Bangalore, India.
    Rasheed, Farhan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Natarajan, Vijay
    Department of Computer Science and Automation, Indian Institute of Science Bangalore, India.
    Hotz, Ingrid
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Wang, Bei
    Scientific Computing and Imaging Institute, University of Utah, USA.
    Scalar Field Comparison with Topological Descriptors: Properties and Applications for Scientific Visualization. 2021. In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 40, no 3, p. 599-633. Article in journal (Refereed)
    Abstract [en]

    In topological data analysis and visualization, topological descriptors such as persistence diagrams, merge trees, contour trees, Reeb graphs, and Morse–Smale complexes play an essential role in capturing the shape of scalar field data. We present a state-of-the-art report on scalar field comparison using topological descriptors. We provide a taxonomy of existing approaches based on visualization tasks associated with three categories of data: single fields, time-varying fields, and ensembles. These tasks include symmetry detection, periodicity detection, key event/feature detection, feature tracking, clustering, and structure statistics. Our main contributions include the formulation of a set of desirable mathematical and computational properties of comparative measures, and the classification of visualization tasks and applications that are enabled by these measures.

    Download full text (pdf)
    fulltext
  • 47.
    Zohrevandi, Elmira
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Westin, Carl
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Lundberg, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Design and Evaluation Study of Visual Analytics Decision Support Tools in Air Traffic Control. 2022. In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 41, no 1, p. 230-242. Article in journal (Refereed)
    Abstract [en]

    Operators in air traffic control facing time- and safety-critical situations call for efficient, reliable and robust real-time processing and interpretation of complex data. Automation support tools aid controllers in these processes to prevent separation losses between aircraft. Issues of current support tools include limited what-if and what-else probe functionalities in relation to vertical solutions. This work presents the design and evaluation of two visual analytics interfaces that promote contextual awareness and support what-if and what-else probes in the spatio-temporal domain, aiming to improve information integration and support controllers in prioritising conflict resolution. Both interfaces visualize vertical solution spaces against a time-altitude graph. The main contributions of this paper are: (a) the presentation of two interfaces for supporting conflict solving; (b) the novel representation of how vertical information and aircraft rate of climb and descent affect conflicts, and (c) an evaluation and comparison of the interfaces with a traditional air traffic control support system. The evaluation study was performed with domain experts to compare the effects of visualization concepts on operator engagement in processing solutions suggested by the tools. Results show that the visualizations support the operators' ability to understand and resolve conflicts. Based on the results, general design guidelines for time-critical domains are proposed.

    Download full text (pdf)
    fulltext
  • 48.
    Zohrevandi, Elmira
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Westin, Carl
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Vrotsou, Katerina
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Lundberg, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Exploring Effects of Ecological Visual Analytics Interfaces on Experts' and Novices' Decision-Making Processes: A Case Study in Air Traffic Control. 2022. In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 41, no 3, p. 453-464. Article in journal (Refereed)
    Abstract [en]

    Operational demands in safety-critical systems impose a risk of failure on operators, especially during urgent situations. Operators of safety-critical systems learn to make decisions effectively throughout extensive training programs and many years of experience. In the domain of air traffic control, expensive training with high dropout rates calls for research to enhance novices' ability to detect and resolve conflicts in the airspace. While previous researchers have mostly focused on redesigning training instructions and programs, the current paper explores possible benefits of novel visual representations to improve novices' understanding of the situations as well as their decision-making process. We conduct an experimental evaluation study testing two ecological visual analytics interfaces, developed in a previous study, as support systems to facilitate novice decision-making. The main contribution of this paper is threefold. First, we describe the application of an ecological interface design approach to the development of two visual analytics interfaces. Second, we perform a human-in-the-loop experiment with forty-five novices within a simplified air traffic control simulation environment. Third, by performing an expert-novice comparison we investigate the extent to which effects of the proposed interfaces can be attributed to the subjects' expertise. The results show that the proposed ecological visual analytics interfaces improved novices' understanding of the information about conflicts as well as their problem-solving performance. Further, the results show that the beneficial effects of the proposed interfaces were more attributable to the visual representations than to the users' expertise.

    Download full text (pdf)
    fulltext