liu.se - Search for publications in DiVA
51 - 94 of 94
  • 51.
    Lindstrand, Mikael
    Linköping University, Department of Science and Technology, Digital Media. Linköping University, The Institute of Technology. GonioLabs, Åkroken Science Park, Sundsvall Sweden.
    Forensic DOVID reader, bridging 1st, 2nd and 3rd line inspection2010In: Optical Document Security II: Conference on Optical Security and Counterfeit Deterrence 2010, Reconnaissance International , 2010, p. 347-356Conference paper (Refereed)
    Abstract [en]

    GonioLabs’ DOVID Reader (GDR) provides a spatially resolved trichromatic goniophotometric characterization. The present work discusses how the GDR quantifies optical features and differences between them (defects), as appreciated by perceptual evaluation. GDR advantages include a) evaluating different DOVID suppliers, b) gaining a more detailed understanding of deterioration due to circulation, and c) forensic evaluation of different groups of counterfeits potentially originating from the same production equipment.

  • 52.
    Lindstrand, Mikael
    GonioLabs, Åkroken Science Park, Sundsvall Sweden.
    Spatially and angularly resolved high dynamic range reflectance measurements for forensic document inspection2008In: Optical Document Security: Conference on Optical Security and Counterfeit Deterrence 2008, Reconnaissance International , 2008, p. 223-235Conference paper (Refereed)
    Abstract [en]

    Detailed optical characterization of the relevant features of optically variable devices (OVDs) is in general a challenge. In addition to the generally high spatial and angular resolution required, the high dynamic range of reflection, spanning high-intensity specular reflections and low-intensity reflections at other geometries, makes the metrology even more demanding. A trichromatic reflectance measurement service, recently made commercially available, addresses the stated challenges. The resulting data may be described as a spatially resolved trichromatic goniophotometric characterization. The massive data set generated may be approached either through developed visualization tools (static images and dynamic videos) or processed mathematically into higher-order characterizations. The number of potentially relevant applications is abundant; one example is a detailed angular characterization of the iridescent color shifts: how the iridescence function is distorted by the topography of the underlying paper. The application is demanding in part because the analysis involves spatial and angular dimensions over a high dynamic range of reflectance (specular and non-specular). Besides OVDs, there are other optical document security functions that are challenging to characterize in detail, and for which this novel service may prove a useful tool for professionals in the business.

  • 53.
    Lindstrand, Mikael
    et al.
    Linköping University, Department of Science and Technology. Linköping University, The Institute of Technology. Skogsindustrins tekniska forskningsinstitut.
    Kruse, Björn
    Linköping University, Department of Science and Technology, Digital Media. Linköping University, The Institute of Technology.
    Information Capacity Revisited - Reflections on Print Quality2000In: Advances in Printing Science and Technology / [ed] J. A. Bristow, Leatherhead, UK: Pira International , 2000, Vol. 26, p. 175-184Conference paper (Refereed)
    Abstract [en]

    The advent of digital printing stresses the importance of quality measurement. Print quality measures that give an objective value have long been sought. The aim is to be able to describe objectively the properties of print on paper so that conclusions regarding the apparent quality can be drawn. In this paper, we discuss the measures proposed in the literature and relate them to the subjective quality. Several years ago, the concept of ‘information capacity’ was proposed as such a measure. Some reflections are made in the light of coding and telecommunication theory, areas in which information capacity is a vital tool. On the other hand, for the print quality applications, the interest in information capacity has in fact been decreasing. An obvious question therefore arises and is treated in this work: Why have we seen so few applications of this theory for practical applications in the area of print? Is the area of print quality so very different from the areas of image compression and telecommunication?
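    For orientation, the ‘information capacity’ measure borrowed from telecommunication theory rests on Shannon's channel capacity. The sketch below is a minimal illustration of that formula applied to a print raster; the dot density and signal-to-noise ratio are invented for the example and are not taken from the paper.

    ```python
    import math

    def print_capacity_bits_per_mm2(dots_per_mm: float, snr_linear: float) -> float:
        """Shannon capacity per unit area for a hypothetical print 'channel':
        each printable dot is treated as one Gaussian channel use (illustrative
        only; real print-quality models are considerably more involved)."""
        bits_per_dot = 0.5 * math.log2(1.0 + snr_linear)
        return dots_per_mm ** 2 * bits_per_dot

    # Assumed values: 600 dpi is roughly 23.6 dots/mm; an SNR of 100 (20 dB).
    print(print_capacity_bits_per_mm2(23.6, 100.0))
    ```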

  • 54.
    Ljung, Patric
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Winskog, Calle
    Persson, Anders
    Linköping University, Department of Medical and Health Sciences, Division of Radiological Sciences. Linköping University, Faculty of Medicine and Health Sciences. Linköping University, Center for Medical Image Science and Visualization (CMIV). Östergötlands Läns Landsting, Center for Diagnostics, Department of Radiology in Linköping.
    Lundström, Claes
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Full Body Virtual Autopsies using a State-of-the-art Volume Rendering Pipeline2006In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, Vol. 12, no 5, p. 869-876Article in journal (Refereed)
    Abstract [en]

    This paper presents a procedure for virtual autopsies based on interactive 3D visualizations of large scale, high resolution data from CT-scans of human cadavers. The procedure is described using examples from forensic medicine, and the added value and future potential of virtual autopsies are shown from a medical and forensic perspective. Based on the technical demands of the procedure, state-of-the-art volume rendering techniques are applied and refined to enable real-time, full body virtual autopsies involving gigabyte sized data on standard GPUs. The techniques applied include transfer function based data reduction using level-of-detail selection and multi-resolution rendering techniques. The paper also describes a data management component for large, out-of-core data sets and an extension to the GPU-based raycaster for efficient dual TF rendering. Detailed benchmarks of the pipeline are presented using data sets from forensic cases.
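    As a rough illustration of the transfer function based compositing such a pipeline performs, here is a minimal CPU ray caster; this is a generic sketch, not the paper's GPU implementation, and the blob volume and ramp transfer function are invented.

    ```python
    import numpy as np

    def raycast(volume, tf, origin, direction, step=0.5, n_steps=256):
        """Composite one ray front-to-back through a scalar volume.
        volume: 3D float array in [0,1]; tf: maps a scalar to (r,g,b,a)."""
        color, alpha = np.zeros(3), 0.0
        pos = np.asarray(origin, dtype=float)
        d = np.asarray(direction, dtype=float)
        d /= np.linalg.norm(d)
        for _ in range(n_steps):
            idx = tuple(np.clip(pos.astype(int), 0, np.array(volume.shape) - 1))
            r, g, b, a = tf(volume[idx])
            color += (1.0 - alpha) * a * np.array([r, g, b])  # front-to-back blend
            alpha += (1.0 - alpha) * a
            if alpha > 0.99:          # early ray termination
                break
            pos += step * d
        return color, alpha

    # Hypothetical data: a Gaussian blob volume and a simple ramp transfer function.
    x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
    vol = np.exp(-4 * (x**2 + y**2 + z**2))
    tf = lambda s: (s, s * 0.5, 1 - s, s * 0.05)
    print(raycast(vol, tf, origin=(0, 0, 0), direction=(1, 1, 1)))
    ```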

  • 55.
    Lundberg, Lukas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Art Directed Fluid Flow With Secondary Water Effects2012Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This thesis describes methods for adding secondary water effects such as spray, foam, splashes and mist to a fluid simulation system. For art-directed control over the base fluid flow, a Fluid Implicit Particle solver with custom fields is also presented. The methods build upon production techniques within the visual effects industry, fluid dynamics and relevant computer graphics research. The methods are implemented in Side Effects Software's Houdini.
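    A common production heuristic for seeding secondary effects of this kind is to emit spray where fluid particles are fast and close to the liquid surface. The sketch below illustrates that general idea only; the thesis' actual emission criteria are not given in the abstract, and the thresholds are invented.

    ```python
    import numpy as np

    def emit_spray(positions, velocities, surface_dist, speed_min=4.0, band=0.5):
        """Return positions of new spray particles: fluid particles within a
        thin band of the liquid surface and moving fast (heuristic only)."""
        speed = np.linalg.norm(velocities, axis=1)
        near_surface = np.abs(surface_dist) < band   # |signed distance| small
        energetic = speed > speed_min
        return positions[near_surface & energetic].copy()

    # Hypothetical particle state.
    rng = np.random.default_rng(0)
    pos = rng.uniform(-1, 1, size=(1000, 3))
    vel = rng.normal(0, 3, size=(1000, 3))
    sdf = rng.normal(0, 1, size=1000)   # signed distance to the liquid surface
    print(len(emit_spray(pos, vel, sdf)))
    ```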

  • 56.
    Löwgren, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Technical communication practices in the collaborative mediascape: A case study in media structure transformation2016In: Communication Design Quarterly Review, ISSN 2166-1200, Vol. 4, no 3, p. 20-25Article in journal (Refereed)
    Abstract [en]

    Professional practices in technical communication are increasingly being challenged by the emergence of collaborative media that enable users to access technical information created by non-professionals. At the same time, these technologies also allow technical communicators to provide a continually expanding audience with knowledge and skills needed now more than ever. Through a co-design case study, researchers developed a new and innovative platform for producing and distributing technical information including user-generated content. Moreover, the events of the case included market strategies in which a professional organization moved from a reactive to a more proactive position on collaborative media. In so doing, they outlined a set of new professional roles for technical communicators including editors, curators, facilitators, and community managers.

  • 57.
    Manker, Jon
    et al.
    Södertörns högskola, Institutionen för kommunikation, medier och IT, Medieteknik.
    Arvola, Mattias
    Linköping University, Department of Computer and Information Science, MDALAB - Human Computer Interfaces. Linköping University, The Institute of Technology.
    Prototyping in game design: Externalization and internalization of game ideas2011In: HCI 2011: Health, Wealth & Happiness: The 25th BCS Conference on Human-Computer Interaction. Newcastle Upon Tyne, UK, July 4-8, 2011., 2011Conference paper (Refereed)
    Abstract [en]

    Prototyping is a well-studied activity for interaction designers, but its role in computer game design is relatively unexplored. The aim of this study is to shed light on prototyping in game design. Interviews were conducted with 27 game designers. The empirical data was structured using qualitative content analysis and analysed using the design version of The Activity Checklist. The analysis indicated that six categories of the checklist were significant for the data obtained. These categories are presented in relation to the data. The roles of externalization and internalization are specifically highlighted.

  • 58.
    Manuylova, Ekaterina
    Linköping University, Department of Science and Technology. Linköping University, The Institute of Technology.
    Investigations of stereo setup for Kinect2012Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The main purpose of this work is to investigate the behavior of the recently released Microsoft Kinect sensor, whose properties go beyond those of ordinary cameras. Normally, two cameras are required to create a 3D reconstruction of a scene, whereas the Kinect device, thanks to its infrared projector and sensor, allows the same type of reconstruction using only one device. However, the depth images generated by the infrared laser projector and monochrome sensor in the Kinect can contain undefined values. Therefore, in addition to other investigations, this project presents an idea for how to improve the quality of the depth images. The main aim of this work, however, is to perform a reconstruction of the scene based on the color images from a pair of Kinects, which is compared with the results generated using depth information from one Kinect. In addition, the report describes how to verify that all the performed calculations were done correctly. All the algorithms used in the project, as well as the achieved results, are described and discussed in separate chapters of this report.
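    One simple way to repair undefined depth values of the kind the abstract mentions is to replace each invalid pixel with the median of its valid neighbours. This is a generic sketch, not necessarily the thesis' method.

    ```python
    import numpy as np

    def fill_depth_holes(depth, invalid=0, radius=2, passes=3):
        """Fill invalid pixels (value == invalid, as in raw Kinect depth maps)
        with the median of valid pixels in a (2*radius+1)^2 window."""
        d = depth.astype(float)
        for _ in range(passes):
            holes = np.argwhere(d == invalid)
            if holes.size == 0:
                break
            filled = d.copy()
            for y, x in holes:
                win = d[max(0, y - radius):y + radius + 1,
                        max(0, x - radius):x + radius + 1]
                valid = win[win != invalid]
                if valid.size:
                    filled[y, x] = np.median(valid)
            d = filled
        return d

    # Hypothetical 5x5 depth map with two holes (0 = undefined).
    demo = np.array([[800, 805,   0, 810, 812],
                     [801,   0, 807, 811, 813],
                     [803, 806, 808, 812, 814],
                     [804, 807, 809, 813, 815],
                     [805, 808, 810, 814, 816]])
    print(fill_depth_holes(demo))
    ```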

  • 59.
    Miandji, Ehsan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Sparse representation of visual data for compression and compressed sensing2018Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The ongoing advances in computational photography have introduced a range of new imaging techniques for capturing multidimensional visual data such as light fields, BRDFs, BTFs, and more. A key challenge inherent to such imaging techniques is the large amount of high dimensional visual data that is produced, often requiring GBs, or even TBs, of storage. Moreover, the utilization of these datasets in real time applications poses many difficulties due to the large memory footprint. Furthermore, the acquisition of large-scale visual data is very challenging and expensive in most cases. This thesis makes several contributions with regard to acquisition, compression, and real time rendering of high dimensional visual data in computer graphics and imaging applications.

    Contributions of this thesis reside on the strong foundation of sparse representations. Numerous applications are presented that utilize sparse representations for compression and compressed sensing of visual data. Specifically, we present a single sensor light field camera design, a compressive rendering method, a real time precomputed photorealistic rendering technique, light field (video) compression and real time rendering, compressive BRDF capture, and more. Another key contribution of this thesis is a general framework for compression and compressed sensing of visual data, regardless of the dimensionality. As a result, any type of discrete visual data with arbitrary dimensionality can be captured, compressed, and rendered in real time.

    This thesis makes two theoretical contributions. In particular, uniqueness conditions for recovering a sparse signal under an ensemble of multidimensional dictionaries are presented. The theoretical results discussed here are useful for designing efficient capturing devices for multidimensional visual data. Moreover, we derive the probability of successful recovery of a noisy sparse signal using OMP, one of the most widely used algorithms for solving compressed sensing problems.
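    Since several of the included papers analyze Orthogonal Matching Pursuit, a minimal textbook implementation of OMP may help as a reference; this is the standard greedy algorithm, not the papers' analysis code.

    ```python
    import numpy as np

    def omp(D, y, sparsity):
        """Orthogonal Matching Pursuit: greedily select atoms of dictionary D
        (columns, assumed unit norm) to approximate y with given sparsity."""
        residual = y.copy()
        support = []
        for _ in range(sparsity):
            j = int(np.argmax(np.abs(D.T @ residual)))    # most correlated atom
            support.append(j)
            Ds = D[:, support]
            x_s, *_ = np.linalg.lstsq(Ds, y, rcond=None)  # re-fit on support
            residual = y - Ds @ x_s
        x = np.zeros(D.shape[1])
        x[support] = x_s
        return x

    # Synthetic test: recover a 3-sparse vector from a random dictionary.
    rng = np.random.default_rng(1)
    D = rng.normal(size=(64, 256))
    D /= np.linalg.norm(D, axis=0)
    x_true = np.zeros(256); x_true[[10, 99, 200]] = [1.0, -2.0, 0.5]
    y = D @ x_true
    print(np.nonzero(omp(D, y, 3))[0])   # expect [ 10  99 200]
    ```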

    List of papers
    1. OMP-based DOA estimation performance analysis
    2018 (English)In: Digital signal processing (Print), ISSN 1051-2004, E-ISSN 1095-4333, Vol. 79, p. 57-65Article in journal (Refereed) Published
    Abstract [en]

    In this paper, we present a new performance guarantee for Orthogonal Matching Pursuit (OMP) in the context of the Direction Of Arrival (DOA) estimation problem. For the first time, the effect of parameters such as sensor array configuration, as well as signal to noise ratio and dynamic range of the sources is thoroughly analyzed. In particular, we formulate a lower bound for the probability of detection and an upper bound for the estimation error. The proposed performance guarantee is further developed to include the estimation error as a user-defined parameter for the probability of detection. Numerical results show acceptable correlation between theoretical and empirical simulations. (C) 2018 Elsevier Inc. All rights reserved.

    Place, publisher, year, edition, pages
    ACADEMIC PRESS INC ELSEVIER SCIENCE, 2018
    Keywords
    Direction of arrival; Orthogonal Matching Pursuit (OMP); Mutual coherence; Array configuration
    National Category
    Signal Processing
    Identifiers
    urn:nbn:se:liu:diva-149841 (URN)10.1016/j.dsp.2018.04.006 (DOI)000437386200006 ()
    Available from: 2018-08-02 Created: 2018-08-02 Last updated: 2018-11-23
    2. On Probability of Support Recovery for Orthogonal Matching Pursuit Using Mutual Coherence
    2017 (English)In: IEEE Signal Processing Letters, ISSN 1070-9908, E-ISSN 1558-2361, Vol. 24, no 11, p. 1646-1650Article in journal (Refereed) Published
    Abstract [en]

    In this paper we present a new coherence-based performance guarantee for the Orthogonal Matching Pursuit (OMP) algorithm. A lower bound for the probability of correctly identifying the support of a sparse signal with additive white Gaussian noise is derived. Compared to previous work, the new bound takes into account the signal parameters such as dynamic range, noise variance, and sparsity. Numerical simulations show significant improvements over previous work and a closer match to empirically obtained results of the OMP algorithm.

    Place, publisher, year, edition, pages
    IEEE Signal Processing Society, 2017
    Keywords
    Compressed Sensing (CS), Sparse Recovery, Orthogonal Matching Pursuit (OMP), Mutual Coherence
    National Category
    Signal Processing
    Identifiers
    urn:nbn:se:liu:diva-141613 (URN)10.1109/LSP.2017.2753939 (DOI)000412501600001 ()
    Available from: 2017-10-03 Created: 2017-10-03 Last updated: 2018-11-23Bibliographically approved
    3. ON NONLOCAL IMAGE COMPLETION USING AN ENSEMBLE OF DICTIONARIES
    2016 (English)In: 2016 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), IEEE , 2016, p. 2519-2523Conference paper, Published paper (Refereed)
    Abstract [en]

    In this paper we consider the problem of nonlocal image completion from random measurements and using an ensemble of dictionaries. Utilizing recent advances in the field of compressed sensing, we derive conditions under which one can uniquely recover an incomplete image with overwhelming probability. The theoretical results are complemented by numerical simulations using various ensembles of analytical and training-based dictionaries.

    Place, publisher, year, edition, pages
    IEEE, 2016
    Series
    IEEE International Conference on Image Processing ICIP, ISSN 1522-4880
    Keywords
    compressed sensing; image completion; nonlocal; inverse problems; uniqueness conditions
    National Category
    Computer Vision and Robotics (Autonomous Systems)
    Identifiers
    urn:nbn:se:liu:diva-134107 (URN)10.1109/ICIP.2016.7532813 (DOI)000390782002114 ()978-1-4673-9961-6 (ISBN)
    Conference
    23rd IEEE International Conference on Image Processing (ICIP)
    Available from: 2017-01-22 Created: 2017-01-22 Last updated: 2018-11-23
    4. Compressive Image Reconstruction in Reduced Union of Subspaces
    2015 (English)In: Computer Graphics Forum, ISSN 1467-8659, Vol. 34, no 2, p. 33-44Article in journal (Refereed) Published
    Abstract [en]

    We present a new compressed sensing framework for reconstruction of incomplete and possibly noisy images and their higher dimensional variants, e.g. animations and light-fields. The algorithm relies on a learning-based basis representation. We train an ensemble of intrinsically two-dimensional (2D) dictionaries that operate locally on a set of 2D patches extracted from the input data. We show that one can convert the problem of 2D sparse signal recovery to an equivalent 1D form, enabling us to utilize a large family of sparse solvers. The proposed framework represents the input signals in a reduced union of subspaces model, while allowing sparsity in each subspace. Such a model leads to a much more sparse representation than widely used methods such as K-SVD. To evaluate our method, we apply it to three different scenarios where the signal dimensionality varies from 2D (images) to 3D (animations) and 4D (light-fields). We show that our method outperforms state-of-the-art algorithms in computer graphics and image processing literature.
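    The 2D-to-1D conversion mentioned above is presumably the standard Kronecker identity vec(D1 X D2^T) = (D2 ⊗ D1) vec(X); the snippet below checks it numerically with generic linear algebra (the dictionary sizes are invented).

    ```python
    import numpy as np

    # Identity used to recast 2D sparse recovery in an equivalent 1D form:
    #   vec(D1 @ X @ D2.T) == np.kron(D2, D1) @ vec(X)
    # where vec() stacks columns (Fortran order).
    rng = np.random.default_rng(0)
    D1 = rng.normal(size=(8, 12))   # dictionary acting on patch rows
    D2 = rng.normal(size=(8, 12))   # dictionary acting on patch columns
    X = rng.normal(size=(12, 12))   # 2D coefficient matrix (dense here)

    lhs = (D1 @ X @ D2.T).flatten(order="F")
    rhs = np.kron(D2, D1) @ X.flatten(order="F")
    print(np.allclose(lhs, rhs))    # True
    ```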

    Place, publisher, year, edition, pages
    John Wiley & Sons Ltd, 2015
    Keywords
    Image reconstruction, compressed sensing, light field imaging
    National Category
    Signal Processing
    Identifiers
    urn:nbn:se:liu:diva-119639 (URN)10.1111/cgf.12539 (DOI)000358326600008 ()
    Conference
    Eurographics 2015
    Projects
    VPS
    Funder
    Swedish Foundation for Strategic Research, IIS11-0081
    Available from: 2015-06-23 Created: 2015-06-23 Last updated: 2018-11-23Bibliographically approved
    5. Learning Based Compression of Surface Light Fields for Real-time Rendering of Global Illumination Scenes
    2013 (English)In: Proceedings of ACM SIGGRAPH ASIA 2013, ACM Press, 2013Conference paper, Published paper (Refereed)
    Abstract [en]

    We present an algorithm for compression and real-time rendering of surface light fields (SLF) encoding the visual appearance of objects in static scenes with high frequency variations. We apply a non-local clustering in order to exploit spatial coherence in the SLF data. To efficiently encode the data in each cluster, we introduce a learning based approach, Clustered Exemplar Orthogonal Bases (CEOB), which trains a compact dictionary of orthogonal basis pairs, enabling efficient sparse projection of the SLF data. In addition, we discuss the application of the traditional Clustered Principal Component Analysis (CPCA) on SLF data, and show that in most cases, CEOB outperforms CPCA, K-SVD and spherical harmonics in terms of memory footprint, rendering performance and reconstruction quality. Our method enables efficient reconstruction and real-time rendering of scenes with complex materials and light sources, not possible to render in real-time using previous methods.

    Place, publisher, year, edition, pages
    ACM Press, 2013
    Keywords
    computer graphics, global illumination, real-time, machine learning
    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-99433 (URN)10.1145/2542355.2542385 (DOI)978-1-4503-2629-2 (ISBN)
    Conference
    SIGGRAPH Asia, 19-22 November 2013, Hong Kong
    Projects
    VPS
    Funder
    Swedish Foundation for Strategic Research, IIS11-0081; Swedish Research Council
    Available from: 2013-10-17 Created: 2013-10-17 Last updated: 2018-11-23Bibliographically approved
  • 60.
    Miandji, Ehsan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Surface Light Field Generation, Compression and Rendering2012Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    We present a framework for generating, compressing and rendering of Surface Light Field (SLF) data. Our method is based on radiance data generated using physically based rendering methods. Thus the SLF data is generated directly instead of re-sampling digital photographs. Our SLF representation decouples spatial resolution from geometric complexity. We achieve this by uniform sampling of the spatial dimension of the SLF function. For compression, we use Clustered Principal Component Analysis (CPCA). The SLF matrix is first clustered to low frequency groups of points across all directions. Then we apply PCA to each cluster. The clustering ensures that the within-cluster frequency of data is low, allowing for projection using a few principal components. Finally we reconstruct the CPCA encoded data using an efficient rendering algorithm. Our reconstruction technique ensures seamless reconstruction of discrete SLF data. We applied our rendering method for fast, high quality off-line rendering and real-time illumination of static scenes. The proposed framework is not limited by complexity of materials or light sources, enabling us to render high quality images describing the full global illumination in a scene.
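    The CPCA step described above follows a simple pattern: cluster the rows of the SLF matrix, then fit a truncated PCA basis within each cluster. Below is a generic sketch under those assumptions, using plain k-means and per-cluster SVD; the cluster and component counts are invented, and this is not the thesis' implementation.

    ```python
    import numpy as np

    def cpca_compress(M, n_clusters=8, n_components=4, iters=20, seed=0):
        """Clustered PCA: k-means on the rows of M, then a truncated PCA basis
        per cluster. Returns per-cluster (mean, basis, coefficients, row ids)."""
        rng = np.random.default_rng(seed)
        centers = M[rng.choice(len(M), n_clusters, replace=False)]
        for _ in range(iters):                      # plain k-means
            d = ((M[:, None, :] - centers[None]) ** 2).sum(-1)
            labels = d.argmin(1)
            for k in range(n_clusters):
                if (labels == k).any():
                    centers[k] = M[labels == k].mean(0)
        compressed = []
        for k in range(n_clusters):
            rows = np.where(labels == k)[0]
            A = M[rows] - centers[k]
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            B = Vt[:n_components]                   # PCA basis of the cluster
            compressed.append((centers[k], B, A @ B.T, rows))
        return compressed

    def cpca_reconstruct(compressed, shape):
        out = np.empty(shape)
        for mean, B, C, rows in compressed:
            out[rows] = mean + C @ B
        return out

    M = np.random.default_rng(1).normal(size=(256, 32))   # toy SLF matrix
    rec = cpca_reconstruct(cpca_compress(M), M.shape)
    print(np.abs(M - rec).mean())
    ```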

  • 61.
    Miandji, Ehsan
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Geometry Independent Surface Light Fields for Real Time Rendering of Precomputed Global Illumination2011In: Proceedings of SGRAD 2011 / [ed] Thomas Larsson, Lars Kjelldahl, Kai-Mikael Jää-Aro, Royal Institute of Technology, Stockholm, 2011, p. 27-34Conference paper (Refereed)
    Abstract [en]

    We present a framework for generating, compressing and rendering of Surface Light Field (SLF) data. Our method is based on radiance data generated using physically based rendering methods. Thus the SLF data is generated directly instead of re-sampling digital photographs. Our SLF representation decouples spatial resolution from geometric complexity. We achieve this by uniform sampling of the spatial dimension of the SLF function. For compression, we use Clustered Principal Component Analysis (CPCA). The SLF matrix is first clustered to low frequency groups of points across all directions. Then we apply PCA to each cluster. The clustering ensures that the within-cluster frequency of data is low, allowing for projection using a few principal components. Finally we reconstruct the CPCA encoded data using an efficient rendering algorithm. Our reconstruction technique ensures seamless reconstruction of discrete SLF data. We applied our rendering method for fast, high quality off-line rendering and real-time illumination of static scenes. The proposed framework is not limited by complexity of materials or light sources, enabling us to render high quality images describing the full global illumination in a scene.

  • 62.
    Miandji, Ehsan
    et al.
    Sharif University of Technology, Tehran, Iran.
    Sargazi Moghaddam, Mohammad Hadi
    Sharif University of Technology, Tehran, Iran.
    Samavati, Faramarz
    University of Calgary, Calgary, Alberta, Canada.
    Emadi, Mohammad
    Sharif University of Technology, Tehran, Iran.
    Real-time multi-band synthesis of ocean water with new iterative up-sampling technique2009In: The Visual Computer, ISSN 0178-2789, E-ISSN 1432-2315, Vol. 25, no 5-7, p. 697-705Article in journal (Refereed)
    Abstract [en]

    Adapting natural phenomena rendering for real-time applications has become a common practice in computer graphics. We propose a GPU-based multi-band method for optimized synthesis of “far from coast” ocean waves using an empirical Fourier domain model. Instead of performing two independent syntheses for low- and high-band frequencies of ocean waves, we perform only low-band synthesis and employ the results to reproduce high frequency details of the ocean surface by an optimized iterative up-sampling stage. Our experimental results show that this approach greatly improves the performance of the original multi-band synthesis while maintaining image quality.
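    For context, empirical Fourier-domain ocean synthesis typically samples a wave spectrum (e.g. Phillips) and inverse-FFTs it into a height field. The sketch below shows that low-band step plus exact zero-padding up-sampling in the Fourier domain; it is a stand-in only, since the paper's contribution is a different, optimized iterative up-sampling technique.

    ```python
    import numpy as np

    def phillips(kx, ky, wind=(30.0, 0.0), A=1e-4, g=9.81):
        """Phillips spectrum: empirical energy of wind-driven ocean waves."""
        k2 = kx**2 + ky**2 + 1e-12
        L = (wind[0]**2 + wind[1]**2) / g       # largest wave from wind speed
        wk = (kx * wind[0] + ky * wind[1]) / np.sqrt(k2) / np.hypot(*wind)
        return A * np.exp(-1.0 / (k2 * L**2)) / k2**2 * wk**2

    def ocean_heightfield(n=64, size=100.0, seed=0):
        """Low-band synthesis: random phases shaped by the spectrum, then IFFT."""
        k = 2 * np.pi * np.fft.fftfreq(n, d=size / n)
        kx, ky = np.meshgrid(k, k)
        rng = np.random.default_rng(seed)
        noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
        spec = noise * np.sqrt(phillips(kx, ky) / 2)
        return np.fft.ifft2(spec).real

    def fourier_upsample(h, factor=4):
        """Exact band-limited up-sampling by zero-padding the spectrum."""
        n = h.shape[0]
        H = np.fft.fftshift(np.fft.fft2(h))
        pad = (n * factor - n) // 2
        H2 = np.pad(H, pad)
        return np.fft.ifft2(np.fft.ifftshift(H2)).real * factor**2

    h = ocean_heightfield()
    print(h.shape, fourier_upsample(h).shape)   # (64, 64) (256, 256)
    ```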

  • 63.
    Mikkelsen, Christine
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Johansson, Jimmy
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Rissanen, Mikko
    Ind. Software Syst., ABB Corp. Res., Västerås.
    Interactive Information Visualization for Sensemaking in Power Grid Supervisory Systems2011In: Proceedings - 15th International Conferenceon Information Visualisation, Los Alamitos, CA, USA: IEEE Computer Society, 2011, p. 119-126Conference paper (Refereed)
    Abstract [en]

    Operators of power grid supervisory control systems have to gather information from a wide variety of views to build situation awareness. Findings from a field study show that this task is challenging and cognitively demanding. Visualization research for power grid supervisory control systems has focused on developing new visualization techniques for representing one aspect of the power system data. Little work has been done to demonstrate how information visualization techniques can support the operator in the sensemaking process to achieve situation awareness. To fill this gap, and with support from the field study, we propose solutions based on multiple and coordinated views, visual interactive filtering and parallel coordinates.

  • 64.
    Muthumanickam, Prithiviraj
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Forsell, Camilla
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Vrotsou, Katerina
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Johansson, Jimmy
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Cooper, Matthew
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Supporting Exploration of Eye Tracking Data: Identifying Changing Behaviour Over Long Durations2016In: BEYOND TIME AND ERRORS: NOVEL EVALUATION METHODS FOR VISUALIZATION, BELIV 2016, ASSOC COMPUTING MACHINERY , 2016, p. 70-77Conference paper (Refereed)
    Abstract [en]

    Visual analytics of eye tracking data is a common tool for evaluation studies across diverse fields. In this position paper we propose a novel user-driven interactive data exploration tool for understanding the characteristics of eye gaze movements and the changes in these behaviours over time. Eye tracking experiments generate multidimensional scan path data with sequential information. Many mathematical methods in the past have analysed one or a few of the attributes of the scan path data and derived attributes such as Areas of Interest (AoI), statistical measures, geometry, domain specific features, etc. In our work we are interested in visual analytics of one of the derived attributes of sequential data: the AoI and the sequences of visits to these AoIs over time. In the case of static stimuli, such as images, or dynamic stimuli, like videos, having predefined or fixed AoIs is not an efficient way of analysing scan path patterns. The AoI of a user over a stimulus may evolve over time, and hence determining the AoIs dynamically through temporal clustering could be a better method for analysing eye gaze patterns. In this work we primarily focus on the challenges in analysis and visualization of the temporal evolution of AoIs. This paper discusses the existing methods, their shortcomings, and the scope for improvement by adopting visual analytics methods for event-based temporal data to the analysis of eye tracking data.

  • 65.
    Muthumanickam, Prithiviraj
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Nordman, Aida
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Meyer, Lothar
    LFV.
    Boonsong, Supathida
    LFV.
    Lundberg, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Cooper, Matthew
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Analysis of Long Duration Eye-Tracking Experiments in a Remote Tower Environment2019Conference paper (Refereed)
    Abstract [en]

    Eye-Tracking experiments have proven to be of great assistance in understanding human computer interaction across many fields. Most eye-tracking experiments are non-intrusive and so do not affect the behaviour of the subject. Such experiments usually last for just a few minutes and so the spatio-temporal data generated by the eye-tracker is quite easy to analyze using simple visualization techniques such as heat maps and animation. Eye tracking experiments in air traffic control, or maritime or driving simulators can, however, last for several hours and the analysis of such long duration data becomes much more complex. We have developed an analysis pipeline, where we identify visual spatial areas of attention over a user interface using clustering and hierarchical cluster merging techniques. We have tested this technique on eye tracking datasets generated by air traffic controllers working with Swedish air navigation services, where each eye tracking experiment lasted for ∼90 minutes. We found that our method is interactive and effective in identification of interesting patterns of visual attention that would have been very difficult to locate using manual analysis.
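    The clustering stage is described only at a high level; a toy version of the idea, greedy distance-threshold clustering of fixation points, is sketched below. The radius and the synthetic gaze data are invented, and the paper's hierarchical cluster merging is not included.

    ```python
    import numpy as np

    def cluster_fixations(points, radius=40.0):
        """Greedy spatial clustering: each fixation joins the first cluster
        whose centroid is within `radius` pixels, else starts a new cluster.
        A stand-in for the paper's clustering + hierarchical merging stages."""
        centroids, members = [], []
        for p in points:
            for i, c in enumerate(centroids):
                if np.linalg.norm(p - c) < radius:
                    members[i].append(p)
                    centroids[i] = np.mean(members[i], axis=0)  # update centroid
                    break
            else:
                centroids.append(p.astype(float))
                members.append([p])
        return centroids, members

    rng = np.random.default_rng(0)
    # Two synthetic attention areas on a 1920x1080 screen.
    gaze = np.vstack([rng.normal((400, 300), 20, (100, 2)),
                      rng.normal((1400, 700), 20, (100, 2))])
    centroids, members = cluster_fixations(gaze)
    print(len(centroids), [len(m) for m in members])
    ```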

  • 66.
    Namedanian, Mahziar
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Coppel, Ludovic
    Neuman, Magnus
    Gooran, Sasan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Edström, Per
    Kolseth, Petter
    Analysis of Optical and Physical Dot Gain by Microscale Image Histogram and Modulation Transfer Functions2013In: Journal of Imaging Science and Technology, ISSN 1062-3701, E-ISSN 1943-3522, Vol. 57, no 2, p. 20504-1-20504-5Article in journal (Refereed)
    Abstract [en]

    The color of a print is affected by ink spreading and lateral light scattering in the substrate, making printed dots appear larger. Characterization of physical and optical dot gain is crucial for the graphic arts and paper industries. We propose a novel approach to separate physical from optical dot gain by use of a high-resolution camera. This approach is based on the histogram of microscale images captured by the camera. Having determined the actual physical dot shape, we estimate the modulation transfer function (MTF) of the paper substrate. The proposed method is validated by comparing the estimated MTF of 11 offset printed coated papers to the MTF obtained from the unprinted papers using measured and Monte Carlo simulated edge responses.
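    An MTF estimate from an edge response commonly proceeds edge spread function → line spread function (derivative) → normalized Fourier modulus. The sketch below shows that generic chain on a synthetic blurred edge; it is not the authors' exact procedure.

    ```python
    import numpy as np
    from math import erf

    def mtf_from_edge(esf, dx=1.0):
        """Estimate an MTF from a measured edge spread function (ESF):
        differentiate to get the line spread function (LSF), Fourier-transform,
        and normalize the modulus at zero frequency."""
        lsf = np.gradient(esf, dx)
        spectrum = np.abs(np.fft.rfft(lsf))
        freqs = np.fft.rfftfreq(len(lsf), d=dx)
        return freqs, spectrum / spectrum[0]

    # Synthetic edge blurred by lateral light scattering (Gaussian, sigma = 3).
    x = np.arange(-64, 64)
    sigma = 3.0
    esf = np.array([0.5 * (1 + erf(v / (sigma * np.sqrt(2)))) for v in x])
    freqs, mtf = mtf_from_edge(esf)
    print(mtf[:5])   # MTF falls off from 1.0 with increasing frequency
    ```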

  • 67.
    Namedanian, Mahziar
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Gooran, Sasan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Optical Dot Gain Study on Different Halftone Dot Shapes2013Conference paper (Refereed)
  • 68.
    Nguyen, Phong Hai
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Statistical flow data applied to visual analytics2011Independent thesis Advanced level (degree of Master (Two Years)), 30 credits / 45 HE creditsStudent thesis
    Abstract [en]

    Statistical flow data such as commuting, migration, trade and money flows has gained much interest from policy makers, city planners, researchers and ordinary citizens alike. Numerous statistical data visualisations have appeared; however, there is a shortage of applications for visualising flow data. Moreover, among these rare applications, some are standalone and only for expert usage, some do not support interactive functionality, and some can only provide an overview of the data. Therefore, in this thesis, I develop a web-enabled, highly interactive statistical flow data visualisation application with analysis support that addresses all those challenges.

    My application is implemented on GAV Flash, a powerful interactive visualisation component framework, and is thus inherently web-enabled with basic interactive features. The application uses a visual analytics approach that combines data analysis and interactive visualisation to solve the cluttering issue, the problem of overlapping flows on the display. A variety of analysis means are provided to analyse flow data efficiently, including analysing both flow directions simultaneously, visualising time-series flow data, finding the regions that attract most flows, and figuring out the reasons behind derived patterns. The application also supports sharing knowledge between colleagues by providing a story-telling mechanism which allows users to create and share their findings as a visualisation story. Last but not least, the application enables users to embed the visualisation based on the story into an ordinary web page, so that the public stand a golden chance to gain an insight into official statistical flow data.

  • 69.
    Ohlsson, Tobias
    et al.
    Linköping University, Department of Science and Technology. Linköping University, The Institute of Technology.
    Carnstam, Albin
    Linköping University, Department of Science and Technology. Linköping University, The Institute of Technology.
    A business intelligence application for interactive budget processes2012Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Today budgeting occurs in all types of organizations, from authorities and municipalities, to private companies and non-profit associations. Depending on whether the organization is large or small it can look very different. In large organizations the budget can be such a comprehensive document that it is difficult to keep track of it. Furthermore, in large organizations, the budget work starts very early. Thus, an effective budget process could reduce resources, time and ultimately costs.

    This master’s thesis report describes a budget application built with the Business Intelligence software QlikView. With the application a budgeter can load desired budget data and through a QlikView Extension Object edit the loaded data and finally follow up the work of different budgets. The Extension Object has been implemented using JavaScript and HTML to create a GUI. The edited data is sent to a back-end interface built with one web server and one database server.

    To evaluate the usability of the Extension Object’s GUI and determine how the budget application works and to get feedback on the Extension Object and its functionality, a user study was performed. The result of the user study shows that the application simplifies budget processes and has great potential to help budgeters and controllers to increase their effectiveness.

  • 70. Qu, Yuanyuan
    et al.
    Gooran, Sasan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Simple Spectral Color Prediction Model using Multiple Characterization Curves2013Conference paper (Refereed)
  • 71.
    Rönnberg, Niklas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Sonification supports perception of brightness contrast2019In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738, Vol. 13, no 13, p. 373-381, article id 4Article in journal (Refereed)
    Abstract [en]

    In complex visual representations, there are several possible challenges for visual perception that might be eased by adding sound as a second modality (i.e. sonification). It was hypothesized that sonification would support visual perception when facing challenges such as simultaneous brightness contrast or the Mach band phenomenon. This hypothesis was investigated with an interactive sonification test, yielding objective measures (accuracy and response time) as well as subjective measures of sonification benefit. In the test, the participant’s task was to mark the vertical pixel line having the highest intensity level. This was done in a condition without sonification and in three conditions where the intensity level was mapped to different musical elements. The results showed that there was a benefit of sonification, with higher accuracy when sonification was used compared to no sonification. This result was also supported by the subjective measurement. The results also showed longer response times when sonification was used. This suggests that the use and processing of the additional information took more time, leading to longer response times but also higher accuracy. There were no differences between the three sonification conditions.
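    As an illustration of the kind of intensity-to-sound mapping evaluated here, pixel intensity can drive the pitch of a synthesized tone. This is one plausible mapping only; the paper's actual musical elements are not specified in the abstract.

    ```python
    import numpy as np

    def sonify_column(intensity, f_lo=220.0, f_hi=880.0, duration=0.2, sr=44100):
        """Map a pixel-column intensity in [0, 1] to a sine tone whose pitch
        rises with brightness (an illustrative sonification mapping)."""
        freq = f_lo + intensity * (f_hi - f_lo)
        t = np.arange(int(duration * sr)) / sr
        return 0.3 * np.sin(2 * np.pi * freq * t)

    # Sweep across an image row: brighter columns sound higher.
    row = np.linspace(0.0, 1.0, 16)              # hypothetical intensities
    audio = np.concatenate([sonify_column(v) for v in row])
    print(audio.shape, audio.min(), audio.max())
    ```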

  • 72.
    Rönnberg, Niklas
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Johansson, Jimmy
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Sonification Support for Information Visualization Dense Data Displays2016In: InfoVis Papers 2016, 2016Conference paper (Refereed)
    Abstract [en]

    This poster presents an experiment designed to evaluate the possible benefits of sonification in information visualization. It is hypothesized, that by using musical sounds for sonification when visualizing complex data, interpretation and comprehension of the visual representation could be increased. In this evaluation of sonification in parallel coordinates and scatter plots, participants had to identify and mark different density areas in the representations. Both quantitative and qualitative results suggest a benefit of sonification. These results indicate that sonification might be useful for data exploration, and give rise to new research questions and challenges.

  • 73.
    Salomonsson, Fredrik
    Linköping University, Department of Science and Technology.
    PIC/FLIP Fluid Simulation Using Block-Optimized Grid Data Structure2011Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This thesis work examines and presents how to implement a Particle-In-Cell / Fluid-Implicit-Particle (PIC/FLIP) fluid solver that takes advantage of the inherent parallelism of Digital Domain's sparse, block-optimized data structure, DB-Grid. The method offers a hybrid approach between particle-based and grid-based simulation.

    This thesis also discusses and goes through different approaches for storing and accessing the data associated with each particle. To dynamically create and remove attributes from the particles, Disney's open source API Partio is used, which is also used for saving the particles to disk.

    Finally, it shows how to expose C++ classes to Python by wrapping everything into a Python module using the Boost.Python API, and discusses the benefits of having a scripting language.
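    The PIC/FLIP hybrid named in the title blends two grid-to-particle velocity updates: PIC interpolates the new grid velocity, FLIP adds the grid velocity change to the particle velocity. A 1D sketch of that core idea follows; it is the generic formulation, not the thesis' DB-Grid implementation.

    ```python
    import numpy as np

    def grid_to_particle(xp, grid_x, grid_v):
        """Linear interpolation of grid velocities to particle positions."""
        return np.interp(xp, grid_x, grid_v)

    def pic_flip_update(xp, vp, grid_x, v_old, v_new, flip=0.95):
        """Blend PIC (interpolate the new grid velocity) with FLIP (add the
        grid velocity *change* to the particle velocity). flip=1 is pure FLIP."""
        v_pic = grid_to_particle(xp, grid_x, v_new)
        v_flip = vp + grid_to_particle(xp, grid_x, v_new - v_old)
        return flip * v_flip + (1.0 - flip) * v_pic

    # Toy 1D example: gravity applied on the grid for one time step.
    grid_x = np.linspace(0, 1, 11)
    v_old = np.zeros(11)
    v_new = v_old - 9.81 * 0.01          # dt = 0.01 s
    xp = np.array([0.13, 0.55, 0.92])    # particle positions
    vp = np.array([0.2, 0.0, -0.1])      # particle velocities
    print(pic_flip_update(xp, vp, grid_x, v_old, v_new))
    ```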

  • 74.
    Samadzadegan, Sepideh
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Automatic and Adaptive Red Eye Detection and Removal: Investigation and Implementation2012Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Redeye artifact is the most prevalent problem in flash photography, especially when using compact cameras with a built-in flash, and it bothers both amateur and professional photographers. Hence, removing the affected redeye pixels has become an important skill. This thesis work presents a completely automatic approach for redeye detection and removal, consisting of two modules: detection and correction of the redeye pixels in an individual eye, and detection of two red eyes in an individual face. This approach combines some of the previous attempts in the area of redeye removal with some minor and major modifications and novel ideas. The detection procedure is based on redness histogram analysis followed by two adaptive methods, general and specific approaches, in order to find a threshold point. The correction procedure is a four step algorithm which does not solely rely on the detected redeye pixels. It also applies further pixel checking, such as enlarging the search area and neighborhood checking, to improve the reliability of the whole procedure by reducing the risk of image degradation. The second module is based on a skin-likelihood detection algorithm. A completely novel approach utilizing the Golden Ratio in order to segment the face area into specific regions is implemented in the second module. The proposed method in this thesis work is applied to more than 40 sample images; considering some requirements and constraints, the achieved results are satisfactory.
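    A common per-pixel redness measure for histogram-based detection of this kind is sketched below; the measure and the global threshold are one standard choice, not necessarily the thesis' adaptive formulation.

    ```python
    import numpy as np

    def redness(rgb):
        """Per-pixel redness: how much red dominates green and blue.
        One standard measure; returns values in roughly [0, 1]."""
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        return np.clip((r - (g + b) / 2.0) / 255.0, 0.0, 1.0)

    def redeye_mask(rgb, threshold=0.25):
        """Binary mask of candidate redeye pixels from a global threshold.
        (An adaptive method would pick `threshold` from the redness histogram.)"""
        return redness(rgb.astype(float)) > threshold

    # Hypothetical 2x2 patch: one strongly red pixel, three neutral ones.
    patch = np.array([[[200, 40, 60], [120, 120, 120]],
                      [[90, 95, 100], [110, 100, 105]]], dtype=np.uint8)
    print(redeye_mask(patch))
    ```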

  • 75.
    Samini, Ali
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Perspective Correct Hand-held Augmented Reality for Improved Graphics and Interaction2018Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    With Augmented Reality, also termed AR, a view of the real world is augmented by superimposing computer-generated graphics, thereby enriching or enhancing the perception of reality. Today, many applications benefit from AR in areas such as education, medicine, navigation, construction and gaming, primarily using head-mounted AR displays and AR on hand-held smart devices. Tablets and phones are highly suitable for AR, as they are equipped with high resolution screens, good cameras and powerful processing units, while being readily available to both industry and home users. They are used with video see-through AR, where the live view of the world is captured by a camera in real time and subsequently presented together with the computer graphics on the display.

    In this thesis I put forth our recent work on improving video see-through Augmented Reality graphics and interaction for hand-held devices by applying and utilizing the user's perspective. On the rendering side, we introduce a geometry-based user-perspective rendering method aiming to align the on-screen content with the real view of the world visible around the screen. Furthermore, we introduce a device calibration system to compensate for misalignment between system parts. On the interaction side, we introduce two wand-like direct 3D pose manipulation techniques based on this user perspective. We also modified a selection technique and introduced a new one suitable for use with the introduced manipulation techniques. Finally, I present several formal user studies, evaluating the introduced techniques and comparing them with concurrent state-of-the-art alternatives.
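    User-perspective rendering of this kind typically builds an off-axis (asymmetric) frustum from the tracked eye position relative to the device screen. The sketch below shows that standard construction; the screen dimensions and eye position are invented, not the thesis' calibrated setup.

    ```python
    import numpy as np

    def off_axis_projection(eye, screen_w, screen_h, near=0.01, far=100.0):
        """Off-axis perspective frustum for an eye at `eye` (meters, in screen
        coordinates: origin at the screen center, z toward the viewer). This is
        the standard construction behind user-perspective rendering on a
        tracked hand-held display."""
        ex, ey, ez = eye                       # ez > 0: eye in front of screen
        # Frustum edges on the near plane, scaled from the screen edges.
        left   = (-screen_w / 2 - ex) * near / ez
        right  = ( screen_w / 2 - ex) * near / ez
        bottom = (-screen_h / 2 - ey) * near / ez
        top    = ( screen_h / 2 - ey) * near / ez
        return np.array([
            [2 * near / (right - left), 0, (right + left) / (right - left), 0],
            [0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0],
            [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
            [0, 0, -1, 0]])

    # Eye 40 cm in front of a 24 x 16 cm tablet, slightly to the right.
    print(off_axis_projection(eye=(0.05, 0.0, 0.4), screen_w=0.24, screen_h=0.16))
    ```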

    List of papers
    1. A perspective geometry approach to user-perspective rendering in hand-held video see-through augmented reality
    2014 (English)In: VRST '14 Proceedings of the 20th ACM Symposium on Virtual Reality Software and Technology, SPRINGER-VERLAG BERLIN , 2014, p. 207-208Conference paper, Published paper (Refereed)
    Abstract [en]

    Video see-through Augmented Reality (V-AR) displays a video feed overlaid with information, co-registered with the displayed objects. In this paper we consider the type of V-AR that is based on a hand-held device with a fixed camera. In most V-AR applications the view displayed on the screen is completely determined by the orientation of the camera, i.e., device-perspective rendering: the screen displays what the camera sees. The alternative method is to use the relative pose of the user's view and the camera, i.e., user-perspective rendering. In this paper we present an approach to user perspective V-AR using 3D projective geometry. The view is adjusted to the user's perspective and rendered on the screen, making it an augmented window. We created and tested a running prototype based on our method.

    Place, publisher, year, edition, pages
    SPRINGER-VERLAG BERLIN, 2014
    Keywords
    Augmented Reality; Video see-through; Dynamic frustum; User-perspective
    National Category
    Media Engineering
    Identifiers
    urn:nbn:se:liu:diva-123167 (URN)10.1145/2671015.2671127 (DOI)000364709300012 ()978-1-4503-3253-8 (ISBN)
    Conference
    The ACM Symposium on Virtual Reality Software and Technology (VRST) 2014
    Available from: 2015-12-07 Created: 2015-12-04 Last updated: 2018-05-23Bibliographically approved
    2. Device Registration for 3D Geometry-Based User-Perspective Rendering in Hand-Held Video See-Through Augmented Reality
    2015 (English)In: AUGMENTED AND VIRTUAL REALITY, AVR 2015, SPRINGER-VERLAG BERLIN , 2015, Vol. 9254, p. 151-167Conference paper, Published paper (Refereed)
    Abstract [en]

    User-perspective rendering in Video See-through Augmented Reality (V-AR) creates a view that always shows what is behind the screen, from the user's point of view. It is used for better registration between the real and virtual world, instead of the traditional device-perspective rendering which displays what the camera sees. There is a small number of approaches to user-perspective rendering that overall improve the registration between the real world, the video captured from the real world that is displayed on the screen, and the augmentations. There are still some registration errors that cause misalignment in the user-perspective rendering. One source of error is the device registration which, depending on the tracking method used, can be the misalignment between the camera and the screen, and also the tracked frame of reference to which the screen and the camera are attached. In this paper we first describe a method for user perspective V-AR based on 3D projective geometry. We then address the device registration problem in user perspective rendering by presenting two methods: first, for estimating the misalignment between the camera and the screen; second, for estimating the misalignment between the camera and the tracked frame.

    Place, publisher, year, edition, pages
    SPRINGER-VERLAG BERLIN, 2015
    Series
    Lecture Notes in Computer Science, ISSN 0302-9743 (print), 1611-3349 (online) ; 9254
    Keywords
    Augmented Reality; Video see-through; Dynamic frustum; User-perspective
    National Category
    Media Engineering
    Identifiers
    urn:nbn:se:liu:diva-123167 (URN)10.1007/978-3-319-22888-4_12 (DOI)000364709300012 ()978-3-319-22888-4; 978-3-319-22887-7 (ISBN)
    Conference
    2nd International Conference on Augmented and Virtual Reality (SALENTO AVR)
    Available from: 2015-12-07 Created: 2015-12-04 Last updated: 2018-05-23
    3. A User Study on Touch Interaction for User-Perspective Rendering in Hand-Held Video See-Through Augmented Reality
    2016 (English)In: Augmented Reality, Virtual Reality, and Computer Graphics: Third International Conference, AVR 2016, Lecce, Italy, June 15-18, 2016. Proceedings, Part II / [ed] Lucio Tommaso De Paolis, Antonio Mongelli, Springer, 2016, p. 304-317Conference paper, Published paper (Refereed)
    Abstract [en]

    This paper presents a user study on touch interaction with hand-held Video See-through Augmented Reality (V-AR). In particular, the commonly used Device Perspective Rendering (DPR) is compared with User Perspective Rendering (UPR) with respect to both performance and user experience and preferences. We present two user study tests designed to mimic the tasks that are used in various AR applications.

    Looking for an object and selecting it when found is one of the most common tasks in AR software. Our first test focuses on comparing UPR and DPR in a simple find-and-select task. Manipulating the pose of a virtual object is another commonly used task in AR. The second test focuses on multi-touch interaction for 6 DoF object pose manipulation through UPR and DPR.

    Place, publisher, year, edition, pages
    Springer, 2016
    Series
    Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 9769
    Keywords
    User perspective rendering, Augmented reality, Touch interaction, Video see-through
    National Category
    Human Computer Interaction Interaction Technologies Computer Sciences Media and Communication Technology Computer Systems
    Identifiers
    urn:nbn:se:liu:diva-132956 (URN)10.1007/978-3-319-40651-0_25 (DOI)000389495700025 ()978-3-319-40651-0 (ISBN)978-3-319-40650-3 (ISBN)
    Conference
    Third International Conference on Augmented Reality, Virtual Reality and Computer Graphics (SALENTO AVR 2016), Otranto, Lecce, Italy, June 15-18, 2016
    Available from: 2016-12-05 Created: 2016-12-05 Last updated: 2018-05-23Bibliographically approved
    4. A study on improving close and distant device movement pose manipulation for hand-held augmented reality
    2016 (English)In: VRST '16 Proceedings of the 22nd ACM Conference on Virtual Reality Software and Technology, ACM Press, 2016, p. 121-128Conference paper, Published paper (Refereed)
    Abstract [en]

    Hand-held smart devices are equipped with powerful processing units, high resolution screens and cameras, that in combination makes them suitable for video see-through Augmented Reality. Many Augmented Reality applications require interaction, such as selection and 3D pose manipulation. One way to perform intuitive, high precision 3D pose manipulation is by direct or indirect mapping of device movement.

    There are two approaches to device movement interaction; one fixes the virtual object to the device, which therefore becomes the pivot point for the object, making it difficult to rotate without translating. The second approach avoids the latter issue by considering rotation and translation separately, relative to the object's center point. The result of this is that the object instead moves out of view for yaw and pitch rotations.

    In this paper we study these two techniques and compare them with a modification where user perspective rendering is used to solve the rotation issues. The study showed that the modification improves speed as well as both perceived control and intuitiveness among the subjects.

    Place, publisher, year, edition, pages
    ACM Press, 2016
    Keywords
    device interaction, augmented reality, video seethrough, user-perspective, device perspective, user study
    National Category
    Other Engineering and Technologies not elsewhere specified
    Identifiers
    urn:nbn:se:liu:diva-132954 (URN); 10.1145/2993369.2993380 (DOI); 000391514400018 (); 978-1-4503-4491-3 (ISBN)
    Conference
    The 22nd ACM Symposium on Virtual Reality Software and Technology (VRST), Munich, Germany, November 02-04, 2016
    Available from: 2016-12-05 Created: 2016-12-05 Last updated: 2018-05-23 Bibliographically approved
    5. Popular Performance Metrics for Evaluation of Interaction in Virtual and Augmented Reality
    2017 (English)In: 2017 International Conference on Cyberworlds (CW) (2017), IEEE Computer Society, 2017, p. 206-209Conference paper, Published paper (Refereed)
    Abstract [en]

    Augmented and Virtual Reality applications provide environments in which users can immerse themselves in a fully or partially virtual world and interact with virtual objects or user interfaces. User-based, formal evaluation is needed to objectively compare interaction techniques and establish their value in different use cases, and user performance metrics are the key to comparing those techniques in a fair and effective manner. In this paper we explore evaluation principles used for, or developed explicitly for, virtual environments, and survey quality metrics based on 15 current, important publications on interaction techniques for virtual environments. We examine, categorize and analyze the formal user studies, and establish and present baseline performance metrics used for the evaluation of interaction techniques in VR and AR.

    Place, publisher, year, edition, pages
    IEEE Computer Society, 2017
    National Category
    Other Engineering and Technologies
    Identifiers
    urn:nbn:se:liu:diva-143586 (URN); 10.1109/CW.2017.25 (DOI); 978-1-5386-2089-2 (ISBN); 978-1-5386-2090-8 (ISBN)
    Conference
    2017 International Conference on Cyberworlds (CW), Chester, United Kingdom, Sept. 20-22, 2017
    Available from: 2017-12-11 Created: 2017-12-11 Last updated: 2018-05-23
  • 76.
    Samini, Ali
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Lundin Palmerius, Karljohan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    A perspective geometry approach to user-perspective rendering in hand-held video see-through augmented reality2014In: VRST '14 Proceedings of the 20th ACM Symposium on Virtual Reality Software and Technology, SPRINGER-VERLAG BERLIN, 2014, p. 207-208Conference paper (Refereed)
    Abstract [en]

    Video see-through Augmented Reality (V-AR) displays a video feed overlaid with information that is co-registered with the displayed objects. In this paper we consider the type of V-AR that is based on a hand-held device with a fixed camera. In most V-AR applications the view displayed on the screen is completely determined by the orientation of the camera, i.e., device-perspective rendering: the screen displays what the camera sees. The alternative is to use the relative pose of the user's view and the camera, i.e., user-perspective rendering. In this paper we present an approach to user-perspective V-AR using 3D projective geometry. The view is adjusted to the user's perspective and rendered on the screen, making it an augmented window. We created and tested a running prototype based on our method.
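
    As a rough illustration of the projective-geometry idea (not the paper's exact construction, which uses the tracked eye pose and the screen geometry): if the viewed scene is approximated by a plane, the device-camera image can be re-warped to the user's perspective with a standard plane-induced homography. All names and parameters below are illustrative assumptions.

        import numpy as np

        def user_perspective_homography(K_cam, K_eye, R, t, n, d):
            # Plane-induced homography mapping device-camera pixels to a virtual
            # user-perspective view. R, t: pose of the virtual eye view relative
            # to the camera; the scene is approximated by the plane n.X = d
            # expressed in the camera frame.
            H = K_eye @ (R + np.outer(t, n) / d) @ np.linalg.inv(K_cam)
            return H / H[2, 2]

        # Toy example: eye offset 10 cm from the camera, scene plane 1 m away.
        K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
        H = user_perspective_homography(K, K, np.eye(3),
                                        np.array([0.0, 0.1, -0.1]),
                                        np.array([0.0, 0.0, 1.0]), 1.0)
        # H can then be applied per pixel (e.g., cv2.warpPerspective) to form
        # the "augmented window" view.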

  • 77.
    Samini, Ali
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Lundin Palmerius, Karljohan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Device Registration for 3D Geometry-Based User-Perspective Rendering in Hand-Held Video See-Through Augmented Reality2015In: AUGMENTED AND VIRTUAL REALITY, AVR 2015, SPRINGER-VERLAG BERLIN, 2015, Vol. 9254, p. 151-167Conference paper (Refereed)
    Abstract [en]

    User-perspective rendering in Video See-through Augmented Reality (V-AR) creates a view that always shows what is behind the screen from the user's point of view. It is used to achieve better registration between the real and virtual world than the traditional device-perspective rendering, which displays what the camera sees. A small number of approaches to user-perspective rendering exist that overall improve the registration between the real world, the video of the real world displayed on the screen, and the augmentations. Some registration errors nevertheless remain and cause misalignment in the user-perspective rendering. One source of error is device registration, which, depending on the tracking method used, can be the misalignment between the camera and the screen, or between them and the tracked frame of reference to which both are attached. In this paper we first describe a method for user-perspective V-AR based on 3D projective geometry. We then address the device registration problem in user-perspective rendering by presenting two methods: first, for estimating the misalignment between the camera and the screen; second, for estimating the misalignment between the camera and the tracked frame.

  • 78.
    Schlemmer, Michael
    et al.
    University of Kaiserslautern, Germany.
    Bertram, Martin Hering
    Wirtschaftsmathematik (ITWM) in Kaiserslautern, Germany.
    Hotz, Ingrid
    Berlin (ZIB), FU Berlin, Germany.
    Garth, Christoph
    University of California, Davis, CA.
    Kollmann, Wolfgang
    University of California, Davis, CA.
    Hamann, Bernd
    University of California, Davis, CA.
    Hagen, Hans
    University of Kaiserslautern.
    Moment Invariants for the Analysis of 2D Flow Fields2007In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 13, no 6, p. 1743-1750Article in journal (Refereed)
    Abstract [en]

    We present a novel approach for analyzing two-dimensional (2D) flow field data based on the idea of invariant moments. Moment invariants have traditionally been used in computer vision applications, and we have adapted them for the purpose of interactive exploration of flow field data. The new class of moment invariants we have developed allows us to extract and visualize 2D flow patterns invariant under translation, scaling, and rotation. With our approach one can study arbitrary flow patterns by searching a given 2D flow data set for any type of pattern specified by a user. Further, our approach supports the computation of moments at multiple scales, facilitating fast pattern extraction and recognition. This can be done for critical point classification, but also for patterns with greater complexity. This multi-scale moment representation is also valuable for the comparative visualization of flow field data. The specific novel contributions of this work are the mathematical derivation of the new class of moment invariants, their analysis regarding critical point features, the efficient computation of a novel feature space representation, and, based upon this, the development of a fast pattern recognition algorithm for complex flow structures.
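
    As a flavour of the moment machinery (a hedged sketch, not the authors' derivation): treating a 2D flow patch as a complex function f = u + iv, complex moments can be accumulated on a grid, and their magnitudes are unchanged when the patch and its vectors are rotated together, which is the basic ingredient of rotation-invariant pattern search.

        import numpy as np

        def complex_moments(u, v, max_order=2):
            # Complex moments c_pq = sum over the patch of z^p * conj(z)^q * f(z),
            # with f = u + i*v sampled on a grid centred at the patch midpoint.
            h, w = u.shape
            ys, xs = np.mgrid[0:h, 0:w]
            z = (xs - (w - 1) / 2.0) + 1j * (ys - (h - 1) / 2.0)
            f = u + 1j * v
            return {(p, q): np.sum(z**p * np.conj(z)**q * f)
                    for p in range(max_order + 1) for q in range(max_order + 1)}

        # Rotating the patch by phi multiplies c_pq by exp(i*(p - q + 1)*phi)
        # (the extra factor comes from the vectors rotating with the domain),
        # so |c_pq| serves as a simple rotation-invariant descriptor when
        # matching a user-specified pattern against the flow field.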

  • 79.
    Schminder, Jörg
    et al.
    Linköping University, Department of Management and Engineering, Applied Thermodynamics and Fluid Mechanics.
    Nilsson, Filip
    Linköping University.
    Lundberg, Paulina
    Linköping University.
    Nguyen, Nghiem-Anh
    Linköping University.
    Hag, Christoffer
    Linköping University.
    Nadali Najafabadi, Hossein
    Linköping University, Department of Management and Engineering, Applied Thermodynamics and Fluid Mechanics.
    An IVR Engineering Educational Laboratory Accommodating CDIO Standards2019In: The 15th International CDIO Conference: Proceedings – Full Papers, Aarhus, 2019, p. 647-658Conference paper (Refereed)
    Abstract [en]

    This paper presents the development of an educational immersive virtual reality (IVR) program considering both the technological and the pedagogical affordances of such learning environments. The CDIO Standards have been used as guidelines to ensure desirable outcomes of IVR for an engineering course. A learning model has been followed to use VR characteristics and learning affordances in teaching basic principles. Different game modes, treated as learning activities, are incorporated to benefit from experiential and spatial knowledge representation and to create a learning experience that fulfils the intended learning outcomes (ILOs), defined by CDIO Standard 2 and Bloom's learning taxonomy, associated with the particular course module. The evaluation of the IVR laboratory highlights the effectiveness of the approach in achieving the ILOs, provided that pedagogical models are followed to create powerful modes of learning.

  • 80.
    Sundén, Erik
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Kottravel, Sathish
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. University of Ulm, Germany.
    Multimodal volume illumination2015In: Computers & graphics, ISSN 0097-8493, E-ISSN 1873-7684, Vol. 50, p. 47-60Article in journal (Refereed)
    Abstract [en]

    Despite the increasing importance of multimodal volumetric data acquisition and the recent progress in advanced volume illumination, interactive multimodal volume illumination remains an open challenge. As a consequence, the perceptual benefits of advanced volume illumination algorithms cannot be exploited when visualizing multimodal data - a scenario where increased data complexity calls for improved spatial comprehension. The two main factors hindering the application of advanced volumetric illumination models to multimodal data sets are rendering complexity and memory consumption. Solving the volume rendering integral under multimodal illumination increases the sampling complexity. At the same time, the increased storage requirements of multimodal data sets make it infeasible to exploit the precomputed results that advanced volume illumination algorithms often rely on to reduce the amount of per-frame computation. In this paper, we propose an interactive volume rendering approach that supports advanced illumination when visualizing multimodal volumetric data sets. The presented approach has been developed with the goal of simplifying and minimizing per-sample operations while at the same time reducing memory requirements. We show how to exploit illumination-importance metrics to compress and transform multimodal data sets into an illumination-aware representation, which is accessed during rendering through a novel light-space-based volume rendering algorithm. The data transformation and the rendering algorithm are closely intertwined, as compression errors are taken into account during rendering. We describe and analyze the presented approach in detail, and apply it to real-world multimodal data sets from biology, medicine, meteorology and engineering.

  • 81.
    Sundén, Erik
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Ropinski, Timo
    University of Ulm, Germany.
    Efficient Volume Illumination with Multiple Light Sources through Selective Light Updates2015In: 2015 IEEE Pacific Visualization Symposium (PacificVis), IEEE, 2015, p. 231-238Conference paper (Refereed)
    Abstract [en]

    Incorporating volumetric illumination into the rendering of volumetric data increases visual realism, which can lead to improved spatial comprehension. It is known that spatial comprehension can be further improved by incorporating multiple light sources. However, many volumetric illumination algorithms have severe drawbacks when dealing with multiple light sources, mainly high performance penalties and memory usage, which are usually tackled with specialized data structures or data undersampling. In contrast, in this paper we present a method that enables volumetric illumination with multiple light sources without requiring precomputation or impacting visual quality. To achieve this goal, we introduce selective light updates, which minimize the required computations when light settings are changed. We discuss and analyze the novel concepts underlying selective light updates, and demonstrate them when applied to real-world data under different light settings.
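
    The update pattern can be pictured with a small cache sketch (purely illustrative; the paper's method operates inside the illumination algorithm itself and, as stated, avoids heavy precomputation): each light's contribution is kept separately, so changing one light's settings only triggers recomputation of that contribution.

        import numpy as np

        class SelectiveLightCache:
            """Keeps one illumination volume per light; only changed lights are redone."""
            def __init__(self, shape):
                self.shape = shape
                self.contrib = {}             # light id -> cached contribution

            def update_light(self, light_id, compute_fn):
                # Called only when this particular light's settings change.
                self.contrib[light_id] = compute_fn()

            def composite(self):
                # Re-summing cached volumes is cheap compared to recomputing all.
                total = np.zeros(self.shape, dtype=np.float32)
                for c in self.contrib.values():
                    total += c
                return total

        cache = SelectiveLightCache((64, 64, 64))
        cache.update_light("key", lambda: np.random.rand(64, 64, 64).astype(np.float32))
        cache.update_light("fill", lambda: np.random.rand(64, 64, 64).astype(np.float32))
        illum = cache.composite()             # only "fill" is redone if it moves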

  • 82.
    Ubillis, Amaru
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, The Institute of Technology.
    Evaluation of Sprite Kit for iOS game development2014Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits, Student thesis
    Abstract [en]

    The purpose of this thesis is to investigate whether Sprite Kit is a good tool for simplifying the development process for game developers making 2D games for mobile devices. To answer this question, a simple turn-based strategy game has been developed with Sprite Kit, a game engine for making 2D games released by Apple.

    Based on the experience gained during development, I go through and discuss some of the most important tools provided by the game engine and how they helped in completing the game.

    The conclusion I reached after making a game with Sprite Kit is that the framework provides all the tools necessary for creating a simple 2D mobile game for iOS. Sprite Kit hides much of the lower-level details and gives the game developer comprehensive development support. This helps the game developer save a lot of time and focus more on the gameplay when creating a game.

  • 83.
    Unger, Jonas
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Gustavson, Stefan
    Linköping University, Department of Science and Technology, Media and Information Technology.
    An optical system for single image environment maps2007In: SIGGRAPH '07 ACM SIGGRAPH 2007 posters, ACM Press, 2007Conference paper (Refereed)
    Abstract [en]

    We present an optical setup for capturing a full 360° environment map in a single image snapshot. The setup, which can be used with any camera device, consists of a curved mirror swept around a negative lens, and is suitable for capturing environment maps and light probes. The setup achieves good sampling density and uniformity for all directions in the environment.

  • 84.
    Unger, Jonas
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Gustavson, Stefan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Larsson, Per
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Bonnet, Gerhard
    SpheronVR, Germany.
    Kaiser, Gunnar
    SpheronVR, Germany.
    Next Generation Image Based Lighting using HDR Video2011In: Proceedings SIGGRAPH '11: ACM SIGGRAPH 2011 Talks, ACM, 2011, article no. 60Conference paper (Refereed)
    Abstract [en]

    We present an overview of our recently developed systems pipeline for capture, reconstruction, modeling and rendering of real world scenes based on state-of-the-art high dynamic range video (HDRV). The reconstructed scene representation allows for photo-realistic Image Based Lighting (IBL) in complex environments with strong spatial variations in the illumination. The pipeline comprises the following essential steps:

    1.) Capture - The scene capture is based on a 4-MPixel global-shutter HDRV camera with a dynamic range of more than 24 f-stops at 30 fps. The HDR output stream is stored as individual uncompressed frames for maximum flexibility. A scene is usually captured using a combination of panoramic light probe sequences [1] and sequences with a smaller field of view to maximize the resolution at regions of special interest in the scene. The panoramic sequences ensure full angular coverage at each position and guarantee that the information required for IBL is captured. The position and orientation of the camera are tracked during capture.

    2.) Scene recovery - Taking one or more HDRV sequences as input, a geometric proxy model of the scene is built using a semi-automatic approach. First, traditional computer vision algorithms such as structure from motion [2] and Manhattan world stereo [3] are used. If necessary, the recovered model is then modified using an interaction scheme based on visualizations of a volumetric representation of the scene radiance computed from the input HDRV sequence. The HDR nature of this volume also enables robust extraction of direct light sources and other high intensity regions in the scene.

    3.) Radiance processing - When the scene proxy geometry has been recovered, the radiance data captured in the HDRV sequences are re-projected onto the surfaces and the recovered light sources. Since most surface points have been imaged from a large number of directions, it is possible to reconstruct view-dependent texture maps on the proxy geometry. These 4D data sets describe a combination of detailed geometry that has not been recovered and the radiance reflected from the underlying real surfaces. The view-dependent textures are then processed and compactly stored in an adaptive data structure.

    4.) Rendering - Once the geometric and radiometric scene information has been recovered, it is possible to place virtual objects into the real scene and create photo-realistic renderings as illustrated above. The extracted light sources enable efficient sampling and rendering times that are fully comparable to that of traditional virtual computer graphics light sources. No previously described method is capable of capturing and reproducing the angular and spatial variation in the scene illumination in comparable detail.

    We believe that the rapid development of high quality HDRV systems will soon have a large impact on both computer vision and graphics. Following this trend, we are developing theory and algorithms for efficient processing of HDRV sequences, making use of the abundance of radiance data that is going to be available.
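
    Step 2 above mentions robust extraction of direct light sources and other high-intensity regions from the HDR data. A minimal, hypothetical sketch of the underlying idea, thresholding the radiance distribution (the authors' pipeline is considerably more involved):

        import numpy as np

        def bright_region_mask(hdr_rgb, percentile=99.9):
            # Flag pixels whose luminance lies in the top fraction of the HDR
            # range; in a light probe these typically correspond to direct
            # light sources.
            lum = hdr_rgb @ np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 weights
            return lum > np.percentile(lum, percentile)

        probe = np.random.rand(256, 512, 3) ** 8 * 1e4   # stand-in HDR panorama
        mask = bright_region_mask(probe)
        print(mask.mean())                               # fraction flagged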

  • 85.
    Unger, Jonas
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Larsson, Per
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Gustavson, Stefan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Löw, Joakim
    Linköping University, Department of Science and Technology. Linköping University, The Institute of Technology.
    Ynnerman, Anders
    Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Spatially varying image based lighting using HDR-video2013In: Computers & graphics, ISSN 0097-8493, E-ISSN 1873-7684, Vol. 37, no 7, p. 923-934Article in journal (Refereed)
    Abstract [en]

    Illumination is one of the key components in the creation of realistic renderings of scenes containing virtual objects. In this paper, we present a set of novel algorithms and data structures for visualization, processing and rendering with real world lighting conditions captured using High Dynamic Range (HDR) video. The presented algorithms enable rapid construction of general and editable representations of the lighting environment, as well as extraction and fitting of sampled reflectance to parametric BRDF models. For efficient representation and rendering of the sampled lighting environment function, we consider an adaptive (2D/4D) data structure for storage of light field data on proxy geometry describing the scene. To demonstrate the usefulness of the algorithms, they are presented in the context of a fully integrated framework for spatially varying image based lighting. We show reconstructions of example scenes and resulting production quality renderings of virtual furniture with spatially varying real world illumination including occlusions.

  • 86.
    Willfahrt, Andreas
    et al.
    Institute for Applied Research, Media University Stuttgart, Stuttgart, Germany .
    Hübner, Gunter
    Institute for Applied Research, Media University Stuttgart, Stuttgart, Germany .
    Optimization of aperture size and distance in the insulating mask of a five layer vertical stack forming a fully printed thermoelectric generator2011In: Advances in Printing and Media Technology Proceedings of the 38th International Research Conference of iarigai / [ed] Nils Enlund and Mladen Lovreček, International Association of Research Organizations for the Information, Media and Graphic Arts Industries, 2011, Vol. 38, p. 261-269Conference paper (Refereed)
    Abstract [en]

    Printed thermoelectric generators (TEGs) combine the advantages of screen printing with the uncomplicated assembly and reliability of thermoelectric devices. Successively printed layers on top of each other are needed to complete a device in a vertical stack setup. One of the challenging layers is the insulating mask, which provides cavities for the thermoelectric legs. The thickness of this insulating mask also determines the overall thickness of the TEG. The spatial separation is a necessity for reasonable energy conversion efficiency.

  • 87.
    Willfahrt, Andreas
    et al.
    Stuttgart Media University, Hochschule der Medien (HdM), Stuttgart, Germany.
    Steiner, Erich
    Stuttgart Media University, Hochschule der Medien (HdM), Stuttgart, Germany.
    Model for calculation of design and electrical parameters of thermoelectric generators2012In: Journal of Print and Media Technology Research, ISSN 2223-8905, Vol. 1, no 4, p. 247-257Article in journal (Refereed)
    Abstract [en]

    Energy harvesting - the conversion of ambient energy into electrical energy - is a frequently used term nowadays. Several conversion principles are available, e.g., photovoltaics, wind power and water power. Lesser known are thermoelectric generators (TEGs), although they were already studied actively during and after the world wars in the 20th century (Caltech Material Science, n. d.). In this work, the authors present a mathematical model for the calculation of input or output parameters of printed thermoelectric generators. The model is strongly related to existing models (Freunek et al., 2009; Rowe, 1995; Glatz et al., 2006) for conventionally produced TEGs as well as for printed TEGs. Thermal effects as investigated by Freunek et al. (2009; 2010) could be included. In order to demonstrate the benefit of the model, two examples of calculations are presented. The parameters of the materials are derived from existing printing inks reported elsewhere (Chen et al., 2011; Wuesten and Potje-Kamloth, 2008; Zhang et al., 2010; Liu et al., 2011; Bubnova et al., 2011). The printing settings are chosen based on feasibility and convenience.
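
    For orientation, the lumped relations that any such TEG model builds on can be written in generic textbook form (generic symbols, not necessarily the paper's notation): for N thermocouples with effective Seebeck coefficient α, leg temperature difference ΔT and internal resistance R_i, the open-circuit voltage and the power delivered to a matched load (R_L = R_i) are

        U_0 = N · α · ΔT,        P_max = U_0² / (4·R_i) = (N·α·ΔT)² / (4·R_i)

    The design task the abstract describes then amounts to choosing the print geometry (leg cross-section, mask aperture size and spacing) so that the ΔT actually sustained across the legs and the resulting R_i land in a favourable regime.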

  • 88.
    Yang, Li
    et al.
    Karlstad University.
    Gooran, Sasan
    Linköping University, Department of Science and Technology. Linköping University, The Institute of Technology.
    Eriksen, Magnus
    Johansson, Tobias
    Linköping University, Department of Science and Technology. Linköping University, The Institute of Technology.
    Color Based Maximal GCR for Electrophotography2006In: IS&T Int. Conf. on Digital Printing Technologies (NIP22), The Society for Imaging Science and Technology, 2006, p. 394-397Conference paper (Other academic)
    Abstract [en]

    The underlying idea of grey component replacement (GCR) is to replace a mixture of the primary colors (cyan, magenta, and yellow) by black. Current GCR algorithms are mainly based on the concept of equal tone value reduction: mixing equal amounts (tone values) of the primary colors generates gray, which in turn can be represented by the same amount of black. As the inks used are usually non-ideal, such a replacement can result in remarkable color deviation.

    We propose an algorithm for maximal GCR based on color matching, i.e., the black is introduced in a way that preserves the color before and after GCR. In the algorithm, the primary with the smallest tone value is set to zero, while the other two are reduced according to color matching calculations. To achieve a real color match in print, dot gain effects are taken into account in the calculation. The proposed algorithm has been tested successfully for FM halftoning using an electrophotographic printer.
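
    For contrast with the proposed colorimetric method, the equal-tone-value baseline that the abstract criticizes is a one-liner. This hedged sketch shows only that baseline (the paper's algorithm replaces the min-rule with color matching that includes dot gain):

        def equal_tone_value_gcr(c, m, y):
            # Classic GCR baseline: the common grey component of CMY moves
            # into K. With non-ideal inks this shifts the color, which
            # motivates the color-matching formulation proposed in the paper.
            k = min(c, m, y)
            return c - k, m - k, y - k, k

        print(equal_tone_value_gcr(0.75, 0.5, 1.0))   # -> (0.25, 0.0, 0.5, 0.5)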

  • 89.
    Yang, Li
    et al.
    Linköping University, Department of Science and Technology. Linköping University, The Institute of Technology.
    Gooran, Sasan
    Linköping University, Department of Science and Technology. Linköping University, The Institute of Technology.
    Kruse, Björn
    Linköping University, Department of Science and Technology. Linköping University, The Institute of Technology.
    Yule-Nielsen Effect and Ink-penetration in Multi-chromatic Tone Reproduction2000In: IS & T's NIP16: International Conference on Digital Printing Technologies, 2000, p. 363-366Conference paper (Other academic)
    Abstract [en]

    A framework describing the influence of ink penetration and the Yule-Nielsen effect on the reflectance and tristimulus values of a halftone sample is proposed. General expressions for the reflectance values and CIEXYZ tristimulus values are derived. Simulations of images printed with two inks have been carried out by applying a Gaussian-type point spread function (PSF). The dependence of the Yule-Nielsen effect on the optical properties of the substrate and inks, the dot geometry, ink penetration, etc., is discussed.
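
    For reference, the single-ink form of the relation the abstract builds on is the Yule-Nielsen modified Murray-Davies equation, standard in the halftone literature (the paper generalizes this to multi-chromatic prints with ink penetration):

        R^(1/n) = a · R_i^(1/n) + (1 − a) · R_p^(1/n)

    where a is the fractional dot coverage, R_i and R_p are the reflectances of full-tone ink and bare paper, and the Yule-Nielsen factor n ≥ 1 absorbs the optical dot gain caused by lateral light scattering in the substrate (n = 1 recovers plain Murray-Davies).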

  • 90.
    Zitinski Elias, Paula
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Gooran, Sasan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Nyström, Daniel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Multilevel Halftoning and Color Separation for Eight-Channel Printing2016In: Journal of Imaging Science and Technology, ISSN 1062-3701, E-ISSN 1943-3522, Vol. 60, no 5, article id 50403Article in journal (Refereed)
    Abstract [en]

    Multichannel printing employs additional colorants to achieve higher-quality reproduction, provided their physical overlap restrictions are met. These restrictions are commonly handled in the printing workflow by controlling the colorant choice at each point. Our multilevel halftoning algorithm bundles inks of the same hue into one channel with no overlap, separating them into eight channels, and consequently benefits from increased ink options at each point. In this article, the implementation and analysis of the algorithm are carried out. Color separation is performed using the cellular Yule-Nielsen modified spectral Neugebauer model. The channels are binarized with the multilevel halftoning algorithm. The workflow is evaluated with an eight-channel inkjet at 600 dpi, resulting in mean and maximum ΔE94 color differences of around 1 and 2, respectively. The halftoning algorithm is analyzed using S-CIELAB, thus involving the human visual system, and multilevel halftoning showed improvement in image quality compared to the conventional approach.
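
    The multilevel idea, quantizing each pixel to one of several ink levels rather than just on/off, can be sketched with a generic multilevel error-diffusion loop (illustrative only; the authors use their own multilevel halftoning algorithm, and the ink-to-level mapping here is an assumption):

        import numpy as np

        def multilevel_fs(img, levels=3):
            # Floyd-Steinberg error diffusion quantizing to 'levels' output
            # levels in [0, 1] -- e.g. three levels for paper / light ink /
            # dark ink of the same hue.
            out = img.astype(np.float64).copy()
            h, w = out.shape
            step = 1.0 / (levels - 1)
            for yy in range(h):
                for xx in range(w):
                    old = out[yy, xx]
                    new = np.clip(np.round(old / step) * step, 0.0, 1.0)
                    out[yy, xx] = new
                    err = old - new
                    if xx + 1 < w: out[yy, xx + 1] += err * 7 / 16
                    if yy + 1 < h:
                        if xx > 0: out[yy + 1, xx - 1] += err * 3 / 16
                        out[yy + 1, xx] += err * 5 / 16
                        if xx + 1 < w: out[yy + 1, xx + 1] += err * 1 / 16
            return out

        img = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)
        ht = multilevel_fs(img, levels=3)     # values in {0.0, 0.5, 1.0}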

  • 91.
    Zitinski Elias, Paula
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Gooran, Sasan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Nyström, Daniel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Multilevel halftoning applied to achromatic inks in multi-channel printing2014In: Abstracts from 41st International research conference of iarigai: Advances in Printing and Media Technology,  Print and media research for the benefit of industry and society, 2014, p. 25-25Conference paper (Other academic)
    Abstract [en]

    Printing with more than four ink channels visually improves the reproduction. Nevertheless, if the ink layer thickness at any given point exceeds a certain limit, ink bleeding and colour accuracy problems occur. Halftoning algorithms that process the channels dependently are one way of dealing with this shortcoming of multi-channel printing. A multilevel halftoning algorithm that processes a channel so that it is printed with multiple inks of the same chromatic value was introduced in our research group. Here we implement this multilevel algorithm using three achromatic inks – photo grey, grey, black – in a real paper-ink setup. The challenges lie in determining the thresholds for ink separation and in dot gain compensation. Dot gain results in a darker reproduction, and since it originates from the interaction between a specific ink and paper, compensating the original image for multilevel halftoning means expressing the dot gain of three inks in terms of the nominal coverage of a single ink. Results demonstrate a successful multilevel halftoning workflow using multiple inks while avoiding dot-on-dot placement and accounting for dot gain. The multilevel halftoned image is visually improved in terms of graininess and detail rendition compared to the bi-level halftoned image.

  • 92.
    Zitinski Elias, Paula
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Gooran, Sasan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Nyström, Daniel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    MULTILEVEL HALFTONING AS AN ALGORITHM TO CONTROL INK OVERLAP IN MULTI-CHANNEL PRINTING2015In: 2015 COLOUR AND VISUAL COMPUTING SYMPOSIUM (CVCS), IEEE, 2015Conference paper (Refereed)
    Abstract [en]

    A multilevel halftoning algorithm can be used to overcome some of the challenges of multi-channel printing. In this algorithm, each channel is processed so that it can be printed using multiple inks of approximately the same hue, achieving a single ink layer. The computation of the threshold values required for ink separation and dot gain compensation poses an interesting challenge. Since dot gain depends on the specific combination of ink, paper and print resolution, compensating the original image for multilevel halftoning means expressing the dot gain of multiple inks of the same hue in terms of the coverage of a single ink. The applicability of the proposed multilevel halftoning workflow is demonstrated using chromatic inks while avoiding dot overlap and accounting for dot gain. The results indicate that the multilevel halftoned image is visually improved in terms of graininess when compared to bi-level halftoned images.

  • 93.
    Zitinski Elias, Paula
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Nyström, Daniel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Gooran, Sasan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Multi-channel printing by orthogonal and non-orthogonal AM halftoning2013In: Proceedings of 12th International AIC Colour Congress: Bringing Colour to Life, Newcastle, UK, 2013Conference paper (Refereed)
    Abstract [en]

    Multi-channel printing with more than the conventional four colorants brings numerous advantages, but also challenges, such as the implementation of halftoning algorithms. This paper concentrates on amplitude-modulated (AM) halftoning for multi-channel printing. One difficulty is the correct channel rotation to avoid moiré and to achieve colour fidelity in case of misregistration. 20 test patches were converted to seven-channel images and AM halftoning was applied using two different approaches in order to obtain a moiré-free impression. One method was to use orthogonal screens and adjust the channels by overlapping the pairs of complementary colours, while the second was to implement non-orthogonal halftone screens (ellipses), whereby a wider angle range is available to accommodate a seven-channel impression. The performance was evaluated by simulating misregistration in both position and angle for a total of 1600 different scenarios. ΔE values were calculated between the misregistered patches and the correct ones, for both orthogonal and non-orthogonal screens. Results show no visible moiré and improved colour fidelity when using non-orthogonal screens for seven-channel printing, producing smaller colour differences in case of misregistration.
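
    The screen-angle problem has a compact first-order description (standard two-screen moiré analysis, not the paper's full evaluation): two periodic screens with spatial frequency vectors f_1 and f_2 beat at the difference frequency

        f_m = f_1 − f_2

    and the beat becomes visible when |f_m| falls below the visual cutoff frequency. Orthogonal screens repeat every 90°, so seven channels cannot all keep their pairwise difference vectors high-frequency within that range, which is why non-orthogonal (elliptical) screens, with their wider admissible angle range, help.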

  • 94.
    Žitinski Elías, Paula
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Improving image quality in multi-channel printing - multilevel halftoning, color separation and graininess characterization2017Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Color printing is traditionally achieved by separating an input image into four channels (CMYK) and binarizing them using halftoning algorithms, in order to designate the locations of ink droplet placement. Multi-channel printing means a reproduction that employs additional inks other than these four in order to augment the color gamut (scope of reproducible colors) and reduce undesirable ink droplet visibility, so-called graininess.

    One aim of this dissertation has been to characterize a print setup in which both the primary inks CMYK and their light versions are used. The presented approach groups the inks, forming subsets, each representing a channel that is reproduced with multiple inks. To halftone the separated channels in the present methodology, a specific multilevel halftoning algorithm is employed, halftoning each channel to multiple levels. This algorithm performs the binarization from the ink subsets to each separate colorant. Consequently, the print characterization complexity remains unaltered when employing the light inks, avoiding the normal increase in computational complexity, the one-to-many mapping problem and the increase in the number of training samples. The results show that the reproduction is visually improved in terms of graininess and detail enhancement.

    The secondary color inks RGB are added in multi-channel printing to increase the color gamut. Utilizing them, however, potentially increases the perceived graininess. Moreover, employing the primary, secondary and light inks means a color separation from a three-channel CIELAB space into a multi-channel colorant space, resulting in colorimetric redundancy in which multiple ink combinations can reproduce the same target color. To address this, a proposed cost function is incorporated in the color separation approach, weighting selected factors that influence the reproduced image quality, i.e. graininess and color accuracy, in order to select the optimal ink combination. The perceived graininess is modeled by employing S-CIELAB, a spatial low-pass filtering mimicking the human visual system. By applying the filtering to a large dataset, a generalized prediction that quantifies the perceived graininess is carried out and incorporated as a criterion in the color separation.

    Consequently, the presented research increases the understanding of color reproduction and image quality in multi-channel printing, provides concrete solutions to challenges in the practical implementation, and raises the possibility of fully utilizing the potential of multi-channel printing for superior image quality.
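
    The S-CIELAB-based graininess criterion mentioned above amounts to low-pass filtering with a model of the eye before measuring fluctuations. A crude, hypothetical proxy for a nominally uniform patch is sketched below (the sigma heuristic and the single-Gaussian filter are assumptions; S-CIELAB proper filters opponent-color channels with calibrated kernels):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def graininess_proxy(L_star, ppi, viewing_dist_mm=300.0):
            # Blur the lightness channel with a Gaussian whose width
            # approximates what the eye can resolve at this print resolution
            # and viewing distance, then measure the remaining (i.e., still
            # visible) fluctuation of the patch.
            deg_per_px = np.degrees(np.arctan((25.4 / ppi) / viewing_dist_mm))
            cutoff_cpd = 30.0                 # assumed acuity-style cutoff
            sigma_px = 0.5 / (cutoff_cpd * deg_per_px)
            return float(np.std(gaussian_filter(L_star, sigma_px)))

        patch = 50.0 + 5.0 * np.random.randn(256, 256)   # noisy uniform L* patch
        print(graininess_proxy(patch, ppi=600))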
