liu.se: Search for publications in DiVA
101 - 147 of 147
  • 101.
    Muthumanickam, Prithiviraj
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Data Abstraction and Pattern Identification in Time-series Data, 2019. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Data sources such as simulations and sensor networks across many application domains generate large volumes of time-series data whose characteristics evolve over time. Visual data analysis methods can help us in exploring and understanding the underlying patterns present in time-series data but, due to the ever-increasing size of these data, the visual data analysis process can become complex. Large data sets can be handled using data abstraction techniques by transforming the raw data into a simpler format while, at the same time, preserving significant features that are important for the user. When dealing with time-series data, abstraction techniques should also take into account the underlying temporal characteristics.

    This thesis focuses on different data abstraction and pattern identification methods, particularly in the cases of large 1D time-series and 2D spatio-temporal time-series data which exhibit spatio-temporal discontinuity. Based on the dimensionality and characteristics of the data, this thesis proposes a variety of efficient data-adaptive and user-controlled data abstraction methods that transform the raw data into a symbol sequence. The transformation of a raw time-series into a symbol sequence can act as input to different sequence analysis methods from the data mining and machine learning communities to identify interesting patterns of user behavior.

    In the case of very long duration 1D time-series, locally adaptive and user-controlled data approximation methods were presented to simplify the data while at the same time retaining the perceptually important features. The simplified data were converted into a symbol sequence, and sketch-based pattern identification was then used to find patterns in the symbolic data using regular-expression-based pattern matching. The method was applied to financial time-series, and patterns such as head-and-shoulders, double-top and triple-top were identified using hand-drawn sketches in an interactive manner. Through data smoothing, the data approximation step also enables visualization of inherent patterns in the time-series representation while at the same time retaining perceptually important points.

    Very long duration 2D spatio-temporal eye tracking data sets that exhibit spatio-temporal discontinuity were transformed into symbolic data using scalable clustering and hierarchical cluster merging processes, each of which can be parallelized. The raw data is transformed into a symbol sequence, with each symbol representing a region of interest in the eye gaze data. The identified regions of interest can also be displayed in a Space-Time Cube (STC) that captures both the temporal and contextual information. Through interactive filtering, zooming and geometric transformation, the STC representation, along with linked views, enables interactive data exploration. Using different sequence analysis methods, the symbol sequences are analyzed further to identify temporal patterns in the data set. Data collected from air traffic control officers were used as an application example to demonstrate the results.
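The abstraction step described above, reducing a raw time-series to a symbol sequence, can be illustrated with a minimal sketch. This is not the thesis's own algorithm (which is data-adaptive and user-controlled); it is a deliberately simple piecewise-aggregate-plus-uniform-binning stand-in, with all names and values invented for illustration.

```python
# Minimal sketch: average fixed-length segments of a series, then map each
# segment mean into a small symbol alphabet. The resulting string can feed
# sequence-analysis tools (regex matching, sequence mining, etc.).

def to_symbols(series, segment_len, alphabet="abcd"):
    """Piecewise aggregate approximation followed by uniform binning."""
    lo, hi = min(series), max(series)
    span = (hi - lo) or 1.0
    symbols = []
    for i in range(0, len(series) - segment_len + 1, segment_len):
        mean = sum(series[i:i + segment_len]) / segment_len
        # Scale the segment mean into [0, 1) and pick the matching bin.
        bin_idx = min(int((mean - lo) / span * len(alphabet)), len(alphabet) - 1)
        symbols.append(alphabet[bin_idx])
    return "".join(symbols)

signal = [0, 0, 1, 1, 4, 4, 9, 9, 4, 4, 1, 1]
print(to_symbols(signal, 2))  # each pair of samples becomes one symbol: "aabdba"
```

Each symbol stands in for one segment of the raw data, so downstream pattern search operates on a string rather than on millions of samples.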

    List of papers
    1. Shape Grammar Extraction for Efficient Query-by-Sketch Pattern Matching in Long Time Series
    2016 (English). Conference paper, Published paper (Refereed)
    Abstract [en]

    Long time-series, involving thousands or even millions of time steps, are common in many application domains but remain very difficult to explore interactively. Often the analytical task in such data is to identify specific patterns, but this is a very complex and computationally difficult problem and so focusing the search in order to only identify interesting patterns is a common solution. We propose an efficient method for exploring user-sketched patterns, incorporating the domain expert’s knowledge, in time series data through a shape grammar based approach. The shape grammar is extracted from the time series by considering the data as a combination of basic elementary shapes positioned across different amplitudes. We represent these basic shapes using a ratio value, perform binning on ratio values and apply a symbolic approximation. Our proposed method for pattern matching is amplitude-, scale- and translation-invariant and, since the pattern search and pattern constraint relaxation happen at the symbolic level, is very efficient permitting its use in a real-time/online system. We demonstrate the effectiveness of our method in a case study on stock market data although it is applicable to any numeric time series data.
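The query-by-sketch idea in this abstract can be gestured at in a few lines. This sketch is much cruder than the paper's ratio-value shape grammar: it encodes only up/down/flat slopes, but it shows how a sketched pattern becomes a regular expression over a symbol sequence, and why run-length-tolerant regexes give scale and translation invariance at the symbolic level. The symbol alphabet and the "double top" pattern here are invented for illustration.

```python
import re

# Encode consecutive differences as u (up), d (down) or f (flat), then
# search for a sketched shape expressed as a regex over those symbols.

def slope_symbols(series):
    """One symbol per step: u if rising, d if falling, f if flat."""
    out = []
    for a, b in zip(series, series[1:]):
        out.append("u" if b > a else "d" if b < a else "f")
    return "".join(out)

# A "double top": rise, fall, rise, fall. Allowing runs of each symbol to
# vary in length makes the match scale- and translation-invariant.
double_top = re.compile(r"u+d+u+d+")

prices = [1, 2, 3, 2, 1, 2, 3, 2, 1]
syms = slope_symbols(prices)
print(syms, bool(double_top.search(syms)))  # "uudduudd" True
```

Relaxing a pattern constraint (e.g. tolerating a flat stretch between the two peaks) is then just a regex edit, which is why the symbolic search is cheap enough for interactive use.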

    Place, publisher, year, edition, pages
    Institute of Electrical and Electronics Engineers (IEEE), 2016. p. 10
    Series
    IEEE Conference on Visual Analytics Science and Technology, ISSN 2325-9442
    Keywords
    User-queries, Sketching, Time Series, Symbolic approximation, Regular Expression, Shape Grammar
    National Category
    Engineering and Technology; Computer Sciences; Computer Systems; Computer Vision and Robotics (Autonomous Systems); Bioinformatics (Computational Biology)
    Identifiers
    urn:nbn:se:liu:diva-134334 (URN), 10.1109/VAST.2016.7883518 (DOI), 000402056500013 (), 978-1-5090-5661-3 (ISBN)
    Conference
    2016 IEEE Conference on Visual Analytics Science and Technology (VAST), October 23-28, Baltimore, USA
    Funder
    Swedish Research Council, 2013-4939
    Available from: 2017-02-03. Created: 2017-02-03. Last updated: 2019-11-25. Bibliographically approved.
    2. Supporting Exploration of Eye Tracking Data: Identifying Changing Behaviour Over Long Durations
    2016 (English). In: Beyond Time and Errors: Novel Evaluation Methods for Visualization (BELIV 2016), Association for Computing Machinery, 2016, p. 70-77. Conference paper, Published paper (Refereed)
    Abstract [en]

    Visual analytics of eye tracking data is a common tool for evaluation studies across diverse fields. In this position paper we propose a novel user-driven interactive data exploration tool for understanding the characteristics of eye gaze movements and the changes in these behaviours over time. Eye tracking experiments generate multidimensional scan path data with sequential information. Many mathematical methods in the past have analysed one or a few of the attributes of the scan path data and derived attributes such as Areas of Interest (AoI), statistical measures, geometry, domain-specific features, etc. In our work we are interested in visual analytics of one of the derived attributes of the sequential data: the AoI and the sequences of visits to these AoIs over time. In the case of static stimuli, such as images, or dynamic stimuli, like videos, having predefined or fixed AoIs is not an efficient way of analysing scan path patterns. The AoI of a user over a stimulus may evolve over time, and hence determining the AoIs dynamically through temporal clustering could be a better method for analysing the eye gaze patterns. In this work we primarily focus on the challenges in analysis and visualization of the temporal evolution of AoIs. This paper discusses the existing methods, their shortcomings, and the scope for improvement by adopting visual analytics methods for event-based temporal data to the analysis of eye tracking data.

    Place, publisher, year, edition, pages
    ASSOC COMPUTING MACHINERY, 2016
    Keywords
    Eye tracking; pattern analysis; scan path; time evolving AoIs; Clustering of Fixations; ActiviTree
    National Category
    Media Engineering
    Identifiers
    urn:nbn:se:liu:diva-133129 (URN), 10.1145/2993901.2993905 (DOI), 000387865100009 (), 978-1-4503-4818-8 (ISBN)
    Conference
    6th Bi-Annual Workshop (BELIV)
    Available from: 2016-12-12. Created: 2016-12-09. Last updated: 2019-11-25.
    3. Identification of Temporally Varying Areas of Interest in Long-Duration Eye-Tracking Data Sets
    2019 (English). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, p. 87-97. Article in journal (Refereed), Published
    Abstract [en]

    Eye-tracking has become an invaluable tool for the analysis of working practices in many technological fields of activity. Typically studies focus on short tasks and use static expected areas of interest (AoI) in the display to explore subjects’ behaviour, making the analyst’s task quite straightforward. In long-duration studies, where the observations may last several hours over a complete work session, the AoIs may change over time in response to altering workload, emergencies or other variables making the analysis more difficult. This work puts forward a novel method to automatically identify spatial AoIs changing over time through a combination of clustering and cluster merging in the temporal domain. A visual analysis system based on the proposed methods is also presented. Finally, we illustrate our approach within the domain of air traffic control, a complex task sensitive to prevailing conditions over long durations, though it is applicable to other domains such as monitoring of complex systems. 
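The two-stage pipeline this abstract describes, clustering within time windows and then merging clusters in the temporal domain, can be sketched in miniature. This toy version is an assumption: the paper's actual method is scalable and hierarchical, while here each window is collapsed to a single centroid and merged greedily by distance.

```python
# Toy sketch: one cluster centre per time window, merged into a running
# AoI whenever the new centre lies within merge_dist of an existing one.

def merge_window_clusters(windows, merge_dist):
    """windows: list of per-window gaze point lists [(x, y), ...].
    Returns a list of (centre, visit_count) AoIs accumulated over time."""
    def centroid(points):
        n = len(points)
        return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

    aois = []
    for points in windows:
        c = centroid(points)
        for i, (centre, count) in enumerate(aois):
            dist = ((c[0] - centre[0]) ** 2 + (c[1] - centre[1]) ** 2) ** 0.5
            if dist <= merge_dist:
                # Merge: running mean of the centre, weighted by visit count.
                m = count + 1
                aois[i] = (((centre[0] * count + c[0]) / m,
                            (centre[1] * count + c[1]) / m), m)
                break
        else:
            aois.append((c, 1))
    return aois

# Three time windows: the first two revisit the same screen region, the
# third looks elsewhere, so two AoIs come out.
windows = [[(0, 0), (1, 1)], [(0.6, 0.4)], [(10, 10)]]
print(merge_window_clusters(windows, merge_dist=1.0))
```

The visit counts are what make the output temporal: an AoI that keeps re-appearing across windows accumulates weight, while a one-off glance stays a singleton.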

    Place, publisher, year, edition, pages
    Institute of Electrical and Electronics Engineers (IEEE), 2019
    Keywords
    Eye-tracking data, areas of interest, clustering, minimum spanning tree, temporal data, spatio-temporal data
    National Category
    Computer Systems
    Identifiers
    urn:nbn:se:liu:diva-152714 (URN), 10.1109/TVCG.2018.2865042 (DOI), 000452640000009 (), 30183636 (PubMedID), 2-s2.0-85052788669 (Scopus ID)
    Note

    Funding agencies: Swedish Research Council [2013-4939]; RESKILL project - Swedish Transport Administration; Swedish Maritime Administration; Swedish Air Navigation Service Provider LFV

    Available from: 2018-11-16. Created: 2018-11-16. Last updated: 2019-11-25. Bibliographically approved.
    4. Analysis of Long Duration Eye-Tracking Experiments in a Remote Tower Environment
    2019 (English). In: 13th USA/Europe Air Traffic Management Research and Development Seminar 2019: Proceedings of a meeting held 17-21 June 2019, Vienna, Austria. EUROCONTROL, 2019. Conference paper, Published paper (Refereed)
    Abstract [en]

    Eye-tracking experiments have proven to be of great assistance in understanding human-computer interaction across many fields. Most eye-tracking experiments are non-intrusive and so do not affect the behaviour of the subject. Such experiments usually last for just a few minutes, and so the spatio-temporal data generated by the eye-tracker is quite easy to analyze using simple visualization techniques such as heat maps and animation. Eye tracking experiments in air traffic control, maritime or driving simulators can, however, last for several hours, and the analysis of such long duration data becomes much more complex. We have developed an analysis pipeline in which we identify visual spatial areas of attention over a user interface using clustering and hierarchical cluster merging techniques. We have tested this technique on eye tracking datasets generated by air traffic controllers working with Swedish air navigation services, where each eye tracking experiment lasted for ∼90 minutes. We found that our method is interactive and effective in identifying interesting patterns of visual attention that would have been very difficult to locate using manual analysis.

    Place, publisher, year, edition, pages
    EUROCONTROL, 2019
    Keywords
    Remote tower, Eye tracking, Spatio-temporal clustering
    National Category
    Media Engineering
    Identifiers
    urn:nbn:se:liu:diva-160959 (URN), 2-s2.0-85084023193 (Scopus ID), 9781510893504 (ISBN)
    Conference
    Thirteenth USA/Europe Air Traffic Management Research and Development Seminar (ATM2019), Vienna, Austria, June 17-21, 2019
    Funder
    Swedish Transport Administration, Swedish Research Council
    Available from: 2019-10-16. Created: 2019-10-16. Last updated: 2021-09-29. Bibliographically approved.
    5. Comparison of Attention Behaviour Across User Sets through Automatic Identification of Common Areas of Interest
    2019 (English). In: Hawaii International Conference on System Sciences 2020, 2019. Conference paper, Published paper (Refereed)
    Abstract [en]

    Eye tracking is used to analyze and compare user behaviour within numerous domains, but long duration eye tracking experiments across multiple users generate millions of eye gaze samples, making the data analysis process complex. Usually the samples are labelled into Areas of Interest (AoI) or Objects of Interest (OoI), where the AoI approach aims to understand how a user monitors different regions of a scene while OoI identification uncovers distinct objects in the scene that attract user attention. Using scalable clustering and cluster merging techniques that require minimal user input, we label AoIs across multiple users in long duration eye tracking experiments. Using the common AoI labels then allows direct comparison of the users as well as the use of such methods as Hidden Markov Models and Sequence mining to uncover common and distinct behaviour between the users which, until now, has been prohibitively difficult to achieve.
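Once every user's gaze is labelled with a shared AoI alphabet, as this abstract proposes, even very simple sequence statistics permit direct cross-user comparison. The sketch below uses n-gram (transition) counts rather than the Hidden Markov Models and sequence mining the paper points to; the AoI labels ("radar", "strip", "tower") are invented for illustration.

```python
from collections import Counter

# Compare two users' AoI visit sequences via their transition counts.

def bigram_profile(aoi_sequence):
    """Count AoI-to-AoI transitions in one user's labelled sequence."""
    return Counter(zip(aoi_sequence, aoi_sequence[1:]))

def shared_transitions(seq_a, seq_b):
    """Transitions both users make; Counter intersection keeps the
    smaller of the two counts for each shared transition."""
    return bigram_profile(seq_a) & bigram_profile(seq_b)

user1 = ["radar", "strip", "radar", "tower", "radar"]
user2 = ["radar", "strip", "radar", "strip"]
print(shared_transitions(user1, user2))
```

Transitions that appear in one profile but not the other (here, user1's visits to "tower") are exactly the distinct behaviour the abstract says becomes visible once the AoI labels are common across users.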

    Series
    Proceedings of the Annual Hawaii International Conference on System Sciences (HICSS), ISSN 1530-1605, E-ISSN 2572-6862
    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-161999 (URN), 10.24251/HICSS.2020.167 (DOI), 978-0-9981331-3-3 (ISBN)
    Conference
    Hawaii International Conference on System Sciences
    Available from: 2019-11-15. Created: 2019-11-15. Last updated: 2021-09-30.
  • 102.
    Muthumanickam, Prithiviraj
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Forsell, Camilla
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Vrotsou, Katerina
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Johansson, Jimmy
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Cooper, Matthew
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Supporting Exploration of Eye Tracking Data: Identifying Changing Behaviour Over Long Durations, 2016. In: Beyond Time and Errors: Novel Evaluation Methods for Visualization (BELIV 2016), Association for Computing Machinery, 2016, p. 70-77. Conference paper (Refereed)
    Abstract [en]

    Visual analytics of eye tracking data is a common tool for evaluation studies across diverse fields. In this position paper we propose a novel user-driven interactive data exploration tool for understanding the characteristics of eye gaze movements and the changes in these behaviours over time. Eye tracking experiments generate multidimensional scan path data with sequential information. Many mathematical methods in the past have analysed one or a few of the attributes of the scan path data and derived attributes such as Areas of Interest (AoI), statistical measures, geometry, domain-specific features, etc. In our work we are interested in visual analytics of one of the derived attributes of the sequential data: the AoI and the sequences of visits to these AoIs over time. In the case of static stimuli, such as images, or dynamic stimuli, like videos, having predefined or fixed AoIs is not an efficient way of analysing scan path patterns. The AoI of a user over a stimulus may evolve over time, and hence determining the AoIs dynamically through temporal clustering could be a better method for analysing the eye gaze patterns. In this work we primarily focus on the challenges in analysis and visualization of the temporal evolution of AoIs. This paper discusses the existing methods, their shortcomings, and the scope for improvement by adopting visual analytics methods for event-based temporal data to the analysis of eye tracking data.

  • 103.
    Muthumanickam, Prithiviraj
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Nordman, Aida
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Meyer, Lothar
    LFV, Sweden.
    Boonsong, Supathida
    LFV, Sweden.
    Lundberg, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Cooper, Matthew
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Analysis of Long Duration Eye-Tracking Experiments in a Remote Tower Environment, 2019. In: 13th USA/Europe Air Traffic Management Research and Development Seminar 2019: Proceedings of a meeting held 17-21 June 2019, Vienna, Austria. EUROCONTROL, 2019. Conference paper (Refereed)
    Abstract [en]

    Eye-tracking experiments have proven to be of great assistance in understanding human-computer interaction across many fields. Most eye-tracking experiments are non-intrusive and so do not affect the behaviour of the subject. Such experiments usually last for just a few minutes, and so the spatio-temporal data generated by the eye-tracker is quite easy to analyze using simple visualization techniques such as heat maps and animation. Eye tracking experiments in air traffic control, maritime or driving simulators can, however, last for several hours, and the analysis of such long duration data becomes much more complex. We have developed an analysis pipeline in which we identify visual spatial areas of attention over a user interface using clustering and hierarchical cluster merging techniques. We have tested this technique on eye tracking datasets generated by air traffic controllers working with Swedish air navigation services, where each eye tracking experiment lasted for ∼90 minutes. We found that our method is interactive and effective in identifying interesting patterns of visual attention that would have been very difficult to locate using manual analysis.

  • 104.
    Namedanian, Mahziar
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Coppel, Ludovic
    Neuman, Magnus
    Gooran, Sasan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Edström, Per
    Kolseth, Petter
    Analysis of Optical and Physical Dot Gain by Microscale Image Histogram and Modulation Transfer Functions, 2013. In: Journal of Imaging Science and Technology, ISSN 1062-3701, E-ISSN 1943-3522, Vol. 57, no. 2, p. 20504-1-20504-5. Article in journal (Refereed)
    Abstract [en]

    The color of a print is affected by ink spreading and lateral light scattering in the substrate, making printed dots appear larger. Characterization of physical and optical dot gain is crucial for the graphic arts and paper industries. We propose a novel approach to separate physical from optical dot gain by use of a high-resolution camera. This approach is based on the histogram of microscale images captured by the camera. Having determined the actual physical dot shape, we estimate the modulation transfer function (MTF) of the paper substrate. The proposed method is validated by comparing the estimated MTF of 11 offset printed coated papers to the MTF obtained from the unprinted papers using measured and Monte Carlo simulated edge responses.
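The core of the histogram idea above can be shown with a toy example. This is an assumption-laden sketch, not the authors' calibrated procedure: in a microscale image of a halftone patch, ink and paper pixels form two histogram modes, so a threshold between the modes yields the physical dot coverage before any optical dot gain. The pixel values and threshold below are invented.

```python
# Toy separation of physical dot area from a microscale image histogram.

def physical_coverage(pixels, threshold):
    """Fraction of pixels darker than threshold = physical dot area."""
    ink = sum(1 for p in pixels if p < threshold)
    return ink / len(pixels)

# Toy 'image': dark ink pixels near 30, bright paper pixels near 220.
patch = [28, 31, 35, 30, 215, 222, 218, 225, 220, 221]
print(physical_coverage(patch, 128))  # 4 of 10 pixels are ink -> 0.4
```

The gap between this physically measured coverage and the (larger) coverage implied by the patch's average reflectance is then attributable to optical dot gain, which is what the paper's MTF estimation characterizes.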

  • 105.
    Namedanian, Mahziar
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Gooran, Sasan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Optical Dot Gain Study on Different Halftone Dot Shapes, 2013. Conference paper (Refereed)
  • 106.
    Nguyen, Hoai-Nam
    et al.
    Inria Ctr Rech Rennes Bretagne Atlantique, France.
    Miandji, Ehsan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Inria Rennes Bretagne Atlantique, France.
    Guillemot, Christine
    Inria Ctr Rech Rennes Bretagne Atlantique, France.
    Multi-Mask Camera Model for Compressed Acquisition of Light Fields, 2021. In: IEEE Transactions on Computational Imaging, ISSN 2573-0436, E-ISSN 2333-9403, Vol. 7, p. 191-208. Article in journal (Refereed)
    Abstract [en]

    We present an all-in-one camera model that encompasses the architectures of most existing compressive-sensing light-field cameras, equipped with a single lens and multiple amplitude-coded masks that can be placed at different positions between the lens and the sensor. The proposed model, named the equivalent multi-mask camera (EMMC) model, enables the comparison between different camera designs, e.g. using monochrome or CFA-based sensors, single or multiple acquisitions, or varying pixel sizes, via a simple adaptation of the sampling operator. In particular, in the case of a camera equipped with a CFA-based sensor and a coded mask, this model allows us to jointly perform color demosaicing and light field spatio-angular reconstruction. In the case of variable pixel size, it allows us to perform spatial super-resolution in addition to angular reconstruction. While the EMMC model is generic and can be used with any reconstruction algorithm, we validate the proposed model with a dictionary-based reconstruction algorithm and a regularization-based reconstruction algorithm using a 4D Total-Variation-based regularizer for light field data. Experimental results with different reconstruction algorithms show that the proposed model can flexibly adapt to various sensing schemes. They also show the advantage of using an in-built CFA sensor with respect to the monochrome sensors classically used in the literature.
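The notion of a mask-based sampling operator can be illustrated loosely. The EMMC model itself is far more general than this; the sketch below only shows the basic principle that a coded mask modulates each angular view before the sensor sums them into one coded measurement per pixel. All names, shapes and values here are invented for illustration.

```python
# Loose sketch of coded-mask sampling: sum over angular views of
# (mask * view), giving one coded sensor image.

def sense(views, masks):
    """views: per-angle images as nested lists; masks: one mask per view.
    Returns the coded sensor image, the mask-weighted sum over angles."""
    h, w = len(views[0]), len(views[0][0])
    out = [[0.0] * w for _ in range(h)]
    for view, mask in zip(views, masks):
        for y in range(h):
            for x in range(w):
                out[y][x] += mask[y][x] * view[y][x]
    return out

# Two 2x2 angular views, each selected by a complementary binary mask.
views = [[[1, 2], [3, 4]], [[10, 20], [30, 40]]]
masks = [[[1, 0], [1, 0]], [[0, 1], [0, 1]]]
print(sense(views, masks))  # -> [[1.0, 20.0], [3.0, 40.0]]
```

Reconstruction then amounts to inverting this linear operator under a sparsity or total-variation prior, which is where the paper's dictionary-based and regularization-based algorithms come in.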

  • 107.
    Nguyen, Phong Hai
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Statistical flow data applied to visual analytics, 2011. Independent thesis, Advanced level (degree of Master (Two Years)), 30 credits / 45 HE credits. Student thesis
    Abstract [en]

    Statistical flow data such as commuting, migration, trade and money flows has gained much interest from policy makers, city planners, researchers and ordinary citizens alike. Numerous statistical data visualisations have appeared; however, there is a shortage of applications for visualising flow data. Moreover, among these rare applications, some are standalone and only for expert usage, some do not support interactive functionality, and some can only provide an overview of the data. Therefore, in this thesis, I develop a web-enabled, highly interactive, analysis-supporting statistical flow data visualisation application that addresses all of those challenges. My application is implemented based on GAV Flash, a powerful interactive visualisation component framework, so it is inherently web-enabled with basic interactive features. The application uses a visual analytics approach that combines data analysis and interactive visualisation to solve the cluttering issue, the problem of overlapping flows on the display. A variety of analysis means are provided to analyse flow data efficiently, including analysing both flow directions simultaneously, visualising time-series flow data, finding the most attractive regions and figuring out the reasons behind derived patterns. The application also supports sharing knowledge between colleagues by providing a story-telling mechanism which allows users to create and share their findings as a visualisation story. Last but not least, the application enables users to embed the visualisation based on the story into an ordinary web page, so the public stand a good chance of gaining insight into official statistical flow data.

  • 108.
    Ohlsson, Tobias
    et al.
    Linköping University, Department of Science and Technology. Linköping University, The Institute of Technology.
    Carnstam, Albin
    Linköping University, Department of Science and Technology. Linköping University, The Institute of Technology.
    A business intelligence application for interactive budget processes, 2012. Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Today budgeting occurs in all types of organizations, from authorities and municipalities, to private companies and non-profit associations. Depending on whether the organization is large or small it can look very different. In large organizations the budget can be such a comprehensive document that it is difficult to keep track of it. Furthermore, in large organizations, the budget work starts very early. Thus, an effective budget process could reduce resources, time and ultimately costs.

    This master’s thesis report describes a budget application built with the Business Intelligence software QlikView. With the application a budgeter can load desired budget data and through a QlikView Extension Object edit the loaded data and finally follow up the work of different budgets. The Extension Object has been implemented using JavaScript and HTML to create a GUI. The edited data is sent to a back-end interface built with one web server and one database server.

    To evaluate the usability of the Extension Object’s GUI and determine how the budget application works and to get feedback on the Extension Object and its functionality, a user study was performed. The result of the user study shows that the application simplifies budget processes and has great potential to help budgeters and controllers to increase their effectiveness.

  • 109.
    Pranovich, Alina
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Modelling appearance printing: Acquisition and digital reproduction of translucent and goniochromatic materials, 2024. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Colour perception is fundamental to our everyday experiences, allowing us to communicate and interpret visual information effectively. Yet, replicating these experiences accurately poses a significant challenge, particularly in the context of full-colour 3D printing. Advances in this field have revolutionised the fabrication of customised prosthetic body parts, such as eyes, teeth, and skin features, with profound implications for medical and aesthetic applications.

    The key to successful 3D printing lies in the digital preview of objects before fabrication, enabling users to assess colour reproduction and quality. However, accurately representing colour in a digital environment is complex, as it depends on numerous factors, including illumination, object shape, surface properties, scene context, and observer characteristics. Traditional methods of previewing conventional 2D prints overlook this complexity.

    This thesis addresses this challenge by focusing on two types of materials: semitransparent polymers commonly used in 3D printing, and goniochromatic colorants employed in printing to introduce unique effects unattainable with conventional inks for 2D printing. For semitransparent materials, we developed an empirical function to represent colour based on sample thickness, enabling efficient digital representation. Additionally, we adapted a colour measuring device to identify two key material parameters, absorption and scattering coefficients, essential for accurate colour reproduction.

    Goniochromatic materials, such as thin-film-coated mica particles, are slightly more complicated and less predictable in terms of their final colour appearance. Although not yet used in 3D printing, these particles, when used in conventional printing, introduce colour variation as the print is rotated. We found that goniochromatic properties can be expressed with an empirically found function after collecting the angle-dependent light-reflecting properties of the sample. We used this function and showed how prints with goniochromatic materials can be efficiently previewed on a computer monitor.

    List of papers
    1. Surface Discretisation Effects on 3D Printed Surface Appearance
    2020 (English). In: CEUR-WS.org, 2020. Conference paper, Published paper (Refereed)
    Abstract [en]

    The spatial resolution of 3D printing is finite. The necessary discretisation of an object before printing produces a step-like surface structure that influences the appearance of the printed objects. To study the effect of this discretisation on specular reflections, we print surfaces at various oblique angles. This enables us to observe the step-like structure and its influence on reflected light. Based on the step-like surface structure, we develop a reflectance model describing the redistribution of the light scattered by the surface, and we study dispersion effects due to the wavelength dependency of the refractive index of the material. We include preliminary verification by comparing model predictions to photographs for different angles of observation.

    National Category
    Media Engineering
    Identifiers
    urn:nbn:se:liu:diva-173601 (URN)
    Conference
    Colour and Visual Computing Symposium 2020 (CVCS 2020), Gjøvik, Norway, 16-17 September, 2020
    Funder
    EU, Horizon 2020, 814158
    Available from: 2021-02-25. Created: 2021-02-25. Last updated: 2024-04-05. Bibliographically approved
    2. Optical properties and appearance of fused deposition modelling filaments
    2021 (English). In: Advances in Printing and Media Technology - Printing in the Digital Era: Proceedings of the 47th International Research Conference of iarigai, International Association of Research Organizations for the Information, Media and Graphic Arts Industries, 2021, Vol. 47, p. 134-140. Conference paper, Published paper (Refereed)
    Abstract [en]

    The appearance of 3D-printed objects is affected by numerous parameters. Specifically, the colour of each point on the surface is affected not only by the applied material, but also by the neighbouring segments as well as by the structure underneath it. Translucency of the 3D printing inks is the key property needed for reproduction of surfaces resembling natural materials. However, the prediction of colour appearance of translucent materials within the print is a complex task that is of great interest. In this work, a method is proposed for studying the appearance of translucent 3D materials in terms of the surface colour. It is shown how the thickness of the printed flat samples as well as the background underneath affect the colour. By studying diffuse reflectance and transmittance of layers of different thicknesses, apparent, spectral optical properties were obtained, i.e., extinction and scattering coefficients, in the case of commercially available polylactic acid (PLA) filaments for Fused Deposition Modelling (FDM) printers. The coefficients were obtained by fitting a simplistic model to the measured diffuse reflectance as a function of layer thickness. The results were verified by reconstructing reflected spectra with the obtained parameters and comparing the estimated colour to spectrophotometer measurements. The resulting colour differences in terms of the CIEDE2000 standard are all below 2.
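The fitting procedure described in the abstract can be sketched as follows. This is a minimal single-band illustration assuming a simple exponential saturation model for reflectance versus layer thickness; the paper's actual model, band count, and parameter values are not reproduced here, and `r_inf`, `r_bg`, and `alpha` are hypothetical names.

```python
import math

def model_reflectance(d, r_inf, r_bg, alpha):
    # Toy single-band model: reflectance relaxes from the background
    # value r_bg toward the bulk value r_inf as layer thickness d grows.
    return r_inf + (r_bg - r_inf) * math.exp(-2.0 * alpha * d)

def fit_alpha(thicknesses, measured, r_inf, r_bg):
    # Brute-force 1-D least-squares search for the attenuation
    # coefficient; a real implementation would use a proper optimizer.
    best_alpha, best_err = 0.0, float("inf")
    for i in range(1, 1000):
        a = i * 0.01
        err = sum((model_reflectance(d, r_inf, r_bg, a) - m) ** 2
                  for d, m in zip(thicknesses, measured))
        if err < best_err:
            best_alpha, best_err = a, err
    return best_alpha

# Synthetic check: recover the coefficient used to generate the data.
ds = [0.1, 0.2, 0.4, 0.8, 1.6]
rs = [model_reflectance(d, 0.6, 0.1, 1.5) for d in ds]
alpha_fit = fit_alpha(ds, rs, 0.6, 0.1)
```

Repeating such a fit per wavelength band yields the spectral coefficients that the abstract verifies against spectrophotometer measurements.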

    Place, publisher, year, edition, pages
    International Association of Research Organizations for the Information, Media and Graphic Arts Industries, 2021
    Series
    Advances in Printing and Media Technology, ISSN 2409-4021
    Keywords
    3D printing, appearance, optical properties, PLA filaments, translucency
    National Category
    Textile, Rubber and Polymeric Materials
    Identifiers
    urn:nbn:se:liu:diva-181549 (URN); 10.14622/Advances_47_2021 (DOI); 978-3-948039-02-8 (ISBN)
    Conference
    47th IARIGAI International Conference on “Advances in Print and Media Technology”, 19-24 September 2021
    Available from: 2021-12-01. Created: 2021-12-01. Last updated: 2024-04-05. Bibliographically approved
    3. Angular dependent reflectance spectroscopy of RGBW pigments
    2022 (English). Conference paper, Oral presentation with published abstract (Other academic)
    Abstract [en]

    Traditional printing relies primarily on subtractive color mixing techniques. In this case, optical color mixing is achieved by one of the established halftoning methods that use Cyan, Magenta, Yellow and Black (CMYK) primaries on a reflective white substrate. The reason behind the subtractive color mixing in printing is the high absorbance of available pigments used in inks. A new type of mica-based pigments that exhibit high reflectivity at Red, Green, Blue and White (RGBW) spectral bands was recently introduced by Merck (Spectraval™). Printing with RGBW primaries on a black background allows additive color mixing in prints. While offering excellent color depth, the reflected spectra of such pigments vary with the angles of incidence and observation. As a result, new approaches in modelling the appearance of prints, as well as strategies for color separation and halftoning, are needed. The prior optical characterization of the reflective inks is an essential first step. For this purpose, we have used Spectraval™ pigments to prepare acrylic-based inks, which we applied on glass slides by screen printing. In this work, we measured the relative spectral bidirectional reflection distribution of Red, Green, Blue and White reflective inks. The measurements were conducted on an experimental setup consisting of a goniometer, spectrometer, and a xenon light source. Based on the measurements, we simulate the reflectance spectra under diffuse illumination and demonstrate ratios of red, green, and blue spectral components for different observation angles of individual inks and their combinations.

    Keywords
    RGB printing, BRDF, spectroscopy, special effect inks
    National Category
    Media Engineering
    Identifiers
    urn:nbn:se:liu:diva-189566 (URN)
    Conference
    48th Iarigai conference, Greenville SC, USA, Sept. 19-21 2022
    Available from: 2022-10-26. Created: 2022-10-26. Last updated: 2024-04-05. Bibliographically approved
    4. Dot Off Dot Screen Printing with RGBW Reflective Inks
    2023 (English). In: Journal of Imaging Science and Technology, ISSN 1062-3701, E-ISSN 1943-3522, Vol. 67, no 3, article id 030404. Article in journal (Refereed). Published
    Abstract [en]

    Recent advances in pigment production have made it possible to print with RGBW primaries instead of CMYK and to perform additive color mixing in printing. The RGBW pigments studied in this work have the properties of structural colors, as the primary colors are a result of interference in a thin film coating of mica pigments. In this work, we investigate the angle-dependent gamut of RGBW primaries. We have elucidated optimal angles of illumination and observation for each primary ink and found the optimal angle of observation under diffuse illumination. We investigated dot off dot halftoned screen printing with RGBW inks on black paper in terms of angle-dependent dot gain. Based on our observations, the optimal viewing condition for the given RGBW inks is in a direction of around 30° to the surface normal. Here, the appearance of the resulting halftoned prints can be estimated well by the Neugebauer formula (weighted averaging of the individual reflected spectra). Despite the negative physical dot gain during the dot off dot printing, we observe angularly dependent positive optical dot gain for halftoned prints. Application of interference RGBW pigments in 2.5D and 3D printing is not fully explored due to the technological limitations. In this work, we provide colorimetric data for efficient application of the angle-dependent properties of such pigments in practical applications.
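The Neugebauer estimation mentioned in the abstract (weighted averaging of the individual reflected spectra) can be sketched as follows; the spectra and coverages below are toy values, not measurements from the paper.

```python
def neugebauer_mix(coverages, spectra):
    # Spectral Neugebauer model: the predicted halftone reflectance is
    # the area-coverage-weighted average of the primaries' spectra.
    assert abs(sum(coverages) - 1.0) < 1e-9, "coverages must sum to 1"
    n_bands = len(spectra[0])
    return [sum(a * s[k] for a, s in zip(coverages, spectra))
            for k in range(n_bands)]

# Hypothetical dot-off-dot example: 40% ink next to 60% bare black paper.
ink = [0.60, 0.20, 0.10]          # toy 3-band reflectance of one ink
paper = [0.05, 0.05, 0.05]        # toy reflectance of black paper
mix = neugebauer_mix([0.4, 0.6], [ink, paper])
```

Dot gain then enters as a correction to the nominal coverages before the weighted average is taken.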

    Place, publisher, year, edition, pages
    The Society for Imaging Science and Technology, 2023
    National Category
    Media Engineering
    Identifiers
    urn:nbn:se:liu:diva-198934 (URN); 10.2352/J.ImagingSci.Technol.2023.67.3.030404 (DOI); 001080972400007 (); 2-s2.0-85164955722 (Scopus ID)
    Note

    Funding agencies: Research Institute of Sweden

    Available from: 2023-11-03. Created: 2023-11-03. Last updated: 2024-04-05. Bibliographically approved
    5. Printing with tonalli: Reproducing Featherwork from Precolonial Mexico Using Structural Colorants
    2023 (English). In: Colorants, ISSN 2079-6447, Vol. 2, no 4, p. 632-653. Article in journal (Refereed). Published
    Abstract [en]

    Two of the most significant cases of extant 16th-century featherwork from Mexico are the so-called Moctezuma’s headdress and the Ahuizotl shield. While the feathers used in these artworks exhibit lightfast colors, their assembly comprises mainly organic materials, which makes them extremely fragile. Printed media, including books, catalogs, educational materials, and fine copies, offer an accessible means for audiences to document and disseminate visual aspects of delicate cultural artifacts without risking their integrity. Nevertheless, the singular brightness and iridescent colors of feathers are difficult to communicate to the viewer in printed reproductions when traditional pigments are used. This research explores the use of effect pigments (multilayered reflective structures) and improved halftoning techniques for additive printing, with the objective of enhancing the reproduction of featherwork by capturing its changing color and improving texture representation via a screen printing process. The reproduced images of featherwork exhibit significant perceptual resemblances to the originals, primarily owing to the shared presence of structural coloration. We applied structure-aware halftoning to better represent the textural qualities of feathers without compromising the performance of effect pigments in the screen printing method. Our prints show angle-dependent color, although their gamut is reduced. The novelty of this work lies in the refinement of techniques for printing full-color images by additive printing, which can enhance the 2D representation of the appearance of culturally significant artifacts.

    Keywords
    Featherwork, Iridescence, Structural color, Screen printing, Effect pigments
    National Category
    Media Engineering
    Identifiers
    urn:nbn:se:liu:diva-202037 (URN); 10.3390/colorants2040033 (DOI)
    Available from: 2024-04-05. Created: 2024-04-05. Last updated: 2024-04-05. Bibliographically approved
    6. Iridescence Mimicking in Fabrics: A Ultraviolet/Visible Spectroscopy Study
    2024 (English). In: Biomimetics, ISSN 2313-7673, Vol. 9, no 2, article id 71. Article in journal (Refereed). Published
    Abstract [en]

    Poly(styrene-methyl methacrylate-acrylic acid) photonic crystals (PCs), with five different sizes (170, 190, 210, 230 and 250 nm), were applied onto three plain fabrics, namely polyamide, polyester and cotton. The PC-coated fabrics were analyzed using scanning electron microscopy and two UV/Vis reflectance spectrophotometric techniques (integrating sphere and scatterometry) to evaluate the PCs' self-assembly along with the obtained spectral and color characteristics. Results showed that the surface roughness of the fabrics had a major influence on the color produced by PCs. Polyamide-coated fabrics were the only samples having an iridescent effect, producing more vivid and brilliant colors than the polyester and cotton samples. It was observed that as the angle of incident light increases, a hypsochromic shift in the reflection peak occurs along with the formation of new reflection peaks. Furthermore, color behavior simulations were performed with an illuminant A light source on polyamide samples. The illuminant A simulation showed greener and yellower structural colors than those illuminated with D50. The polyester and cotton samples were analyzed using scatterometry to check for iridescence, which was unseen upon ocular inspection and then proven to be present in these samples. This work allowed a better comprehension of how structural colors and their iridescence are affected by the textile substrate morphology and fiber type.

    Place, publisher, year, edition, pages
    MDPI, 2024
    Keywords
    photonic crystals; structural coloration; iridescent effect; textiles; UV/Vis reflectance; IP-BRDF
    National Category
    Materials Chemistry
    Identifiers
    urn:nbn:se:liu:diva-201479 (URN); 10.3390/biomimetics9020071 (DOI); 001170222500001 (); 38392117 (PubMedID)
    Note

    Funding agencies: FEDER [UID/CTM/00264/2020]; National Funds through Fundação para a Ciência e Tecnologia (FCT) [2022-03370]; Swedish Research Council [P2021-00040]; Swedish Energy Agency [.9220423]; Swedish Armed Forces research program AT [SFRH/BD/145269/2019]; FCT, MCTES, FSE

    Available from: 2024-03-12 Created: 2024-03-12 Last updated: 2024-04-05
    7. Empirical BRDF model for goniochromatic materials and soft proofing with reflective inks.
    2024 (English). In: IEEE Computer Graphics and Applications, ISSN 0272-1716, E-ISSN 1558-1756. Article in journal (Refereed). Epub ahead of print
    Abstract [en]

    The commonly used analytic bidirectional reflectance distribution functions (BRDFs) do not model goniochromatism, that is, angle-dependent material color. The material color is usually defined by a diffuse reflectance spectrum or RGB vector and a specular part based on a spectral complex index of refraction. Extension of the commonly used BRDFs based on wave theory can help model goniochromatism, but this comes at the cost of significant added model complexity. We measured the goniochromatism of structural color pigments used for additive color printing and found that we can fit the observed spectral angular dependence of the bidirectional reflectance using a simple modification of the standard microfacet BRDF model. All we need to describe the goniochromatism is an empirically-based spectral parameter, which we use in our model together with a specular reflectance spectrum instead of the spectral complex index of refraction. We demonstrate the ability of our model to fit the measured reflectance of red, green, and blue commercial structural color pigments. Our BRDF model enables straightforward implementation of a shader for interactive preview of 3D objects with printed spatially and angularly varying texture.
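As a rough illustration of the kind of model the abstract describes, the sketch below scales a standard GGX microfacet distribution by a specular reflectance spectrum and a per-band empirical factor. The exponential form of `gonio_param` and all numeric values are hypothetical; the paper's actual empirical parameter is not reproduced here.

```python
import math

def ggx_D(cos_h, roughness):
    # GGX (Trowbridge-Reitz) microfacet normal distribution term.
    a2 = roughness ** 2
    denom = cos_h ** 2 * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom ** 2)

def gonio_specular(cos_h, spec_spectrum, gonio_param, roughness=0.3):
    # Toy goniochromatic specular lobe: each wavelength band is the
    # microfacet term times a measured specular reflectance value,
    # modulated by a hypothetical per-band factor that varies with the
    # half angle (cos_h is the cosine of the half angle).
    d = ggx_D(cos_h, roughness)
    return [d * s * math.exp(-g * (1.0 - cos_h))
            for s, g in zip(spec_spectrum, gonio_param)]

# At normal incidence (cos_h = 1) the empirical factor is 1, so the
# lobe reduces to the plain microfacet term times the spectrum; at
# grazing angles the per-band factors reshape the reflected spectrum.
lobe = gonio_specular(1.0, [0.5, 0.4], [2.0, 0.5])
```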

    Keywords
    Printing, Pigments, Color, Ink, Image color analysis, Surface treatment, Optical surface waves
    National Category
    Media Engineering
    Identifiers
    urn:nbn:se:liu:diva-203285 (URN); 10.1109/MCG.2024.3391376 (DOI); 38640045 (PubMedID)
    Available from: 2024-05-06. Created: 2024-05-06. Last updated: 2024-05-06. Bibliographically approved
  • 110.
    Pranovich, Alina
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Frisvad, Jeppe Revall
    Technical University of Denmark, Kongens Lyngby, Denmark.
    Valyukh, Sergiy
    Linköping University, Department of Physics, Chemistry and Biology, Thin Film Physics. Linköping University, Faculty of Science & Engineering.
    Gooran, Sasan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Nyström, Daniel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Empirical BRDF model for goniochromatic materials and soft proofing with reflective inks. 2024. In: IEEE Computer Graphics and Applications, ISSN 0272-1716, E-ISSN 1558-1756. Article in journal (Refereed)
    Abstract [en]

    The commonly used analytic bidirectional reflectance distribution functions (BRDFs) do not model goniochromatism, that is, angle-dependent material color. The material color is usually defined by a diffuse reflectance spectrum or RGB vector and a specular part based on a spectral complex index of refraction. Extension of the commonly used BRDFs based on wave theory can help model goniochromatism, but this comes at the cost of significant added model complexity. We measured the goniochromatism of structural color pigments used for additive color printing and found that we can fit the observed spectral angular dependence of the bidirectional reflectance using a simple modification of the standard microfacet BRDF model. All we need to describe the goniochromatism is an empirically-based spectral parameter, which we use in our model together with a specular reflectance spectrum instead of the spectral complex index of refraction. We demonstrate the ability of our model to fit the measured reflectance of red, green, and blue commercial structural color pigments. Our BRDF model enables straightforward implementation of a shader for interactive preview of 3D objects with printed spatially and angularly varying texture.

  • 111.
    Pranovich, Alina
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Gooran, Sasan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Frisvad, Jeppe Revall
    Technical University of Denmark, Kongens Lyngby, Denmark.
    Nyström, Daniel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Surface Discretisation Effects on 3D Printed Surface Appearance. 2020. In: CEUR-WS.org, 2020. Conference paper (Refereed)
    Abstract [en]

    The spatial resolution of 3D printing is finite. The necessary discretisation of an object before printing produces a step-like surface structure that influences the appearance of the printed objects. To study the effect of this discretisation on specular reflections, we print surfaces at various oblique angles. This enables us to observe the step-like structure and its influence on reflected light. Based on the step-like surface structure, we develop a reflectance model describing the redistribution of the light scattered by the surface, and we study dispersion effects due to the wavelength dependency of the refractive index of the material. We include preliminary verification by comparing model predictions to photographs for different angles of observation.

  • 112.
    Pranovich, Alina
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Trujillo Vazquez, Abigail
    University of the West of England, United Kingdom.
    Nyström, Daniel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Valyukh, Sergiy
    Linköping University, Department of Physics, Chemistry and Biology, Thin Film Physics. Linköping University, Faculty of Science & Engineering.
    Frisvad, Jeppe Revall
    Technical University of Denmark, Denmark.
    Klein, Susanne
    University of the West of England, United Kingdom.
    Parraman, Carinna
    University of the West of England, United Kingdom.
    Angular dependent reflectance spectroscopy of RGBW pigments. 2022. Conference paper (Other academic)
    Abstract [en]

    Traditional printing relies primarily on subtractive color mixing techniques. In this case, optical color mixing is achieved by one of the established halftoning methods that use Cyan, Magenta, Yellow and Black (CMYK) primaries on a reflective white substrate. The reason behind the subtractive color mixing in printing is the high absorbance of available pigments used in inks. A new type of mica-based pigments that exhibit high reflectivity at Red, Green, Blue and White (RGBW) spectral bands was recently introduced by Merck (Spectraval™). Printing with RGBW primaries on a black background allows additive color mixing in prints. While offering excellent color depth, the reflected spectra of such pigments vary with the angles of incidence and observation. As a result, new approaches in modelling the appearance of prints, as well as strategies for color separation and halftoning, are needed. The prior optical characterization of the reflective inks is an essential first step. For this purpose, we have used Spectraval™ pigments to prepare acrylic-based inks, which we applied on glass slides by screen printing. In this work, we measured the relative spectral bidirectional reflection distribution of Red, Green, Blue and White reflective inks. The measurements were conducted on an experimental setup consisting of a goniometer, spectrometer, and a xenon light source. Based on the measurements, we simulate the reflectance spectra under diffuse illumination and demonstrate ratios of red, green, and blue spectral components for different observation angles of individual inks and their combinations.

  • 113.
    Pranovich, Alina
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Valyukh, Sergiy
    Linköping University, Department of Physics, Chemistry and Biology, Thin Film Physics. Linköping University, Faculty of Science & Engineering.
    Gooran, Sasan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Frisvad, Jeppe Revall
    Technical University of Denmark, Denmark.
    Nyström, Daniel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Dot Off Dot Screen Printing with RGBW Reflective Inks. 2023. In: Journal of Imaging Science and Technology, ISSN 1062-3701, E-ISSN 1943-3522, Vol. 67, no 3, article id 030404. Article in journal (Refereed)
    Abstract [en]

    Recent advances in pigment production have made it possible to print with RGBW primaries instead of CMYK and to perform additive color mixing in printing. The RGBW pigments studied in this work have the properties of structural colors, as the primary colors are a result of interference in a thin film coating of mica pigments. In this work, we investigate the angle-dependent gamut of RGBW primaries. We have elucidated optimal angles of illumination and observation for each primary ink and found the optimal angle of observation under diffuse illumination. We investigated dot off dot halftoned screen printing with RGBW inks on black paper in terms of angle-dependent dot gain. Based on our observations, the optimal viewing condition for the given RGBW inks is in a direction of around 30° to the surface normal. Here, the appearance of the resulting halftoned prints can be estimated well by the Neugebauer formula (weighted averaging of the individual reflected spectra). Despite the negative physical dot gain during the dot off dot printing, we observe angularly dependent positive optical dot gain for halftoned prints. Application of interference RGBW pigments in 2.5D and 3D printing is not fully explored due to the technological limitations. In this work, we provide colorimetric data for efficient application of the angle-dependent properties of such pigments in practical applications.

  • 114. Qu, Yuanyuan
    et al.
    Gooran, Sasan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Simple Spectral Color Prediction Model using Multiple Characterization Curves. 2013. Conference paper (Refereed)
  • 115.
    Rundquist, Alfred
    Linköping University, Department of Science and Technology.
    Color halftoning methods for screen printing and special effect pigments: Reproducing iridescent colors. 2023. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Iridescence is the property that makes colors vary with the angle of observation. Technology has made it possible to print with ink that has this anisotropic property. The ink contains microparticles, so only specific printing methods can be used, for example screen printing. This places new demands on the halftones: dots cannot be too small, and keeping structures should be prioritized. To minimize costs and time, prints should be simulated before sending an order. In this thesis, different halftoning methods for screen printing with iridescent ink are developed. Three existing methods are compared to methods developed for this specific purpose. The result is presented as a 2D mask and as an interactive 3D simulation, using data measured from real ink. Properties of iridescence are also analyzed in order to separate it from diffuse colors. An OpenGL simulation tool was developed for simulating halftones on 3D models. The ink reflectance spectra are represented by polynomials that take as input the wavelength and the observation angle. Given an observation angle, a spectrum can be found, which can be converted to RGB and set as the output color in the fragment shader. The program uses an RGB mask, which is the combination of the halftone masks. This is loaded as a texture which indicates what polynomial to use. There is no perfect method that works for all types of images. Images that contain colors similar to the inks benefit from morphological halftoning (enhanced structures) for iridescent areas, or foreground, and hatching (an angle-dependent grid pattern) for diffuse areas, or background. Other images, containing mixed colors such as cyan, yellow or magenta, benefit from a hue-separation error diffusion, where masks are created by thresholding the hue and the mixed colors are error diffused. The actual prints confirmed the hypothesis that tone reproduction can be of lower priority, while structure and depth preservation should be highly prioritized. Iridescent colors can be separated from diffuse colors by thresholding the saturation combined with value, or the lightness of the Lab representation.
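The simulation pipeline described in the abstract (evaluate a per-ink polynomial in wavelength and observation angle, then convert the resulting spectrum to RGB) can be sketched on the CPU as follows. The polynomial coefficients and the crude band-averaging conversion are hypothetical stand-ins; the thesis fits polynomials to measured ink data, and a proper conversion would use CIE colour matching functions.

```python
def reflectance(wavelength_nm, view_deg, coeffs):
    # Toy bivariate polynomial: coeffs maps (i, j) exponent pairs to
    # coefficients for normalized wavelength x and observation angle y.
    x = wavelength_nm / 700.0
    y = view_deg / 90.0
    return sum(c * x ** i * y ** j for (i, j), c in coeffs.items())

def spectrum_to_rgb(spectrum):
    # Crude stand-in for a CIE-based conversion: average thirds of the
    # sampled spectrum (short, mid, long wavelengths) into B, G, R.
    n = len(spectrum) // 3
    b = sum(spectrum[:n]) / n
    g = sum(spectrum[n:2 * n]) / n
    r = sum(spectrum[2 * n:3 * n]) / n
    return (r, g, b)

# Sample a hypothetical ink at one observation angle over six bands.
coeffs = {(0, 0): 0.2, (1, 0): 0.5, (1, 1): -0.2}
spectrum = [reflectance(w, 30.0, coeffs)
            for w in (420, 470, 530, 560, 610, 660)]
rgb = spectrum_to_rgb(spectrum)
```

In the thesis's shader, the same evaluation happens per fragment, with the RGB mask texture selecting which ink's polynomial to use.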

  • 116.
    Rönnberg, Niklas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Sonification supports perception of brightness contrast. 2019. In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738, Vol. 13, no 13, p. 373-381, article id 4. Article in journal (Refereed)
    Abstract [en]

    In complex visual representations, there are several possible challenges for visual perception that might be eased by adding sound as a second modality (i.e. sonification). It was hypothesized that sonification would support visual perception when facing challenges such as simultaneous brightness contrast or the Mach band phenomenon. This hypothesis was investigated with an interactive sonification test, yielding objective measures (accuracy and response time) as well as subjective measures of sonification benefit. In the test, the participant’s task was to mark the vertical pixel line having the highest intensity level. This was done in a condition without sonification and in three conditions where the intensity level was mapped to different musical elements. The results showed that there was a benefit of sonification, with higher accuracy when sonification was used compared to no sonification. This result was also supported by the subjective measurement. The results also showed longer response times when sonification was used. This suggests that the use and processing of the additional information took more time, leading to longer response times but also higher accuracy. There were no differences between the three sonification conditions.

  • 117.
    Rönnberg, Niklas
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Johansson, Jimmy
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Sonification Support for Information Visualization Dense Data Displays. 2016. In: InfoVis Papers 2016, 2016. Conference paper (Refereed)
    Abstract [en]

    This poster presents an experiment designed to evaluate the possible benefits of sonification in information visualization. It is hypothesized, that by using musical sounds for sonification when visualizing complex data, interpretation and comprehension of the visual representation could be increased. In this evaluation of sonification in parallel coordinates and scatter plots, participants had to identify and mark different density areas in the representations. Both quantitative and qualitative results suggest a benefit of sonification. These results indicate that sonification might be useful for data exploration, and give rise to new research questions and challenges.

  • 118.
    Salomonsson, Fredrik
    Linköping University, Department of Science and Technology.
    PIC/FLIP Fluid Simulation Using Block-Optimized Grid Data Structure. 2011. Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    This thesis work examines and presents how to implement a Particle-In-Cell / Fluid-Implicit-Particle (PIC/FLIP) fluid solver that takes advantage of the inherent parallelism of Digital Domain's sparse block-optimized data structure, DB-Grid. The method offers a hybrid approach between particle-based and grid-based simulation.

    The thesis also discusses and goes through different approaches for storing and accessing the data associated with each particle. To dynamically create and remove attributes on the particles, Disney's open-source API Partio is used, which is also used for saving the particles to disk.

    Finally, the thesis shows how to expose C++ classes to Python by wrapping everything into a Python module using the Boost.Python API, and discusses the benefits of having a scripting language.
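The PIC/FLIP hybrid named in the abstract is commonly realized as a per-particle blend of the two velocity updates; a minimal sketch follows (the blend weight and variable names are illustrative, not taken from the thesis):

```python
def pic_flip_blend(v_particle_old, v_grid_old, v_grid_new, alpha=0.95):
    # PIC: take the new grid velocity directly (stable but dissipative).
    # FLIP: add the grid velocity *change* to the particle's own
    # velocity (lively but noisier). alpha = 1 gives pure FLIP.
    pic = v_grid_new
    flip = v_particle_old + (v_grid_new - v_grid_old)
    return alpha * flip + (1.0 - alpha) * pic
```

The same blend is applied per velocity component after the grid solve, which is where a block-optimized grid structure such as DB-Grid can parallelize the interpolation.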

  • 119.
    Samadzadegan, Sepideh
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Automatic and Adaptive Red Eye Detection and Removal: Investigation and Implementation. 2012. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    The redeye artifact is the most prevalent problem in flash photography, especially when using compact cameras with built-in flash, and it bothers both amateur and professional photographers. Hence, removing the affected redeye pixels has become an important skill. This thesis work presents a completely automatic approach to redeye detection and removal, consisting of two modules: detection and correction of the redeye pixels in an individual eye, and detection of the two red eyes in an individual face. The approach combines some of the previous attempts in the area of redeye removal with some minor and major modifications and novel ideas. The detection procedure is based on redness histogram analysis followed by two adaptive methods, a general and a specific approach, in order to find a threshold point. The correction procedure is a four-step algorithm which does not rely solely on the detected redeye pixels; it also applies further pixel checking, such as enlarging the search area and neighborhood checking, to improve the reliability of the whole procedure by reducing the risk of image degradation. The second module is based on a skin-likelihood detection algorithm. A completely novel approach utilizing the Golden Ratio to segment the face area into specific regions is implemented in the second module. The proposed method is applied to more than 40 sample images; considering some requirements and constraints, the achieved results are satisfactory.
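A minimal sketch of the histogram-based detection idea, assuming a simple per-pixel redness measure and a toy adaptive threshold; the thesis's exact redness formula and thresholding rules are not given in the abstract, so both functions here are hypothetical:

```python
def redness(r, g, b):
    # A common redness measure: how much the red channel exceeds the
    # other two (hypothetical; not the thesis's exact formula).
    return max(0, 2 * r - g - b)

def adaptive_threshold(values, bins=16):
    # Toy adaptive rule: build a redness histogram and put the
    # threshold just above its largest peak (assumed background).
    lo, hi = min(values), max(values)
    if hi == lo:
        return hi
    width = (hi - lo) / bins
    hist = [0] * bins
    for v in values:
        k = min(int((v - lo) / width), bins - 1)
        hist[k] += 1
    peak = hist.index(max(hist))
    return lo + (peak + 1) * width

# Most pixels have low redness; the few candidate redeye pixels with
# redness 10 end up above the returned threshold.
thr = adaptive_threshold([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 10, 10])
```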

    Download full text (pdf)
    fulltext
  • 120.
    Samini, Ali
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Perspective Correct Hand-held Augmented Reality for Improved Graphics and Interaction2018Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    With Augmented Reality, also termed AR, a view of the real world is augmented by superimposing computer-generated graphics, thereby enriching or enhancing the perception of reality. Today, many applications benefit from AR in areas such as education, medicine, navigation, construction, and gaming, using primarily head-mounted AR displays and AR on hand-held smart devices. Tablets and phones are highly suitable for AR, as they are equipped with high-resolution screens, good cameras and powerful processing units, while being readily available to both industry and home users. They are used with video see-through AR, where the live view of the world is captured by a camera in real time and subsequently presented together with the computer graphics on the display.

    In this thesis I put forth our recent work on improving video see-through Augmented Reality graphics and interaction for hand-held devices by applying and utilizing the user's perspective. On the rendering side, we introduce a geometry-based user-perspective rendering method aiming to align the on-screen content with the real view of the world visible around the screen. Furthermore, we introduce a device calibration system to compensate for misalignment between system parts. On the interaction side, we introduce two wand-like direct 3D pose manipulation techniques based on this user perspective. We also modified a selection technique and introduced a new one suitable for use with our manipulation techniques. Finally, I present several formal user studies evaluating the introduced techniques and comparing them with current state-of-the-art alternatives.
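    The thesis's own geometry-based method is described in the included papers; as a rough illustration of the "augmented window" idea, the sketch below builds a standard off-axis perspective frustum from the user's eye position and the screen corners. The function name, corner-based parameterization, and OpenGL-style matrix layout are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def off_axis_projection(eye, screen_ll, screen_lr, screen_ul, near=0.01, far=100.0):
    """Off-axis perspective frustum for a screen quad seen from 'eye'.

    screen_ll/lr/ul: lower-left, lower-right, upper-left screen corners
    in world space. Returns a 4x4 OpenGL-style projection matrix.
    """
    vr = screen_lr - screen_ll            # screen right axis
    vu = screen_ul - screen_ll            # screen up axis
    vr /= np.linalg.norm(vr)
    vu /= np.linalg.norm(vu)
    vn = np.cross(vr, vu)                 # screen normal, towards the viewer
    vn /= np.linalg.norm(vn)

    # Vectors from the eye to the screen corners.
    va = screen_ll - eye
    vb = screen_lr - eye
    vc = screen_ul - eye

    d = -np.dot(va, vn)                   # eye-to-screen distance
    l = np.dot(vr, va) * near / d         # frustum extents at the near plane
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d

    P = np.zeros((4, 4))
    P[0, 0] = 2 * near / (r - l)
    P[1, 1] = 2 * near / (t - b)
    P[0, 2] = (r + l) / (r - l)
    P[1, 2] = (t + b) / (t - b)
    P[2, 2] = -(far + near) / (far - near)
    P[2, 3] = -2 * far * near / (far - near)
    P[3, 2] = -1.0
    return P
```

    As the eye moves off the screen's center axis, the frustum becomes asymmetric (non-zero P[0,2], P[1,2]), which is what makes the screen behave like a window rather than a camera viewfinder.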

    List of papers
    1. A perspective geometry approach to user-perspective rendering in hand-held video see-through augmented reality
    Open this publication in new window or tab >>A perspective geometry approach to user-perspective rendering in hand-held video see-through augmented reality
    2014 (English)In: VRST '14 Proceedings of the 20th ACM Symposium on Virtual Reality Software and Technology, SPRINGER-VERLAG BERLIN , 2014, p. 207-208Conference paper, Published paper (Refereed)
    Abstract [en]

    Video see-through Augmented Reality (V-AR) displays a video feed overlaid with information, co-registered with the displayed objects. In this paper we consider the type of V-AR that is based on a hand-held device with a fixed camera. In most V-AR applications the view displayed on the screen is completely determined by the orientation of the camera, i.e., device-perspective rendering: the screen displays what the camera sees. The alternative is to use the relative pose of the user's view and the camera, i.e., user-perspective rendering. In this paper we present an approach to user-perspective V-AR using 3D projective geometry. The view is adjusted to the user's perspective and rendered on the screen, making it an augmented window. We created and tested a running prototype based on our method.

    Place, publisher, year, edition, pages
    SPRINGER-VERLAG BERLIN, 2014
    Keywords
    Augmented Reality; Video see-through; Dynamic frustum; User-perspective
    National Category
    Media Engineering
    Identifiers
    urn:nbn:se:liu:diva-123167 (URN)10.1145/2671015.2671127 (DOI)000364709300012 ()978-1-4503-3253-8 (ISBN)
    Conference
    The ACM Symposium on Virtual Reality Software and Technology (VRST) 2014
    Available from: 2015-12-07 Created: 2015-12-04 Last updated: 2018-05-23Bibliographically approved
    2. Device Registration for 3D Geometry-Based User-Perspective Rendering in Hand-Held Video See-Through Augmented Reality
    Open this publication in new window or tab >>Device Registration for 3D Geometry-Based User-Perspective Rendering in Hand-Held Video See-Through Augmented Reality
    2015 (English)In: AUGMENTED AND VIRTUAL REALITY, AVR 2015, SPRINGER-VERLAG BERLIN , 2015, Vol. 9254, p. 151-167Conference paper, Published paper (Refereed)
    Abstract [en]

    User-perspective rendering in Video See-through Augmented Reality (V-AR) creates a view that always shows what is behind the screen, from the user's point of view. It is used for better registration between the real and virtual world, instead of the traditional device-perspective rendering, which displays what the camera sees. A small number of approaches towards user-perspective rendering exist that overall improve the registration between the real world, the video of the real world displayed on the screen, and the augmentations. There are still some registration errors that cause misalignment in user-perspective rendering. One source of error is the device registration, which, depending on the tracking method used, can be the misalignment between the camera and the screen, or between these and the tracked frame of reference they are attached to. In this paper we first describe a method for user-perspective V-AR based on 3D projective geometry. We then address the device registration problem in user-perspective rendering by presenting two methods: first, for estimating the misalignment between the camera and the screen; second, for estimating the misalignment between the camera and the tracked frame.

    Place, publisher, year, edition, pages
    SPRINGER-VERLAG BERLIN, 2015
    Series
    Lecture Notes in Computer Science, ISSN 0302-9743 (print), 1611-3349 (online) ; 9254
    Keywords
    Augmented Reality; Video see-through; Dynamic frustum; User-perspective
    National Category
    Media Engineering
    Identifiers
    urn:nbn:se:liu:diva-123167 (URN)10.1007/978-3-319-22888-4_12 (DOI)000364709300012 ()978-3-319-22888-4; 978-3-319-22887-7 (ISBN)
    Conference
    2nd International Conference on Augmented and Virtual Reality (SALENTO AVR)
    Available from: 2015-12-07 Created: 2015-12-04 Last updated: 2018-05-23
    3. A User Study on Touch Interaction for User-Perspective Rendering in Hand-Held Video See-Through Augmented Reality
    Open this publication in new window or tab >>A User Study on Touch Interaction for User-Perspective Rendering in Hand-Held Video See-Through Augmented Reality
    2016 (English)In: Augmented Reality, Virtual Reality, and Computer Graphics: Third International Conference, AVR 2016, Lecce, Italy, June 15-18, 2016. Proceedings, Part II / [ed] Lucio Tommaso De Paolis, Antonio Mongelli, Springer, 2016, p. 304-317Conference paper, Published paper (Refereed)
    Abstract [en]

    This paper presents a user study on touch interaction with hand-held Video See-through Augmented Reality (V-AR). In particular, the commonly used Device Perspective Rendering (DPR) is compared with User Perspective Rendering (UPR) with respect to performance as well as user experience and preferences. We present two user study tests designed to mimic tasks that are used in various AR applications.

    Looking for an object and selecting it when it is found is one of the most common tasks in AR software. Our first test compares UPR and DPR in a simple find-and-select task. Manipulating the pose of a virtual object is another commonly used task in AR. The second test focuses on multi-touch interaction for 6-DoF object pose manipulation under UPR and DPR.

    Place, publisher, year, edition, pages
    Springer, 2016
    Series
    Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 9769
    Keywords
    User perspective rendering, Augmented reality, Touch interaction, Video see-through
    National Category
    Human Computer Interaction Interaction Technologies Computer Sciences Media and Communication Technology Computer Systems
    Identifiers
    urn:nbn:se:liu:diva-132956 (URN)10.1007/978-3-319-40651-0_25 (DOI)000389495700025 ()978-3-319-40651-0 (ISBN)978-3-319-40650-3 (ISBN)
    Conference
    Third International Conference on Augmented Reality, Virtual Reality and Computer Graphics (SALENTO AVR 2016), Otranto, Lecce, Italy, June 15-18, 2016
    Available from: 2016-12-05 Created: 2016-12-05 Last updated: 2018-05-23Bibliographically approved
    4. A study on improving close and distant device movement pose manipulation for hand-held augmented reality
    Open this publication in new window or tab >>A study on improving close and distant device movement pose manipulation for hand-held augmented reality
    2016 (English)In: VRST '16 Proceedings of the 22nd ACM Conference on Virtual Reality Software and Technology, ACM Press, 2016, p. 121-128Conference paper, Published paper (Refereed)
    Abstract [en]

    Hand-held smart devices are equipped with powerful processing units, high resolution screens and cameras, that in combination makes them suitable for video see-through Augmented Reality. Many Augmented Reality applications require interaction, such as selection and 3D pose manipulation. One way to perform intuitive, high precision 3D pose manipulation is by direct or indirect mapping of device movement.

    There are two approaches to device movement interaction. One fixes the virtual object to the device, which therefore becomes the pivot point for the object, making it difficult to rotate without translating. The second approach avoids the latter issue by considering rotation and translation separately, relative to the object's center point. The result is that the object instead moves out of view for yaw and pitch rotations.

    In this paper we study these two techniques and compare them with a modification where user perspective rendering is used to solve the rotation issues. The study showed that the modification improves speed as well as both perceived control and intuitiveness among the subjects.

    Place, publisher, year, edition, pages
    ACM Press, 2016
    Keywords
    device interaction, augmented reality, video seethrough, user-perspective, device perspective, user study
    National Category
    Other Engineering and Technologies not elsewhere specified
    Identifiers
    urn:nbn:se:liu:diva-132954 (URN)10.1145/2993369.2993380 (DOI)000391514400018 ()978-1-4503-4491-3 (ISBN)
    Conference
    The 22nd ACM Symposium on Virtual Reality Software and Technology (VRST), Munich, Germany, November 02-04, 2016
    Available from: 2016-12-05 Created: 2016-12-05 Last updated: 2018-05-23Bibliographically approved
    5. Popular Performance Metrics for Evaluation of Interaction in Virtual and Augmented Reality
    Open this publication in new window or tab >>Popular Performance Metrics for Evaluation of Interaction in Virtual and Augmented Reality
    2017 (English)In: 2017 International Conference on Cyberworlds (CW) (2017), IEEE Computer Society, 2017, p. 206-209Conference paper, Published paper (Refereed)
    Abstract [en]

    Augmented and Virtual Reality applications provide environments in which users can immerse themselves in a fully or partially virtual world and interact with virtual objects or user interfaces. User-based, formal evaluation is needed to objectively compare interaction techniques and find their value in different use cases, and user performance metrics are the key to comparing those techniques in a fair and effective manner. In this paper we explore evaluation principles used for, or developed explicitly for, virtual environments, and survey quality metrics based on 15 current, important publications on interaction techniques for virtual environments. We review, categorize and analyze the formal user studies, and establish and present baseline performance metrics used for the evaluation of interaction techniques in VR and AR.

    Place, publisher, year, edition, pages
    IEEE Computer Society, 2017
    National Category
    Other Engineering and Technologies
    Identifiers
    urn:nbn:se:liu:diva-143586 (URN)10.1109/CW.2017.25 (DOI)000454996900035 ()978-1-5386-2089-2 (ISBN)978-1-5386-2090-8 (ISBN)
    Conference
    2017 International Conference on Cyberworlds (CW), Chester, United Kingdom, September 20-22, 2017
    Available from: 2017-12-11 Created: 2017-12-11 Last updated: 2020-07-02
    Download full text (pdf)
    Perspective Correct Hand-held Augmented Reality for Improved Graphics and Interaction
    Download (pdf)
    cover
    Download (pdf)
    Erratum
    Download (png)
    presentation image
  • 121.
    Samini, Ali
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Lundin Palmerius, Karljohan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    A perspective geometry approach to user-perspective rendering in hand-held video see-through augmented reality2014In: VRST '14 Proceedings of the 20th ACM Symposium on Virtual Reality Software and Technology, SPRINGER-VERLAG BERLIN , 2014, p. 207-208Conference paper (Refereed)
    Abstract [en]

    Video see-through Augmented Reality (V-AR) displays a video feed overlaid with information, co-registered with the displayed objects. In this paper we consider the type of V-AR that is based on a hand-held device with a fixed camera. In most V-AR applications the view displayed on the screen is completely determined by the orientation of the camera, i.e., device-perspective rendering: the screen displays what the camera sees. The alternative is to use the relative pose of the user's view and the camera, i.e., user-perspective rendering. In this paper we present an approach to user-perspective V-AR using 3D projective geometry. The view is adjusted to the user's perspective and rendered on the screen, making it an augmented window. We created and tested a running prototype based on our method.

    Download full text (pdf)
    fulltext
  • 122.
    Samini, Ali
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Lundin Palmerius, Karljohan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Device Registration for 3D Geometry-Based User-Perspective Rendering in Hand-Held Video See-Through Augmented Reality2015In: AUGMENTED AND VIRTUAL REALITY, AVR 2015, SPRINGER-VERLAG BERLIN , 2015, Vol. 9254, p. 151-167Conference paper (Refereed)
    Abstract [en]

    User-perspective rendering in Video See-through Augmented Reality (V-AR) creates a view that always shows what is behind the screen, from the user's point of view. It is used for better registration between the real and virtual world, instead of the traditional device-perspective rendering, which displays what the camera sees. A small number of approaches towards user-perspective rendering exist that overall improve the registration between the real world, the video of the real world displayed on the screen, and the augmentations. There are still some registration errors that cause misalignment in user-perspective rendering. One source of error is the device registration, which, depending on the tracking method used, can be the misalignment between the camera and the screen, or between these and the tracked frame of reference they are attached to. In this paper we first describe a method for user-perspective V-AR based on 3D projective geometry. We then address the device registration problem in user-perspective rendering by presenting two methods: first, for estimating the misalignment between the camera and the screen; second, for estimating the misalignment between the camera and the tracked frame.

  • 123.
    Schlemmer, Michael
    et al.
    University of Kaiserslautern, Germany.
    Bertram, Martin Hering
    Wirtschaftsmathematik (ITWM) in Kaiserslautern, Germany..
    Hotz, Ingrid
    Berlin (ZIB), FU Berlin, Germany..
    Garth, Christoph
    University of California, Davis, CA..
    Kollmann, Wolfgang
    University of California, Davis, CA..
    Hamann, Bernd
    University of California, Davis, CA..
    Hagen, Hans
    University of Kaiserslautern.
    Moment Invariants for the Analysis of 2D Flow Fields2007In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 13, no 6, p. 1743-1750Article in journal (Refereed)
    Abstract [en]

    We present a novel approach for analyzing two-dimensional (2D) flow field data based on the idea of invariant moments. Moment invariants have traditionally been used in computer vision applications, and we have adapted them for the purpose of interactive exploration of flow field data. The new class of moment invariants we have developed allows us to extract and visualize 2D flow patterns, invariant under translation, scaling, and rotation. With our approach one can study arbitrary flow patterns by searching a given 2D flow data set for any type of pattern as specified by a user. Further, our approach supports the computation of moments at multiple scales, facilitating fast pattern extraction and recognition. This can be done for critical point classification, but also for patterns with greater complexity. This multi-scale moment representation is also valuable for the comparative visualization of flow field data. The specific novel contributions of the work presented are the mathematical derivation of the new class of moment invariants, their analysis regarding critical point features, the efficient computation of a novel feature space representation, and based upon this the development of a fast pattern recognition algorithm for complex flow structures.
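    The paper's flow-specific invariants are derived in the article itself; as a minimal illustration of the classical computer-vision starting point the authors adapt, the sketch below computes the first two Hu moment invariants of a scalar 2D field, which are unchanged under translation, uniform scaling, and rotation. The function name and grid conventions are assumptions for illustration.

```python
import numpy as np

def hu_first_two(f):
    """First two Hu moment invariants of a non-negative 2D scalar field f.

    Classical image-moment invariants (translation, scale, rotation);
    the paper generalizes this idea to patterns in 2D vector fields.
    """
    h, w = f.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)

    m00 = f.sum()
    xc, yc = (x * f).sum() / m00, (y * f).sum() / m00   # centroid -> translation invariance

    def mu(p, q):                                       # central moments
        return ((x - xc) ** p * (y - yc) ** q * f).sum()

    def eta(p, q):                                      # normalized -> scale invariance
        return mu(p, q) / m00 ** (1 + (p + q) / 2)

    phi1 = eta(2, 0) + eta(0, 2)                        # rotation invariant 1
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2
```

    Matching such invariants of a user-specified template against local neighborhoods is the basic mechanism behind pattern search of the kind the paper performs, there extended to vector-valued flow data and multiple scales.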

  • 124.
    Ståhlbom, Emilia
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Sectra AB, Linkoping, Sweden.
    Molin, Jesper
    Sectra AB, Linkoping, Sweden.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Lundström, Claes
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV). Sectra AB, Linkoping, Sweden.
    The thorny complexities of visualization research for clinical settings: A case study from genomics2023In: FRONTIERS IN BIOINFORMATICS, ISSN 2673-7647, Vol. 3, article id 1112649Article in journal (Refereed)
    Abstract [en]

    In this perspective article we discuss a certain type of research on visualization for bioinformatics data, namely, methods targeting clinical use. We argue that in this subarea additional complex challenges come into play, particularly so in genomics. We here describe four such challenge areas, elicited from a domain characterization effort in clinical genomics. We also list opportunities for visualization research to address clinical challenges in genomics that were uncovered in the case study. The findings are shown to have parallels with experiences from the diagnostic imaging domain.

  • 125.
    Sundén, Erik
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Kottravel, Sathish
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. University of Ulm, Germany.
    Multimodal volume illumination2015In: Computers & graphics, ISSN 0097-8493, E-ISSN 1873-7684, Vol. 50, p. 47-60Article in journal (Refereed)
    Abstract [en]

    Despite the increasing importance of multimodal volumetric data acquisition and the recent progress in advanced volume illumination, interactive multimodal volume illumination remains an open challenge. As a consequence, the perceptual benefits of advanced volume illumination algorithms cannot be exploited when visualizing multimodal data - a scenario where increased data complexity calls for improved spatial comprehension. The two main factors hindering the application of advanced volumetric illumination models to multimodal data sets are rendering complexity and memory consumption. Solving the volume rendering integral while considering multimodal illumination increases the sampling complexity. At the same time, the increased storage requirements of multimodal data sets prevent the use of precomputed results, which advanced volume illumination algorithms often rely on to reduce the amount of per-frame computation. In this paper, we propose an interactive volume rendering approach that supports advanced illumination when visualizing multimodal volumetric data sets. The presented approach has been developed with the goal of simplifying and minimizing per-sample operations while at the same time reducing memory requirements. We show how to exploit illumination-importance metrics to compress and transform multimodal data sets into an illumination-aware representation, which is accessed during rendering through a novel light-space-based volume rendering algorithm. The data transformation and the rendering algorithm are closely intertwined, as compression errors are taken into account during rendering. We describe and analyze the presented approach in detail and apply it to real-world multimodal data sets from biology, medicine, meteorology and engineering.

  • 126.
    Sundén, Erik
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Ropinski, Timo
    University of Ulm, Germany.
    Efficient Volume Illumination with Multiple Light Sources through Selective Light Updates2015In: 2015 IEEE Pacific Visualization Symposium (PacificVis), IEEE , 2015, p. 231-238Conference paper (Refereed)
    Abstract [en]

    Incorporating volumetric illumination into the rendering of volumetric data increases visual realism, which can lead to improved spatial comprehension. It is known that spatial comprehension can be further improved by incorporating multiple light sources. However, many volumetric illumination algorithms have severe drawbacks when dealing with multiple light sources, mainly high performance penalties and memory usage, which are typically tackled with specialized data structures or data undersampling. In contrast, in this paper we present a method that enables volumetric illumination with multiple light sources without requiring precomputation or impacting visual quality. To achieve this goal, we introduce selective light updates, which minimize the required computations when light settings are changed. We discuss and analyze the novel concepts underlying selective light updates and demonstrate them applied to real-world data under different light settings.
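    The core intuition behind selective light updates can be sketched independently of the paper's actual rendering pipeline: keep a cached illumination contribution per light and, when one light's settings change, recompute only that light's contribution rather than the full illumination. The class name and callback interface below are hypothetical simplifications, not the paper's implementation.

```python
import numpy as np

class SelectiveLightCache:
    """Toy sketch: per-light illumination volumes with incremental updates.

    Only lights whose settings changed are recomputed; the running total
    is patched by subtracting the stale contribution and adding the new one.
    """
    def __init__(self, volume_shape):
        self.total = np.zeros(volume_shape)   # summed illumination volume
        self.per_light = {}                   # light_id -> cached contribution

    def update_light(self, light_id, compute_contribution):
        # compute_contribution: callback simulating (expensive) light
        # transport for this single light under its new settings.
        if light_id in self.per_light:
            self.total -= self.per_light[light_id]
        fresh = compute_contribution()
        self.per_light[light_id] = fresh
        self.total += fresh
```

    With N lights, editing one light then costs one contribution computation instead of N, which is the performance argument the abstract makes.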

  • 127.
    Thang, Kent
    et al.
    Linköping University, Department of Computer and Information Science.
    Nyberg, Adam
    Linköping University, Department of Computer and Information Science.
    Impact of fixed-rate fingerprinting defense on cloud gaming experience2023Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Cloud gaming has emerged as a popular solution to meet the increasing hardware demands of modern video games, allowing players with dated or insufficient hardware to access high-quality gaming experiences. However, the growing reliance on cloud services has led to heightened concerns regarding user privacy and the risk of fingerprinting attacks. In this paper, we investigate the effects of varying send rates on cloud gaming QoS and QoE metrics when applying a fixed-rate fingerprinting defense, BuFLO. Findings show that lower send rates impact client-side and host-side applied defenses differently. Based on the results, specific send rates are suggested for maintaining a stable cloud gaming experience. The research offers insights into the trade-offs between security and performance in cloud gaming and provides recommendations for mitigating fingerprinting attacks. Future work may investigate alternative defenses, device types, and connection methods.
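    BuFLO-style fixed-rate defenses hide traffic patterns by sending equal-sized packets at a constant rate, padding with dummy bytes when no application data is queued. The minimal simulation below illustrates that idea only; the constants and function name are illustrative and do not reflect the thesis's experimental setup.

```python
from collections import deque

PACKET_SIZE = 1500          # bytes; every wire packet is padded to this size
SEND_INTERVAL_MS = 10       # fixed send rate: one packet every 10 ms

def buflo_schedule(app_data, n_ticks):
    """Simulate a fixed-rate (BuFLO-style) sender for n_ticks intervals.

    app_data: list of application payload sizes in bytes (hypothetical queue).
    Returns the wire-packet sizes: constant, hiding bursts in the real traffic.
    """
    queue = deque(app_data)
    wire = []
    for _ in range(n_ticks):
        budget = PACKET_SIZE
        while queue and budget > 0:          # fill the packet with real bytes
            take = min(queue[0], budget)
            budget -= take
            if take == queue[0]:
                queue.popleft()
            else:
                queue[0] -= take
        wire.append(PACKET_SIZE)             # pad (or send a dummy) to fixed size
    return wire
```

    The trade-off the thesis measures follows directly from this scheme: a lower send rate (longer interval) leaks less timing information but delays queued game traffic, degrading QoS/QoE.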

    Download full text (pdf)
    fulltext
  • 128.
    Tongbuasirilai, Tanaboon
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Data-Driven Approaches for Sparse Reflectance Modeling and Acquisition2023Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Photo-realistic rendering and predictive image synthesis are becoming increasingly important and are utilized in many application areas, ranging from the production of visual effects and product visualization to digital design and the generation of synthetic data for visual machine learning applications. Many essential components of realistic image synthesis pipelines have seen tremendous development over the last decades. One key component is the accurate measurement, modeling, and simulation of how a surface material scatters light. The scattering of light at a point on a surface (reflectance and color) is described by the Bidirectional Reflectance Distribution Function (BRDF), which is the main research topic of this thesis. The BRDF describes how radiance (light) incident at a point on a surface is scattered towards any view-point from which the surface is observed. Accurate acquisition and representation of material properties play a fundamental role in photo-realistic image synthesis, and form a highly interesting research topic with many applications.

    The thesis explores and studies appearance modeling, sparse representation, and sparse acquisition of BRDFs. The topics cover two main areas. Within the first area, BRDF modeling, we propose several new BRDF models for accurate representation of material scattering behaviour using simple but efficient methods. The research challenges in BRDF modeling include tensor decomposition methods and sparse approximations based on measured BRDF data. The second part of the contributions focuses on sparse BRDF sampling and novel, highly efficient BRDF acquisition. Sparse BRDF sampling tackles the tedious and time-consuming process of acquiring BRDFs. This challenging problem is addressed using sparse modeling and compressed sensing techniques, which enable a BRDF to be measured and accurately reconstructed from only a small number of samples. Additionally, the thesis provides example applications based on the research, as well as techniques for BRDF editing and interpolation.

    Publicly available BRDF databases are a vital part of the data-driven methods proposed in this thesis. The measured BRDF data used has revealed insights that facilitate further development of the proposed methods. The results, algorithms, and techniques presented in this thesis demonstrate that there is a close connection between BRDF modeling and BRDF acquisition; efficient and accurate BRDF modeling is a by-product of sparse BRDF sampling.
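    The compressed-sensing reconstruction step mentioned above can be illustrated with generic orthogonal matching pursuit: a signal that is sparse in some dictionary can be rebuilt from far fewer measurements than its ambient dimension. This is a textbook sketch under assumed names, not the thesis's actual solver or BRDF dictionary.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = A @ x.

    A: measurement/dictionary matrix (columns roughly unit norm).
    Greedily picks the atom best correlated with the residual, then
    refits the coefficients on the selected support by least squares.
    """
    residual, support = y.copy(), []
    for _ in range(k):
        corr = A.T @ residual
        support.append(int(np.argmax(np.abs(corr))))   # best-matching atom
        As = A[:, support]
        coef, *_ = np.linalg.lstsq(As, y, rcond=None)  # refit on support
        residual = y - As @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

    In the sparse-sampling setting, the rows of A would correspond to the few BRDF directions actually measured, and the sparsity basis to a learned or analytic BRDF dictionary.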

    List of papers
    1. Differential appearance editing for measured BRDFs
    Open this publication in new window or tab >>Differential appearance editing for measured BRDFs
    Show others...
    2016 (English)Conference paper, Oral presentation with published abstract (Other academic)
    Abstract [en]

    Data-driven reflectance models using BRDF data measured from real materials, e.g. [Matusik et al. 2003], are becoming increasingly popular in product visualization, digital design and other applications driven by the need for predictable rendering and highly realistic results. Although recent analytic, parametric BRDFs provide good approximations for many materials, some effects are still not captured well [Löw et al. 2012]. Thus, it is hard to accurately model real materials using analytic models, even if the parameters are fitted to data. In practice, it is often desirable to apply small edits to the measured data for artistic purposes, or to model similar materials that are not available in measured form. A drawback of data-driven models is that they are often difficult to edit and do not lend themselves well to artistic adjustments. Existing editing techniques for measured data [Schmidt et al. 2014] often use complex decompositions, making them difficult to use in practice.

    Place, publisher, year, edition, pages
    New York, NY, USA: , 2016
    Series
    SIGGRAPH ’16
    Keywords
    data-driven BRDFs, material editing
    National Category
    Applied Mechanics
    Identifiers
    urn:nbn:se:liu:diva-163324 (URN)10.1145/2897839.2927455 (DOI)9781450342827 (ISBN)
    Conference
    THE 43RD INTERNATIONAL CONFERENCE AND EXHIBITION ON Computer Graphics & Interactive Techniques, ANAHEIM, CALIFORNIA, 24-28 JULY, 2016
    Funder
    Wallenberg AI, Autonomous Systems and Software Program (WASP)
    Available from: 2020-05-19 Created: 2020-05-19 Last updated: 2022-12-28Bibliographically approved
    2. Efficient BRDF Sampling Using Projected Deviation Vector Parameterization
    Open this publication in new window or tab >>Efficient BRDF Sampling Using Projected Deviation Vector Parameterization
    2017 (English)In: 2017 IEEE International Conference on Computer Vision Workshops (ICCVW), Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 153-158Conference paper, Published paper (Refereed)
    Abstract [en]

    This paper presents a novel approach for efficient sampling of isotropic Bidirectional Reflectance Distribution Functions (BRDFs). Our approach builds upon a new parameterization, the Projected Deviation Vector parameterization, in which isotropic BRDFs can be described by two 1D functions. We show that BRDFs can be efficiently and accurately measured in this space using simple mechanical measurement setups. To demonstrate the utility of our approach, we perform a thorough numerical evaluation and show that the BRDFs reconstructed from measurements along the two 1D bases produce rendering results that are visually comparable to the reference BRDF measurements which are densely sampled over the 4D domain described by the standard hemispherical parameterization.

    Place, publisher, year, edition, pages
    Institute of Electrical and Electronics Engineers (IEEE), 2017
    Series
    IEEE International Conference on Computer Vision Workshops, E-ISSN 2473-9936 ; 2017
    National Category
    Medical Laboratory and Measurements Technologies
    Identifiers
    urn:nbn:se:liu:diva-145821 (URN)10.1109/ICCVW.2017.26 (DOI)000425239600019 ()9781538610343 (ISBN)9781538610350 (ISBN)
    Conference
    16th IEEE International Conference on Computer Vision (ICCV), 22-29 October 2017, Venice, Italy
    Note

    Funding Agencies|Scientific and Technical Research Council of Turkey [115E203]; Scientific Research Projects Directorate of Ege University [2015/BIL/043]

    Available from: 2018-03-21 Created: 2018-03-21 Last updated: 2022-12-28Bibliographically approved
    3. Compact and intuitive data-driven BRDF models
    2020 (English)In: The Visual Computer, ISSN 0178-2789, E-ISSN 1432-2315, Vol. 36, no 4, p. 855-872Article in journal (Refereed) Published
    Abstract [en]

    Measured materials are rapidly becoming a core component in the photo-realistic image synthesis pipeline. The reason is that data-driven models can easily capture the underlying, fine details that represent the visual appearance of materials, which can be difficult or even impossible to model by hand. There are, however, a number of key challenges that need to be solved in order to enable efficient capture, representation and interaction with real materials. This paper presents two new data-driven BRDF models specifically designed for 1D separability. The proposed 3D and 2D BRDF representations can be factored into three or two 1D factors, respectively, while accurately representing the underlying BRDF data with only small approximation error. We evaluate the models using different parameterizations with different characteristics and show that both the BRDF data itself and the resulting renderings yield more accurate results in terms of both numerical errors and visual results compared to previous approaches. To demonstrate the benefit of the proposed factored models, we present a new Monte Carlo importance sampling scheme and give examples of how they can be used for efficient BRDF capture and intuitive editing of measured materials.
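The Monte Carlo benefit of 1D factors can be sketched generically: a tabulated nonnegative 1D factor is normalized into a discrete PDF and sampled by inverse-CDF lookup. This is a stand-in for the general technique, not the paper's actual importance sampling scheme; the table values are made up.

```python
import numpy as np

def sample_1d_factor(values, u):
    """Draw bin indices distributed proportionally to a nonnegative 1D table."""
    pdf = np.asarray(values, dtype=float)
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]                      # normalize so the CDF ends at 1
    return np.searchsorted(cdf, u)      # inverse-CDF lookup

rng = np.random.default_rng(0)
factor = np.array([0.1, 0.1, 0.1, 0.7])          # most energy in the last bin
draws = sample_1d_factor(factor, rng.random(10000))
hist = np.bincount(draws, minlength=4) / 10000.0  # empirical frequencies
```

Because each factor is 1D, the CDF tables are tiny, which is what makes per-sample inversion cheap compared to sampling a full 4D BRDF.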

    Place, publisher, year, edition, pages
    Springer Berlin/Heidelberg, 2020
    Keywords
    Reflectance modeling, Rendering, Computer graphics
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:liu:diva-162427 (URN)10.1007/s00371-019-01664-z (DOI)000520835800015 ()
    Note

    Funding agencies: Swedish Science Council [VR-2015-05180]; strategic research environment ELLIIT; Scientific and Technical Research Council of TurkeyTurkiye Bilimsel ve Teknolojik Arastirma Kurumu (TUBITAK) [115E203]; Scientific Research Projects Directorate of Ege Univers

    Available from: 2019-12-02 Created: 2019-12-02 Last updated: 2022-12-28Bibliographically approved
    4. A Sparse Non-parametric BRDF Model
    2022 (English)In: ACM Transactions on Graphics, ISSN 0730-0301, E-ISSN 1557-7368, Vol. 41, no 5, article id 181Article in journal (Refereed) Published
    Abstract [en]

    This paper presents a novel sparse non-parametric Bidirectional Reflectance Distribution Function (BRDF) model derived using a machine learning approach to represent the space of possible BRDFs using a set of multidimensional sub-spaces, or dictionaries. By training the dictionaries under a sparsity constraint, the model guarantees high-quality representations with minimal storage requirements and an inherent clustering of the BRDF-space. The model can be trained once and then reused to represent a wide variety of measured BRDFs. Moreover, the proposed method is flexible enough to incorporate new unobserved data sets, parameterizations, and transformations. In addition, we show that any two, or more, BRDFs can be smoothly interpolated in the coefficient space of the model rather than the significantly higher-dimensional BRDF space. The proposed sparse BRDF model is evaluated using the MERL, DTU, and RGL-EPFL BRDF databases. Experimental results show that the proposed approach results in about 9.75 dB higher signal-to-noise ratio on average for rendered images as compared to current state-of-the-art models.
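Sparse coding over a dictionary can be illustrated in general terms with a greedy orthogonal-matching-pursuit solver. The paper learns overcomplete dictionaries from BRDF data; the orthonormal random dictionary, the solver, and the 3-sparse signal below are purely illustrative assumptions.

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Greedy sparse coding: pick atoms by correlation, refit by least squares."""
    residual = y.copy()
    support = []
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # atom most correlated with the current residual
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares refit on all selected atoms
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
    coeffs[support] = sol
    return coeffs

rng = np.random.default_rng(1)
# Orthonormal stand-in dictionary (a learned dictionary would be overcomplete).
D, _ = np.linalg.qr(rng.standard_normal((64, 64)))
true_coeffs = np.zeros(64)
true_coeffs[[3, 17, 40]] = [1.5, -2.0, 0.8]
y = D @ true_coeffs                 # exactly 3-sparse signal
recovered = omp(D, y, 3)
err = np.linalg.norm(D @ recovered - y)
```

Interpolating two signals in this coefficient space, rather than in the raw signal space, is the kind of operation the abstract describes for BRDFs.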

    Place, publisher, year, edition, pages
    ASSOC COMPUTING MACHINERY, 2022
    Keywords
    Rendering; reflectance and shading models; machine learning; dictionary learning; non-parametric BRDF model; BRDF interpolation
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:liu:diva-190356 (URN)10.1145/3533427 (DOI)000885871900013 ()
    Note

    Funding Agencies|Knut and Alice Wallenberg Foundation (KAW); Wallenberg Autonomous Systems and Software Program (WASP); strategic research environment ELLIIT; EU H2020 Research, and Innovation Programme [694122]

    Available from: 2022-12-06 Created: 2022-12-06 Last updated: 2022-12-28
  • 129.
    Trujillo-Vazquez, Abigail
    et al.
    Centre for Print Research, University of the West of England, Bristol, UK.
    Abedini, Fereshteh
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Pranovich, Alina
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Parraman, Carinna
    Centre for Print Research, University of the West of England, Bristol, UK.
    Klein, Susanne
    Centre for Print Research, University of the West of England, Bristol, UK.
    Printing with tonalli: Reproducing Featherwork from Precolonial Mexico Using Structural Colorants2023In: Colorants, ISSN 2079-6447, Vol. 2, no 4, p. 632-653Article in journal (Refereed)
    Abstract [en]

    Two of the most significant cases of extant 16th-century featherwork from Mexico are the so-called Moctezuma’s headdress and the Ahuizotl shield. While the feathers used in these artworks exhibit lightfast colors, their assembly comprises mainly organic materials, which makes them extremely fragile. Printed media, including books, catalogs, educational materials, and fine copies, offer an accessible means for audiences to document and disseminate visual aspects of delicate cultural artifacts without risking their integrity. Nevertheless, the singular brightness and iridescent colors of feathers are difficult to communicate to the viewer in printed reproductions when traditional pigments are used. This research explores the use of effect pigments (multilayered reflective structures) and improved halftoning techniques for additive printing, with the objective of enhancing the reproduction of featherwork by capturing its changing color and improving texture representation via a screen printing process. The reproduced images of featherwork exhibit significant perceptual resemblances to the originals, primarily owing to the shared presence of structural coloration. We applied structure-aware halftoning to better represent the textural qualities of feathers without compromising the performance of effect pigments in the screen printing method. Our prints show angle-dependent color, although their gamut is reduced. The novelty of this work lies in the refinement of techniques for printing full-color images by additive printing, which can enhance the 2D representation of the appearance of culturally significant artifacts.

  • 130.
    Tsirikoglou, Apostolia
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Wrenninge, Magnus
    7D Labs.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Procedural modeling and physically based rendering for synthetic data generation in automotive applications2017Other (Other academic)
    Abstract [en]

    We present an overview and evaluation of a new, systematic approach for generation of highly realistic, annotated synthetic data for training of deep neural networks in computer vision tasks. The main contribution is a procedural world modeling approach enabling high variability coupled with physically accurate image synthesis, and is a departure from the hand-modeled virtual worlds and approximate image synthesis methods used in real-time applications. The benefits of our approach include flexible, physically accurate and scalable image synthesis, implicit wide coverage of classes and features, and complete data introspection for annotations, which all contribute to quality and cost efficiency. To evaluate our approach and the efficacy of the resulting data, we use semantic segmentation for autonomous vehicles and robotic navigation as the main application, and we train multiple deep learning architectures using synthetic data with and without fine tuning on organic (i.e. real-world) data. The evaluation shows that our approach improves the neural network’s performance and that even modest implementation efforts produce state-of-the-art results.

  • 131.
    Ubillis, Amaru
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, The Institute of Technology.
    Evaluation of Sprite Kit for iOS game development2014Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The purpose with this thesis is to investigate whether Sprite Kit is a good tool to simplify the development process for game developers when making 2D games for mobile devices. To answer this question a simple turn based strategy game has been developed with Sprite Kit. Sprite Kit is a game engine for making 2D games released by Apple.

    Based on the experience gained during development, I go through and discuss some of the most important tools provided by the game engine and how they helped me complete the game.

    The conclusion I reached after making a game with Sprite Kit is that the framework provides all the tools necessary for creating a simple 2D mobile game for iOS. Sprite Kit hides much of the lower-level detail and gives the game developer comprehensive development support. This helps the game developer save a lot of time and focus more on the gameplay when creating a game.

  • 132.
    Unger, Jonas
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Gustavson, Stefan
    Linköping University, Department of Science and Technology, Media and Information Technology.
    An optical system for single image environment maps2007In: SIGGRAPH '07 ACM SIGGRAPH 2007 posters, ACM Press, 2007Conference paper (Refereed)
    Abstract [en]

    We present an optical setup for capturing a full 360° environment map in a single image snapshot. The setup, which can be used with any camera device, consists of a curved mirror swept around a negative lens, and is suitable for capturing environment maps and light probes. The setup achieves good sampling density and uniformity for all directions in the environment.

  • 133.
    Unger, Jonas
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Gustavson, Stefan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Larsson, Per
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Bonnet, Gerhard
    SpheronVR, Germany.
    Kaiser, Gunnar
    SpheronVR, Germany.
    Next Generation Image Based Lighting using HDR Video2011In: Proceeding SIGGRAPH '11 ACM SIGGRAPH 2011 Talks, ACM Special Interest Group on Computer Science Education, 2011, p. article no 60-Conference paper (Refereed)
    Abstract [en]

    We present an overview of our recently developed systems pipeline for capture, reconstruction, modeling and rendering of real world scenes based on state-of-the-art high dynamic range video (HDRV). The reconstructed scene representation allows for photo-realistic Image Based Lighting (IBL) in complex environments with strong spatial variations in the illumination. The pipeline comprises the following essential steps:

    1.) Capture - The scene capture is based on a 4MPixel global shutter HDRV camera with a dynamic range of more than 24 f-stops at 30 fps. The HDR output stream is stored as individual un-compressed frames for maximum flexibility. A scene is usually captured using a combination of panoramic light probe sequences [1], and sequences with a smaller field of view to maximize the resolution at regions of special interest in the scene. The panoramic sequences ensure full angular coverage at each position and guarantee that the information required for IBL is captured. The position and orientation of the camera are tracked during capture.

    2.) Scene recovery - Taking one or more HDRV sequences as input, a geometric proxy model of the scene is built using a semi-automatic approach. First, traditional computer vision algorithms such as structure from motion [2] and Manhattan world stereo [3] are used. If necessary, the recovered model is then modified using an interaction scheme based on visualizations of a volumetric representation of the scene radiance computed from the input HDRV sequence. The HDR nature of this volume also enables robust extraction of direct light sources and other high intensity regions in the scene.

    3.) Radiance processing - When the scene proxy geometry has been recovered, the radiance data captured in the HDRV sequences are re-projected onto the surfaces and the recovered light sources. Since most surface points have been imaged from a large number of directions, it is possible to reconstruct view dependent texture maps at the proxy geometries. These 4D data sets describe a combination of detailed geometry that has not been recovered and the radiance reflected from the underlying real surfaces. The view dependent textures are then processed and compactly stored in an adaptive data structure.

    4.) Rendering - Once the geometric and radiometric scene information has been recovered, it is possible to place virtual objects into the real scene and create photo-realistic renderings as illustrated above. The extracted light sources enable efficient sampling and rendering times that are fully comparable to that of traditional virtual computer graphics light sources. No previously described method is capable of capturing and reproducing the angular and spatial variation in the scene illumination in comparable detail.

    We believe that the rapid development of high-quality HDRV systems will soon have a large impact on both computer vision and graphics. Following this trend, we are developing theory and algorithms for efficient processing of HDRV sequences and for exploiting the abundance of radiance data that will become available.

  • 134.
    Unger, Jonas
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Larsson, Per
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Gustavson, Stefan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Löw, Joakim
    Linköping University, Department of Science and Technology. Linköping University, The Institute of Technology.
    Ynnerman, Anders
    Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Spatially varying image based lighting using HDR-video2013In: Computers & graphics, ISSN 0097-8493, E-ISSN 1873-7684, Vol. 37, no 7, p. 923-934Article in journal (Refereed)
    Abstract [en]

    Illumination is one of the key components in the creation of realistic renderings of scenes containing virtual objects. In this paper, we present a set of novel algorithms and data structures for visualization, processing and rendering with real world lighting conditions captured using High Dynamic Range (HDR) video. The presented algorithms enable rapid construction of general and editable representations of the lighting environment, as well as extraction and fitting of sampled reflectance to parametric BRDF models. For efficient representation and rendering of the sampled lighting environment function, we consider an adaptive (2D/4D) data structure for storage of light field data on proxy geometry describing the scene. To demonstrate the usefulness of the algorithms, they are presented in the context of a fully integrated framework for spatially varying image based lighting. We show reconstructions of example scenes and resulting production quality renderings of virtual furniture with spatially varying real world illumination including occlusions.

  • 135.
    Vrotsou, Katerina
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Turkay, Cagatay
    Univ Warwick, England; Univ London, England; Harvard Univ, MA 02138 USA.
    Foreword to the Special Section on Visual Analytics2022In: Computers & graphics, ISSN 0097-8493, E-ISSN 1873-7684, Vol. 103, p. A3-A4Article in journal (Other academic)
  • 136.
    Wang, Xiyao
    et al.
    Univ Paris Saclay, France.
    Besancon, Lonni
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Rousseau, David
    Université Paris-Saclay, CNRS, IJCLab, Orsay, France.
    Sereno, Mickael
    Univ Paris Saclay, France.
    Ammi, Mehdi
    Univ Paris Saclay, France; Univ Paris 08, France.
    Isenberg, Tobias
    Univ Paris Saclay, France.
    Towards an Understanding of Augmented Reality Extensions for Existing 3D Data Analysis Tools2020In: PROCEEDINGS OF THE 2020 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS (CHI20), ASSOC COMPUTING MACHINERY , 2020Conference paper (Refereed)
    Abstract [en]

    We present an observational study with domain experts to understand how augmented reality (AR) extensions to traditional PC-based data analysis tools can help particle physicists to explore and understand 3D data. Our goal is to allow researchers to integrate stereoscopic AR-based visual representations and interaction techniques into their tools, and thus ultimately to increase the adoption of modern immersive analytics techniques in existing data analysis workflows. We use Microsoft's HoloLens as a lightweight and easily maintainable AR headset and replicate existing visualization and interaction capabilities on both the PC and the AR view. We treat the AR headset as a second yet stereoscopic screen, allowing researchers to study their data in a connected multi-view manner. Our results indicate that our collaborating physicists appreciate a hybrid data exploration setup with an interactive AR extension to improve their understanding of particle collision events.

  • 137.
    Willfahrt, Andreas
    et al.
    Institute for Applied Research, Media University Stuttgart, Stuttgart, Germany .
    Hübner, Gunter
    Institute for Applied Research, Media University Stuttgart, Stuttgart, Germany .
    Optimization of aperture size and distance in the insulating mask of a five layer vertical stack forming a fully printed thermoelectric generator2011In: Advances in Printing and Media Technology Proceedings of the 38th International Research Conference of iarigai / [ed] Nils Enlund and Mladen Lovreček, International Association of Research Organizations for the Information, Media and Graphic Arts Industries , 2011, Vol. 38, p. 261-269Conference paper (Refereed)
    Abstract [en]

    Printed thermoelectric generators (TEG) combine the advantages of screen printing with the uncomplicated assembly and reliability of thermoelectric devices. A complete device in a vertical stack setup requires successively printed layers on top of each other. One of the challenging layers is the insulating mask, which provides cavities for the thermoelectric legs. The thickness of this insulating mask also determines the overall thickness of the TEG. The spatial separation is a necessity for reasonable energy conversion efficiency.

  • 138.
    Willfahrt, Andreas
    et al.
    Stuttgart Media University, Hochschule der Medien (HdM), Stuttgart, Germany.
    Steiner, Erich
    Stuttgart Media University, Hochschule der Medien (HdM), Stuttgart, Germany.
    Model for calculation of design and electrical parameters of thermoelectric generators2012In: Journal of Print and Media Technology Research, ISSN 2223-8905, E-ISSN 2414-6250, Vol. 1, no 4, p. 247-257Article in journal (Refereed)
    Abstract [en]

    Energy harvesting - the conversion of ambient energy into electrical energy - is a frequently used term nowadays. Several conversion principles are available, e.g., photovoltaics, wind power and water power. Lesser-known are thermoelectric generators (TEG), although they were already studied actively during and after the world wars in the 20th century (Caltech Material Science, n. d.). In this work, the authors present a mathematical model for the calculation of input or output parameters of printed thermoelectric generators. The model is strongly related to existing models (Freunek et al., 2009; Rowe, 1995; Glatz et al., 2006) for conventionally produced TEGs as well as for printed TEGs. Thermal effects as investigated by Freunek et al. (2009; 2010) could be included. In order to demonstrate the benefit of the model, two examples of calculations are presented. The parameters of the materials are derived from existing printing inks reported elsewhere (Chen et al., 2011; Wuesten and Potje-Kamloth, 2008; Zhang et al., 2010; Liu et al., 2011; Bubnova et al., 2011). The printing settings are chosen based on feasibility and convenience.
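For orientation, the standard lumped TEG model underlying such calculations gives an open-circuit voltage V = n·α·ΔT for n couples in series and a matched-load output power of V²/(4·R_int). The sketch below uses this textbook relation with illustrative numbers, not parameters or results from the paper.

```python
def teg_matched_load_power(n_couples, alpha_v_per_k, r_ohm_per_couple, delta_t_k):
    """Matched-load output power of a series TEG under the lumped model."""
    v_oc = n_couples * alpha_v_per_k * delta_t_k       # open-circuit voltage
    r_int = n_couples * r_ohm_per_couple               # series internal resistance
    return v_oc ** 2 / (4.0 * r_int)                   # maximum at R_load = R_int

# Hypothetical example: 100 couples, 200 uV/K each, 0.05 ohm each, 20 K difference
p_watts = teg_matched_load_power(100, 200e-6, 0.05, 20.0)
```

With these made-up numbers the device delivers on the order of milliwatts, which is the regime where the choice of mask aperture and leg geometry matters.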

  • 139.
    Yang, Li
    et al.
    Karlstad University.
    Gooran, Sasan
    Linköping University, Department of Science and Technology. Linköping University, The Institute of Technology.
    Eriksen, Magnus
    Johansson, Tobias
    Linköping University, Department of Science and Technology. Linköping University, The Institute of Technology.
    Color Based Maximal GCR for Electrophotography2006In: IS&T Int. Conf. on Digital Printing Technologies (NIP22), The Society for Imaging Science and Technology , 2006, p. 394-397Conference paper (Other academic)
    Abstract [en]

    The underlying idea of grey component replacement (GCR) is to replace a mixture of the primary colors (cyan, magenta, and yellow) with black. Current GCR algorithms are mainly based on the concept of equal-tone-value reduction: mixing equal amounts (tone values) of the primary colors generates gray, which in turn can be represented by the same amount of black. As the colorants used are usually non-ideal, such a replacement can result in considerable color deviation.

    We propose an algorithm for maximal GCR based on color matching, i.e., the black is introduced in a way that preserves the color (before and after GCR). In the algorithm, the primary with the smallest tone value is set to zero while the other two are reduced according to the color-matching calculations. To achieve a true color match in print, dot gain effects have been considered in the calculation. The proposed algorithm has been tested successfully for FM halftoning using an electrophotographic printer.
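The equal-tone-value-reduction baseline that the abstract criticizes can be sketched in a few lines: the common grey component min(C, M, Y) is removed and replaced by black. The paper's color-matched maximal GCR instead solves a matching problem against measured dot gain, which requires printer measurements not reproduced here.

```python
def naive_gcr(c, m, y):
    """Equal-tone-value GCR: remove the grey component and print it as black."""
    k = min(c, m, y)                 # grey component shared by all primaries
    return c - k, m - k, y - k, k    # one primary always drops to zero

# Hypothetical tone values in [0, 1]
cmyk = naive_gcr(0.75, 0.5, 0.25)
```

Because real inks are non-ideal, the removed CMY grey and the substituted K are not colorimetrically identical, which is exactly the deviation the color-matched variant corrects.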

  • 140.
    Yang, Li
    et al.
    Linköping University, Department of Science and Technology. Linköping University, The Institute of Technology.
    Gooran, Sasan
    Linköping University, Department of Science and Technology. Linköping University, The Institute of Technology.
    Kruse, Björn
    Linköping University, Department of Science and Technology. Linköping University, The Institute of Technology.
    Yule-Nielsen Effect and Ink-penetration in Multi-chromatic Tone Reproduction2000In: IS & T's NIP16: International Conference on Digital Printing Technologies, 2000, p. 363-366Conference paper (Other academic)
    Abstract [en]

    A framework describing the influences of ink penetration and the Yule-Nielsen effect on the reflectance and tristimulus values of a halftone sample has been proposed. General expressions for the reflectance values and CIEXYZ tristimulus values have been derived. Simulations for images printed with two inks have been carried out by applying a Gaussian-type point spread function (PSF). The dependence of the Yule-Nielsen effect on the optical properties of the substrate and inks, the dot geometry, ink penetration, etc., has been discussed.
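The classical single-ink form of the Yule-Nielsen modified reflectance, which this framework generalizes, can be written down directly: R = (a·Ri^(1/n) + (1-a)·Rp^(1/n))^n, where a is the fractional ink coverage, Ri and Rp the solid-ink and paper reflectances, and n the empirical Yule-Nielsen factor (n = 1 recovers the plain Murray-Davies average; n > 1 models optical dot gain). The numbers below are illustrative only.

```python
def yule_nielsen(a, r_ink, r_paper, n):
    """Yule-Nielsen modified reflectance of a single-ink halftone patch."""
    return (a * r_ink ** (1.0 / n) + (1.0 - a) * r_paper ** (1.0 / n)) ** n

murray_davies = yule_nielsen(0.5, 0.04, 0.85, 1.0)  # linear (Murray-Davies) average
with_scatter = yule_nielsen(0.5, 0.04, 0.85, 2.0)   # light scattering darkens the patch
```

The n > 1 prediction is darker than the linear average, which is why ignoring the Yule-Nielsen effect over-predicts the reflectance of halftone prints.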

  • 141.
    Yu, Peilin
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Nordman, Aida
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Meyer, Lothar
    Swedish Air Nav Serv, LFV, Linkoping, Sweden.
    Boonsong, Supathida
    Swedish Air Nav Serv, LFV, Linkoping, Sweden.
    Vrotsou, Katerina
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Medicine and Health Sciences.
    Interactive Transformations and Visual Assessment of Noisy Event Sequences: An Application in En-Route Air Traffic Control2023In: 2023 IEEE 16TH PACIFIC VISUALIZATION SYMPOSIUM, PACIFICVIS, IEEE COMPUTER SOC , 2023, p. 92-101Conference paper (Refereed)
    Abstract [en]

    Real-world event sequence data, such as activity logs, eye-tracking data, simulation data, and electronic health records, often share characteristics such as a large alphabet of events, fragmentation, noise, and high complexity which makes them difficult to analyze in their raw form. Because of this, simplification and preprocessing through various data transformations are commonly required before the data can be effectively visualized and analyzed. Existing methods for such data transformation are either manually applied and rely heavily on user expertise, or use algorithmic approaches to apply bulk operations which can imply the loss of potentially important information without users being aware. To bridge this gap, we propose a visual analytics approach that aims to successively increase the quality of noisy event sequences by supporting an interactive, context-aware application of data transformations. This is achieved by providing cues concerning the potential loss of information that transformation operations may imply and allowing users to explore, and visually assess their impact on the data. Therefore, a central feature of the approach is that users can tune the data transformation process so that important identified data characteristics are preserved. We motivate the proposed approach in the domain of air traffic control and illustrate it through a usage example, using event sequences derived by merging eye-tracking and simulator data from a human-in-the-loop simulation experiment with 14 air traffic controllers.

  • 142.
    Zitinski Elias, Paula
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Gooran, Sasan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Nyström, Daniel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Multilevel Halftoning and Color Separation for Eight-Channel Printing2016In: Journal of Imaging Science and Technology, ISSN 1062-3701, E-ISSN 1943-3522, Vol. 60, no 5, article id 50403Article in journal (Refereed)
    Abstract [en]

    Multichannel printing employs additional colorants to achieve higher quality reproduction, assuming their physical overlap restrictions are met. These restrictions are commonly overcome in the printing workflow by controlling the colorant choice at each point. Our multilevel halftoning algorithm bundles inks of the same hue into one channel with no overlap, separating them into eight channels, consequently benefiting from increased ink options at each point. In this article, the implementation and analysis of the algorithm are carried out. Color separation is performed using the cellular Yule‐Nielsen modified spectral Neugebauer model. The channels are binarized with the multilevel halftoning algorithm. The workflow is evaluated with an eight-channel inkjet at 600 dpi, resulting in mean and maximum ΔE94 color differences of around 1 and 2, respectively. The halftoning algorithm is analyzed using S-CIELAB, thus involving the human visual system, in which multilevel halftoning showed improvement in terms of image quality compared to the conventional approach.

  • 143.
    Zitinski Elias, Paula
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Gooran, Sasan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Nyström, Daniel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Multilevel halftoning applied to achromatic inks in multi-channel printing2014In: Abstracts from 41st International research conference of iarigai: Advances in Printing and Media Technology,  Print and media research for the benefit of industry and society, 2014, p. 25-25Conference paper (Other academic)
    Abstract [en]

    Printing using more than four ink channels visually improves the reproduction. Nevertheless, if the ink layer thickness at any given point exceeds a certain limit, ink bleeding and colour accuracy problems occur. Halftoning algorithms that process channels dependently are one way of dealing with this shortcoming of multi-channel printing. A multilevel halftoning algorithm that processes a channel so that it is printed with multiple inks of the same chromatic value was introduced in our research group. Here we implement this multilevel algorithm using three achromatic inks – photo grey, grey, black – in a real paper-ink setup. The challenges lie in determining the thresholds for ink separation and in dot gain compensation. Dot gain results in a darker reproduction, and since it originates from the interaction between a specific ink and paper, compensating the original image for multilevel halftoning means expressing the dot gain of three inks in terms of the nominal coverage of a single ink. Results demonstrate a successful multilevel halftoning workflow using multiple inks while avoiding dot-on-dot placement and accounting for dot gain. Results show the multilevel halftoned image is visually improved in terms of graininess and detail enhancement when compared to the bi-level halftoned image.
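A deliberately simplified, hypothetical sketch of the ink-separation step: a continuous grey value is assigned to exactly one of the three achromatic inks by thresholds, so no point receives dot-on-dot ink. The even thresholds and the linear coverage mapping below are assumptions for illustration; the paper derives its thresholds from measured dot gain.

```python
def separate_grey(value, thresholds=(1.0 / 3.0, 2.0 / 3.0)):
    """Map a grey value in [0, 1] to one ink and a nominal coverage in [0, 1]."""
    if value <= thresholds[0]:
        return "photo grey", value / thresholds[0]
    if value <= thresholds[1]:
        return "grey", (value - thresholds[0]) / (thresholds[1] - thresholds[0])
    return "black", (value - thresholds[1]) / (1.0 - thresholds[1])

ink, coverage = separate_grey(0.5)
```

Each ink's coverage would then be halftoned and dot-gain-compensated with that ink's own paper-ink characteristic.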

  • 144.
    Zitinski Elias, Paula
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Gooran, Sasan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Nyström, Daniel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Multilevel halftoning as an algorithm to control ink overlap in multi-channel printing (2015). In: 2015 Colour and Visual Computing Symposium (CVCS), IEEE, 2015. Conference paper (Refereed)
    Abstract [en]

    A multilevel halftoning algorithm can be used to overcome some of the challenges of multi-channel printing. In this algorithm, each channel is processed so that it can be printed using multiple inks of approximately the same hue, achieving a single ink layer. The computation of the threshold values required for ink separation and dot gain compensation poses an interesting challenge. Since the dot gain depends on the specific combination of ink, paper and print resolution, compensating the original image for multilevel halftoning means expressing the dot gain of multiple inks of the same hue in terms of the coverage of a single ink. The applicability of the proposed multilevel halftoning workflow is demonstrated using chromatic inks while avoiding dot overlap and accounting for dot gain. The results indicate that the multilevel halftoned image is visually improved in terms of graininess when compared to bi-level halftoned images.
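    The core idea described above — quantizing each channel to several ink levels rather than to a single on/off decision — can be sketched as follows. This is a minimal illustrative sketch, not the papers' algorithm: the `levels` values stand in for the per-ink thresholds that the authors derive from printer characterization, and simple stochastic dithering replaces their actual halftoning method.

    ```python
    import numpy as np

    def multilevel_halftone(channel, levels=(0.0, 0.5, 1.0), seed=0):
        """Quantize each pixel to one of several ink coverage levels.

        `levels` are hypothetical placeholders for characterized ink
        thresholds (e.g. none / light ink / dark ink). The residual between
        the two bracketing levels is resolved by random dithering, so the
        local average coverage still approximates the input tone.
        """
        rng = np.random.default_rng(seed)
        lv = np.asarray(levels, dtype=float)
        out = np.empty(channel.shape, dtype=float)
        for idx in np.ndindex(channel.shape):
            v = float(channel[idx])
            # find the pair of levels that brackets this pixel value
            hi = int(lv.searchsorted(v, side="left"))
            hi = min(max(hi, 1), len(lv) - 1)
            lo = hi - 1
            # probability of printing the darker level grows linearly
            # with v's position between the two bracketing levels
            p = (v - lv[lo]) / (lv[hi] - lv[lo])
            out[idx] = lv[hi] if rng.random() < p else lv[lo]
        return out
    ```

    Halftoning a flat 25% tone with levels (0, 0.5, 1.0) should yield only 0 and 0.5 dots whose mean coverage stays near 0.25 — which is how the ink-subset grouping avoids dot-on-dot placement of light and dark inks.
    
    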

  • 145.
    Zitinski Elias, Paula
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Nyström, Daniel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Gooran, Sasan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Multi-channel printing by orthogonal and non-orthogonal AM halftoning (2013). In: Proceedings of the 12th International AIC Colour Congress: Bringing Colour to Life, Newcastle, UK, 2013. Conference paper (Refereed)
    Abstract [en]

    Multi-channel printing with more than the conventional four colorants brings numerous advantages, but also challenges, such as the implementation of halftone algorithms. This paper concentrates on amplitude modulated (AM) halftoning for multi-channel printing. One difficulty is the correct channel rotation to avoid the moiré effect and to achieve colour fidelity in case of misregistration. Twenty test patches were converted to seven-channel images and AM halftoning was applied using two different approaches in order to obtain a moiré-free impression. One method was to use orthogonal screens and adjust the channels by overlapping the pairs of complementary colours, while the second was to implement non-orthogonal halftone screens (ellipses). By doing so, a wider angle range is available to accommodate a seven-channel impression. The performance was evaluated by simulating misregistration in both position and angle for a total of 1600 different scenarios. ΔE values were calculated between the misregistered patches and the correct ones, for both orthogonal and non-orthogonal screens. Results show no visible moiré and improved colour fidelity when using non-orthogonal screens for seven-channel printing, producing smaller colour differences in case of misregistration.
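    The ΔE evaluation mentioned in the abstract — comparing a misregistered patch against its correctly registered reference — reduces, in its simplest CIE76 form, to a Euclidean distance in CIELAB. A minimal sketch (the paper does not state which ΔE formula was used; the triplets below are hypothetical measurements, not data from the study):

    ```python
    import numpy as np

    def delta_e76(lab1, lab2):
        """CIE76 colour difference: Euclidean distance between two
        CIELAB triplets (L*, a*, b*)."""
        return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))

    # hypothetical registered vs. misregistered patch measurements
    reference = (50.0, 10.0, -10.0)
    shifted = (53.0, 14.0, -10.0)
    ```

    Averaging such differences over all simulated misregistration scenarios gives a single fidelity score per screening strategy, which is how orthogonal and non-orthogonal screens can be ranked.
    
    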

  • 146.
    Žitinski Elías, Paula
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Improving image quality in multi-channel printing - multilevel halftoning, color separation and graininess characterization (2017). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Color printing is traditionally achieved by separating an input image into four channels (CMYK) and binarizing them using halftoning algorithms, in order to designate the locations of ink droplet placement. Multi-channel printing means a reproduction that employs additional inks other than these four in order to augment the color gamut (scope of reproducible colors) and reduce undesirable ink droplet visibility, so-called graininess.

    One aim of this dissertation has been to characterize a print setup in which both the primary inks CMYK and their light versions are used. The presented approach groups the inks, forming subsets, each representing a channel that is reproduced with multiple inks. To halftone the separated channels in the present methodology, a specific multilevel halftoning algorithm is employed, halftoning each channel to multiple levels. This algorithm performs the binarization from the ink subsets to each separate colorant. Consequently, the print characterization complexity remains unaltered when employing the light inks, avoiding the normal increase in computational complexity, the one-to-many mapping problem and the increase in the number of training samples. The results show that the reproduction is visually improved in terms of graininess and detail enhancement.

    The secondary color inks RGB are added in multi-channel printing to increase the color gamut. Utilizing them, however, potentially increases the perceived graininess. Moreover, employing the primary, secondary and light inks means a color separation from a three-channel CIELAB space into a multi-channel colorant space, resulting in colorimetric redundancy in which multiple ink combinations can reproduce the same target color. To address this, a proposed cost function is incorporated in the color separation approach, weighting selected factors that influence the reproduced image quality, i.e. graininess and color accuracy, in order to select the optimal ink combination. The perceived graininess is modeled by employing S-CIELAB, a spatial low-pass filtering mimicking the human visual system. By applying the filtering to a large dataset, a generalized prediction that quantifies the perceived graininess is carried out and incorporated as a criterion in the color separation.
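    The idea of a cost function that trades perceived graininess against colour accuracy can be sketched in a toy form. This is not the dissertation's model: real S-CIELAB applies contrast-sensitivity filtering in opponent-colour channels, whereas the sketch below merely proxies graininess by the high-frequency residual of the lightness channel after a Gaussian low-pass, with illustrative weights.

    ```python
    import numpy as np

    def graininess_cost(lightness, delta_e, w_grain=0.5, sigma=3.0):
        """Toy cost combining a graininess proxy with colour error.

        `lightness` is a 2-D L* patch for one candidate ink combination,
        `delta_e` its colour error against the target; both weights and
        the Gaussian stand-in for S-CIELAB filtering are assumptions.
        """
        # separable 1-D Gaussian kernel, truncated at 3 sigma
        r = int(3 * sigma)
        x = np.arange(-r, r + 1)
        k = np.exp(-x**2 / (2 * sigma**2))
        k /= k.sum()
        # low-pass along rows, then columns (np.convolve zero-pads,
        # so the border is cropped before measuring the residual)
        smooth = np.apply_along_axis(lambda row: np.convolve(row, k, "same"), 1, lightness)
        smooth = np.apply_along_axis(lambda col: np.convolve(col, k, "same"), 0, smooth)
        resid = (lightness - smooth)[r:-r, r:-r]
        grain = float(np.std(resid))  # visible high-frequency fluctuation
        return w_grain * grain + (1 - w_grain) * delta_e
    ```

    In a colour separation, each ink combination that reproduces the target colour would be scored this way and the combination with the lowest cost selected: a perfectly smooth patch is penalized only for its colour error, while a noisy patch accrues an additional graininess penalty.
    
    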

    Consequently, the presented research increases the understanding of color reproduction and image quality in multi-channel printing, provides concrete solutions to challenges in its practical implementation, and raises the possibility of fully utilizing the potential of multi-channel printing for superior image quality.

  • 147.
    Berglind, Anna (Artist, Photographer)
    Linköping University, Department of Culture and Society, Division of Culture, Society, Design and Media. Linköping University, Faculty of Arts and Sciences. Mälardalens universitet.
    Lindell, Rikard (Artist, Sound designer)
    Mälardalens universitet.
    Berglind Mörlid, Thilda (Photographer)
    Berglind, Mira (Photographer)
    Till minne av skogen [In Memory of the Forest]: interactive web exhibition (2020). Artistic output (Refereed)