liu.se – Search publications in DiVA
1 - 50 of 94
  • 1. Ahmad, Tausif
    et al.
    Gooran, Sasan
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Hybrid Color Halftoning, 2010. Conference paper (Refereed)
  • 2.
    Berkesand, Peter
    Linköpings universitet, Universitetsbiblioteket.
    Teknisk utvärdering av elektroniska publiceringsplattformar, 2003. Report (Other academic)
    Abstract [sv]

    In connection with the university's inquiry Universitetets informationsförsörjning (the university's information provision), resource groups were appointed. One of these groups was the technical group, which was tasked with reviewing which publishing systems exist.

    The group has examined which publishing platforms available for electronic publication of scientific publications may be of interest to E-press. The various platforms have been discussed within the group.

    Based on study visits and our own tests, DiVA currently stands out as the most mature system for electronic publishing. We get a fully finished publishing system that can be used after a few adjustments to adapt it to our needs.

    Both EPrints and DSpace should be evaluated further, in particular DSpace, which is not yet used in Sweden. Both DSpace and EPrints may be installed and evaluated more closely during the autumn of 2003.

  • 3.
    Bruckner, Stefan
    et al.
    Department of Informatics, University of Bergen, Bergen, Norway.
    Isenberg, Tobias
    AVIZ, INRIA, Saclay, France.
    Ropinski, Timo
    Institute of Media Informatics / Visual Computing Research Group, Ulm University, Ulm, Germany.
    Wiebel, Alexander
    Department of Computer Science, Hochschule Worms, 52788 Worms, Germany.
    A Model of Spatial Directness in Interactive Visualization, 2018. In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506. Article in journal (Refereed)
    Abstract [en]

    We discuss the concept of directness in the context of spatial interaction with visualization. In particular, we propose a model that allows practitioners to analyze and describe the spatial directness of interaction techniques, ultimately to be able to better understand interaction issues that may affect usability. To reach these goals, we distinguish between different types of directness. Each type of directness depends on a particular mapping between different spaces, for which we consider the data space, the visualization space, the output space, the user space, the manipulation space, and the interaction space. In addition to the introduction of the model itself, we also show how to apply it to several real-world interaction scenarios in visualization, and thus discuss the resulting types of spatial directness, without recommending either more direct or more indirect interaction techniques. In particular, we will demonstrate descriptive and evaluative usage of the proposed model, and also briefly discuss its generative usage.

  • 4.
    Bång, Magnus
    et al.
    Linköpings universitet, Institutionen för datavetenskap. Linköpings universitet, Tekniska högskolan.
    Ragnemalm, Eva L.
    Linköpings universitet, Institutionen för datavetenskap. Linköpings universitet, Tekniska högskolan.
    Persuasive Technology: Design for Health and Safety: 7th International Conference on Persuasive Technology, PERSUASIVE 2012, Linköping, Sweden, June 6-8, 2012. Proceedings, 2012. Conference proceedings (editorship) (Refereed)
    Abstract [en]

    This book constitutes the proceedings of the 7th International Conference on Persuasive Technology, PERSUASIVE 2012, held in Linköping, Sweden, in June 2012. The 21 full papers presented together with 5 short papers were carefully reviewed and selected from numerous submissions. In addition three keynote papers are included in this volume. The papers cover the typical fields of persuasive technology, such as health, safety and education.

  • 5.
    Chien, Trinh Van
    et al.
    Linköpings universitet, Institutionen för systemteknik, Kommunikationssystem. Linköpings universitet, Tekniska fakulteten. Sungkyunkwan University, South Korea.
    Dinh, Khanh Quoc
    Sungkyunkwan Univ, Sch Elect & Comp Engn, Seoul, South Korea.
    Jeon, Byeungwoo
    Sungkyunkwan Univ, Sch Elect & Comp Engn, Seoul, South Korea.
    Burger, Martin
    University of Münster, Germany.
    Block compressive sensing of image and video with nonlocal Lagrangian multiplier and patch-based sparse representation, 2017. In: Signal Processing: Image Communication, ISSN 0923-5965, E-ISSN 1879-2677, Vol. 54, pp. 93-106. Article in journal (Refereed)
    Abstract [en]

    Although block compressive sensing (BCS) makes it tractable to sense large-sized images and video, its recovery performance has yet to be significantly improved because its recovered images or video usually suffer from blurred edges, loss of details, and high-frequency oscillatory artifacts, especially at a low subrate. This paper addresses these problems by designing a modified total variation technique that employs multi-block gradient processing, a denoised Lagrangian multiplier, and patch-based sparse representation. In the case of video, the proposed recovery method is able to exploit both spatial and temporal similarities. Simulation results confirm the improved performance of the proposed method for compressive sensing of images and video in terms of both objective and subjective qualities.
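
    For orientation, the recovery problem behind this line of work can be written in a generic total-variation-regularized form (a standard formulation, not the authors' modified method with multi-block gradients and the denoised Lagrangian multiplier):

    \[
    \hat{x} \;=\; \arg\min_{x}\; \mathrm{TV}(x) \;+\; \frac{\mu}{2}\,\lVert \Phi x - y \rVert_2^2,
    \]

    where \(\Phi\) is the block measurement operator, \(y\) the compressive measurements, \(\mathrm{TV}(x)\) the total variation of the reconstructed image, and \(\mu\) a Lagrangian weight balancing data fidelity against smoothness.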

  • 6.
    Chow, Joyce A
    et al.
    RISE Interactive Institute, Norrköping, Sweden.
    Törnros, Martin E
    Interaktiva Rum Sverige, Gothenburg, Sweden.
    Waltersson, Marie
    Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV.
    Richard, Helen
    Region Östergötland, Diagnostikcentrum, Klinisk patologi.
    Kusoffsky, Madeleine
    RISE Interactive Institute, Norrköping, Sweden.
    Lundström, Claes
    Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV. Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten. Sectra AB, Linköping, Sweden.
    Kurti, Arianit
    RISE Interactive Institute, Norrköping, Sweden.
    A Design Study Investigating Augmented Reality and Photograph Annotation in a Digitalized Grossing Workstation, 2017. In: Journal of Pathology Informatics, ISSN 2229-5089, E-ISSN 2153-3539, Vol. 8, no. 31. Article in journal (Refereed)
    Abstract [en]

    Context: Within digital pathology, digitalization of the grossing procedure has been relatively underexplored in comparison to digitalization of pathology slides. 

    Aims: Our investigation focuses on the interaction design of an augmented reality gross pathology workstation and refining the interface so that information and visualizations are easily recorded and displayed in a thoughtful view. 

    Settings and Design: The work in this project occurred in two phases: the first phase focused on implementation of an augmented reality grossing workstation prototype while the second phase focused on the implementation of an incremental prototype in parallel with a deeper design study. 

    Subjects and Methods: Our research institute focused on an experimental and “designerly” approach to create a digital gross pathology prototype as opposed to focusing on developing a system for immediate clinical deployment. 

    Statistical Analysis Used: Evaluation has not been limited to user tests and interviews, but rather key insights were uncovered through design methods such as “rapid ethnography” and “conversation with materials”. 

    Results: We developed an augmented reality enhanced digital grossing station prototype to assist pathology technicians in capturing data during examination. The prototype uses a magnetically tracked scalpel to annotate planned cuts and dimensions onto photographs taken of the work surface. This article focuses on the use of qualitative design methods to evaluate and refine the prototype. Our aims were to build on the strengths of the prototype's technology, improve the ergonomics of the digital/physical workstation by considering numerous alternative design directions, and to consider the effects of digitalization on personnel and the pathology diagnostics information flow from a wider perspective. A proposed interface design allows the pathology technician to place images in relation to its orientation, annotate directly on the image, and create linked information. 

    Conclusions: The augmented reality magnetically tracked scalpel reduces tool switching though limitations in today's augmented reality technology fall short of creating an ideal immersive workflow by requiring the use of a monitor. While this technology catches up, we recommend focusing efforts on enabling the easy creation of layered, complex reports, linking, and viewing information across systems. Reflecting upon our results, we argue for digitalization to focus not only on how to record increasing amounts of data but also how these data can be accessed in a more thoughtful way that draws upon the expertise and creativity of pathology professionals using the systems.

  • 7.
    Eilertsen, Gabriel
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Kronander, Joel
    Linköpings universitet, Institutionen för teknik och naturvetenskap. Linköpings universitet, Tekniska fakulteten.
    Denes, Gyorgy
    University of Cambridge, England.
    Mantiuk, Rafal K.
    University of Cambridge, England.
    Unger, Jonas
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    HDR image reconstruction from a single exposure using deep CNNs, 2017. In: ACM Transactions on Graphics, ISSN 0730-0301, E-ISSN 1557-7368, Vol. 36, no. 6, article id 178. Article in journal (Refereed)
    Abstract [en]

    Camera sensors can only capture a limited range of luminance simultaneously, and in order to create high dynamic range (HDR) images a set of different exposures is typically combined. In this paper we address the problem of predicting information that has been lost in saturated image areas, in order to enable HDR reconstruction from a single exposure. We show that this problem is well-suited for deep learning algorithms, and propose a deep convolutional neural network (CNN) that is specifically designed taking into account the challenges in predicting HDR values. To train the CNN we gather a large dataset of HDR images, which we augment by simulating sensor saturation for a range of cameras. To further boost robustness, we pre-train the CNN on a simulated HDR dataset created from a subset of the MIT Places database. We demonstrate that our approach can reconstruct high-resolution visually convincing HDR results in a wide range of situations, and that it generalizes well to reconstruction of images captured with arbitrary and low-end cameras that use unknown camera response functions and post-processing. Furthermore, we compare to existing methods for HDR expansion, and show high quality results also for image based lighting. Finally, we evaluate the results in a subjective experiment performed on an HDR display. This shows that the reconstructed HDR images are visually convincing, with large improvements as compared to existing methods.
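
    A minimal sketch of the general masking-and-blending idea the abstract describes (illustrative only; the saturation threshold, the soft mask, and the function names are assumptions, not the authors' exact formulation):

    ```python
    import numpy as np

    def blend_hdr(ldr_linear, cnn_prediction, threshold=0.95):
        """Blend a CNN prediction into the saturated regions of a linearized LDR input.

        ldr_linear, cnn_prediction: float arrays in linear luminance, shape (H, W, 3).
        threshold: pixels above this fraction of the sensor maximum are treated as
                   saturated (an assumed, illustrative cutoff).
        """
        # Soft mask: 0 for well-exposed pixels, approaching 1 for saturated ones.
        max_channel = ldr_linear.max(axis=-1, keepdims=True)
        alpha = np.clip((max_channel - threshold) / (1.0 - threshold), 0.0, 1.0)

        # Keep trusted input values, fill saturated areas with the prediction.
        return (1.0 - alpha) * ldr_linear + alpha * cnn_prediction
    ```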

  • 8.
    Eilertsen, Gabriel
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Mantiuk, Rafal
    University of Cambridge.
    Unger, Jonas
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Real-time noise-aware tone mapping, 2015. In: ACM Transactions on Graphics, ISSN 0730-0301, E-ISSN 1557-7368, Vol. 34, no. 6, pp. 198:1-198:15, article id 198. Article in journal (Refereed)
    Abstract [en]

    Real-time high quality video tone mapping is needed for many applications, such as digital viewfinders in cameras, display algorithms which adapt to ambient light, in-camera processing, rendering engines for video games and video post-processing. We propose a viable solution for these applications by designing a video tone-mapping operator that controls the visibility of the noise, adapts to display and viewing environment, minimizes contrast distortions, preserves or enhances image details, and can be run in real-time on an incoming sequence without any preprocessing. To our knowledge, no existing solution offers all these features. Our novel contributions are: a fast procedure for computing local display-adaptive tone-curves which minimize contrast distortions, a fast method for detail enhancement free from ringing artifacts, and an integrated video tone-mapping solution combining all the above features.

  • 9.
    Evangelista, Gianpaolo
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Physical Model of the Slide Guitar: An Approach Based on Contact Forces, 2012. In: Proceedings of Audio Engineering Society Convention 132, 2012. Conference paper (Refereed)
    Abstract [en]

    In this paper we approach the synthesis of the slide guitar, which is a particular playing mode of the guitar where continuous tuning of the tones is achieved by sliding a metal or glass piece, the bottleneck, along the strings on the guitar neck side. The bottleneck constitutes a unilateral constraint for the string vibration. The dynamics are subject to friction, scraping, textured displacement and collisions. The presented model is physically inspired and is based on a dynamic model of friction, together with a geometrical model of the textured displacements and a model for collisions of the string with the bottleneck. These models are suitable for implementation in a digital waveguide computational scheme for the 3D vibration of the string, where continuous pitch bending is achieved by allpass filters to approximate fractional delays, friction is captured by nonlinear state-space systems in the slide junction, and textured displacements by signal injection at a variable point in the waveguide.

  • 10.
    Evangelista, Gianpaolo
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Physical Model of the String-Fret Interaction, 2011. In: Proc. of Digital Audio Effect Conf., 2011, pp. 345-351. Conference paper (Refereed)
    Abstract [en]

    In this paper, a model for the interaction of the strings with the frets in a guitar or other fretted string instruments is introduced. In the two-polarization representation of the string oscillations it is observed that the string interacts with the fret in different ways. While the vertical oscillation is governed by perfect or imperfect clamping of the string on the fret, the horizontal oscillation is subject to friction of the string over the surface of the fret. The proposed model allows, in particular, for the accurate evaluation of the elongation of the string in the two modes, which gives rise to audible dynamic detuning. The realization of this model into a structurally passive system for use in digital waveguide synthesis is detailed. By changing the friction parameters, the model can be employed in fretless instruments too, where the string directly interacts with the neck surface.

  • 11.
    Evangelista, Gianpaolo
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Smith III, Julius Orion
    CCRMA, Stanford University, California, USA.
    Structurally Passive Scattering Element for Modeling Guitar Pluck Action, 2010. In: Proc. of Digital Audio Effect Conf., Graz, Austria, 2010, pp. 10-17. Conference paper (Refereed)
    Abstract [en]

    In this paper we propose new models for the plucking interaction of the player with the string for use with digital waveguide simulation of guitar. Unlike the previously presented models, the new models are based on structurally passive scattering junctions, which have the main advantage of being properly scaled for use in fixed-point waveguide implementations and of guaranteeing stability independently of the plucking excitation. In a first model we start from the Cuzzucoli-Lombardo equations [1], within the Evangelista-Eckerholm [2] propagation formulation, in order to derive the passive scattering junction by means of bilinear transformation. In a second model we start from equations properly modeling the finger compliance by means of a spring. In a third model we formalize the interaction in terms of driving impedances. The model is also extended using nonlinear (feathering) compliance models.

  • 12.
    Falk, Martin
    et al.
    VISUS – Visualization Research Center, Universität Stuttgart.
    Seizinger, A.
    VISUS – Visualization Research Center, Universität Stuttgart.
    Sadlo, F.
    VISUS – Visualization Research Center, Universität Stuttgart.
    Üffinger, M.
    VISUS – Visualization Research Center, Universität Stuttgart.
    Weiskopf, D.
    VISUS – Visualization Research Center, Universität Stuttgart.
    Trajectory-Augmented Visualization of Lagrangian Coherent Structures in Unsteady Flow, 2010. In: International Symposium on Flow Visualization (ISFV14), 2010. Conference paper (Other academic)
    Abstract [en]

    The finite-time Lyapunov exponent (FTLE) field can be used for many purposes, from the analysis of the predictability in dynamical systems to the topological analysis of time-dependent vector fields. In the topological context, the topic of this work, FTLE ridges represent Lagrangian coherent structures (LCS), a counterpart to separatrices in vector field topology. Since the explicit vector field behavior cannot be deduced from these representations, they may be augmented by line integral convolution patterns, a computational flow visualization counterpart to the surface oil flow method. This is, however, strictly meaningful only in stationary vector fields. Here, we propose an augmentation that visualizes the LCS-inducing flow behavior by means of complete trajectories but avoids occlusion and visual clutter. For this we exploit the FTLE for both the selection of significant trajectories as well as their individual representation. This results in 3D line representations for 2D vector fields by treating 2D time-dependent vector fields in 3D space-time. We present two variants of the approach, one easing the choice of the finite advection time for FTLE analysis and one for investigating the flow once the time is chosen.
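
    For orientation only (the standard definition, not a result specific to this paper): given the flow map \(\Phi_{t_0}^{t_0+T}\) that advects particles from time \(t_0\) to \(t_0+T\), the FTLE is

    \[
    \sigma_{t_0}^{T}(\mathbf{x}) \;=\; \frac{1}{|T|}\,\ln\sqrt{\lambda_{\max}\!\left(\big(\nabla\Phi_{t_0}^{t_0+T}(\mathbf{x})\big)^{\mathsf{T}}\,\nabla\Phi_{t_0}^{t_0+T}(\mathbf{x})\right)},
    \]

    i.e. the largest exponential separation rate of initially nearby trajectories over the advection time \(T\); ridges of this scalar field are the LCS referred to above.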

  • 13.
    Flinke, Johan
    Linköpings universitet, Institutionen för teknik och naturvetenskap. Linköpings universitet, Tekniska fakulteten.
    Utvärdering och testning av ett bildbehandlingsprogram för volymberäkning av mat i nutritionsforskning, 2016. Independent thesis, Basic level (Bachelor's degree), 10 credits / 15 HE credits. Student thesis (Degree project)
    Abstract [sv]

    In ongoing research on childhood obesity there is an interest in verifying the amount of food that the participants consume. To get around the problem of misjudging the actual food intake, a program has been developed that, using stereoscopic image processing, is intended to compute volumes based on photographs of the participants' meals. In this way the margin of error introduced by the human factor is eliminated by letting a computer determine the exact amount of food consumed, and it also improves the possibility of storing the collected data for future studies.

    This work aims to evaluate the program FoodIQ, which was developed specifically for this purpose. Although the program has been delivered and taken into use, no extensive tests have yet been carried out to verify that it is able to measure volumes correctly. The main part of this work has consisted of carrying out a large number of volume calculations in order to get an idea of the program's degree of accuracy.

    The conclusions drawn were that although the program shows great potential, it suffers from some serious shortcomings which mean that it currently does not work as intended. This report describes aspects of the program's usability and functionality. It also presents the results of the tests that were carried out, the purpose of which was to determine which factors decide whether a volume measurement works as intended or not.

  • 14.
    Fritz, Jenny
    Linköpings universitet, Institutionen för teknik och naturvetenskap. Linköpings universitet, Tekniska högskolan.
    Utveckling av ett verktyg för produktkataloggenerering, 2013. Independent thesis, Basic level (Bachelor's degree), 10 credits / 15 HE credits. Student thesis (Degree project)
    Abstract [sv]

    Product catalogues are published and distributed today by many retail companies, large and small. However, it has been shown that catalogue production can be both time- and resource-consuming. This thesis has therefore aimed at finding a solution to this problem by examining needs and prerequisites and then developing a tool that can facilitate the work of creating product catalogues. The goal was that the resulting tool would automatically produce a product catalogue in PDF format from an existing article register. A pre-study showed that despite differences in the design of existing product catalogues, certain common elements such as product image and price are present. This fact was exploited during the development work, where it was assumed that an article register, regardless of the type of data source, always contains certain information elements that can be published. To allow for differences in the graphical design of a catalogue, a separate template handling was implemented. The purpose of this was to give the user the possibility to adjust, for example, the placement of text fields, image dimensions and background images according to their own needs and preferences. To reach the finish line, the project was allowed to grow in scope, and in the spring of 2013 the catalogue-generation tool worked in accordance with the goals that had been set. Despite this, there is still considerable room for further development, especially as the need for more efficient catalogue production appears to be great.

  • 15.
    Glansberg, Sven
    Linköpings universitet, Institutionen för datavetenskap, Interaktiva och kognitiva system. Linköpings universitet, Tekniska högskolan.
    Presentation av reservdelskatalog med modellbaserat konstruktionsunderlag: En fallstudie av Saabs konceptutveckling för teknikinformation till stridsflygplanet Gripen NG, 2012. Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis (Degree project)
    Abstract [sv]

    In the military aviation industry, product data management across the product life cycle and the development of logistics support are central areas for managing requirements and costs. The latest strategy for these challenges is based on model-based definition (MBD). In this perspective, the technical information discipline faces the change effort of exploiting the possibilities of MBD. Improvements in the presentation method for illustrations and more efficient production workflows are expected.

    This change effort was examined through a case study of Saab's concept development for technical information for the Gripen NG fighter aircraft. The study focused on the spare parts catalogue publication type and its use. The work contributes a model that describes four levels for the design of information systems, of which the presentation level is in focus. In addition, comparable working methods for handling MBD data within the case are examined.

    The study found that the technical information department faces a transition from document-based management to the development of information systems. Three proposals for the next generation of spare parts catalogue are then discussed. This leads to two conclusions: first, that an insufficient picture of how the spare parts catalogue is used makes it difficult to assess the suitability of new presentation methods, and second, that the improvements in working method and presentation method achieved when introducing MBD in other areas of the product life cycle are not directly transferable to the spare parts catalogue. Consequently, proposals for future research and work are presented.

  • 16.
    Gooran, Sasan
    Linköpings universitet, Institutionen för teknik och naturvetenskap. Linköpings universitet, Tekniska högskolan.
    Context Dependent Color Halftoning in Digital Printing, 2000. In: IS&T's PICS Conference 2000, The Society for Imaging Science and Technology, 2000, pp. 242-246. Conference paper (Other academic)
  • 17.
    Gooran, Sasan
    Linköpings universitet, Institutionen för teknik och naturvetenskap. Linköpings universitet, Tekniska högskolan.
    Context Dependent Color Halftoning with Color Matching, 2001. In: Proceedings of the Technical Association of the Graphic Arts, TAGA, 2001, Technical Association of the Graphic Arts, 2001, pp. 304-317. Conference paper (Other academic)
  • 18.
    Gooran, Sasan
    Linköpings universitet, Institutionen för teknik och naturvetenskap. Linköpings universitet, Tekniska högskolan.
    Halftoning and Color Noise, 2001. In: Ninth Color Imaging Conference: Color Science and Engineering: Systems, Technologies, and Applications, The Society for Imaging Science and Technology, 2001, pp. 148-188. Conference paper (Other academic)
    Abstract [en]

    A frequency modulated color halftoning algorithm is presented in this paper. Unlike the normal approach of halftoning a color image, in which the color separations of the original image are halftoned independently, the original color image is halftoned in a context dependent manner. The strategy to reduce color noise and gain control over color gamut is to prevent dot-on-dot printing as much as possible. The color shifts that might occur because of this dot-off-dot printing strategy have to be compensated before halftoning. This transformation uses some data for the printer with which the halftoned color image is supposed to be printed. The experiments verify that the color noise is notably smaller in the images that are halftoned by the proposed method compared to the images halftoned using the normal approach of halftoning color images. The method also offers the possibility of treating the color separations of the original image differently if needed. For example, the yellow separation should be treated differently from the other separations, because the yellow dots are less visible than the other color dots when they are printed on a white paper. Two criteria for objectively measuring the quality of the produced results are also discussed.

  • 19.
    Gooran, Sasan
    Linköpings universitet, Institutionen för teknik och naturvetenskap. Linköpings universitet, Tekniska högskolan.
    Hybrid Halftoning in Flexography, 2003. In: Proceedings of the Technical Association of the Graphic Arts, TAGA, Technical Association of the Graphic Arts, 2003. Conference paper (Other academic)
  • 20.
    Gooran, Sasan
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Hauck, Shahram
    Dept. of Informatics and Media, Beuth Hochschule Berlin University of Applied Sciences, Berlin, Germany.
    A novel spectral trapping model for color halftones, 2018. In: Journal of Print and Media Technology Research, ISSN 2223-8905, Vol. 7, no. 3. Article in journal (Refereed)
    Abstract [en]

    The amount of trapping has a great impact on the gray balance and color reproduction of printed products. The conventional trapping models are print density based and give percentage values to estimate the effect of trapping. In an earlier paper (Hauck and Gooran, 2011), a spectral trapping model was proposed that defines the trapping effect as the ΔE*ab colorimetric difference between the real ink overlap (measurements) and the ideal ink overlap. All the trapping models proposed so far, however, only calculate the trapping value for full-tone (solid) ink overlap. As the trapping value for full-tone ink overlap could overestimate the actual ink trapping effect for halftones, it is important to be able to also approximate the trapping value of color halftones. Furthermore, for a detailed gray balance shift analysis, there is a need to estimate the trapping effect for specific color halftones.

    In the present paper, we propose a novel spectral trapping model that delivers the trapping value as a ΔE*ab color difference for color halftones, taking into account secondary and tertiary ink overlap.

    The results of the experiments show that the trapping values for color halftones are much smaller than their corresponding trapping values at full-tone, but the trapping value of halftones, besides other common quality parameters, should still be considered if some quality inaccuracy, such as a gray balance shift, occurs in a print production.
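
    For reference (a standard colorimetric definition, not specific to this paper), the ΔE*ab difference between two CIELAB colors is

    \[
    \Delta E^{*}_{ab} \;=\; \sqrt{(\Delta L^{*})^{2} + (\Delta a^{*})^{2} + (\Delta b^{*})^{2}},
    \]

    here evaluated between the measured real ink overlap and the ideal ink overlap after conversion to CIELAB.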

  • 21.
    Gooran, Sasan
    et al.
    Linköpings universitet, Institutionen för systemteknik. Linköpings universitet, Tekniska högskolan.
    Kruse, Björn
    Linköpings universitet, Institutionen för systemteknik. Linköpings universitet, Tekniska högskolan.
    Color Halftoning in Digital Printing, 1999. Conference paper (Other academic)
  • 22.
    Gooran, Sasan
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Kruse, Björn
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    High-speed first- and second-order frequency modulated halftoning, 2015. In: Journal of Electronic Imaging (JEI), ISSN 1017-9909, E-ISSN 1560-229X, Vol. 24, no. 2. Article in journal (Refereed)
    Abstract [en]

    Halftoning is a crucial part of image reproduction in print. First-order FM halftones, in which the single dots are stochastically distributed, are widely used in printing technologies, such as inkjet, that are able to stably print isolated dispersed dots. Printers, such as laser printers, that utilize electrophotographic technology are not able to stably print isolated dots and therefore use clustered-dot halftones. Periodic clustered-dot, i.e. AM, halftones are commonly used in this type of printer, but they suffer from an undesired periodic interference pattern called moiré. An alternative solution is to use second-order FM halftones in which the clustered dots are stochastically distributed. Iterative halftoning techniques, which usually result in well-formed halftones, operate on the whole input image and require extensive computations, and are thereby very slow when the input image is large. In this paper, we introduce a method to generate image-independent threshold matrices for first- and second-order FM halftoning. The first-order threshold matrix generates well-formed halftone patterns, and the second-order FM threshold matrix can be adjusted to produce clustered dots of different size, shape and alignment. Using predetermined and image-independent threshold matrices makes the proposed halftoning method a point-by-point process and thereby very fast.
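
    A minimal sketch of the point-by-point idea described above (generic thresholding against a tiled matrix; the random matrix in the usage example is only a stand-in, not the authors' first- or second-order FM matrices):

    ```python
    import numpy as np

    def threshold_halftone(image, threshold_matrix):
        """Point-by-point halftoning: compare each pixel of a grayscale image
        (values in [0, 1]) against a tiled threshold matrix.

        Returns a binary halftone (1 = print a dot, 0 = leave blank). The design
        of the threshold matrix determines whether the result is AM-, first- or
        second-order-FM-like; any matrix with values in [0, 1] works here.
        """
        h, w = image.shape
        th, tw = threshold_matrix.shape
        # Tile the matrix so it covers the whole image, then crop.
        tiled = np.tile(threshold_matrix, (h // th + 1, w // tw + 1))[:h, :w]
        return (image > tiled).astype(np.uint8)

    # Usage: a random matrix stands in for a properly designed FM threshold matrix.
    rng = np.random.default_rng(0)
    gray = np.linspace(0.0, 1.0, 256).reshape(1, -1).repeat(64, axis=0)  # gradient
    halftone = threshold_halftone(gray, rng.random((64, 64)))
    ```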

  • 23.
    Gooran, Sasan
    et al.
    Linköpings universitet, Institutionen för systemteknik. Linköpings universitet, Tekniska högskolan.
    Kruse, Björn
    Linköpings universitet, Institutionen för systemteknik. Linköpings universitet, Tekniska högskolan.
    Near-optimal model-based halftoning technique with dot gain, 1998. In: SPIE Volume 3308 - Very High Resolution and Quality Imaging III, SPIE - International Society for Optical Engineering, 1998. Conference paper (Other academic)
    Abstract [en]

    We present a novel halftoning technique for transformation of continuous tone images into binary halftoned separations. The algorithm is based on a successive assessment of the near optimum sequence of positions to render. The impact of each rendered point is fed back to the process as a distribution function thereby influencing the following evaluations. The distribution function is not constant over the density range. In order to be able to separate the dots adequately in the highlights the 'width or radius' of the distribution has to be made larger than in the mid-tones. The human visual system and the effect of dot gain are also taken into account in this algorithm. The notion of incremental dot gain is introduced. Since the series of positions to render are not known in advance the final necessary dot gain compensation is impossible to assess. However the incremental dot gain can be computed in advance for each configuration of dots and taken into account in the process of generating the output. Some aspects of the process have certain resemblance with error distribution based algorithms. However the raster scanning sequence of rendering the output points in usual error diffusion algorithms is completely different from the image dependent traversal described in this paper.

  • 24.
    Gooran, Sasan
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Yang, Li
    Linköpings universitet, Institutionen för teknik och naturvetenskap. Linköpings universitet, Tekniska högskolan.
    Basics of tone reproduction, 2015. In: Handbook of Digital Imaging / [ed] Michael Kriss, Wiley, 2015. Chapter in book, part of anthology (Refereed)
    Abstract [en]

    There is no doubt that printing has been one of the most important technological inventions for human civilization. Books, magazines, newspapers, and so on have been printed for different purposes such as distributing knowledge, thoughts, and news and commercializing products. Tone reproduction for images has been one of the challenging parts of the printing technology because the printing devices are restricted to a few color inks, whereas the original image may consist of millions of color tones. In this chapter, the basics of tone reproduction are introduced. We begin with a brief history of halftoning and a short introduction to digitalization. It is followed by a description of the visual acuity of the human visual system and its relationship with the screen resolution. Then the basic and general concepts of tone reproduction, such as screen frequency, print resolution, screen angle and Moiré pattern, and dot gain, are described and illustrated. Dot gain is only briefly described and illustrated in this chapter as it is thoroughly discussed in Physical Evaluation of the Quality of Color Halftone. Finally, technologies for color reproduction and color halftoning are discussed.

  • 25.
    Gooran, Sasan
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap. Linköpings universitet, Tekniska högskolan.
    Yang, Li
    Linköpings universitet, Institutionen för teknik och naturvetenskap. Linköpings universitet, Tekniska högskolan.
    Frequency Modulated Halftoning and Dot Gain, 2004. In: Proceedings of the Technical Association of the Graphic Arts, TAGA, Technical Association of the Graphic Arts, 2004. Conference paper (Other academic)
  • 26.
    Gooran, Sasan
    et al.
    Linköpings universitet, Institutionen för systemteknik. Linköpings universitet, Tekniska högskolan.
    Österberg, Mats
    Linköpings universitet, Institutionen för systemteknik. Linköpings universitet, Tekniska högskolan.
    Kruse, Björn
    Linköpings universitet, Institutionen för systemteknik. Linköpings universitet, Tekniska högskolan.
    Hybrid Halftoning: A novel Algorithm for Using Multiple Halftoning Technologies, 1996. Conference paper (Refereed)
    Abstract [en]

    The rendering quality in halftoning is a critical issue. The quality aspects are more important in some images than in others. The quality of skin tone rendering of halftoned images generated by frequency modulated (FM) halftoning techniques differs from those generated by conventional halftoning techniques. Some judge the conventional halftoning techniques as superior in smoothly varying tones whereas frequency modulated halftoning techniques excel in heavily textured images. This paper describes an algorithm that can incorporate both technologies simultaneously. The technique is an iterative optimization of the binary halftone image with respect to the differences between the original and the halftoned images. The performance of the algorithm can be controlled by the nature of the original state of the iteration. The algorithm can in effect accommodate any type of halftone that can be described by a threshold matrix.
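
    A rough illustrative sketch of the kind of iterative refinement the abstract describes (a generic scheme, not the authors' algorithm; the Gaussian filter, the tolerance tau and the sweep count are assumptions):

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def iterative_halftone(image, initial, sigma=1.5, tau=0.05, sweeps=10):
        """Iteratively refine a binary halftone toward a continuous-tone original.

        image:   ink coverage in [0, 1] (1 = full ink); initial: starting binary
                 pattern, whose character (AM- or FM-like) steers the result.
        A Gaussian low-pass filter stands in for the human visual system; pixels
        are toggled where the filtered halftone locally over- or under-covers
        the original by more than tau.
        """
        halftone = initial.astype(float)
        for _ in range(sweeps):
            error = gaussian_filter(halftone, sigma) - gaussian_filter(image, sigma)
            too_dark = (error > tau) & (halftone == 1.0)    # too much ink locally
            too_light = (error < -tau) & (halftone == 0.0)  # too little ink locally
            if not (too_dark.any() or too_light.any()):
                break
            halftone[too_dark] = 0.0
            halftone[too_light] = 1.0
        return halftone.astype(np.uint8)
    ```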

  • 27.
    Guo, Jun
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Further development of shaders for realistic materials and global illumination effects, 2012. Independent thesis, Advanced level (Master's degree), 20 credits / 30 HE credits. Student thesis (Degree project)
    Abstract [en]

    Shader programming is important for realistic material and global illumination real-time rendering, especially in today's 3D industrial fields. More and more customers of Visual Components Oy, a Finnish 3D software company, are no longer content with a correct simulation result alone, but also expect realistic real-time rendering. This thesis project provides in-depth research on real-world material classification, property definition and global illumination techniques in industrial fields. Shader programs for the different materials and global illumination techniques are also created according to the classification and definitions in this thesis work. Moreover, the external rendering tool Redway3D is evaluated as a reference and regarded as a possible solution for future development work.

  • 28.
    Gustafsson Coppel, Ludovic
    et al.
    Gjøvik University College, Norway.
    Le Moan, Steven
    Technical University of Darmstadt, Germany.
    Zitinski Elias, Paula
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Slavuj, Radovan
    Gjøvik University College, Norway.
    Hardeberg, Jon Yngve
    Gjøvik University College, Norway.
    Next generation printing - Towards spectral proofing, 2014. Conference paper (Other academic)
  • 29.
    Günther, David
    et al.
    Saarbrücken, Germany.
    Reininghaus, Jan
    Zuse Institute Berlin, Germany.
    Wagner, Hubert
    Lojasiewicza 6, Krakow, Poland.
    Hotz, Ingrid
    Zuse Institute Berlin, Germany.
    Efficient Computation of 3D Morse-Smale Complexes and Persistent Homology using Discrete Morse Theory, 2012. In: The Visual Computer, ISSN 0178-2789, E-ISSN 1432-2315, Vol. 28, no. 10, pp. 959-969. Article in journal (Refereed)
    Abstract [en]

    We propose an efficient algorithm that computes the Morse–Smale complex for 3D gray-scale images. This complex allows for an efficient computation of persistent homology since it is, in general, much smaller than the input data but still contains all necessary information. Our method improves a recently proposed algorithm to extract the Morse–Smale complex in terms of memory consumption and running time. It also allows for a parallel computation of the complex. The computational complexity of the Morse–Smale complex extraction solely depends on the topological complexity of the input data. The persistence is then computed using the Morse–Smale complex by applying an existing algorithm with a good practical running time. We demonstrate that our method allows for the computation of persistent homology for large data on commodity hardware.

  • 30.
    Haake, Magnus
    et al.
    Department of Design Sciences, LTH, Lund University, Lund, Sweden.
    Gulz, Agneta
    LUCS (Div. of Cognitive Science), Lund University, Kungshuset, Lundagård, Lund, Sweden.
    Visual Stereotypes and Virtual Pedagogical Agents, 2008. In: Journal of Educational Technology & Society, ISSN 1176-3647, E-ISSN 1436-4522, Vol. 11, no. 4, pp. 1-15. Article in journal (Refereed)
    Abstract [en]

    The paper deals with the use of visual stereotypes in virtual pedagogical agents and its potential impact in digital learning environments. An analysis of the concept of visual stereotypes is followed by a discussion of affordances and drawbacks as to their use in the context of traditional media. Next, the paper explores whether virtual pedagogical characters introduce anything novel with regard to the use of visual stereotypes - as compared both to real life interaction between humans and to the use of visual stereotypes in traditional non-interactive media such as magazines, film, television and video. It is proposed that novel affordances, as well as novel drawbacks, indeed are being introduced with the use of visual stereotypes in virtual characters. The conclusion of the paper is that knowledge on these matters can be useful both for developers of educational systems and for educators in enabling them to strengthen some pedagogical settings and activities.

  • 31.
    Hadwiger, Markus
    et al.
    VRVis Research Center, Vienna, Austria.
    Ljung, Patric
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten. Siemens Corporate Research, Princeton, USA.
    Rezk Salama, Christof
    University of Siegen, Germany.
    Ropinski, Timo
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten. University of Münster, Germany.
    Advanced illumination techniques for GPU volume raycasting, 2008. In: ACM SIGGRAPH Asia 2008 Courses, 2008, pp. 1-11. Conference paper (Refereed)
    Abstract [en]

    Volume raycasting techniques are important for both visual arts and visualization. They allow an efficient generation of visual effects and the visualization of scientific data obtained by tomography or numerical simulation. Thanks to their flexibility, experts agree that GPU-based raycasting is the state-of-the art technique for interactive volume rendering. It will most likely replace existing slice-based techniques in the near future. Volume rendering techniques are also effective for the direct rendering of implicit surfaces used for soft body animation and constructive solid geometry.

    The lecture starts off with an in-depth introduction to the concepts behind GPU-based ray-casting to provide a common base for the following parts. The focus of this course is on advanced illumination techniques which approximate the physically-based light transport more convincingly. Such techniques include interactive implementation of soft and hard shadows, ambient occlusion and simple Monte-Carlo based approaches to global illumination including translucency and scattering. With the proposed techniques, users are able to interactively create convincing images from volumetric data whose visual quality goes far beyond traditional approaches. The optical properties in participating media are defined using the phase function. Many approximations to the physically based light transport applied for rendering natural phenomena such as clouds or smoke assume a rather homogenous phase function model. For rendering volumetric scans on the other hand different phase function models are required to account for both surface-like structures and fuzzy boundaries in the data. Using volume rendering techniques, artists who create medical visualization for science magazines may now work on tomographic scans directly, without the necessity to fall back to creating polygonal models of anatomical structures.
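
    As background for the course description above: the core of a GPU raycaster is per-ray front-to-back compositing of samples taken along the ray through the volume. A minimal CPU-side sketch (illustrative only; the transfer function and step scaling are assumptions):

    ```python
    import numpy as np

    def composite_ray(samples, transfer_function, step_opacity_scale=0.1):
        """Front-to-back alpha compositing along one ray.

        samples: 1D array of scalar volume samples along the ray.
        transfer_function: maps a scalar sample to (r, g, b, a); assumed here.
        """
        color = np.zeros(3)
        alpha = 0.0
        for s in samples:
            r, g, b, a = transfer_function(s)
            a *= step_opacity_scale                 # scale opacity by step size
            color += (1.0 - alpha) * a * np.array([r, g, b])
            alpha += (1.0 - alpha) * a
            if alpha > 0.99:                        # early ray termination
                break
        return color, alpha

    # Usage with a trivial grayscale transfer function (an illustrative assumption).
    ray_samples = np.linspace(0.0, 1.0, 200)
    rgb, a = composite_ray(ray_samples, lambda s: (s, s, s, s))
    ```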

  • 32.
    Hadwiger, Markus
    et al.
    VRVis Research Center, Vienna, Austria.
    Ljung, Patric
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten. Siemens Corporate Research, Princeton, USA.
    Rezk-Salama, Christof
    University of Siegen, Germany.
    Ropinski, Timo
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten. University of Münster, Germany.
    Advanced Illumination Techniques for GPU-Based Volume Raycasting, 2009. Other (Other academic)
    Abstract [en]

    Volume raycasting techniques are important for both visual arts and visualization. They allow an efficient generation of visual effects and the visualization of scientific data obtained by tomography or numerical simulation. Thanks to their flexibility, experts agree that GPU-based raycasting is the state-of-the art technique for interactive volume rendering. It will most likely replace existing slice-based techniques in the near future. Volume rendering techniques are also effective for the direct rendering of implicit surfaces used for soft body animation and constructive solid geometry.

    The lecture starts off with an in-depth introduction to the concepts behind GPU-based ray-casting to provide a common base for the following parts. The focus of this course is on advanced illumination techniques which approximate the physically-based light transport more convincingly. Such techniques include interactive implementation of soft and hard shadows, ambient occlusion and simple Monte-Carlo based approaches to global illumination including translucency and scattering. With the proposed techniques, users are able to interactively create convincing images from volumetric data whose visual quality goes far beyond traditional approaches. The optical properties in participating media are defined using the phase function. Many approximations to the physically based light transport applied for rendering natural phenomena such as clouds or smoke assume a rather homogenous phase function model. For rendering volumetric scans on the other hand different phase function models are required to account for both surface-like structures and fuzzy boundaries in the data. Using volume rendering techniques, artists who create medical visualization for science magazines may now work on tomographic scans directly, without the necessity to fall back to creating polygonal models of anatomical structures.

  • 33.
    Hagen, Hans
    et al.
    University of Kaiserslautern.
    Hotz, Ingrid
    University of Kaiserslautern.
    Variational modeling methods for Visualization, 2004. In: Visualization Handbook / [ed] Charles D. Hansen and Chris R. Johnson, Springer, 2004, pp. 381-392. Chapter in book, part of anthology (Refereed)
    Abstract [en]

    Variational modeling techniques are powerful tools for free-form modeling in CAD/CAM applications. Some of the basic principles carry over to scientific visualization. Others have to be modified, and some totally new methods have been developed over the past couple of years. This chapter gives an extended survey of this area. Surfaces and solids designed in a computer graphics environment have many applications in modeling, animation, and visualization. The chapter concentrates on the visualization part. The chapter starts with the basics from differential geometry, which are essential for any variational method. Then, it surveys variational surface modeling. The last step is the visualization part of geometric modeling. In this context, surface curves like geodesics and curvature lines play an important role. The corresponding differential equations are nonlinear, and in most cases numerical algorithms must be used. To be sure to visualize features at a high quality, algorithms with an inherent quality control are needed. The chapter presents geometric algorithms which satisfy this demand.

  • 34.
    Hauck, Shahram
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap. Linköpings universitet, Tekniska högskolan.
    Gooran, Sasan
    Linköpings universitet, Institutionen för teknik och naturvetenskap. Linköpings universitet, Tekniska högskolan.
    A Networked Workflow for a Fully Automated CtP Calibration System, 2011. In: Proc. International Circle of Educational Institutes for Graphic Arts (IC), 2011. Conference paper (Other academic)
    Abstract [en]

    One of the most important targets in the graphic arts market is to achieve standardization. The standard defines the target values for solid Lab and dot gain in the printing process, as well as the tolerance ranges for these two values. The correct dot gain is achieved by measuring the dot gain in the printing process and calculating a correction curve, well known as the Print Characteristic Curve (PCC) [1]. The Raster Image Processor (RIP) needs the PCC to image the printing plate with the correct tone values. In this paper we propose a Networked Workflow (figure 1) with a Workflow Control System (or alternatively a Management Information System, MIS). This Networked Workflow is necessary for the realization of a fully automated CtP calibration system.
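
    For context (standard print metrology, not specific to this paper), the measured tone value and the dot gain it feeds into are commonly computed from densitometer readings via the Murray-Davies equation:

    \[
    a_{\mathrm{meas}} \;=\; \frac{1 - 10^{-D_t}}{1 - 10^{-D_s}},
    \qquad
    \text{dot gain} \;=\; a_{\mathrm{meas}} - a_{\mathrm{nominal}},
    \]

    where \(D_t\) is the density of the halftone tint, \(D_s\) the density of the solid (both relative to paper), and \(a_{\mathrm{nominal}}\) the nominal area coverage in the digital file. The PCC is then the correction curve that brings the measured dot gain into the standardized tolerance range.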

  • 35.
    Hauck, Shahram
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap. Linköpings universitet, Tekniska högskolan.
    Gooran, Sasan
    An alternative method to determinate register variation using colorimetry or densitometry tools, 2011. In: Proceedings of the Technical Association of the Graphic Arts, TAGA, Technical Association of the Graphic Arts, 2011, pp. 340-353. Conference paper (Other academic)
  • 36.
    Hauck, Shahram
    et al.
    Dept. of Informatics and Media, Beuth Hochschule Berlin University of Applied Sciences, Berlin, Germany.
    Gooran, Sasan
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Automated CtP calibration system in an offset printing workflow, 2018. In: Journal of Print and Media Technology Research, ISSN 2223-8905, Vol. 7, no. 3. Article in journal (Refereed)
    Abstract [en]

    Although offset printing has been and still is the most common printing technology for color print production, its output is subject to variations due to environmental and process parameters. It is therefore very important to frequently control the print production quality criteria in order to make the process predictable, reproducible and stable. One of the most important parts of modern industrial offset printing is Computer to Plate (CtP), which makes the printing plate.

    One of the most important quality criteria for printing is to control the dot gain level. It is crucial to keep the dot gain level within an acceptable range, defined by ISO 12647-2/13. This is done by dot gain compensation methods in the Raster Image Processor (RIP). Dot gain compensation, also referred to as CtP calibration, is however a complicated task in offset printing because of the huge number of parameters affecting dot gain. The conventional CtP calibration methods for an offset printing process, which are very time and resource demanding and hence expensive, mostly use at most one to five dot gain correction curves. The CtP calibration method proposed in this paper calibrates the dot gain according to ISO 12647-2/13 recommendations fully automatically, in parallel with the print production.

    Besides that, there is no limitation on the number of dot gain correction curves needed. This method, which is much more efficient and economically beneficial compared to conventional CtP calibration methods, also makes the print production very accurate in terms of dot gain. This automated CtP calibration system for an offset printing workflow is introduced and described in this paper.

  • 37.
    Havsvik, Oskar
    Linköpings universitet, Institutionen för datavetenskap. Linköpings universitet, Tekniska fakulteten.
    Enhanced Full-body Motion Detection for Web Based Games using WebGL, 2015. Independent thesis, Advanced level (Master's degree), 20 credits / 30 HE credits. Student thesis (Degree project)
    Abstract [en]

    By applying the image processing algorithms used in surveillance systems on video data obtained from a web camera, a motion detection application can be created and incorporated into web based games. The use of motion detection opens up a vast field of new possibilities in game design and this thesis will therefore cover how to create a motion detection JavaScript module which can be used in web based games.

    The performance and quality of the motion detection algorithms are important to consider when creating an application. What motion detection algorithms can be used to give a qualitative representation without affecting the performance of a web based game will be analyzed and implemented in this thesis. Since the performance of the Central Processing Unit will not suffice, WebGL and the parallelism of the Graphical Processing Unit will be utilized to implement some of the most recognized image processing algorithms used in motion detection systems. The work resulted in an application where Gaussian blur and Frame Subtraction were used to detect and return areas where motion has been detected.
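
    A minimal sketch of the Gaussian-blur-plus-frame-subtraction step the abstract mentions (the thesis implements this on the GPU with WebGL; the Python form, sigma and threshold here are illustrative assumptions):

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def detect_motion(previous_frame, current_frame, sigma=2.0, threshold=0.08):
        """Frame-subtraction motion detection on grayscale frames in [0, 1].

        Returns a boolean mask of pixels where motion was detected.
        """
        # Gaussian blur suppresses sensor noise before differencing.
        prev = gaussian_filter(previous_frame, sigma)
        curr = gaussian_filter(current_frame, sigma)
        return np.abs(curr - prev) > threshold
    ```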

  • 38.
    Johansson Fernstad, Sara
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Johansson, Jimmy
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    A Task Based Performance Evaluation of Visualization Approaches for Categorical Data Analysis, 2011. In: Proceedings - 15th International Conference on Information Visualisation, Los Alamitos, CA, USA: IEEE Computer Society, 2011, pp. 80-89. Conference paper (Other academic)
    Abstract [en]

    Categorical data is common within many areas and efficient methods for analysis are needed. It is, however, often difficult to analyse categorical data since no general measure of similarity exists. One approach is to represent the categories with numerical values (quantification) prior to visualization using methods for numerical data. Another is to use visual representations specifically designed for categorical data. Although commonly used, very little guidance is available as to which method may be most useful for different analysis tasks. This paper presents an evaluation comparing the performance of employing quantification prior to visualization and visualization using a method designed for categorical data. It also provides guidance as to which visualization approach is most useful in the context of two basic data analysis tasks: one related to similarity structures and one related to category frequency. The results strongly indicate that the quantification approach is most efficient for the similarity related task, whereas the visual representation designed for categorical data is most efficient for the task related to category frequency.

  • 39.
    Johansson Fernstad, Sara
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Johansson, Jimmy
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Adams, Suzi
    Unilever Discover Port Sunlight, UK.
    Shaw, Jane
    Unilever Discover Port Sunlight, UK.
    Taylor, David
    Unilever Discover Port Sunlight, UK.
    Visual Exploration of Microbial Populations (2011). In: IEEE Symposium on Biological Data Visualization, 2011, pp. 127-134. Conference paper (Other academic)
    Abstract [en]

    Studies of the ecology of microbial populations are increasingly common within many research areas as the field of microbiomics develops rapidly. The study of the ecology in sampled microbial populations generates high-dimensional data sets. Although many analysis methods are available for examination of such data, a tailored tool was required to fulfil microbiologists' need for interactivity and flexibility. In this paper, MicrobiVis is presented, a tool for visual exploration and interactive analysis of microbial populations. MicrobiVis has been designed in close collaboration with end users. It extends previous interactive systems for explorative dimensionality reduction by including a range of domain-relevant features, and contributes a flexible and explorative dimensionality reduction as well as a visual and interactive environment for examination of data subsets. It combines information visualization with methods based on analytic tasks common in microbiology as a means of gaining new and relevant insights. The utility of MicrobiVis is demonstrated through a use case describing how a microbiologist may use the system for a visual analysis of a microbial data set. Its usability and potential are indicated through positive feedback from the current end users.

  • 40.
    Johansson, Sara
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer. Linköpings universitet, Tekniska högskolan.
    Visual Exploration of Categorical and Mixed Data Sets (2009). In: VAKD '09: Proceedings of the ACM SIGKDD Workshop on Visual Analytics and Knowledge Discovery: Integrating Automated Analysis with Interactive Exploration, New York, USA: ACM Press, 2009, pp. 21-29. Conference paper (Refereed)
    Abstract [en]

    For categorical data there does not exist any similarity measure that is as straightforward and general as the numerical distance between numerical items. Because of this it is often difficult to analyse data sets including categorical variables or a combination of categorical and numerical variables (mixed data sets). Quantification of categorical variables enables analysis using commonly used visual representations and analysis techniques for numerical data. This paper presents a tool for exploratory analysis of categorical and mixed data, which uses a quantification process introduced in [Johansson2008]. The application enables analysis of mixed data sets by providing an environment for exploratory analysis using common visual representations in multiple coordinated views, together with algorithmic analysis that facilitates detection of potentially interesting patterns within combinations of categorical and numerical variables. The effectiveness of the quantification process and of the features of the application is demonstrated through a case scenario.

  • 41.
    Johansson, Sara
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer. Linköpings universitet, Tekniska högskolan.
    Jern, Mikael
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer. Linköpings universitet, Tekniska högskolan.
    Johansson, Jimmy
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer. Linköpings universitet, Tekniska högskolan.
    Interactive Quantification of Categorical Variables in Mixed Data Sets (2008). In: Information Visualisation, 2008. IV '08. 12th International Conference / [ed] Ebad Banissi, Liz Stuart, Mikael Jern, Gennady Andrienko, Francis T. Marchese, Nasrullah Memon, Reda Alhajj, Theodor G Wyeld, Remo Aslak Burkhard, Georges Grinstein, Dennis Groth, Anna Ursyn, Carsten Maple, Anthony Faiola and Brock Craft, Los Alamitos, California: IEEE Computer Society, 2008, pp. 3-10. Conference paper (Refereed)
    Abstract [en]

    Data sets containing a combination of categorical and continuous variables (mixed data sets) are difficult to analyse since no generalized similarity measure exists for categorical variables. Quantification of categorical variables makes it possible to represent this type of data using techniques designed for numerical data. This paper presents a quantification process of categorical variables in mixed data sets that incorporates information on relationships among the continuous variables into the process, as well as utilizing the domain knowledge of a user. An interactive visualization environment using parallel coordinates as a visual interface is provided, where the user is able to control the quantification process and analyse the result. The efficiency of the approach is demonstrated using two mixed data sets.
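
    Quantification can be made concrete with a deliberately simplified sketch: here each category is mapped to the mean of one associated continuous variable, which is an assumption made only for brevity. The paper's process additionally incorporates relationships among the continuous variables and interactive user steering, neither of which is reproduced below.

    from collections import defaultdict

    def quantify(categories, values):
        """Map each category label to the mean of an associated numeric column."""
        sums, counts = defaultdict(float), defaultdict(int)
        for cat, val in zip(categories, values):
            sums[cat] += val
            counts[cat] += 1
        return {cat: sums[cat] / counts[cat] for cat in sums}

    # Rows of (category, numeric value); the numbers are made up.
    cats = ["A", "B", "A", "C", "B"]
    nums = [1.0, 4.0, 3.0, 10.0, 6.0]
    print(quantify(cats, nums))   # {'A': 2.0, 'B': 5.0, 'C': 10.0}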

  • 42.
    Johansson, Sara
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer. Linköpings universitet, Tekniska högskolan.
    Johansson, Jimmy
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer. Linköpings universitet, Tekniska högskolan.
    Interactive Dimensionality Reduction Through User-defined Combinations of Quality Metrics (2009). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 15, no. 6, pp. 993-1000. Article in journal (Refereed)
    Abstract [en]

    Multivariate data sets including hundreds of variables are increasingly common in many application areas. Most multivariate visualization techniques are unable to display such data effectively, and a common approach is to employ dimensionality reduction prior to visualization. Most existing dimensionality reduction systems focus on preserving one or a few significant structures in data. For many analysis tasks, however, several types of structures can be of high significance and the importance of a certain structure compared to the importance of another is often task-dependent. This paper introduces a system for dimensionality reduction by combining user-defined quality metrics using weight functions to preserve as many important structures as possible. The system aims at effective visualization and exploration of structures within large multivariate data sets and provides enhancement of diverse structures by supplying a range of automatic variable orderings. Furthermore, it enables a quality-guided reduction of variables through an interactive display facilitating investigation of trade-offs between loss of structure and the number of variables to keep. The generality and interactivity of the system are demonstrated through a case scenario.
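
    The core mechanism, ranking variables by a user-weighted combination of quality metrics, can be sketched as follows. The two metrics used here (normalized variance and mean absolute correlation) and the weights are illustrative assumptions, not the metrics of the paper.

    import numpy as np

    def rank_variables(data, weights=(0.5, 0.5)):
        """data: (n_samples, n_variables) array; returns variable indices, best first."""
        # Metric 1: per-variable variance, normalized to [0, 1].
        var = data.var(axis=0)
        var = var / var.max()
        # Metric 2: mean absolute correlation with the other variables.
        corr = np.abs(np.corrcoef(data, rowvar=False))
        np.fill_diagonal(corr, 0.0)
        mean_corr = corr.mean(axis=0)
        # User-defined weighted combination of the quality metrics.
        quality = weights[0] * var + weights[1] * mean_corr
        return np.argsort(-quality)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 8))
    print(rank_variables(X, weights=(0.7, 0.3)))   # keep the top-k variables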

  • 43.
    Johansson, Sara
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer. Linköpings universitet, Tekniska högskolan.
    Johansson, Jimmy
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer. Linköpings universitet, Tekniska högskolan.
    Visual Analysis of Mixed Data Sets Using Interactive Quantification (2009). In: ACM SIGKDD Explorations Newsletter, ISSN 1931-0145, Vol. 11, no. 2, pp. 29-38. Article in journal (Refereed)
    Abstract [en]

    It is often difficult to analyse data sets including a combination of categorical and numerical variables (mixed data sets) since there does not exist any similarity measure which is as straightforward and general as the numerical distance between numerical items. Quantification of categorical variables enables analysis using commonly used visual representations and analysis techniques for numerical data. This paper presents a tool for exploratory analysis of categorical and mixed data which uses a quantification process introduced in [Johansson2008]. The application enables analysis of mixed data sets by providing an environment for exploratory analysis using common visual representations in multiple coordinated views and algorithmic analysis that facilitates detection of potentially interesting patterns within combinations of categorical and numerical variables. The generality and usefulness of the quantification process and of the features of the application are demonstrated through a case scenario using a data set from the IEEE VAST 2008 Challenge.

  • 44.
    Johansson-Evegård, Erik
    Linköpings universitet, Institutionen för teknik och naturvetenskap. Linköpings universitet, Tekniska högskolan.
    Artist Friendly Fracture Modelling (2012). Independent thesis, advanced level (professional degree), 20 credits / 30 HE credits. Student thesis (degree project)
    Abstract [en]

    Destruction is one of the key aspects of visual effects. This report describes the work that was done to create a production-ready pre-fracture modelling plug-in for Maya. It provides information on which methods can be used to create a robust plug-in and on various techniques for sampling points to create interesting fracture patterns using the Voronoi diagram. It also discusses how this work can be built on further to create an even better plug-in.
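
    The core idea of Voronoi-based pre-fracturing can be shown in a few lines: seed points are sampled inside the object's domain and the object is cut along the resulting Voronoi cells, so the sampling strategy directly shapes the fracture pattern. The 2-D domain and the clustering of seeds around an assumed "impact point" below are illustrative choices, not the plug-in's actual sampling techniques.

    import numpy as np
    from scipy.spatial import Voronoi

    rng = np.random.default_rng(42)
    impact = np.array([0.5, 0.5])

    # Denser sampling near the impact point gives smaller fragments there.
    near = impact + 0.1 * rng.normal(size=(30, 2))
    far = rng.uniform(0.0, 1.0, size=(20, 2))
    seeds = np.vstack([near, far])

    vor = Voronoi(seeds)
    # Each Voronoi region corresponds to one fracture fragment.
    print(len(vor.point_region), "fragments,", len(vor.vertices), "cell vertices")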

  • 45.
    Jönsson, Daniel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Enhancing Salient Features in Volumetric Data Using Illumination and Transfer Functions (2016). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The visualization of volume data is a fundamental component in the medical domain. Volume data is used in the clinical workflow to diagnose patients and is therefore of utmost importance. The amount of data is rapidly increasing as sensors, such as computed tomography scanners, become capable of measuring more details and gathering more data over time. Unfortunately, the increasing amount of data makes it computationally challenging to interactively apply high quality methods that increase shape and depth perception. Furthermore, methods for exploring volume data have mostly been designed for experts, which prevents novice users from exploring volume data. This thesis aims to address these challenges by introducing efficient methods for enhancing salient features through high quality illumination, as well as methods for intuitive volume data exploration.

    Humans interpret the world around them by observing how light interacts with objects. Shadows enable us to better determine distances while shifts in color enable us to better distinguish objects and identify their shape. These concepts are also applicable to computer generated content. The perception in volume data visualization can therefore be improved by simulating real-world light interaction. This thesis presents efficient methods that are capable of interactively simulating realistic light propagation in volume data. In particular, this work shows how a multi-resolution grid can be used to encode the attenuation of light from all directions using spherical harmonics, thereby enabling advanced interactive dynamic light configurations. Two methods are also presented that allow photon mapping calculations to be focused on visually changing areas. The results demonstrate that photon mapping can be used in interactive volume visualization for both static and time-varying volume data.

    Efficient and intuitive exploration of volume data requires methods that are easy to use and reflect the objects that were measured. A value that has been collected by a sensor commonly represents the material existing within a small neighborhood around a location. Recreating the original materials is difficult since the value represents a mixture of them. This is referred to as the partial-volume problem. A method is presented that derives knowledge from the user in order to reconstruct the original materials in a way which is more in line with what the user would expect. Sharp boundaries are visualized where the certainty is high while uncertain areas are visualized with fuzzy boundaries. The volume exploration process of mapping data values to optical properties through the transfer function has traditionally been complex and performed by expert users. A study at a science center showed that visitors favor the presented dynamic gallery method over the most commonly used transfer function editor.

    List of papers
    1. A Survey of Volumetric Illumination Techniques for Interactive Volume Rendering
    2014 (English). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 33, no. 1, pp. 27-51. Article in journal (Refereed). Published
    Abstract [en]

    Interactive volume rendering in its standard formulation has become an increasingly important tool in many application domains. In recent years several advanced volumetric illumination techniques to be used in interactive scenarios have been proposed. These techniques claim to have perceptual benefits as well as being capable of producing more realistic volume rendered images. Naturally, they cover a wide spectrum of illumination effects, including varying shading and scattering effects. In this survey, we review and classify the existing techniques for advanced volumetric illumination. The classification will be conducted based on their technical realization, their performance behaviour as well as their perceptual capabilities. Based on the limitations revealed in this review, we will define future challenges in the area of interactive advanced volumetric illumination.

    Place, publisher, year, edition, pages
    Wiley, 2014
    Keywords
    volume rendering; rendering; volume visualization; visualization; illumination rendering; rendering
    National subject category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-105757 (URN)10.1111/cgf.12252 (DOI)000331694100004 ()
    Available from: 2014-04-07 Created: 2014-04-04 Last updated: 2017-12-05
    2. Efficient Visibility Encoding for Dynamic Illumination in Direct Volume Rendering: -
    2012 (English). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 18, no. 3, pp. 447-462. Article in journal (Refereed). Published
    Abstract [en]

    We present an algorithm that enables real-time dynamic shading in direct volume rendering using general lighting, including directional lights, point lights and environment maps. Real-time performance is achieved by encoding local and global volumetric visibility using spherical harmonic (SH) basis functions stored in an efficient multi-resolution grid over the extent of the volume. Our method enables high frequency shadows in the spatial domain, but is limited to a low frequency approximation of visibility and illumination in the angular domain. In a first pass, Level Of Detail (LOD) selection in the grid is based on the current transfer function setting. This enables rapid on-line computation and SH projection of the local spherical distribution of visibility information. Using a piecewise integration of the SH coefficients over the local regions, the global visibility within the volume is then computed. By representing the light sources using their SH projections, the integral over lighting, visibility and isotropic phase functions can be efficiently computed during rendering. The utility of our method is demonstrated in several examples showing the generality and interactive performance of the approach.
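
    The SH machinery this builds on can be illustrated with a toy example: visibility and incident light are each projected onto a few SH basis functions, after which the integral of their product over all directions reduces to a dot product of coefficient vectors. Only SH bands 0-1 and uniform Monte Carlo sampling are used here; the per-cell multi-resolution storage described in the abstract is omitted.

    import numpy as np

    def sh_basis(d):
        """Real SH basis functions, bands 0 and 1, for a unit direction d = (x, y, z)."""
        x, y, z = d
        return np.array([0.282095, 0.488603 * y, 0.488603 * z, 0.488603 * x])

    def project(func, n=20000, seed=0):
        """Monte Carlo projection of a spherical function onto the SH basis."""
        rng = np.random.default_rng(seed)
        dirs = rng.normal(size=(n, 3))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # uniform on the sphere
        coeffs = np.zeros(4)
        for d in dirs:
            coeffs += func(d) * sh_basis(d)
        return coeffs * (4.0 * np.pi / n)

    # Example: visibility blocked in the lower hemisphere, light strongest from +z.
    visibility = lambda d: 1.0 if d[2] > 0.0 else 0.0
    light = lambda d: max(d[2], 0.0)

    v_coeffs = project(visibility)
    l_coeffs = project(light, seed=1)
    # Integral of visibility * light over the sphere, approximated in SH space (about pi).
    print(np.dot(v_coeffs, l_coeffs))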

    Place, publisher, year, edition, pages
    IEEE, 2012
    Keywords
    Volumetric Illumination, Precomputed Radiance Transfer, Volume Rendering
    National subject category
    Other Computer and Information Science
    Identifiers
    urn:nbn:se:liu:diva-66839 (URN)10.1109/TVCG.2011.35 (DOI)000299281700010 ()
    Projects
    CADICS, MOVIII
    Note
    ©2011 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE. Joel Kronander, Daniel Jönsson, Joakim Löw, Patric Ljung, Anders Ynnerman and Jonas Unger, Efficient Visibility Encoding for Dynamic Illumination in Direct Volume Rendering, 2011, IEEE Transactions on Visualization and Computer Graphics. http://dx.doi.org/10.1109/TVCG.2011.35
    Available from: 2011-03-24 Created: 2011-03-21 Last updated: 2018-01-12 Bibliographically approved
    3. Historygrams: Enabling Interactive Global Illumination in Direct Volume Rendering using Photon Mapping
    2012 (English). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 18, no. 12, pp. 2364-2371. Article in journal (Refereed). Published
    Abstract [en]

    In this paper, we enable interactive volumetric global illumination by extending photon mapping techniques to handle interactive transfer function (TF) and material editing in the context of volume rendering. We propose novel algorithms and data structures for finding and evaluating the parts of a scene affected by these parameter changes, and thus support efficient updates of the photon map. In direct volume rendering (DVR) the ability to explore volume data using parameter changes, such as editable TFs, is of key importance. Advanced global illumination techniques are in most cases computationally too expensive, as they prevent the desired interactivity. Our technique decreases the amount of computation caused by parameter changes by introducing Historygrams, which allow us to efficiently reuse previously computed photon-media interactions. Along the viewing rays, we utilize properties of the light transport equations to subdivide a view ray into segments and independently update them when invalid. Unlike segments of a view ray, photon scattering events within the volumetric medium need to be updated sequentially. Using our Historygram approach, we can identify the first invalid photon interaction caused by a property change, and thus reuse all valid photon interactions. Combining these two novel concepts supports interactive editing of parameters when using volumetric photon mapping in the context of DVR. As a consequence, we can handle arbitrarily shaped and positioned light sources, arbitrary phase functions, bidirectional reflectance distribution functions and multiple scattering, which has previously not been possible in interactive DVR.
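
    The reuse idea can be reduced to a much-simplified sketch: each photon path remembers the data values it sampled at its interaction points, and when the transfer function changes on some value interval only the part of the path from the first affected interaction onward is recomputed. The data structures and the retrace placeholder below are assumptions for illustration, not the paper's Historygram implementation.

    def first_invalid_interaction(path_values, changed_lo, changed_hi):
        """Index of the first interaction whose data value falls in the changed
        transfer-function interval, or None if the whole path can be reused."""
        for i, value in enumerate(path_values):
            if changed_lo <= value <= changed_hi:
                return i
        return None

    def update_photon_paths(paths, changed_lo, changed_hi, retrace_from):
        """Reuse valid prefixes of photon paths; recompute only the invalid tails."""
        updated = []
        for path_values in paths:
            i = first_invalid_interaction(path_values, changed_lo, changed_hi)
            if i is None:
                updated.append(path_values)                      # fully reusable
            else:
                updated.append(path_values[:i] + retrace_from(path_values, i))
        return updated

    # Example with a trivial 'retrace' that just keeps the old tail unchanged.
    paths = [[0.1, 0.4, 0.8], [0.2, 0.25, 0.3]]
    print(update_photon_paths(paths, 0.35, 0.5, lambda p, i: p[i:]))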

    Place, publisher, year, edition, pages
    Institute of Electrical and Electronics Engineers (IEEE), 2012
    Keywords
    Volume rendering, photon mapping, global illumination, participating media
    National subject category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-86634 (URN)10.1109/TVCG.2012.232 (DOI)000310143100040 ()
    Projects
    CADICS, CMIV
    Note

    Funding Agencies|Excellence Center at Linkoping and Lund in Information Technology (ELLIIT)||Swedish e-Science Research Centre (SeRC)||

    Available from: 2012-12-20 Created: 2012-12-20 Last updated: 2017-12-06
    4. Correlated Photon Mapping for Interactive Global Illumination of Time-Varying Volumetric Data
    2017 (English). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 23, no. 1, pp. 901-910. Article in journal (Refereed). Published
    Abstract [en]

    We present a method for interactive global illumination of both static and time-varying volumetric data based on reduction of the overhead associated with re-computation of photon maps. Our method uses the identification of photon traces invariant to changes of visual parameters such as the transfer function (TF), or data changes between time-steps in a 4D volume. This lets us operate on a variant subset of the entire photon distribution. The amount of computation required in the two stages of the photon mapping process, namely tracing and gathering, can thus be reduced to the subset that is affected by a data or visual parameter change. We rely on two different types of information from the original data to identify the regions that have changed. A low-resolution uniform grid containing the minimum and maximum data values of the original data is derived for each time-step. Similarly, for two consecutive time-steps, a low-resolution grid containing the difference between the overlapping data is used. We show that this compact metadata can be combined with the transfer function to identify the regions that have changed. Each photon traverses the low-resolution grid to determine whether it can be directly transferred to the next photon distribution state or whether it needs to be recomputed. An efficient representation of the photon distribution is presented, leading to an order-of-magnitude performance improvement of the raycasting step. The utility of the method is demonstrated in several examples that show visual fidelity, as well as performance. The examples show that visual quality can be retained when the fraction of retraced photons is as low as 40%-50%.

    Place, publisher, year, edition, pages
    Institute of Electrical and Electronics Engineers (IEEE), 2017
    Keywords
    Volume rendering, photon mapping, global illumination, participating media
    National subject category
    Media Engineering
    Identifiers
    urn:nbn:se:liu:diva-131022 (URN)10.1109/TVCG.2016.2598430 (DOI)000395537600093 ()27514045 (PubMedID)2-s2.0-84999158356 (Scopus ID)
    Projects
    SeRC, CMIV
    Note

    Funding Agencies|Swedish e-Science Research Centre (SeRC)||Swedish Research Council (VR) grant 2016-05462||Knut and Alice Wallenberg Foundation (KAW) grant 2016-0076||

    Available from: 2016-09-05 Created: 2016-09-05 Last updated: 2017-04-20 Bibliographically approved
    5. Boundary Aware Reconstruction of Scalar Fields
    2014 (English). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 20, no. 12, pp. 2447-2455. Article in journal (Refereed). Published
    Abstract [en]

    In visualization, data reconstruction and its classification together play a crucial role. In this paper we propose a novel approach that improves classification of different materials and their boundaries by combining information from the classifiers at the reconstruction stage. Our approach estimates the targeted materials' local support before performing multiple material-specific reconstructions that prevent much of the misclassification traditionally associated with transitional regions and transfer function (TF) design. With respect to previously published methods our approach offers a number of improvements and advantages. For one, it does not rely on TFs acting on derivative expressions, and is therefore less sensitive to noisy data; moreover, the classification of a single material does not depend on specialized TF widgets or on specifying regions in a multidimensional TF. Additionally, improved classification is attained without increasing TF dimensionality, which promotes scalability to multivariate data. These aspects are also key in maintaining low interaction complexity. The results are simple-to-achieve visualizations that better comply with the user's understanding of discrete features within the studied object.

    Place, publisher, year, edition, pages
    IEEE Press, 2014
    National subject category
    Computer and Information Sciences; Computer Sciences
    Identifiers
    urn:nbn:se:liu:diva-110227 (URN)10.1109/TVCG.2014.2346351 (DOI)000344991700090 ()
    Available from: 2014-09-04 Created: 2014-09-04 Last updated: 2018-01-11 Bibliographically approved
    6. Intuitive Exploration of Volumetric Data Using Dynamic Galleries
    2016 (English). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 22, no. 1, pp. 896-905. Article in journal (Refereed). Published
    Abstract [en]

    In this work we present a volume exploration method designed to be used by novice users and visitors to science centers and museums. The volumetric digitalization of artifacts in museums is of rapidly increasing interest as enhanced user experience through interactive data visualization can be achieved. This is, however, a challenging task since the vast majority of visitors are not familiar with the concepts commonly used in data exploration, such as mapping of visual properties from values in the data domain using transfer functions. Interacting in the data domain is an effective way to filter away undesired information but it is difficult to predict where the values lie in the spatial domain. In this work we make extensive use of dynamic previews instantly generated as the user explores the data domain. The previews allow the user to predict what effect changes in the data domain will have on the rendered image without being aware that visual parameters are set in the data domain. Each preview represents a subrange of the data domain where overview and details are given on demand through zooming and panning. The method has been designed with touch interfaces as the target platform for interaction. We provide a qualitative evaluation performed with visitors to a science center to show the utility of the approach.
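
    The gallery mechanism sketched in the abstract (split the data domain into subranges, give each its own transfer function, render a preview per subrange, and repeat on zoom) can be outlined as follows. The tent-shaped opacity function and the way previews are regenerated are assumptions for illustration only.

    def tent_tf(lo, hi):
        """Opacity transfer function that peaks in the middle of the value range [lo, hi]."""
        mid, half = (lo + hi) / 2.0, (hi - lo) / 2.0
        return lambda v: max(0.0, 1.0 - abs(v - mid) / half) if half > 0 else 0.0

    def gallery(data_lo, data_hi, n_previews=5):
        """Return (subrange, transfer function) pairs covering the data range."""
        step = (data_hi - data_lo) / n_previews
        previews = []
        for i in range(n_previews):
            lo, hi = data_lo + i * step, data_lo + (i + 1) * step
            previews.append(((lo, hi), tent_tf(lo, hi)))
        return previews

    # Overview gallery over a 12-bit CT value range, then a "zoom" into one preview.
    overview = gallery(0.0, 4095.0)
    (lo, hi), _ = overview[1]
    detail = gallery(lo, hi)          # previews are regenerated on demand
    print([subrange for subrange, _ in detail])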

    Place, publisher, year, edition, pages
    IEEE COMPUTER SOC, 2016
    Keywords
    Transfer function; scalar fields; volume rendering; touch interaction; visualization; user interfaces
    National subject category
    Electrical Engineering, Electronic Engineering, Information Engineering
    Identifiers
    urn:nbn:se:liu:diva-123054 (URN)10.1109/TVCG.2015.2467294 (DOI)000364043400095 ()26390481 (PubMedID)
    Note

    Funding Agencies|Swedish Research Council, VR [2011-5816]; Excellence Center at Linkoping and Lund in Information Technology (ELLIIT); Linnaeus Environment CADICS; Swedish e-Science Research Centre (SeRC)

    Available from: 2015-12-04 Created: 2015-12-03 Last updated: 2017-12-01
  • 46.
    Jönsson, Daniel
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten. Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV.
    Ynnerman, Anders
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten. Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV.
    Correlated Photon Mapping for Interactive Global Illumination of Time-Varying Volumetric Data (2017). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 23, no. 1, pp. 901-910. Article in journal (Refereed)
    Abstract [en]

    We present a method for interactive global illumination of both static and time-varying volumetric data based on reduction of the overhead associated with re-computation of photon maps. Our method uses the identification of photon traces invariant to changes of visual parameters such as the transfer function (TF), or data changes between time-steps in a 4D volume. This lets us operate on a variant subset of the entire photon distribution. The amount of computation required in the two stages of the photon mapping process, namely tracing and gathering, can thus be reduced to the subset that is affected by a data or visual parameter change. We rely on two different types of information from the original data to identify the regions that have changed. A low-resolution uniform grid containing the minimum and maximum data values of the original data is derived for each time-step. Similarly, for two consecutive time-steps, a low-resolution grid containing the difference between the overlapping data is used. We show that this compact metadata can be combined with the transfer function to identify the regions that have changed. Each photon traverses the low-resolution grid to determine whether it can be directly transferred to the next photon distribution state or whether it needs to be recomputed. An efficient representation of the photon distribution is presented, leading to an order-of-magnitude performance improvement of the raycasting step. The utility of the method is demonstrated in several examples that show visual fidelity, as well as performance. The examples show that visual quality can be retained when the fraction of retraced photons is as low as 40%-50%.
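
    The change-detection step described above can be reduced to a compact sketch: a coarse grid stores the minimum and maximum data value per cell, and a photon only needs to be retraced if some cell it traversed can contain values inside the data interval where the transfer function (or the data itself) changed. The per-photon cell lists and the changed interval are given directly here; grid construction and the actual photon tracing are omitted.

    def cell_affected(cell_min, cell_max, changed_lo, changed_hi):
        """True if the cell's value range overlaps the changed data interval."""
        return cell_min <= changed_hi and cell_max >= changed_lo

    def photons_to_retrace(photon_cells, grid_min, grid_max, changed_lo, changed_hi):
        """Return indices of photons whose traversed cells intersect the change."""
        retrace = []
        for i, cells in enumerate(photon_cells):
            if any(cell_affected(grid_min[c], grid_max[c], changed_lo, changed_hi)
                   for c in cells):
                retrace.append(i)
        return retrace

    # Two-cell toy grid; photon 0 passes through cell 0 only, photon 1 through both.
    grid_min = {0: 0.0, 1: 0.6}
    grid_max = {0: 0.5, 1: 1.0}
    photon_cells = [[0], [0, 1]]
    print(photons_to_retrace(photon_cells, grid_min, grid_max, 0.7, 0.9))   # -> [1]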

  • 47.
    Kratz, Andrea
    et al.
    Zuse Institute Berlin, Germany.
    Baum, Daniel
    Zuse Institute Berlin, Germany.
    Hotz, Ingrid
    Zuse Institute Berlin, Germany.
    Anisotropic Sampling of Planar and Two-Manifold Domains for Texture Generation and Glyph Distribution (2013). In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 19, no. 11, pp. 1782-1794. Article in journal (Refereed)
    Abstract [en]

    We present a new method for the generation of anisotropic sample distributions on planar and two-manifold domains. Most previous work that is concerned with aperiodic point distributions is designed for isotropically shaped samples. Methods focusing on anisotropic sample distributions are rare, and either they are restricted to planar domains, are highly sensitive to the choice of parameters, or they are computationally expensive. In this paper, we present a time-efficient approach for the generation of anisotropic sample distributions that only depends on intuitive design parameters for planar and two-manifold domains. We employ an anisotropic triangulation that serves as basis for the creation of an initial sample distribution as well as for a gravitational-centered relaxation. Furthermore, we present an approach for interactive rendering of anisotropic Voronoi cells as base element for texture generation. It represents a novel and flexible visualization approach to depict metric tensor fields that can be derived from general tensor fields as well as scalar or vector fields.
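
    What an anisotropic sample distribution means can be shown with a simple dart-throwing sketch on a planar domain: distances between samples are measured in a metric given by a (here constant) tensor, so spacing differs along different directions. This only illustrates the goal; the paper's method is based on an anisotropic triangulation and a gravitational-centered relaxation, which is not reproduced here.

    import numpy as np

    def anisotropic_darts(metric, radius=0.1, n_candidates=5000, seed=3):
        """Accept uniformly drawn candidates whose metric distance to all
        previously accepted samples is at least `radius`."""
        rng = np.random.default_rng(seed)
        samples = []
        for p in rng.uniform(0.0, 1.0, size=(n_candidates, 2)):
            if all((p - q) @ metric @ (p - q) >= radius * radius for q in samples):
                samples.append(p)
        return np.array(samples)

    # This tensor makes displacements along x count four times as much as along y,
    # so accepted samples can sit closer together along x than along y.
    M = np.array([[16.0, 0.0], [0.0, 1.0]])
    points = anisotropic_darts(M)
    print(len(points), "anisotropically distributed samples")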

  • 48.
    Kronander, Joel
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Gustavson, Stefan
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Bonnet, Gerhard
    AG Spheron VR, Germany.
    Ynnerman, Anders
    Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV. Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Unger, Jonas
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    A unified framework for multi-sensor HDR video reconstruction (2014). In: Signal Processing: Image Communication, ISSN 0923-5965, Vol. 29, no. 2, pp. 203-215. Article in journal (Refereed)
    Abstract [en]

    One of the most successful approaches to modern high quality HDR-video capture is to use camera setups with multiple sensors imaging the scene through a common optical system. However, such systems pose several challenges for HDR reconstruction algorithms. Previous reconstruction techniques have considered debayering, denoising, resampling (alignment) and exposure fusion as separate problems. In contrast, in this paper we present a unifying approach, performing HDR assembly directly from raw sensor data. Our framework includes a camera noise model adapted to HDR video and an algorithm for spatially adaptive HDR reconstruction based on fitting of local polynomial approximations to observed sensor data. The method is easy to implement and allows reconstruction to an arbitrary resolution and output mapping. We present an implementation in CUDA and show real-time performance for an experimental 4 Mpixel multi-sensor HDR video system. We further show that our algorithm has clear advantages over existing methods, both in terms of flexibility and reconstruction quality.
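
    A heavily simplified per-pixel version of HDR assembly from multiple sensors can convey the principle: each sensor's value is converted to a radiance estimate using its relative exposure, and the estimates are blended with weights that discard saturated or underexposed pixels. The paper instead fits local polynomial approximations directly to raw sensor data; the weighting scheme and the parameters below are assumptions for illustration.

    import numpy as np

    def hdr_fuse(images, exposures, saturation=0.98, floor=0.02):
        """images: arrays scaled to [0, 1]; exposures: relative exposure factors."""
        num = np.zeros_like(images[0], dtype=np.float64)
        den = np.zeros_like(images[0], dtype=np.float64)
        for img, exposure in zip(images, exposures):
            radiance = img / exposure                    # radiance estimate per sensor
            weight = np.ones_like(img)
            weight[img > saturation] = 0.0               # clipped highlights
            weight[img < floor] = 0.0                    # values lost in the noise floor
            num += weight * radiance
            den += weight
        return num / np.maximum(den, 1e-8)

    # Three sensors behind ND filters, three f-stops apart in effective exposure.
    rng = np.random.default_rng(0)
    scene = rng.uniform(0.1, 50.0, size=(4, 4))
    exposures = [1.0, 1.0 / 8.0, 1.0 / 64.0]
    captures = [np.clip(scene * e, 0.0, 1.0) for e in exposures]
    print(hdr_fuse(captures, exposures))                 # approximately recovers `scene`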

  • 49.
    Kronander, Joel
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Gustavson, Stefan
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Unger, Jonas
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Real-time HDR video reconstruction for multi-sensor systems (2012). In: ACM SIGGRAPH 2012 Posters, New York, NY, USA: ACM Press, 2012, p. 65. Conference paper (Refereed)
    Abstract [en]

    HDR video is an emerging field of technology, with a few camera systems currently in existence [Myszkowski et al. 2008]. Multi-sensor systems [Tocci et al. 2011] have recently proved to be particularly promising due to superior robustness against temporal artifacts, correct motion blur, and high light efficiency. Previous HDR reconstruction methods for multi-sensor systems have assumed pixel-perfect alignment of the physical sensors. This is, however, very difficult to achieve in practice. It may even be the case that reflections in beam splitters make it impossible to match the arrangement of the Bayer filters between sensors. We therefore present a novel reconstruction method specifically designed to handle the case of non-negligible misalignments between the sensors. Furthermore, while previous reconstruction techniques have considered HDR assembly, debayering and denoising as separate problems, our method is capable of simultaneous HDR assembly, debayering and smoothing of the data (denoising). The method is also general in that it allows reconstruction to an arbitrary output resolution and mapping. The algorithm is implemented in CUDA, and shows video-rate performance for an experimental HDR video platform consisting of four high quality 2336x1756-pixel CCD sensors imaging the scene through a common optical system. ND filters of different densities are placed in front of the sensors to capture a dynamic range of 24 f-stops.

  • 50.
    Lindskog, Eric
    et al.
    Linköpings universitet, Institutionen för datavetenskap.
    Wrang, Jesper
    Linköpings universitet, Institutionen för datavetenskap.
    Design of video players for branched videos (2018). Independent thesis, basic level (Bachelor's degree), 10.5 credits / 16 HE credits. Student thesis (degree project)
    Abstract [en]

    Interactive branched video allows users to make viewing decisions while watching that affect the playback path of the video and potentially the outcome of the story. This type of video introduces new challenges in terms of design, for example displaying the playback progress, the structure of the branched video, and the choices that viewers can make. In this thesis we test three implementations of working video players with different types of playback bars: one that is fully visible with no moving parts, one that zooms into the currently watched section of the video, and one that leverages a fisheye distortion. A number of usability tests were carried out using surveys complemented with observations made during the tests. Based on these user tests we concluded that the implementation with a zoomed-in playback bar was the easiest to understand, and that the fisheye effect received mixed responses, ranging from distracting and annoying to interesting and clear. With this feedback a new set of implementations was created and solutions for each component of the video player were identified. These new implementations support more general solutions for the shape of the branch segments and for the position and presentation of the choices for upcoming branches. The new implementations have not gone through any testing, but we expect that future work can further explore this subject with the help of our code and suggestions.
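
    To make the structure being designed for concrete, a hypothetical minimal data model for a branched video might look as follows: segments form a directed graph, a choice at the end of a segment selects the next one, and the playback bar is derived from the path chosen so far plus the upcoming options. The names and fields are illustrative assumptions, not taken from the thesis.

    from dataclasses import dataclass, field

    @dataclass
    class Segment:
        name: str
        duration: float                                 # seconds
        choices: dict = field(default_factory=dict)     # choice label -> next segment name

    def playback_path(segments, start, decisions):
        """Follow the viewer's decisions through the branch graph; return visited segment names."""
        path, current = [start], start
        for label in decisions:
            current = segments[current].choices[label]
            path.append(current)
        return path

    segments = {
        "intro":  Segment("intro", 30.0, {"left": "cave", "right": "forest"}),
        "cave":   Segment("cave", 45.0, {"continue": "ending"}),
        "forest": Segment("forest", 60.0, {"continue": "ending"}),
        "ending": Segment("ending", 20.0),
    }
    path = playback_path(segments, "intro", ["left", "continue"])
    print(path, sum(segments[s].duration for s in path))    # chosen path and its total length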
