liu.se – Search publications in DiVA
1 - 29 of 29
  • 1.
    Eilertsen, Gabriel
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Kronander, Joel
    Linköpings universitet, Institutionen för teknik och naturvetenskap. Linköpings universitet, Tekniska fakulteten.
    Denes, Gyorgy
    University of Cambridge, England.
    Mantiuk, Rafal K.
    University of Cambridge, England.
    Unger, Jonas
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    HDR image reconstruction from a single exposure using deep CNNs. 2017. In: ACM Transactions on Graphics, ISSN 0730-0301, E-ISSN 1557-7368, Vol. 36, no. 6, article id 178. Journal article (Refereed)
    Abstract [en]

    Camera sensors can only capture a limited range of luminance simultaneously, and in order to create high dynamic range (HDR) images, a set of different exposures is typically combined. In this paper we address the problem of predicting information that has been lost in saturated image areas, in order to enable HDR reconstruction from a single exposure. We show that this problem is well-suited for deep learning algorithms, and propose a deep convolutional neural network (CNN) that is specifically designed taking into account the challenges in predicting HDR values. To train the CNN we gather a large dataset of HDR images, which we augment by simulating sensor saturation for a range of cameras. To further boost robustness, we pre-train the CNN on a simulated HDR dataset created from a subset of the MIT Places database. We demonstrate that our approach can reconstruct high-resolution, visually convincing HDR results in a wide range of situations, and that it generalizes well to reconstruction of images captured with arbitrary and low-end cameras that use unknown camera response functions and post-processing. Furthermore, we compare to existing methods for HDR expansion, and show high quality results also for image based lighting. Finally, we evaluate the results in a subjective experiment performed on an HDR display. This shows that the reconstructed HDR images are visually convincing, with large improvements as compared to existing methods.
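
    The blending step implied by the abstract (reconstruct only the lost highlights, keep well-exposed pixels) can be sketched in a few lines. A minimal numpy sketch, assuming a hypothetical saturation threshold tau and a gamma linearization; blend_hdr and pred_log are illustrative names, not the authors' code:

        import numpy as np

        def blend_hdr(ldr, pred_log, tau=0.95, gamma=2.0):
            # Approximate linearization of the LDR input (assumed camera curve).
            lin = ldr ** gamma
            # Per-pixel blend weight: 0 below the saturation threshold tau,
            # ramping to 1 where the largest channel is fully saturated.
            alpha = np.clip((ldr.max(axis=2, keepdims=True) - tau) / (1.0 - tau), 0.0, 1.0)
            # Keep well-exposed pixels, replace saturated ones with the prediction.
            return (1.0 - alpha) * lin + alpha * np.exp(pred_log)

        ldr = np.random.rand(4, 4, 3)                 # toy LDR input in [0, 1]
        pred_log = np.log1p(np.random.rand(4, 4, 3))  # stand-in for a CNN prediction
        hdr = blend_hdr(ldr, pred_log)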

  • 2.
    Hajisharif, Saghi
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Kronander, Joel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Miandji, Ehsan
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Unger, Jonas
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Real-time image based lighting with streaming HDR-lightprobe sequences. 2012. In: Proceedings of SIGRAD 2012 / [ed] Andreas Kerren, Stefan Seipel, Linköping, Sweden, 2012. Conference paper (Other academic)
    Abstract [en]

    We present a framework for shading of virtual objects using high dynamic range (HDR) light probe sequences in real-time. Such images (light probes) are captured using a high resolution HDR camera. In each frame of the HDR video, an optimized CUDA kernel is used to project incident lighting into spherical harmonics in real time. Transfer coefficients are calculated in an offline process. Using precomputed radiance transfer the radiance calculation reduces to a low order dot product between lighting and transfer coefficients. We exploit temporal coherence between frames to further smooth lighting variation over time. Our results show that the framework can achieve the effects of consistent illumination in real-time with flexibility to respond to dynamic changes in the real environment.
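
    The pipeline above reduces shading to a dot product between SH-projected lighting and precomputed transfer coefficients. A minimal numpy sketch of that reduction, using SH bands 0-1 only and uniform sphere sampling; the paper's CUDA kernel and higher SH order are not reproduced:

        import numpy as np

        def sh_basis(d):
            # First 4 real SH basis functions (bands 0-1) for unit directions d (Nx3).
            x, y, z = d[:, 0], d[:, 1], d[:, 2]
            c0, c1 = 0.282095, 0.488603
            return np.stack([np.full_like(x, c0), c1 * y, c1 * z, c1 * x], axis=1)

        def project_light(sample_dirs, radiance):
            # Monte Carlo projection of incident radiance onto SH (uniform sphere pdf).
            Y = sh_basis(sample_dirs)
            return (4.0 * np.pi / len(radiance)) * Y.T @ radiance

        # Precomputed transfer coefficients for one surface point (assumed given).
        transfer = np.array([0.8, 0.1, 0.3, -0.05])
        dirs = np.random.normal(size=(1024, 3))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        L = np.maximum(dirs[:, 2], 0.0)                    # toy "environment" radiance
        radiance_out = transfer @ project_light(dirs, L)   # shading = one dot product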

  • 3.
    Hajisharif, Saghi
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Kronander, Joel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Unger, Jonas
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Adaptive dualISO HDR-reconstruction. 2015. In: EURASIP Journal on Image and Video Processing, ISSN 1687-5176, E-ISSN 1687-5281. Journal article (Refereed)
    Abstract [en]

    With the development of modern image sensors enabling flexible image acquisition, single shot HDR imaging is becoming increasingly popular. In this work we capture single shot HDR images using an imaging sensor with spatially varying gain/ISO. In comparison to previous single shot HDR capture based on a single sensor, this allows all incoming photons to be used in the imaging, instead of wasting incoming light using spatially varying ND-filters, commonly used in previous works. The main technical contribution in this work is an extension of previous HDR reconstruction approaches for single shot HDR imaging based on local polynomial approximations [15,10]. Using a sensor noise model, these works deploy a statistically informed filtering operation to reconstruct HDR pixel values. However, instead of using a fixed filter size, we introduce two novel algorithms for adaptive filter kernel selection. Unlike previous works using adaptive filter kernels [16], our algorithms are based on analysing the model fit and the expected statistical deviation of the estimate based on the sensor noise model. Using an iterative procedure we can then adapt the filter kernel according to the image structure and the statistical image noise. Experimental results show that the proposed filter de-noises the noisy image carefully while preserving important image features such as edges and corners, outperforming previous methods. To demonstrate the robustness of our approach, we have used raw sensor data from a commercial off-the-shelf camera as input. To further analyze our algorithm, we have also implemented a camera simulator to evaluate different gain patterns and noise properties of the sensor.
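
    The adaptive kernel selection can be illustrated as follows: grow the filter window until the local residual exceeds the deviation predicted by the sensor noise model. A sketch assuming a hypothetical affine noise model var = a*mean + b and a zero-order (mean) local fit; all parameters are illustrative, not the paper's:

        import numpy as np

        def adaptive_estimate(img, i, j, a=0.01, b=1e-4, max_r=4):
            # Zero-order local fit with an adaptively grown kernel: stop growing
            # when the window residual exceeds the noise-model prediction.
            est = img[i, j]
            for r in range(1, max_r + 1):
                win = img[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
                mean = win.mean()
                pred_sd = np.sqrt(a * mean + b)        # signal-dependent sensor noise
                if np.abs(win - mean).mean() > 2.0 * pred_sd:
                    break                              # image structure: keep smaller kernel
                est = mean
            return est

        img = np.random.rand(16, 16)
        print(adaptive_estimate(img, 8, 8))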

  • 4.
    Hajisharif, Saghi
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Kronander, Joel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Unger, Jonas
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    HDR reconstruction for alternating gain (ISO) sensor readout. 2014. In: Eurographics 2014 short papers, 2014. Conference paper (Refereed)
    Abstract [en]

    Modern image sensors are becoming more and more flexible in the way an image is captured. In this paper, we focus on sensors that allow the per-pixel gain to be varied over the sensor and develop a new technique for efficient and accurate reconstruction of high dynamic range (HDR) images based on such input data. Our method estimates the radiant power at each output pixel using a sampling operation which performs color interpolation, re-sampling, noise reduction and HDR-reconstruction in a single step. The reconstruction filter uses a sensor noise model to weight the input pixel samples according to their variances. Our algorithm works in only a small spatial neighbourhood around each pixel and lends itself to efficient implementation in hardware. To demonstrate the utility of our approach we show example HDR-images reconstructed from raw sensor data captured using off-the-shelf consumer hardware which allows for two different gain settings for different rows in the same image. To analyse the accuracy of the algorithm, we also use synthetic images from camera simulation software.
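
    The core of the reconstruction, variance-based weighting of samples taken with two gain settings, can be sketched as below. The simplified Poisson-plus-read noise model, the 12-bit saturation level and the tiny vertical support are assumptions for illustration, not the paper's exact filter:

        import numpy as np

        def fuse_dual_gain(raw, gain_rows, g_low=1.0, g_high=4.0, read_sd=2.0):
            # Variance-weighted estimate of radiant power from a raw image whose
            # rows alternate between two gain (ISO) settings.
            g = np.where(gain_rows[:, None] == 0, g_low, g_high)
            e = raw / g                               # normalize to radiant-power units
            var = (raw + read_sd ** 2) / g ** 2       # simplified shot + read noise
            w = np.where(raw >= 4095, 0.0, 1.0 / var) # saturated (12-bit) samples: weight 0
            num = w * e
            # Combine each pixel with its two vertical neighbours (tiny spatial support).
            est = (num[:-2] + num[1:-1] + num[2:]) / (w[:-2] + w[1:-1] + w[2:] + 1e-12)
            return est

        raw = np.random.randint(0, 4096, size=(8, 8)).astype(float)
        rows = np.arange(8) % 2                       # alternating gain per row
        print(fuse_dual_gain(raw, rows).shape)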

  • 5.
    Jönsson, Daniel
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Kronander, Joel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Ropinski, Timo
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Ynnerman, Anders
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Historygrams: Enabling Interactive Global Illumination in Direct Volume Rendering using Photon Mapping. 2012. In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 18, no. 12, pp. 2364-2371. Journal article (Refereed)
    Abstract [en]

    In this paper, we enable interactive volumetric global illumination by extending photon mapping techniques to handle interactive transfer function (TF) and material editing in the context of volume rendering. We propose novel algorithms and data structures for finding and evaluating parts of a scene affected by these parameter changes, and thus support efficient updates of the photon map. In direct volume rendering (DVR) the ability to explore volume data using parameter changes, such as editable TFs, is of key importance. Advanced global illumination techniques are in most cases computationally too expensive, as they prevent the desired interactivity. Our technique decreases the amount of computation caused by parameter changes by introducing Historygrams, which allow us to efficiently reuse previously computed photon media interactions. Along the viewing rays, we utilize properties of the light transport equations to subdivide a view-ray into segments and independently update them when invalid. Unlike segments of a view-ray, photon scattering events within the volumetric medium need to be sequentially updated. Using our Historygram approach, we can identify the first invalid photon interaction caused by a property change, and thus reuse all valid photon interactions. Combining these two novel concepts supports interactive editing of parameters when using volumetric photon mapping in the context of DVR. As a consequence, we can handle arbitrarily shaped and positioned light sources, arbitrary phase functions, bidirectional reflectance distribution functions and multiple scattering, which has previously not been possible in interactive DVR.
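
    The reuse mechanism can be sketched as a scan for the first photon interaction invalidated by a transfer function edit; everything before that index is reused and only the tail is recomputed. A toy sketch with a scalar volume and tabulated transfer functions (the data layout is assumed, not the paper's):

        import numpy as np

        def first_invalid(path_positions, old_tf, new_tf, volume, eps=1e-6):
            # Return the index of the first photon interaction invalidated by a
            # transfer function edit; interactions before it can be reused as-is.
            for k, p in enumerate(path_positions):
                s = volume[tuple(p)]                    # scalar value at interaction
                if abs(new_tf[s] - old_tf[s]) > eps:    # TF changed where photon scattered
                    return k
            return len(path_positions)                  # whole path is still valid

        volume = np.random.randint(0, 256, size=(8, 8, 8))
        old_tf = np.linspace(0.0, 1.0, 256)
        new_tf = old_tf.copy()
        new_tf[100:] *= 0.5                             # user edits part of the TF
        path = [np.array([1, 2, 3]), np.array([4, 4, 4]), np.array([7, 0, 2])]
        k = first_invalid(path, old_tf, new_tf, volume)
        # recompute scattering events k..end; reuse events 0..k-1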

  • 6.
    Kronander, Joel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Physically Based Rendering of Synthetic Objects in Real Environments. 2015. Doctoral thesis, compilation (Other academic)
    Abstract [en]

    One of the greatest challenges in computer graphics is to synthesize, or render, photorealistic images. Photorealistic rendering is used today in many application areas, such as special effects in film, computer games, product visualization and virtual reality. In many practical applications of photorealistic rendering it is important to be able to place virtual objects into photographs so that the virtual objects look real. The IKEA catalogue, for example, is produced in many different versions to suit different countries and regions. The basis for most of the images in the catalogue is usually the same, but symbols and standard furniture dimensions often vary between versions. Instead of photographing every version separately, a base photograph can be used into which different virtual objects, such as furniture, are inserted. By furnishing a room virtually in this way, instead of physically, different arrangements can also be tested quickly, which saves money.

    This thesis contributes methods and algorithms for rendering photorealistic images of virtual objects that can be blended with real photographs. Such images are rendered using physically based simulations of how light interacts with the virtual and real objects in the scene. For photorealistic results, the simulations require accurate modeling of the geometry, the illumination and the material properties of the objects, such as color, texture and reflectance.

    For the virtual objects to look real, it is important to illuminate them with the same light they would have received had they been part of the real environment. It is therefore important to accurately measure and model the lighting conditions at the locations in the scene where the virtual objects are to be placed. For this we use High Dynamic Range (HDR) photography. With HDR photography we can accurately measure the full range of the incident light at a point, from dark shadows to direct light sources. This is not possible with traditional digital cameras, since the dynamic range of ordinary camera sensors is limited. The thesis describes new methods for reconstructing HDR images that produce less noise and fewer artifacts than previous methods. We also present methods for rendering virtual objects that move between regions with different illumination, or where the illumination varies over time. Methods for compactly representing spatially varying illumination are presented as well. To accurately describe how glossy surfaces scatter or reflect light, two new parametric models that are more faithful to reality than previous reflectance models are also described. The thesis furthermore presents a new method for efficient rendering of scenes that are very computationally demanding, for example scenes with measured lighting conditions, complex materials, and volumetric models such as smoke, clouds, textiles, biological tissue and liquids. The method builds on a class of so-called Markov Chain Monte Carlo methods for simulating the light transport in the scene, and is inspired by recently published results in mathematical statistics.

    The methods described in the thesis are presented in the context of photorealistic rendering of virtual objects in real environments, since the majority of the research was carried out in that area. Several of the methods presented in this thesis are, however, applicable in other domains, such as physics simulation, computer vision and scientific visualization.

    List of papers
    1. Photorealistic rendering of mixed reality scenes
    2015 (English). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 34, no. 2, pp. 643-665. Journal article (Refereed). Published
    Abstract [en]

    Photo-realistic rendering of virtual objects into real scenes is one of the most important research problems in computer graphics. Methods for capture and rendering of mixed reality scenes are driven by a large number of applications, ranging from augmented reality to visual effects and product visualization. Recent developments in computer graphics, computer vision, and imaging technology have enabled a wide range of new mixed reality techniques including methods of advanced image based lighting, capturing spatially varying lighting conditions, and algorithms for seamlessly rendering virtual objects directly into photographs without explicit measurements of the scene lighting. This report gives an overview of the state-of-the-art in this field, and presents a categorization and comparison of current methods. Our in-depth survey provides a tool for understanding the advantages and disadvantages of each method, and gives an overview of which technique is best suited to a specific problem.

    Place, publisher, year, edition, pages
    Wiley-Blackwell, 2015
    Keywords
    Picture/Image Generation—Illumination Estimation, Image-Based Lighting, Reflectance and Shading
    National subject category
    Signal Processing
    Identifiers
    urn:nbn:se:liu:diva-118542 (URN); 10.1111/cgf.12591 (DOI); 000358326600060
    Conference
    The 36th Annual Conference of the European Association of Computer Graphics, Eurographics 2015, Zürich, Switzerland, 4th–8th May 2015
    Project
    VPS
    Research funder
    Stiftelsen för strategisk forskning (SSF), IIS11-0081; Linnaeus research environment CADICS
    Available from: 2015-05-31 Created: 2015-05-31 Last updated: 2017-12-04. Bibliographically approved
    2. Pseudo-Marginal Metropolis Light Transport
    2015 (English). In: Proceedings SA '15 SIGGRAPH Asia 2015 Technical Briefs, ACM Digital Library, 2015, pp. 13:1-13:4. Conference paper, published paper (Other academic)
    Abstract [en]

    Accurate and efficient simulation of light transport in heterogeneous participating media, such as smoke, clouds and fire, plays a key role in the synthesis of visually interesting renderings for e.g. visual effects, computer games and product visualization. However, rendering of scenes with heterogeneous participating media with Metropolis light transport (MLT) algorithms has previously been limited to primary sample space methods or to using biased approximations of the transmittance in the scene. This paper presents a new sampling strategy for Markov chain Monte Carlo (MCMC) methods, e.g. MLT, based on pseudo-marginal MCMC. Specifically, we show that any positive and unbiased estimator of the target distribution can replace the exact quantity to simulate a Markov chain with a stationary distribution that has a marginal which is the exact target distribution of interest. This enables us to evaluate the transmittance function with recent unbiased estimators, which leads to significantly shorter rendering times. Compared to previous work relying on (biased) ray-marching for evaluating transmittance, our method enables simulation of longer Markov chains, a better exploration of the path space, and consequently less image noise, for a given computational budget. To demonstrate the usefulness of our pseudo-marginal approach, we compare it to representative methods for efficient rendering of anisotropic heterogeneous participating media and glossy transfer. We show that it performs significantly better in terms of image noise and rendering times compared to previous techniques. Our method is robust, and can easily be implemented in a modern renderer.

    Place, publisher, year, edition, pages
    ACM Digital Library, 2015
    National subject category
    Computer Sciences; Computer and Information Sciences
    Identifiers
    urn:nbn:se:liu:diva-122586 (URN); 10.1145/2820903.2820922 (DOI); 978-1-4503-3930-8 (ISBN)
    Conference
    The 8th ACM SIGGRAPH Conference and Exhibition, Asia Technical Briefs, 3-5 November, Kobe, Japan
    Available from: 2015-11-10 Created: 2015-11-10 Last updated: 2018-01-10. Bibliographically approved
    3. Temporally and Spatially Varying Image Based Lighting using HDR-video
    2013 (English). In: Proceedings of the 21st European Signal Processing Conference (EUSIPCO), 2013: Special Session on HDR-video, IEEE, 2013, pp. 1-5. Conference paper, published paper (Refereed)
    Abstract [en]

    In this paper we present novel algorithms and data structures for capturing, processing and rendering with real world lighting conditions based on high dynamic range video sequences. Based on the captured HDR video data we show how traditional image based lighting can be extended to include illumination variations in both the temporal as well as the spatial domain. This enables highly realistic renderings where traditional IBL techniques using a single light probe fail to capture important details in the real world lighting environment. To demonstrate the usefulness of our approach, we show examples of both off-line and real-time rendering applications.

    Place, publisher, year, edition, pages
    IEEE, 2013
    National subject category
    Electrical Engineering and Electronics
    Identifiers
    urn:nbn:se:liu:diva-95746 (URN); 000341754500314
    Conference
    21st European Signal Processing Conference (EUSIPCO 2013), 9-13 September 2013, Marrakech, Morocco
    Project
    VPS
    Research funder
    Vetenskapsrådet; Stiftelsen för strategisk forskning (SSF), IIS11-0080
    Available from: 2013-07-18 Created: 2013-07-18 Last updated: 2015-11-10. Bibliographically approved
    4. Spatially varying image based lighting using HDR-video
    2013 (English). In: Computers & graphics, ISSN 0097-8493, E-ISSN 1873-7684, Vol. 37, no. 7, pp. 923-934. Journal article (Refereed). Published
    Abstract [en]

    Illumination is one of the key components in the creation of realistic renderings of scenes containing virtual objects. In this paper, we present a set of novel algorithms and data structures for visualization, processing and rendering with real world lighting conditions captured using High Dynamic Range (HDR) video. The presented algorithms enable rapid construction of general and editable representations of the lighting environment, as well as extraction and fitting of sampled reflectance to parametric BRDF models. For efficient representation and rendering of the sampled lighting environment function, we consider an adaptive (2D/4D) data structure for storage of light field data on proxy geometry describing the scene. To demonstrate the usefulness of the algorithms, they are presented in the context of a fully integrated framework for spatially varying image based lighting. We show reconstructions of example scenes and resulting production quality renderings of virtual furniture with spatially varying real world illumination including occlusions.

    Place, publisher, year, edition, pages
    Elsevier, 2013
    Keywords
    High dynamic range video, HDR-video, image based lighting, photo realistic image synthesis
    National subject category
    Media Engineering; Signal Processing
    Identifiers
    urn:nbn:se:liu:diva-96949 (URN); 10.1016/j.cag.2013.07.001 (DOI); 000325834400015
    Project
    VPS
    Research funder
    Stiftelsen för strategisk forskning (SSF), IIS11-0081; Vetenskapsrådet
    Available from: 2013-08-30 Created: 2013-08-30 Last updated: 2017-12-06. Bibliographically approved
    5. Unified HDR reconstruction from raw CFA data
    2013 (English). In: Proceedings of IEEE International Conference on Computational Photography 2013 / [ed] David Boas, Paris Sylvain, Shmel Peleg, Todd Zickler, IEEE, 2013, pp. 1-9. Conference paper, published paper (Refereed)
    Abstract [en]

    HDR reconstruction from multiple exposures poses several challenges. Previous HDR reconstruction techniques have considered debayering, denoising, resampling (alignment) and exposure fusion in several steps. We instead present a unifying approach, performing HDR assembly directly from raw sensor data in a single processing operation. Our algorithm includes a spatially adaptive HDR reconstruction based on fitting local polynomial approximations to observed sensor data, using a localized likelihood approach incorporating spatially varying sensor noise. We also present a realistic camera noise model adapted to HDR video. The method allows reconstruction to an arbitrary resolution and output mapping. We present an implementation in CUDA and show real-time performance for an experimental 4 Mpixel multi-sensor HDR video system. We further show that our algorithm has clear advantages over state-of-the-art methods, both in terms of flexibility and reconstruction quality.

    Place, publisher, year, edition, pages
    IEEE, 2013
    National subject category
    Engineering and Technology; Signal Processing
    Identifiers
    urn:nbn:se:liu:diva-90106 (URN); 10.1109/ICCPhot.2013.6528315 (DOI); 978-1-4673-6463-8 (ISBN)
    Conference
    5th IEEE International Conference on Computational Photography, ICCP 2013; Cambridge, MA; United States
    Project
    VPS
    Available from: 2013-03-19 Created: 2013-03-19 Last updated: 2015-11-10
    6. A unified framework for multi-sensor HDR video reconstruction
    2014 (English). In: Signal Processing: Image Communication, ISSN 0923-5965, Vol. 29, no. 2, pp. 203-215. Journal article (Refereed). Published
    Abstract [en]

    One of the most successful approaches to modern high quality HDR-video capture is to use camera setups with multiple sensors imaging the scene through a common optical system. However, such systems pose several challenges for HDR reconstruction algorithms. Previous reconstruction techniques have considered debayering, denoising, resampling (alignment) and exposure fusion as separate problems. In contrast, in this paper we present a unifying approach, performing HDR assembly directly from raw sensor data. Our framework includes a camera noise model adapted to HDR video and an algorithm for spatially adaptive HDR reconstruction based on fitting of local polynomial approximations to observed sensor data. The method is easy to implement and allows reconstruction to an arbitrary resolution and output mapping. We present an implementation in CUDA and show real-time performance for an experimental 4 Mpixel multi-sensor HDR video system. We further show that our algorithm has clear advantages over existing methods, both in terms of flexibility and reconstruction quality.

    Place, publisher, year, edition, pages
    Elsevier, 2014
    Keywords
    HDR video, HDR fusion, Kernel regression, Radiometric calibration
    National subject category
    Media Engineering
    Identifiers
    urn:nbn:se:liu:diva-104617 (URN); 10.1016/j.image.2013.08.018 (DOI); 000332999200003
    Project
    VPS
    Research funder
    Stiftelsen för strategisk forskning (SSF), IIS11-0081
    Available from: 2014-02-19 Created: 2014-02-19 Last updated: 2015-11-10. Bibliographically approved
    7. Adaptive dualISO HDR-reconstruction
    2015 (English). In: EURASIP Journal on Image and Video Processing, ISSN 1687-5176, E-ISSN 1687-5281. Journal article (Refereed). Published
    Abstract [en]

    With the development of modern image sensors enabling flexible image acquisition, single shot HDR imaging is becoming increasingly popular. In this work we capture single shot HDR images using an imaging sensor with spatially varying gain/ISO. In comparison to previous single shot HDR capture based on a single sensor, this allows all incoming photons to be used in the imaging, instead of wasting incoming light using spatially varying ND-filters, commonly used in previous works. The main technical contribution in this work is an extension of previous HDR reconstruction approaches for single shot HDR imaging based on local polynomial approximations [15,10]. Using a sensor noise model, these works deploy a statistically informed filtering operation to reconstruct HDR pixel values. However, instead of using a fixed filter size, we introduce two novel algorithms for adaptive filter kernel selection. Unlike previous works using adaptive filter kernels [16], our algorithms are based on analysing the model fit and the expected statistical deviation of the estimate based on the sensor noise model. Using an iterative procedure we can then adapt the filter kernel according to the image structure and the statistical image noise. Experimental results show that the proposed filter de-noises the noisy image carefully while preserving important image features such as edges and corners, outperforming previous methods. To demonstrate the robustness of our approach, we have used raw sensor data from a commercial off-the-shelf camera as input. To further analyze our algorithm, we have also implemented a camera simulator to evaluate different gain patterns and noise properties of the sensor.

    Place, publisher, year, edition, pages
    Springer Publishing Company, 2015
    Keywords
    HDR reconstruction; Single shot HDR imaging; DualISO; Statistical image filtering
    National subject category
    Computer Sciences; Computer and Information Sciences
    Identifiers
    urn:nbn:se:liu:diva-122587 (URN); 10.1186/s13640-015-0095-0 (DOI); 000366324500001
    Note

    Funding agencies: Swedish Foundation for Strategic Research (SSF) [IIS11-0081]; Linkoping University Center for Industrial Information Technology (CENIIT); Swedish Research Council through the Linnaeus Environment CADICS

    Available from: 2015-11-10 Created: 2015-11-10 Last updated: 2018-01-10. Bibliographically approved
    8. BRDF Models for Accurate and Efficient Rendering of Glossy Surfaces
    2012 (English). In: ACM Transactions on Graphics, ISSN 0730-0301, E-ISSN 1557-7368, Vol. 31, no. 1. Journal article (Refereed). Published
    Abstract [en]

    This article presents two new parametric models of the Bidirectional Reflectance Distribution Function (BRDF), one inspired by the Rayleigh-Rice theory for light scattering from optically smooth surfaces, and one inspired by micro-facet theory. The models represent scattering from a wide range of glossy surface types with high accuracy. In particular, they enable representation of types of surface scattering which previous parametric models have had trouble modeling accurately. In a study of the scattering behavior of measured reflectance data, we investigate what key properties are needed for a model to accurately represent scattering from glossy surfaces. We investigate different parametrizations and how well they match the behavior of measured BRDFs. We also examine the scattering curves which are represented in parametric models by different distribution functions. Based on the insights gained from the study, the new models are designed to provide accurate fittings to the measured data. Importance sampling schemes are developed for the new models, enabling direct use in existing production pipelines. In the resulting renderings we show that the visual quality achieved by the models matches that of the measured data.

    Place, publisher, year, edition, pages
    Association for Computing Machinery (ACM), 2012
    Keywords
    BRDF, gloss, Rayleigh-Rice, global illumination, Monte Carlo, importance sampling
    National subject category
    Computer Systems
    Identifiers
    urn:nbn:se:liu:diva-75045 (URN); 10.1145/2077341.2077350 (DOI); 000300622500009
    Project
    CADICS; ELLIIT
    Note
    Funding agencies: Swedish Foundation for Strategic Research through the Strategic Research Centre MOVIII (A3:05:193); Swedish Knowledge Foundation (2009/0091); Forskning och Framtid (ITN 2009-00116); Swedish Research Council through the Linnaeus Center for Control, Autonomy, and Decision-making in Complex Systems (CADICS); Excellence Center at Linkoping and Lund in Information Technology (ELLIIT)
    Available from: 2012-02-15 Created: 2012-02-15 Last updated: 2017-12-07
  • 7.
    Kronander, Joel
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Banterle, Francesco
    Visual Computing Lab, ISTI-CNR, Italy.
    Gardner, Andrew
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Miandji, Ehsan
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Unger, Jonas
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Photorealistic rendering of mixed reality scenes. 2015. In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 34, no. 2, pp. 643-665. Journal article (Refereed)
    Abstract [en]

    Photo-realistic rendering of virtual objects into real scenes is one of the most important research problems in computer graphics. Methods for capture and rendering of mixed reality scenes are driven by a large number of applications, ranging from augmented reality to visual effects and product visualization. Recent developments in computer graphics, computer vision, and imaging technology have enabled a wide range of new mixed reality techniques including methods of advanced image based lighting, capturing spatially varying lighting conditions, and algorithms for seamlessly rendering virtual objects directly into photographs without explicit measurements of the scene lighting. This report gives an overview of the state-of-the-art in this field, and presents a categorization and comparison of current methods. Our in-depth survey provides a tool for understanding the advantages and disadvantages of each method, and gives an overview of which technique is best suited to a specific problem.

  • 8.
    Kronander, Joel
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Dahlin, Johan
    Linköpings universitet, Institutionen för systemteknik, Reglerteknik. Linköpings universitet, Tekniska högskolan.
    Jönsson, Daniel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Kok, Manon
    Linköpings universitet, Institutionen för systemteknik, Reglerteknik. Linköpings universitet, Tekniska högskolan.
    Schön, Thomas
    Linköpings universitet, Institutionen för systemteknik, Reglerteknik. Linköpings universitet, Tekniska högskolan. Uppsala Universitet.
    Unger, Jonas
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Real-time video based lighting using GPU raytracing. 2014. In: Proceedings of the 22nd European Signal Processing Conference (EUSIPCO), 2014, IEEE Signal Processing Society, 2014. Conference paper (Refereed)
    Abstract [en]

    The recent introduction of HDR video cameras has enabled the development of image based lighting techniques for rendering virtual objects illuminated with temporally varying real world illumination. A key challenge in this context is that rendering realistic objects illuminated with video environment maps is computationally demanding. In this work, we present a GPU based rendering system based on the NVIDIA OptiX framework, enabling real time raytracing of scenes illuminated with video environment maps. For this purpose, we explore and compare several Monte Carlo sampling approaches, including bidirectional importance sampling, multiple importance sampling and sequential Monte Carlo samplers. While previous work has focused on synthetic data and overly simple environment map sequences, we have collected a set of real world dynamic environment map sequences using a state-of-the-art HDR video camera for evaluation and comparisons.
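
    One building block of the compared strategies, importance sampling the environment map by luminance, can be sketched with a discrete CDF over texels. A minimal numpy sketch; per-texel solid angle and latitude sin-weighting are omitted for brevity, and all names are illustrative:

        import numpy as np

        def build_env_sampler(env):
            # Discrete CDF over environment map pixels, proportional to luminance.
            lum = env @ np.array([0.2126, 0.7152, 0.0722])
            p = lum.ravel() / lum.sum()
            return np.cumsum(p), p

        def sample_env(cdf, p, shape, rng):
            idx = np.searchsorted(cdf, rng.random())
            return np.unravel_index(idx, shape), p[idx]   # texel and its discrete pdf

        rng = np.random.default_rng(0)
        env = rng.random((16, 32, 3))                     # toy HDR environment frame
        cdf, p = build_env_sampler(env)
        (ty, tx), pdf = sample_env(cdf, p, env.shape[:2], rng)
        # estimator contribution: env[ty, tx] * brdf * cos / pdf (solid-angle terms
        # omitted in this sketch)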

  • 9.
    Kronander, Joel
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Gustavson, Stefan
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Bonnet, Gerhard
    SpheronVR AG.
    Unger, Jonas
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Unified HDR reconstruction from raw CFA data. 2013. In: Proceedings of IEEE International Conference on Computational Photography 2013 / [ed] David Boas, Paris Sylvain, Shmel Peleg, Todd Zickler, IEEE, 2013, pp. 1-9. Conference paper (Refereed)
    Abstract [en]

    HDR reconstruction from multiple exposures poses several challenges. Previous HDR reconstruction techniques have considered debayering, denoising, resampling (alignment) and exposure fusion in several steps. We instead present a unifying approach, performing HDR assembly directly from raw sensor data in a single processing operation. Our algorithm includes a spatially adaptive HDR reconstruction based on fitting local polynomial approximations to observed sensor data, using a localized likelihood approach incorporating spatially varying sensor noise. We also present a realistic camera noise model adapted to HDR video. The method allows reconstruction to an arbitrary resolution and output mapping. We present an implementation in CUDA and show real-time performance for an experimental 4 Mpixel multi-sensor HDR video system. We further show that our algorithm has clear advantages over state-of-the-art methods, both in terms of flexibility and reconstruction quality.
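
    The local polynomial fit at the heart of the method can be sketched in its zero-order form: a weighted mean over the raw samples in a small window, with a spatial smoothing kernel multiplied by inverse noise variances from the sensor model. A sketch with made-up sample data (the paper uses higher polynomial orders and a full localized likelihood):

        import numpy as np

        def local_fit(samples, offsets, variances, h=1.0):
            # Zero-order local polynomial fit: each raw sample is weighted by a
            # Gaussian smoothing kernel over its spatial offset times the inverse
            # of its noise variance.
            w = np.exp(-0.5 * (np.linalg.norm(offsets, axis=1) / h) ** 2) / variances
            return np.sum(w * samples) / np.sum(w)

        # Samples falling inside the reconstruction window of one output pixel.
        samples = np.array([10.2, 9.8, 10.5, 30.0])
        offsets = np.array([[0.1, 0.0], [-0.4, 0.2], [0.3, -0.3], [0.9, 0.9]])
        variances = np.array([0.5, 0.5, 0.6, 25.0])  # noisy sample is down-weighted
        print(local_fit(samples, offsets, variances))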

  • 10.
    Kronander, Joel
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Gustavson, Stefan
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Bonnet, Gerhard
    AG Spheron VR, Germany.
    Ynnerman, Anders
    Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV. Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Unger, Jonas
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    A unified framework for multi-sensor HDR video reconstruction. 2014. In: Signal Processing: Image Communication, ISSN 0923-5965, Vol. 29, no. 2, pp. 203-215. Journal article (Refereed)
    Abstract [en]

    One of the most successful approaches to modern high quality HDR-video capture is to use camera setups with multiple sensors imaging the scene through a common optical system. However, such systems pose several challenges for HDR reconstruction algorithms. Previous reconstruction techniques have considered debayering, denoising, resampling (alignment) and exposure fusion as separate problems. In contrast, in this paper we present a unifying approach, performing HDR assembly directly from raw sensor data. Our framework includes a camera noise model adapted to HDR video and an algorithm for spatially adaptive HDR reconstruction based on fitting of local polynomial approximations to observed sensor data. The method is easy to implement and allows reconstruction to an arbitrary resolution and output mapping. We present an implementation in CUDA and show real-time performance for an experimental 4 Mpixel multi-sensor HDR video system. We further show that our algorithm has clear advantages over existing methods, both in terms of flexibility and reconstruction quality.

  • 11.
    Kronander, Joel
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Gustavson, Stefan
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Unger, Jonas
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Real-time HDR video reconstruction for multi-sensor systems. 2012. In: ACM SIGGRAPH 2012 Posters, New York, NY, USA: ACM Press, 2012, p. 65. Conference paper (Refereed)
    Abstract [en]

    HDR video is an emerging field of technology, with a few camera systems currently in existence [Myszkowski et al. 2008]. Multi-sensor systems [Tocci et al. 2011] have recently proved to be particularly promising due to superior robustness against temporal artifacts, correct motion blur, and high light efficiency. Previous HDR reconstruction methods for multi-sensor systems have assumed pixel perfect alignment of the physical sensors. This is, however, very difficult to achieve in practice. It may even be the case that reflections in beam splitters make it impossible to match the arrangement of the Bayer filters between sensors. We therefore present a novel reconstruction method specifically designed to handle the case of non-negligible misalignments between the sensors. Furthermore, while previous reconstruction techniques have considered HDR assembly, debayering and denoising as separate problems, our method is capable of simultaneous HDR assembly, debayering and smoothing of the data (denoising). The method is also general in that it allows reconstruction to an arbitrary output resolution and mapping. The algorithm is implemented in CUDA, and shows video speed performance for an experimental HDR video platform consisting of four 2336×1756 pixel high quality CCD sensors imaging the scene through a common optical system. ND-filters of different densities are placed in front of the sensors to capture a dynamic range of 24 f-stops.

  • 12.
    Kronander, Joel
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Jönsson, Daniel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Löw, Joakim
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Ljung, Patric
    Siemens.
    Ynnerman, Anders
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Unger, Jonas
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Efficient Visibility Encoding for Dynamic Illumination in Direct Volume Rendering. 2012. In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 18, no. 3, pp. 447-462. Journal article (Refereed)
    Abstract [en]

    We present an algorithm that enables real-time dynamic shading in direct volume rendering using general lighting, including directional lights, point lights and environment maps. Real-time performance is achieved by encoding local and global volumetric visibility using spherical harmonic (SH) basis functions stored in an efficient multi-resolution grid over the extent of the volume. Our method enables high frequency shadows in the spatial domain, but is limited to a low frequency approximation of visibility and illumination in the angular domain. In a first pass, Level Of Detail (LOD) selection in the grid is based on the current transfer function setting. This enables rapid on-line computation and SH projection of the local spherical distribution of visibility information. Using a piecewise integration of the SH coefficients over the local regions, the global visibility within the volume is then computed. By representing the light sources using their SH projections, the integral over lighting, visibility and isotropic phase functions can be efficiently computed during rendering. The utility of our method is demonstrated in several examples showing the generality and interactive performance of the approach.
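
    With both lighting and visibility projected onto an orthonormal SH basis, the integral of their product over the sphere collapses to a dot product of coefficient vectors. A minimal sketch for an isotropic phase function; the coefficient values are made up for illustration:

        import numpy as np

        def shade_sample(light_sh, vis_sh, albedo):
            # Radiance at a volume sample for an isotropic phase function (1/4pi):
            # the spherical integral of L*V collapses to a coefficient dot product
            # because the SH basis is orthonormal.
            return albedo / (4.0 * np.pi) * np.dot(light_sh, vis_sh)

        light_sh = np.array([1.0, 0.2, 0.5, -0.1])   # SH-projected light sources
        vis_sh = np.array([0.9, 0.0, -0.3, 0.05])    # local+global visibility coefficients
        print(shade_sample(light_sh, vis_sh, albedo=0.8))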

  • 13.
    Kronander, Joel
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Schön, Thomas B.
    Uppsala Universitet, Sweden.
    Robust auxiliary particle filters using multiple importance sampling. 2014. In: Proceedings of the 2014 IEEE Statistical Signal Processing Workshop, IEEE, 2014, pp. 268-271. Conference paper (Refereed)
    Abstract [en]

    A poor choice of importance density can have a detrimental effect on the efficiency of a particle filter. While a specific choice of proposal distribution might be close to optimal for certain models, it might fail miserably for other models, possibly even leading to infinite variance. In this paper we show how mixture sampling techniques can be used to derive robust and efficient particle filters that in general perform on par with, or better than, the best of the standard importance densities. We derive several variants of the auxiliary particle filter using both random and deterministic mixture sampling via multiple importance sampling. The resulting robust particle filters are easy to implement and require little parameter tuning.
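
    The deterministic mixture idea can be sketched on a linear-Gaussian toy model: half the particles come from the bootstrap (prior) proposal, half from an observation-centred proposal, and the importance weights use the mixture density in the denominator (the balance heuristic). All model parameters below are illustrative, not from the paper:

        import numpy as np

        rng = np.random.default_rng(1)
        N, phi, q_sd, r_sd = 200, 0.9, 1.0, 0.5
        x_prev = rng.normal(size=N)                  # particles at time t-1
        y = 1.3                                      # observation at time t

        def norm_pdf(v, m, s):
            return np.exp(-0.5 * ((v - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

        # Deterministic mixture: half bootstrap, half observation-centred proposal.
        half = N // 2
        x1 = phi * x_prev[:half] + q_sd * rng.normal(size=half)
        x2 = y + r_sd * rng.normal(size=N - half)
        x = np.concatenate([x1, x2])

        prior = norm_pdf(x, phi * x_prev, q_sd)      # transition density
        lik = norm_pdf(y, x, r_sd)                   # observation likelihood
        q_mix = 0.5 * prior + 0.5 * norm_pdf(x, y, r_sd)  # balance-heuristic denominator
        w = lik * prior / q_mix
        w /= w.sum()                                 # normalized particle weights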

  • 14.
    Kronander, Joel
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Schön, Thomas B.
    Uppsala University.
    Dahlin, Johan
    Linköpings universitet, Institutionen för systemteknik, Reglerteknik. Linköpings universitet, Tekniska högskolan.
    Backward sequential Monte Carlo for marginal smoothing. 2014. In: Proceedings of the 2014 IEEE Statistical Signal Processing Workshop, IEEE Press, 2014, pp. 368-371. Conference paper (Refereed)
    Abstract [en]

    In this paper we propose a new type of particle smoother with linear computational complexity. The smoother is based on running a sequential Monte Carlo sampler backward in time after an initial forward filtering pass. While this introduces dependencies among the backward trajectories we show through simulation studies that the new smoother can outperform existing forward-backward particle smoothers when targeting the marginal smoothing densities.
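
    A single backward step of such a smoother can be sketched as follows: each backward particle picks a forward-filter particle with probability proportional to the filter weight times the transition density. This is the standard backward-simulation kernel on a linear-Gaussian toy model; the paper's SMC-sampler formulation differs in its details:

        import numpy as np

        def backward_step(xf, wf, xb, phi=0.9, q_sd=1.0, rng=None):
            # For each backward particle x_{t+1}, draw a forward particle x_t
            # with probability proportional to w_t * f(x_{t+1} | x_t).
            rng = rng or np.random.default_rng()
            out = np.empty_like(xb)
            for j, xn in enumerate(xb):
                trans = np.exp(-0.5 * ((xn - phi * xf) / q_sd) ** 2)
                p = wf * trans
                out[j] = xf[rng.choice(len(xf), p=p / p.sum())]
            return out

        rng = np.random.default_rng(0)
        xf = rng.normal(size=100)
        wf = np.full(100, 0.01)          # forward-filter particles/weights at time t
        xb = rng.normal(size=50)         # backward particles at time t+1
        x_t_smoothed = backward_step(xf, wf, xb, rng=rng)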

  • 15.
    Kronander, Joel
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Schön, Thomas B.
    Division of Systems and Control, Department of Information Technology, Uppsala University.
    Unger, Jonas
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Pseudo-Marginal Metropolis Light Transport. 2015. In: Proceedings SA '15 SIGGRAPH Asia 2015 Technical Briefs, ACM Digital Library, 2015, pp. 13:1-13:4. Conference paper (Other academic)
    Abstract [en]

    Accurate and efficient simulation of light transport in heterogeneous participating media, such as smoke, clouds and fire, plays a key role in the synthesis of visually interesting renderings for e.g. visual effects, computer games and product visualization. However, rendering of scenes with heterogeneous participating media with Metropolis light transport (MLT) algorithms has previously been limited to primary sample space methods or to using biased approximations of the transmittance in the scene. This paper presents a new sampling strategy for Markov chain Monte Carlo (MCMC) methods, e.g. MLT, based on pseudo-marginal MCMC. Specifically, we show that any positive and unbiased estimator of the target distribution can replace the exact quantity to simulate a Markov chain with a stationary distribution that has a marginal which is the exact target distribution of interest. This enables us to evaluate the transmittance function with recent unbiased estimators, which leads to significantly shorter rendering times. Compared to previous work relying on (biased) ray-marching for evaluating transmittance, our method enables simulation of longer Markov chains, a better exploration of the path space, and consequently less image noise, for a given computational budget. To demonstrate the usefulness of our pseudo-marginal approach, we compare it to representative methods for efficient rendering of anisotropic heterogeneous participating media and glossy transfer. We show that it performs significantly better in terms of image noise and rendering times compared to previous techniques. Our method is robust, and can easily be implemented in a modern renderer.
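
    The pseudo-marginal mechanism itself is compact enough to sketch generically: run Metropolis-Hastings with a positive unbiased estimate of the target in place of the exact density, and keep the estimate with the state rather than re-estimating it. The toy target and estimator noise below are illustrative, not the paper's transmittance estimators:

        import numpy as np

        rng = np.random.default_rng(0)

        def pi_hat(x, rng):
            # Positive, unbiased estimate of an (unnormalized) target density:
            # here, the exact Gaussian density times multiplicative noise with mean 1.
            exact = np.exp(-0.5 * x * x)
            return exact * rng.gamma(shape=50.0, scale=1.0 / 50.0)

        x, z = 0.0, pi_hat(0.0, rng)          # store the estimate with the state
        chain = []
        for _ in range(5000):
            x_new = x + 0.5 * rng.normal()
            z_new = pi_hat(x_new, rng)
            if rng.random() < z_new / z:       # MH accept with estimated densities
                x, z = x_new, z_new            # crucially, reuse z: never re-estimate
            chain.append(x)
        # chain now targets the exact marginal despite only noisy density evaluations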

  • 16.
    Kronander, Joel
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer. Linköpings universitet, Tekniska högskolan.
    Unger, Jonas
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer. Linköpings universitet, Tekniska högskolan.
    Moeller, Torsten
    Simon Fraser University, Vancouver.
    Ynnerman, Anders
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Visuell informationsteknologi och applikationer. Linköpings universitet, Tekniska högskolan.
    Estimation and Modeling of Actual Numerical Errors in Volume Rendering. 2010. In: Computer Graphics Forum, ISSN 0167-7055, Vol. 29, no. 3, pp. 893-902. Journal article (Refereed)
    Abstract [en]

    In this paper we study the comprehensive effects on volume rendered images of numerical errors caused by the use of finite precision for data representation and processing. To estimate actual error behavior we conduct a thorough study using a volume renderer implemented with arbitrary floating-point precision. Based on the experimental data we then model the impact of floating-point pipeline precision, sampling frequency and fixed-point input data quantization on the fidelity of rendered images. We introduce three models: an average model, which does not adapt to different data or varying transfer functions, as well as two adaptive models that take the intricacies of a new data set and transfer function into account by adapting themselves given a few rendered images. We also test and validate our models on new data that was not used during model building.

  • 17.
    Lindholm, Stefan
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Kronander, Joel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Accounting for Uncertainty in Medical Data: A CUDA Implementation of Normalized Convolution. 2011. In: Evaluations of graphics and visualization - efficiency, usefulness, accessibility, usability, 2011. Conference paper (Refereed)
    Abstract [en]

    The domain of medical imaging is naturally moving towards methods that can represent, and account for, local uncertainties in the image data. Even so, fast and efficient solutions that take uncertainty into account are not readily available even for common problems such as gradient estimation. In this work we present a CUDA implementation of Normalized Convolution, an uncertainty-aware image processing technique, well established in the signal processing domain. Our results show that up to 100X speedups are possible, which enables full resolution CT images to be processed at interactive processing speeds, fulfilling demands of both efficiency and interactivity that exist in the medical domain.
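
    Normalized convolution itself is a two-line formula: filter the certainty-masked signal and divide by the filtered certainty. A plain numpy sketch of the zero-order case; the paper's contribution is the CUDA implementation, which is not reproduced here:

        import numpy as np

        def normalized_convolution(signal, certainty, kernel):
            # conv(c * s, a) / conv(c, a), where c is a per-pixel certainty map
            # and a is the applicability (filter) kernel.
            kh, kw = kernel.shape
            ph, pw = kh // 2, kw // 2
            cs = np.pad(signal * certainty, ((ph, ph), (pw, pw)))
            c = np.pad(certainty, ((ph, ph), (pw, pw)))
            num = np.zeros_like(signal, dtype=float)
            den = np.zeros_like(signal, dtype=float)
            h, w = signal.shape
            for dy in range(kh):
                for dx in range(kw):
                    num += kernel[dy, dx] * cs[dy:dy + h, dx:dx + w]
                    den += kernel[dy, dx] * c[dy:dy + h, dx:dx + w]
            return num / np.maximum(den, 1e-12)

        img = np.random.rand(32, 32)
        cert = (np.random.rand(32, 32) > 0.3).astype(float)  # 0 = missing/uncertain pixel
        v = np.exp(-0.5 * (np.arange(-2, 3) / 1.0) ** 2)
        g = np.outer(v, v)                                   # small Gaussian applicability
        print(normalized_convolution(img, cert, g / g.sum()).shape)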

  • 18.
    Löw, Joakim
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap. Linköpings universitet, Tekniska högskolan.
    Kronander, Joel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Ynnerman, Anders
    Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV. Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Unger, Jonas
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    ABC - BRDF Models for Accurate and Efficient Rendering of Glossy Surfaces. 2013. In: Eurographics 24th Symposium on Rendering: Posters, 2013. Conference paper (Other academic)
    Abstract [en]

    Glossy surface reflectance is hard to model accurately using traditional parametric BRDF models. An alternative is provided by data driven reflectance models; however, these models offer less user control and generally result in lower efficiency. In our work we propose two new lightweight parametric BRDF models for accurate modeling of glossy surface reflectance, one inspired by Rayleigh-Rice theory for optically smooth surfaces and one inspired by microfacet theory. We base our models on a thorough study of the scattering behaviour of measured reflectance data from the MERL database. The study focuses on two key aspects of BRDF models: parametrization and scatter distribution. We propose a new scattering distribution for glossy BRDFs inspired by the ABC model for surface statistics of optically smooth surfaces. Based on the survey we consider two parametrizations, one based on micro-facet theory using the halfway vector and one inspired by the parametrization of the Rayleigh-Rice BRDF model considering the projected deviation vector. To enable efficient rendering we also show how the new models can be approximately sampled for importance sampling the scattering integral.

  • 19.
    Löw, Joakim
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Kronander, Joel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Ynnerman, Anders
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Unger, Jonas
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    BRDF Models for Accurate and Efficient Rendering of Glossy Surfaces. 2012. In: ACM Transactions on Graphics, ISSN 0730-0301, E-ISSN 1557-7368, Vol. 31, no. 1. Journal article (Refereed)
    Abstract [en]

    This article presents two new parametric models of the Bidirectional Reflectance Distribution Function (BRDF), one inspired by the Rayleigh-Rice theory for light scattering from optically smooth surfaces, and one inspired by micro-facet theory. The models represent scattering from a wide range of glossy surface types with high accuracy. In particular, they enable representation of types of surface scattering which previous parametric models have had trouble modeling accurately. In a study of the scattering behavior of measured reflectance data, we investigate what key properties are needed for a model to accurately represent scattering from glossy surfaces. We investigate different parametrizations and how well they match the behavior of measured BRDFs. We also examine the scattering curves which are represented in parametric models by different distribution functions. Based on the insights gained from the study, the new models are designed to provide accurate fittings to the measured data. Importance sampling schemes are developed for the new models, enabling direct use in existing production pipelines. In the resulting renderings we show that the visual quality achieved by the models matches that of the measured data.
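
    The scattering curve underlying the models can be illustrated with the classic three-parameter ABC shape: a smooth peak with an inverse power-law falloff. The exact parametrization used in the article differs, so the functional form and grid-search fit below are assumptions for illustration only:

        import numpy as np

        def abc_curve(x, A, B, C):
            # ABC-style scattering distribution; x is a deviation measure
            # (e.g. a frequency magnitude or a halfway-vector deviation).
            return A / (1.0 + B * x) ** C

        x = np.linspace(0.0, 1.0, 256)
        rng = np.random.default_rng(0)
        # Synthetic "measured" falloff with multiplicative noise (illustrative data).
        measured = abc_curve(x, 1.0, 250.0, 1.5) * np.exp(0.05 * rng.normal(size=x.size))
        # Crude log-space grid-search fit (not the article's fitting procedure).
        best = min((np.sum((np.log(abc_curve(x, 1.0, B, C)) - np.log(measured)) ** 2), B, C)
                   for B in (100.0, 250.0, 500.0) for C in (1.2, 1.5, 2.0))
        print(best)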

  • 20.
    Miandji, Ehsan
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Kronander, Joel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Unger, Jonas
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Compressive Image Reconstruction in Reduced Union of Subspaces, 2015. In: Computer Graphics Forum, ISSN 1467-8659, Vol. 34, no. 2, pp. 33-44. Journal article (Refereed)
    Abstract [en]

    We present a new compressed sensing framework for reconstruction of incomplete and possibly noisy images and their higher dimensional variants, e.g. animations and light-fields. The algorithm relies on a learning-based basis representation. We train an ensemble of intrinsically two-dimensional (2D) dictionaries that operate locally on a set of 2D patches extracted from the input data. We show that one can convert the problem of 2D sparse signal recovery to an equivalent 1D form, enabling us to utilize a large family of sparse solvers. The proposed framework represents the input signals in a reduced union of subspaces model, while allowing sparsity in each subspace. Such a model leads to a much more sparse representation than widely used methods such as K-SVD. To evaluate our method, we apply it to three different scenarios where the signal dimensionality varies from 2D (images) to 3D (animations) and 4D (light-fields). We show that our method outperforms state-of-the-art algorithms in computer graphics and image processing literature.
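    The 2D-to-1D conversion mentioned above can be illustrated with separable bases: if a patch X has sparse coefficients S under a basis pair (U, V), i.e. X = U S V^T, then vec(X) is sparse in the Kronecker dictionary V ⊗ U, so any standard 1D sparse solver applies. A minimal sketch (dictionary sizes chosen arbitrarily for illustration):

        import numpy as np

        # Hypothetical learned 2D basis pair (orthogonal for simplicity).
        U, _ = np.linalg.qr(np.random.randn(8, 8))
        V, _ = np.linalg.qr(np.random.randn(8, 8))

        S = np.zeros((8, 8))
        S[1, 2], S[5, 0] = 3.0, -1.5        # sparse 2D coefficient matrix
        X = U @ S @ V.T                     # the corresponding 8x8 patch

        # Equivalent 1D model: vec(X) = (V kron U) vec(S), column-major vec.
        D = np.kron(V, U)
        assert np.allclose(D @ S.flatten(order='F'), X.flatten(order='F'))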

  • 21.
    Miandji, Ehsan
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Kronander, Joel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Unger, Jonas
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Geometry Independent Surface Light Fields for Real Time Rendering of Precomputed Global Illumination, 2011. In: Proceedings of SGRAD 2011 / [ed] Thomas Larsson, Lars Kjelldahl, Kai-Mikael Jää-Aro, Royal Institute of Technology, Stockholm, 2011, pp. 27-34. Conference paper (Refereed)
    Abstract [en]

    We present a framework for generating, compressing and rendering Surface Light Field (SLF) data. Our method is based on radiance data generated using physically based rendering methods. Thus the SLF data is generated directly instead of re-sampling digital photographs. Our SLF representation decouples spatial resolution from geometric complexity. We achieve this by uniform sampling of the spatial dimension of the SLF function. For compression, we use Clustered Principal Component Analysis (CPCA). The SLF matrix is first clustered into low-frequency groups of points across all directions. Then we apply PCA to each cluster. The clustering ensures that the within-cluster frequency of data is low, allowing for projection using a few principal components. Finally we reconstruct the CPCA-encoded data using an efficient rendering algorithm. Our reconstruction technique ensures seamless reconstruction of discrete SLF data. We applied our rendering method to fast, high quality off-line rendering and real-time illumination of static scenes. The proposed framework is not limited by the complexity of materials or light sources, enabling us to render high quality images describing the full global illumination in a scene.
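    The compression step described above (cluster first, then PCA per cluster) can be sketched as follows. This is a generic illustration assuming scikit-learn for the clustering; cluster and component counts are placeholders, not the paper's settings:

        import numpy as np
        from sklearn.cluster import KMeans

        def cpca_compress(slf, n_clusters=8, n_components=4):
            # slf: (n_points, n_directions) matrix of radiance samples.
            labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(slf)
            model = []
            for k in range(n_clusters):
                block = slf[labels == k]
                mean = block.mean(axis=0)
                # PCA of the mean-subtracted cluster via SVD; a few principal
                # directions suffice when the within-cluster frequency is low.
                _, _, vt = np.linalg.svd(block - mean, full_matrices=False)
                basis = vt[:n_components]
                coeffs = (block - mean) @ basis.T
                model.append((mean, basis, coeffs))
            return labels, model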

  • 22.
    Miandji, Ehsan
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Kronander, Joel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Unger, Jonas
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Learning based compression for real-time rendering of surface light fields, 2013. In: Siggraph 2013 Posters, ACM Press, 2013. Conference paper (Other academic)
    Abstract [en]

    Photo-realistic image synthesis in real-time is a key challenge in computer graphics. A number of techniques where the light transport in a scene is pre-computed, compressed and used for real-time image synthesis have been proposed. In this work, we extend this idea and present a technique where the radiance distribution in a scene, including arbitrarily complex materials and light sources, is pre-computed using photo-realistic rendering techniques and stored as surface light fields (SLF) at each surface. An SLF describes the full appearance of each surface in a scene as a 4D function over the spatial and angular domains. An SLF is a complex data set with a large memory footprint, often on the order of several GB per object in the scene. The key contribution in this work is a novel approach for compression of surface light fields that enables real-time rendering of complex scenes. Our learning-based compression technique is based on exemplar orthogonal bases (EOB), and trains a compact dictionary of full-rank orthogonal basis pairs with sparse coefficients. Our results outperform the widely used CPCA method in terms of storage cost, visual quality and rendering speed. Compared to PRT techniques for real-time global illumination, our approach is limited to static scenes but can represent high frequency materials and any type of light source in a unified framework.
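    As a rough illustration of sparse coding with orthogonal basis pairs, the hypothetical encoder below projects a patch onto each trained pair, keeps the k largest-magnitude coefficients, and selects the pair with the smallest reconstruction error. The selection rule and names are our assumptions, not the trained EOB procedure from the poster:

        import numpy as np

        def eob_encode(patch, bases, k):
            # bases: list of (U, V) orthogonal basis pairs. Orthogonality
            # makes the transform coefficients exact: C = U^T patch V.
            best = None
            for U, V in bases:
                C = U.T @ patch @ V
                thresh = np.sort(np.abs(C), axis=None)[-k]
                Ck = np.where(np.abs(C) >= thresh, C, 0.0)   # keep ~k largest
                err = np.linalg.norm(patch - U @ Ck @ V.T)
                if best is None or err < best[0]:
                    best = (err, Ck, (U, V))
            return best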

  • 23.
    Miandji, Ehsan
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Kronander, Joel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Unger, Jonas
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Learning Based Compression of Surface Light Fields for Real-time Rendering of Global Illumination Scenes, 2013. In: Proceedings of ACM SIGGRAPH ASIA 2013, ACM Press, 2013. Conference paper (Refereed)
    Abstract [en]

    We present an algorithm for compression and real-time rendering of surface light fields (SLF) encoding the visual appearance of objects in static scenes with high frequency variations. We apply a non-local clustering in order to exploit spatial coherence in the SLF data. To efficiently encode the data in each cluster, we introduce a learning based approach, Clustered Exemplar Orthogonal Bases (CEOB), which trains a compact dictionary of orthogonal basis pairs, enabling efficient sparse projection of the SLF data. In addition, we discuss the application of the traditional Clustered Principal Component Analysis (CPCA) on SLF data, and show that in most cases, CEOB outperforms CPCA, K-SVD and spherical harmonics in terms of memory footprint, rendering performance and reconstruction quality. Our method enables efficient reconstruction and real-time rendering of scenes with complex materials and light sources, not possible to render in real-time using previous methods.

  • 24.
    Tsirikoglou, Apostolia
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Ekeberg, Simon
    Swiss International AB, Sweden.
    Vikström, Johan
    Swiss International AB, Sweden.
    Kronander, Joel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Unger, Jonas
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    S(wi)SS: A flexible and robust sub-surface scattering shader, 2014. In: Proceedings of SIGRAD 2014 / [ed] Morten Fjeld, 2014. Conference paper (Refereed)
    Abstract [en]

    S(wi)SS is a new, flexible, artist-friendly multi-layered sub-surface scattering shader that accurately simulates sub-surface scattering for a large range of translucent materials. It is a physically motivated multi-layered approach where the sub-surface scattering effect is generated using one to three layers. It enables seamless mixing of the classical dipole, the better dipole and the quantized diffusion reflectance models in the sub-surface scattering layers, and additionally provides the scattering from front and back illumination, as well as all the BSDF components, in separate render channels, enabling the artist to either use them in a physically accurate way or tweak them independently during compositing to produce the desired result. To demonstrate the usefulness of our approach, we show a set of high quality rendering results from different user scenarios.
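    As background, the "classical dipole" named above is the diffusion approximation of Jensen et al. (2001). Below is a sketch of its radially symmetric diffuse reflectance profile R_d(r), as commonly stated in the literature; this is standard background material, not code from the shader itself:

        import numpy as np

        def classical_dipole_Rd(r, sigma_a, sigma_s_prime, eta=1.3):
            # Classical dipole diffuse reflectance profile.
            sigma_t_prime = sigma_a + sigma_s_prime
            alpha_prime = sigma_s_prime / sigma_t_prime
            sigma_tr = np.sqrt(3.0 * sigma_a * sigma_t_prime)
            # Diffuse Fresnel reflectance approximation for relative IOR eta.
            Fdr = -1.440 / eta**2 + 0.710 / eta + 0.668 + 0.0636 * eta
            A = (1.0 + Fdr) / (1.0 - Fdr)
            z_r = 1.0 / sigma_t_prime           # depth of the real source
            z_v = z_r * (1.0 + 4.0 * A / 3.0)   # depth of the virtual source
            d_r = np.sqrt(r * r + z_r * z_r)
            d_v = np.sqrt(r * r + z_v * z_v)
            def contrib(z, d):
                return z * (1.0 + sigma_tr * d) * np.exp(-sigma_tr * d) / d**3
            return alpha_prime / (4.0 * np.pi) * (contrib(z_r, d_r) + contrib(z_v, d_v))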

  • 25.
    Unger, Jonas
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Gustavson, Stefan
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Kronander, Joel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Larsson, Per
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Bonnet, Gerhard
    SpheronVR, Germany.
    Kaiser, Gunnar
    SpheronVR, Germany.
    Next Generation Image Based Lighting using HDR Video, 2011. In: Proceedings of SIGGRAPH '11: ACM SIGGRAPH 2011 Talks, ACM Special Interest Group on Computer Science Education, 2011, article no. 60. Conference paper (Refereed)
    Abstract [en]

    We present an overview of our recently developed systems pipeline for capture, reconstruction, modeling and rendering of real world scenes based on state-of-the-art high dynamic range video (HDRV). The reconstructed scene representation allows for photo-realistic Image Based Lighting (IBL) in complex environments with strong spatial variations in the illumination. The pipeline comprises the following essential steps:

    1.) Capture - The scene capture is based on a 4 MPixel global shutter HDRV camera with a dynamic range of more than 24 f-stops at 30 fps. The HDR output stream is stored as individual uncompressed frames for maximum flexibility. A scene is usually captured using a combination of panoramic light probe sequences [1], and sequences with a smaller field of view to maximize the resolution at regions of special interest in the scene. The panoramic sequences ensure full angular coverage at each position and guarantee that the information required for IBL is captured. The position and orientation of the camera are tracked during capture.

    2.) Scene recovery - Taking one or more HDRV sequences as input, a geometric proxy model of the scene is built using a semi-automatic approach. First, traditional computer vision algorithms such as structure from motion [2] and Manhattan world stereo [3] are used. If necessary, the recovered model is then modified using an interaction scheme based on visualizations of a volumetric representation of the scene radiance computed from the input HDRV sequence. The HDR nature of this volume also enables robust extraction of direct light sources and other high intensity regions in the scene.

    3.) Radiance processing - When the scene proxy geometry has been recovered, the radiance data captured in the HDRV sequences are re-projected onto the surfaces and the recovered light sources. Since most surface points have been imaged from a large number of directions, it is possible to reconstruct view dependent texture maps at the proxy geometries. These 4D data sets describe a combination of detailed geometry that has not been recovered and the radiance reflected from the underlying real surfaces. The view dependent textures are then processed and compactly stored in an adaptive data structure.

    4.) Rendering - Once the geometric and radiometric scene information has been recovered, it is possible to place virtual objects into the real scene and create photo-realistic renderings as illustrated above. The extracted light sources enable efficient sampling and rendering times that are fully comparable to that of traditional virtual computer graphics light sources. No previously described method is capable of capturing and reproducing the angular and spatial variation in the scene illumination in comparable detail.

    We believe that the rapid development of high quality HDRV systems will soon have a large impact on both computer vision and graphics. Following this trend, we are developing theory and algorithms for efficient processing of HDRV sequences, making use of the abundance of radiance data that will become available.
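    To make the light-probe representation in step 1 concrete: a panoramic HDR frame can, for example, be stored in a latitude-longitude layout, where each pixel maps to a world-space direction used for IBL sampling. The mapping below is one common convention, assumed here for illustration; the pipeline's exact layout is not specified in this summary:

        import numpy as np

        def latlong_to_direction(u, v):
            # (u, v) in [0, 1]^2: u is azimuth, v is inclination from zenith.
            phi = 2.0 * np.pi * u
            theta = np.pi * v
            return np.array([np.sin(theta) * np.cos(phi),
                             np.cos(theta),
                             np.sin(theta) * np.sin(phi)])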

  • 26.
    Unger, Jonas
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Hajisharif, Saghi
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Kronander, Joel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Unified reconstruction of RAW HDR video data, 2016. In: High dynamic range video: from acquisition to display and applications / [ed] Frédéric Dufaux, Patrick Le Callet, Rafal K. Mantiuk, Marta Mrak, London, United Kingdom: Academic Press, 2016, 1st ed., pp. 63-82. Book chapter (Other academic)
    Abstract [en]

    Traditional HDR capture has mostly relied on merging images captured with different exposure times. While this works well for static scenes, dynamic scenes pose difficult challenges, as registration of differently exposed images often leads to ghosting and other artifacts. This chapter reviews methods which capture HDR-video frames within a single exposure time, using either multiple synchronized sensors or spatial multiplexing of the sensor response across the sensor. Most previous HDR reconstruction methods perform demosaicing, noise reduction, resampling (registration), and HDR-fusion in separate steps. This chapter presents a framework for unified HDR reconstruction, including all steps in the traditional imaging pipeline in a single adaptive filtering operation, and describes an image formation model and a sensor noise model applicable to both single- and multi-sensor systems. The benefits of using raw data directly are demonstrated with examples using input data from multiple synchronized sensors, and single images with varying per-pixel gain.
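    The unified, raw-data view can be illustrated with a simplified maximum-likelihood fusion for a single scene point: each raw sample y_i ≈ g_i t_i X plus noise votes for the radiance X, weighted by an assumed noise variance, with saturated samples excluded. The variance model below is a crude placeholder, not the chapter's calibrated sensor noise model:

        import numpy as np

        def fuse_raw_samples(y, gain, t, sat_level, read_var=4.0):
            # y, gain, t: per-sample raw values, gains and exposure times
            # (from multiple sensors and/or per-pixel gain settings).
            valid = y < sat_level                 # drop saturated samples
            x_est = y / (gain * t)                # per-sample radiance estimates
            # Shot noise (~ signal) plus readout noise, propagated through
            # the radiometric scaling.
            var = (y + read_var) / (gain * t) ** 2
            w = np.where(valid, 1.0 / var, 0.0)
            return np.sum(w * x_est) / np.sum(w)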

  • 27.
    Unger, Jonas
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Kronander, Joel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Larsson, Per
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Gustavson, Stefan
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Löw, Joakim
    Linköpings universitet, Institutionen för teknik och naturvetenskap. Linköpings universitet, Tekniska högskolan.
    Ynnerman, Anders
    Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV. Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Spatially varying image based lighting using HDR-video, 2013. In: Computers & Graphics, ISSN 0097-8493, E-ISSN 1873-7684, Vol. 37, no. 7, pp. 923-934. Journal article (Refereed)
    Abstract [en]

    Illumination is one of the key components in the creation of realistic renderings of scenes containing virtual objects. In this paper, we present a set of novel algorithms and data structures for visualization, processing and rendering with real world lighting conditions captured using High Dynamic Range (HDR) video. The presented algorithms enable rapid construction of general and editable representations of the lighting environment, as well as extraction and fitting of sampled reflectance to parametric BRDF models. For efficient representation and rendering of the sampled lighting environment function, we consider an adaptive (2D/4D) data structure for storage of light field data on proxy geometry describing the scene. To demonstrate the usefulness of the algorithms, they are presented in the context of a fully integrated framework for spatially varying image based lighting. We show reconstructions of example scenes and resulting production quality renderings of virtual furniture with spatially varying real world illumination including occlusions.
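    To indicate what rendering from such a (2D/4D) structure involves, the sketch below performs the simplest possible lookup in a dense 4D radiance array indexed by spatial position on the proxy geometry and outgoing direction. This dense-array stand-in is our simplification; the paper's adaptive data structure is precisely what avoids storing L in this form:

        import numpy as np

        def lightfield_lookup(L, s, t, u, v):
            # L: 4D array (S, T, U, V); (s, t) spatial, (u, v) angular,
            # all normalized to [0, 1]. Nearest-neighbour for brevity; a
            # real renderer would interpolate in all four dimensions.
            S, T, U, V = L.shape
            idx = lambda x, n: min(int(x * n), n - 1)
            return L[idx(s, S), idx(t, T), idx(u, U), idx(v, V)]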

  • 28.
    Unger, Jonas
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Kronander, Joel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Larsson, Per
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Gustavson, Stefan
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Ynnerman, Anders
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Temporally and Spatially Varying Image Based Lighting using HDR-video, 2013. In: Proceedings of the 21st European Signal Processing Conference (EUSIPCO), 2013: Special Session on HDR-video, IEEE, 2013, pp. 1-5. Conference paper (Refereed)
    Abstract [en]

    In this paper we present novel algorithms and data structures for capturing, processing and rendering with real world lighting conditions based on high dynamic range video sequences. Based on the captured HDR video data we show how traditional image based lighting can be extended to include illumination variations in both the temporal as well as the spatial domain. This enables highly realistic renderings where traditional IBL techniques using a single light probe fail to capture important details in the real world lighting environment. To demonstrate the usefulness of our approach, we show examples of both off-line and real-time rendering applications.

  • 29.
    Unger, Jonas
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Kronander, Joel
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Larsson, Per
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Gustavson, Stefan
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Ynnerman, Anders
    Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV. Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska högskolan.
    Image Based Lighting using HDR-video, 2013. In: Eurographics 24th Symposium on Rendering: Posters, 2013. Conference paper (Other academic)
    Abstract [en]

    It has been widely recognized that lighting plays a key role in the realism and visual interest of computer graphics renderings. This has led to research and development of image based lighting (IBL) techniques, where the illumination conditions in real world scenes are captured as high dynamic range (HDR) image panoramas and used as lighting information during rendering. Traditional IBL, where the lighting is captured at a single position in the scene, has now become a widely used tool in most production pipelines. In this poster, we give an overview of a system pipeline where we use HDR-video cameras to extend traditional IBL techniques to capture real world lighting that may include variations in the spatial or temporal domains. We also describe how the capture systems and algorithms for processing and rendering have been incorporated into a robust systems pipeline for production of highly realistic renderings. High dynamic range video based scene capture thus enables highly realistic renderings where traditional image based lighting, using a single light probe, fails to capture important details.
