Search for publications in DiVA (liu.se)
1 - 14 of 14
  • 1.
    Baravdish, Gabriel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Miandji, Ehsan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    GPU Accelerated Sparse Representation of Light Fields (2019). In: VISIGRAPP - 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Prague, Czech Republic, February 25-27, 2019, Vol. 4, p. 177-182. Conference paper (Refereed)
    Abstract [en]

    We present a method for GPU-accelerated compression of light fields based on a dictionary learning framework for light field images. The large amount of data produced when capturing light fields makes compression challenging, and we seek to accelerate the encoding routine using GPGPU computations. We compress the data by projecting each data point onto a set of trained multi-dimensional dictionaries and searching for the sparsest representation with the least error. This is achieved by parallelizing the tensor-matrix products on the GPU, together with a greedy algorithm optimized for GPU computation. The encoding is performed segment-wise in parallel for higher computation speed while maintaining quality. The results show an order of magnitude faster encoding time compared to previous results in the same research field. We conclude that further speed improvements are possible, bringing the method close to interactive compression speed.
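    The core operation described above (projecting many patches onto trained multidimensional dictionaries via tensor-matrix products) can be sketched in a few lines. The following is an illustrative NumPy stand-in, not the authors' CUDA implementation: the sizes, the random "dictionaries" and the random patch data are hypothetical, and on a GPU the same batched einsum would typically be run through CuPy or a custom kernel.

        # Illustrative NumPy sketch (not the authors' CUDA code): batched projection of
        # 2D light-field patches onto a pair of dictionaries, then hard thresholding to
        # keep the k largest coefficients per patch.
        import numpy as np

        rng = np.random.default_rng(0)
        n_patches, m, n, k = 1000, 8, 8, 6            # hypothetical sizes and sparsity
        X = rng.standard_normal((n_patches, m, n))    # stand-in for light-field patches

        # Stand-ins for trained dictionaries; orthonormal bases for simplicity.
        U, _ = np.linalg.qr(rng.standard_normal((m, m)))
        V, _ = np.linalg.qr(rng.standard_normal((n, n)))

        # Batched tensor-matrix products: C[p] = U.T @ X[p] @ V for every patch p.
        # This is the operation that benefits from GPU parallelization; with CuPy the
        # same einsum runs on the GPU by swapping the array module.
        C = np.einsum('ia,pij,jb->pab', U, X, V)

        # Greedy sparsification: keep the k largest-magnitude coefficients per patch.
        flat = C.reshape(n_patches, -1)
        kth_largest = -np.sort(-np.abs(flat), axis=1)[:, k - 1:k]
        flat[np.abs(flat) < kth_largest] = 0.0

        # Reconstruction error after the sparse projection.
        X_hat = np.einsum('ia,pab,jb->pij', U, C, V)
        print('mean squared error after keeping', k, 'coefficients per patch:',
              np.mean((X - X_hat) ** 2))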

  • 2.
    Emadi, Mohammad
    et al.
    Qualcomm Technol Inc, CA 95110 USA.
    Miandji, Ehsan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    A Performance Guarantee for Orthogonal Matching Pursuit Using Mutual Coherence (2018). In: Circuits, systems, and signal processing, ISSN 0278-081X, E-ISSN 1531-5878, Vol. 37, no 4, p. 1562-1574. Article in journal (Refereed)
    Abstract [en]

    In this paper, we present a new performance guarantee for the orthogonal matching pursuit (OMP) algorithm. We use mutual coherence as a metric for determining the suitability of an arbitrary overcomplete dictionary for exact recovery. Specifically, a lower bound for the probability of correctly identifying the support of a sparse signal with additive white Gaussian noise and an upper bound for the mean square error are derived. Compared to previous work, the new bounds take into account signal parameters such as dynamic range, noise variance, and sparsity. Numerical simulations show significant improvements over previous work and a much closer correlation with empirical results of OMP.
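    For readers unfamiliar with the quantities the guarantee is stated in terms of, the following minimal sketch computes the mutual coherence of a dictionary and runs a plain OMP on a synthetic noisy sparse signal. It is illustrative only; the dictionary, signal model and sizes are arbitrary stand-ins and the code is not taken from the paper.

        # Minimal sketch: the mutual coherence of a dictionary and a plain OMP run on a
        # synthetic noisy sparse signal.  Illustrative only; sizes, dictionary and signal
        # model are arbitrary stand-ins, not taken from the paper.
        import numpy as np

        def mutual_coherence(D):
            # Largest absolute inner product between distinct normalized atoms.
            Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
            G = np.abs(Dn.T @ Dn)
            np.fill_diagonal(G, 0.0)
            return G.max()

        def omp(D, y, k):
            # Greedy atom selection followed by a least-squares fit on the support.
            residual, support = y.copy(), []
            for _ in range(k):
                support.append(int(np.argmax(np.abs(D.T @ residual))))
                coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
                residual = y - D[:, support] @ coeffs
            x = np.zeros(D.shape[1])
            x[support] = coeffs
            return x, support

        rng = np.random.default_rng(1)
        m, n, k = 64, 256, 4
        D = rng.standard_normal((m, n))
        D /= np.linalg.norm(D, axis=0)                       # unit-norm atoms
        true_support = rng.choice(n, size=k, replace=False)
        x_true = np.zeros(n)
        x_true[true_support] = rng.uniform(1, 3, k) * rng.choice([-1, 1], k)  # dynamic range
        y = D @ x_true + 0.01 * rng.standard_normal(m)       # additive white Gaussian noise
        x_hat, est_support = omp(D, y, k)
        print('mutual coherence:', mutual_coherence(D))
        print('support recovered exactly:', set(est_support) == set(true_support))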

  • 3.
    Hajisharif, Saghi
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Miandji, Ehsan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Real-time image based lighting with streaming HDR-lightprobe sequences (2012). In: Proceedings of SIGRAD 2012 / [ed] Andreas Kerren, Stefan Seipel, Linköping, Sweden, 2012. Conference paper (Other academic)
    Abstract [en]

    We present a framework for shading of virtual objects using high dynamic range (HDR) light probe sequences in real-time. Such images (light probes) are captured using a high resolution HDR camera. In each frame of the HDR video, an optimized CUDA kernel is used to project incident lighting into spherical harmonics in real time. Transfer coefficients are calculated in an offline process. Using precomputed radiance transfer the radiance calculation reduces to a low order dot product between lighting and transfer coefficients. We exploit temporal coherence between frames to further smooth lighting variation over time. Our results show that the framework can achieve the effects of consistent illumination in real-time with flexibility to respond to dynamic changes in the real environment.
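    A minimal sketch of the shading step described above is given below: per frame, the incident lighting is projected onto an order-2 spherical harmonics basis, and shading each point reduces to a dot product with its precomputed transfer vector. This is a NumPy illustration with synthetic lighting and random transfer coefficients, not the paper's CUDA kernel.

        # Sketch of the shading step: project incident lighting onto an order-2 real
        # spherical harmonics (SH) basis each frame, then shade each point as a dot
        # product with its precomputed transfer vector.  Synthetic lighting and random
        # transfer coefficients; not the paper's CUDA kernel.
        import numpy as np

        def sh_basis(d):
            # Real spherical harmonics up to band 2 (9 coefficients) for unit vectors d (N, 3).
            x, y, z = d[:, 0], d[:, 1], d[:, 2]
            return np.stack([
                0.282095 * np.ones_like(x),
                0.488603 * y, 0.488603 * z, 0.488603 * x,
                1.092548 * x * y, 1.092548 * y * z,
                0.315392 * (3 * z ** 2 - 1),
                1.092548 * x * z,
                0.546274 * (x ** 2 - y ** 2),
            ], axis=1)

        rng = np.random.default_rng(2)

        # Monte Carlo SH projection of one frame of incident lighting, sampled at random
        # directions (stand-in for a frame of the HDR light-probe video).
        dirs = rng.standard_normal((4096, 3))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        radiance = 5.0 * np.maximum(dirs[:, 2], 0.0)          # hypothetical sky-like lighting
        Y = sh_basis(dirs)
        light_coeffs = (4 * np.pi / len(dirs)) * (Y.T @ radiance)   # 9 lighting coefficients

        # Transfer coefficients are precomputed offline; random stand-ins here.
        transfer = rng.standard_normal((10000, 9))            # one 9-vector per vertex

        # Per-frame, per-vertex shading collapses to a low-order dot product.
        shaded = transfer @ light_coeffs
        print(shaded.shape)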

  • 4.
    Kronander, Joel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Banterle, Francesco
    Visual Computing Lab, ISTI-CNR, Italy.
    Gardner, Andrew
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Miandji, Ehsan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Photorealistic rendering of mixed reality scenes (2015). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 34, no 2, p. 643-665. Article in journal (Refereed)
    Abstract [en]

    Photo-realistic rendering of virtual objects into real scenes is one of the most important research problems in computer graphics. Methods for capture and rendering of mixed reality scenes are driven by a large number of applications, ranging from augmented reality to visual effects and product visualization. Recent developments in computer graphics, computer vision, and imaging technology have enabled a wide range of new mixed reality techniques including methods of advanced image based lighting, capturing spatially varying lighting conditions, and algorithms for seamlessly rendering virtual objects directly into photographs without explicit measurements of the scene lighting. This report gives an overview of the state-of-the-art in this field, and presents a categorization and comparison of current methods. Our in-depth survey provides a tool for understanding the advantages and disadvantages of each method, and gives an overview of which technique is best suited to a specific problem.

  • 5.
    Miandji, Ehsan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Sparse representation of visual data for compression and compressed sensing (2018). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The ongoing advances in computational photography have introduced a range of new imaging techniques for capturing multidimensional visual data such as light fields, BRDFs, and BTFs. A key challenge inherent to such imaging techniques is the large amount of high dimensional visual data that is produced, often requiring GBs, or even TBs, of storage. Moreover, the utilization of these datasets in real-time applications poses many difficulties due to the large memory footprint. Furthermore, the acquisition of large-scale visual data is in most cases very challenging and expensive. This thesis makes several contributions with regard to the acquisition, compression, and real-time rendering of high dimensional visual data in computer graphics and imaging applications.

    Contributions of this thesis reside on the strong foundation of sparse representations. Numerous applications are presented that utilize sparse representations for compression and compressed sensing of visual data. Specifically, we present a single sensor light field camera design, a compressive rendering method, a real time precomputed photorealistic rendering technique, light field (video) compression and real time rendering, compressive BRDF capture, and more. Another key contribution of this thesis is a general framework for compression and compressed sensing of visual data, regardless of the dimensionality. As a result, any type of discrete visual data with arbitrary dimensionality can be captured, compressed, and rendered in real time.

    This thesis makes two theoretical contributions. In particular, uniqueness conditions for recovering a sparse signal under an ensemble of multidimensional dictionaries are presented. The theoretical results discussed here are useful for designing efficient capturing devices for multidimensional visual data. Moreover, we derive the probability of successful recovery of a noisy sparse signal using OMP, one of the most widely used algorithms for solving compressed sensing problems.

    List of papers
    1. OMP-based DOA estimation performance analysis
    2018 (English). In: Digital signal processing (Print), ISSN 1051-2004, E-ISSN 1095-4333, Vol. 79, p. 57-65. Article in journal (Refereed), Published
    Abstract [en]

    In this paper, we present a new performance guarantee for Orthogonal Matching Pursuit (OMP) in the context of the Direction Of Arrival (DOA) estimation problem. For the first time, the effect of parameters such as sensor array configuration, as well as signal to noise ratio and dynamic range of the sources is thoroughly analyzed. In particular, we formulate a lower bound for the probability of detection and an upper bound for the estimation error. The proposed performance guarantee is further developed to include the estimation error as a user-defined parameter for the probability of detection. Numerical results show acceptable correlation between theoretical and empirical simulations.

    Place, publisher, year, edition, pages
    ACADEMIC PRESS INC ELSEVIER SCIENCE, 2018
    Keywords
    Direction of arrival; Orthogonal Matching Pursuit (OMP); Mutual coherence; Array configuration
    National Category
    Signal Processing
    Identifiers
    urn:nbn:se:liu:diva-149841 (URN), 10.1016/j.dsp.2018.04.006 (DOI), 000437386200006 ()
    Available from: 2018-08-02 Created: 2018-08-02 Last updated: 2018-11-23
    2. On Probability of Support Recovery for Orthogonal Matching Pursuit Using Mutual Coherence
    2017 (English). In: IEEE Signal Processing Letters, ISSN 1070-9908, E-ISSN 1558-2361, Vol. 24, no 11, p. 1646-1650. Article in journal (Refereed), Published
    Abstract [en]

    In this paper we present a new coherence-based performance guarantee for the Orthogonal Matching Pursuit (OMP) algorithm. A lower bound for the probability of correctly identifying the support of a sparse signal with additive white Gaussian noise is derived. Compared to previous work, the new bound takes into account the signal parameters such as dynamic range, noise variance, and sparsity. Numerical simulations show significant improvements over previous work and a closer match to empirically obtained results of the OMP algorithm.

    Place, publisher, year, edition, pages
    IEEE Signal Processing Society, 2017
    Keywords
    Compressed Sensing (CS), Sparse Recovery, Orthogonal Matching Pursuit (OMP), Mutual Coherence
    National Category
    Signal Processing
    Identifiers
    urn:nbn:se:liu:diva-141613 (URN), 10.1109/LSP.2017.2753939 (DOI), 000412501600001 ()
    Available from: 2017-10-03 Created: 2017-10-03 Last updated: 2018-11-23. Bibliographically approved
    3. On Nonlocal Image Completion Using an Ensemble of Dictionaries
    2016 (English). In: 2016 IEEE International Conference on Image Processing (ICIP), IEEE, 2016, p. 2519-2523. Conference paper, Published paper (Refereed)
    Abstract [en]

    In this paper we consider the problem of nonlocal image completion from random measurements and using an ensemble of dictionaries. Utilizing recent advances in the field of compressed sensing, we derive conditions under which one can uniquely recover an incomplete image with overwhelming probability. The theoretical results are complemented by numerical simulations using various ensembles of analytical and training-based dictionaries.

    Place, publisher, year, edition, pages
    IEEE, 2016
    Series
    IEEE International Conference on Image Processing ICIP, ISSN 1522-4880
    Keywords
    compressed sensing; image completion; nonlocal; inverse problems; uniqueness conditions
    National Category
    Computer Vision and Robotics (Autonomous Systems)
    Identifiers
    urn:nbn:se:liu:diva-134107 (URN), 10.1109/ICIP.2016.7532813 (DOI), 000390782002114 (), 978-1-4673-9961-6 (ISBN)
    Conference
    23rd IEEE International Conference on Image Processing (ICIP)
    Available from: 2017-01-22 Created: 2017-01-22 Last updated: 2018-11-23
    4. Compressive Image Reconstruction in Reduced Union of Subspaces
    2015 (English). In: Computer Graphics Forum, ISSN 1467-8659, Vol. 34, no 2, p. 33-44. Article in journal (Refereed), Published
    Abstract [en]

    We present a new compressed sensing framework for reconstruction of incomplete and possibly noisy images and their higher dimensional variants, e.g. animations and light-fields. The algorithm relies on a learning-based basis representation. We train an ensemble of intrinsically two-dimensional (2D) dictionaries that operate locally on a set of 2D patches extracted from the input data. We show that one can convert the problem of 2D sparse signal recovery to an equivalent 1D form, enabling us to utilize a large family of sparse solvers. The proposed framework represents the input signals in a reduced union of subspaces model, while allowing sparsity in each subspace. Such a model leads to a much more sparse representation than widely used methods such as K-SVD. To evaluate our method, we apply it to three different scenarios where the signal dimensionality varies from 2D (images) to 3D (animations) and 4D (light-fields). We show that our method outperforms state-of-the-art algorithms in computer graphics and image processing literature.

    Place, publisher, year, edition, pages
    John Wiley & Sons Ltd, 2015
    Keywords
    Image reconstruction, compressed sensing, light field imaging
    National Category
    Signal Processing
    Identifiers
    urn:nbn:se:liu:diva-119639 (URN), 10.1111/cgf.12539 (DOI), 000358326600008 ()
    Conference
    Eurographics 2015
    Projects
    VPS
    Funder
    Swedish Foundation for Strategic Research, IIS11-0081
    Available from: 2015-06-23 Created: 2015-06-23 Last updated: 2018-11-23. Bibliographically approved
    5. Learning Based Compression of Surface Light Fields for Real-time Rendering of Global Illumination Scenes
    2013 (English). In: Proceedings of ACM SIGGRAPH ASIA 2013, ACM Press, 2013. Conference paper, Published paper (Refereed)
    Abstract [en]

    We present an algorithm for compression and real-time rendering of surface light fields (SLF) encoding the visual appearance of objects in static scenes with high frequency variations. We apply a non-local clustering in order to exploit spatial coherence in the SLF data. To efficiently encode the data in each cluster, we introduce a learning based approach, Clustered Exemplar Orthogonal Bases (CEOB), which trains a compact dictionary of orthogonal basis pairs, enabling efficient sparse projection of the SLF data. In addition, we discuss the application of the traditional Clustered Principal Component Analysis (CPCA) on SLF data, and show that in most cases, CEOB outperforms CPCA, K-SVD and spherical harmonics in terms of memory footprint, rendering performance and reconstruction quality. Our method enables efficient reconstruction and real-time rendering of scenes with complex materials and light sources, not possible to render in real-time using previous methods.

    Place, publisher, year, edition, pages
    ACM Press, 2013
    Keywords
    computer graphics, global illumination, real-time, machine learning
    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-99433 (URN), 10.1145/2542355.2542385 (DOI), 978-1-4503-2629-2 (ISBN)
    Conference
    SIGGRAPH Asia, 19-22 November 2013, Hong Kong
    Projects
    VPS
    Funder
    Swedish Foundation for Strategic Research, IIS11-0081; Swedish Research Council
    Available from: 2013-10-17 Created: 2013-10-17 Last updated: 2018-11-23. Bibliographically approved
  • 6.
    Miandji, Ehsan
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Emadi, Mohammad
    Qualcomm Technologies Inc., San Jose, CA, USA.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Afshari, Ehsan
    Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, USA.
    On Probability of Support Recovery for Orthogonal Matching Pursuit Using Mutual Coherence (2017). In: IEEE Signal Processing Letters, ISSN 1070-9908, E-ISSN 1558-2361, Vol. 24, no 11, p. 1646-1650. Article in journal (Refereed)
    Abstract [en]

    In this paper we present a new coherence-based performance guarantee for the Orthogonal Matching Pursuit (OMP) algorithm. A lower bound for the probability of correctly identifying the support of a sparse signal with additive white Gaussian noise is derived. Compared to previous work, the new bound takes into account the signal parameters such as dynamic range, noise variance, and sparsity. Numerical simulations show significant improvements over previous work and a closer match to empirically obtained results of the OMP algorithm.
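    The probability that the letter lower-bounds can also be estimated empirically by Monte Carlo simulation, which is how such bounds are usually validated. A small sketch follows, assuming scikit-learn's OMP solver is available; the dictionary and signal parameters are synthetic stand-ins, not those used in the paper.

        # Sketch: Monte Carlo estimate of the probability that OMP recovers the exact
        # support of a k-sparse signal under additive white Gaussian noise, the quantity
        # the letter's coherence-based bound lower-bounds.  Assumes scikit-learn; the
        # dictionary and signal parameters are synthetic stand-ins.
        import numpy as np
        from sklearn.linear_model import OrthogonalMatchingPursuit

        rng = np.random.default_rng(3)
        m, n, k, trials, sigma = 64, 128, 4, 200, 0.05

        hits = 0
        for _ in range(trials):
            D = rng.standard_normal((m, n))
            D /= np.linalg.norm(D, axis=0)                   # unit-norm atoms
            support = rng.choice(n, size=k, replace=False)
            x = np.zeros(n)
            x[support] = rng.uniform(1.0, 2.0, k) * rng.choice([-1.0, 1.0], k)
            y = D @ x + sigma * rng.standard_normal(m)
            fit = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(D, y)
            hits += set(np.flatnonzero(fit.coef_)) == set(support)

        print('empirical probability of exact support recovery:', hits / trials)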

  • 7.
    Miandji, Ehsan
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Hajisharif, Saghi
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    A Unified Framework for Compression and Compressed Sensing of Light Fields and Light Field Videos (2019). In: ACM Transactions on Graphics, ISSN 0730-0301, E-ISSN 1557-7368, Vol. 38, no 3, p. 1-18, article id 23. Article in journal (Refereed)
    Abstract [en]

    In this article we present a novel dictionary learning framework designed for compression and sampling of light fields and light field videos. Unlike previous methods, where a single dictionary with one-dimensional atoms is learned, we propose to train a Multidimensional Dictionary Ensemble (MDE). It is shown that learning an ensemble in the native dimensionality of the data promotes sparsity, hence increasing the compression ratio and sampling efficiency. To make maximum use of correlations within the light field data sets, we also introduce a novel nonlocal pre-clustering approach that constructs an Aggregate MDE (AMDE). The pre-clustering not only improves the image quality but also reduces the training time by an order of magnitude in most cases. The decoding algorithm supports efficient local reconstruction of the compressed data, which enables efficient real-time playback of high-resolution light field videos. Moreover, we discuss the application of AMDE for compressed sensing. A theoretical analysis is presented that indicates the required conditions for exact recovery of point-sampled light fields that are sparse under AMDE. The analysis provides guidelines for designing efficient compressive light field cameras. We use various synthetic and natural light field and light field video data sets to demonstrate the utility of our approach in comparison with the state-of-the-art learning-based dictionaries, as well as established analytical dictionaries.
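    A toy sketch of the ensemble idea is given below: each 2D patch is tested against every member of a small ensemble of orthonormal dictionary pairs and encoded with the member that gives the lowest error at a fixed sparsity. The dictionaries here are random stand-ins rather than trained ones, and the paper's nonlocal pre-clustering, higher dimensional tensors and compressed sensing analysis are omitted.

        # Toy sketch of the ensemble idea: encode one 2D patch with the member of a small
        # multidimensional dictionary ensemble (a pair of orthonormal matrices) that gives
        # the lowest error at a fixed sparsity.  Random stand-ins for trained dictionaries;
        # the pre-clustering and compressed sensing parts of the paper are omitted.
        import numpy as np

        rng = np.random.default_rng(4)
        m = n = 8                     # patch size
        k = 5                         # coefficients kept per patch
        ensemble_size = 4

        def random_orthonormal(d):
            q, _ = np.linalg.qr(rng.standard_normal((d, d)))
            return q

        ensemble = [(random_orthonormal(m), random_orthonormal(n)) for _ in range(ensemble_size)]
        patch = rng.standard_normal((m, n))

        best = None
        for idx, (U, V) in enumerate(ensemble):
            C = U.T @ patch @ V                              # full coefficient matrix
            keep = np.argsort(np.abs(C), axis=None)[-k:]     # k largest magnitudes
            C_sparse = np.zeros_like(C)
            C_sparse.flat[keep] = C.flat[keep]
            err = np.linalg.norm(patch - U @ C_sparse @ V.T)
            if best is None or err < best[0]:
                best = (err, idx)

        print('chosen ensemble member:', best[1], ' reconstruction error:', best[0])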

  • 8.
    Miandji, Ehsan
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Compressive Image Reconstruction in Reduced Union of Subspaces (2015). In: Computer Graphics Forum, ISSN 1467-8659, Vol. 34, no 2, p. 33-44. Article in journal (Refereed)
    Abstract [en]

    We present a new compressed sensing framework for reconstruction of incomplete and possibly noisy images and their higher dimensional variants, e.g. animations and light-fields. The algorithm relies on a learning-based basis representation. We train an ensemble of intrinsically two-dimensional (2D) dictionaries that operate locally on a set of 2D patches extracted from the input data. We show that one can convert the problem of 2D sparse signal recovery to an equivalent 1D form, enabling us to utilize a large family of sparse solvers. The proposed framework represents the input signals in a reduced union of subspaces model, while allowing sparsity in each subspace. Such a model leads to a much more sparse representation than widely used methods such as K-SVD. To evaluate our method, we apply it to three different scenarios where the signal dimensionality varies from 2D (images) to 3D (animations) and 4D (light-fields). We show that our method outperforms state-of-the-art algorithms in computer graphics and image processing literature.
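    The conversion from 2D to 1D sparse recovery mentioned above rests on the standard identity vec(U S V^T) = (V kron U) vec(S). The short numerical check below verifies it with random orthonormal stand-ins for the trained dictionaries; it illustrates the identity, not the paper's solver.

        # Numerical check of the identity behind converting 2D sparse recovery to a 1D
        # problem: vec(U S V^T) = (V kron U) vec(S).  A patch that is sparse under a pair
        # of 2D dictionaries is therefore equally sparse under their Kronecker product,
        # so any 1D sparse solver applies.  Random orthonormal stand-ins, not trained
        # dictionaries.
        import numpy as np

        rng = np.random.default_rng(5)
        m, n = 6, 5
        U, _ = np.linalg.qr(rng.standard_normal((m, m)))
        V, _ = np.linalg.qr(rng.standard_normal((n, n)))

        S = np.zeros((m, n))                                  # sparse 2D coefficients
        S.flat[rng.choice(m * n, size=4, replace=False)] = rng.standard_normal(4)

        X = U @ S @ V.T                                       # 2D synthesis model
        lhs = X.flatten(order='F')                            # vec(X), column-major
        rhs = np.kron(V, U) @ S.flatten(order='F')            # (V kron U) vec(S)
        print('max abs difference:', np.max(np.abs(lhs - rhs)))   # ~1e-15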

  • 9.
    Miandji, Ehsan
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Geometry Independent Surface Light Fields for Real Time Rendering of Precomputed Global Illumination (2011). In: Proceedings of SIGRAD 2011 / [ed] Thomas Larsson, Lars Kjelldahl, Kai-Mikael Jää-Aro, Royal Institute of Technology, Stockholm, 2011, p. 27-34. Conference paper (Refereed)
    Abstract [en]

    We present a framework for generating, compressing and rendering of Surface Light Field (SLF) data. Our method is based on radiance data generated using physically based rendering methods. Thus the SLF data is generated directly instead of re-sampling digital photographs. Our SLF representation decouples spatial resolution from geometric complexity. We achieve this by uniform sampling of the spatial dimension of the SLF function. For compression, we use Clustered Principal Component Analysis (CPCA). The SLF matrix is first clustered to low frequency groups of points across all directions. Then we apply PCA to each cluster. The clustering ensures that the within-cluster frequency of data is low, allowing for projection using a few principal components. Finally we reconstruct the CPCA encoded data using an efficient rendering algorithm. Our reconstruction technique ensures seamless reconstruction of discrete SLF data. We applied our rendering method for fast, high quality off-line rendering and real-time illumination of static scenes. The proposed framework is not limited by the complexity of materials or light sources, enabling us to render high quality images describing the full global illumination in a scene.
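    The CPCA compression step can be sketched as follows: cluster the rows of the SLF matrix, then approximate each cluster with a few principal components. The sketch below uses a random stand-in for the SLF matrix and assumes scikit-learn's k-means; it is not the paper's data or implementation.

        # Toy sketch of the CPCA compression step: cluster the rows of the SLF matrix
        # (one row per surface sample, one column per direction), then keep a few
        # principal components per cluster.  Random stand-in data and scikit-learn's
        # k-means; not the paper's data or implementation.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(6)
        slf = rng.standard_normal((2000, 128))               # stand-in SLF matrix
        n_clusters, n_components = 8, 4

        labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(slf)

        recon = np.empty_like(slf)
        for c in range(n_clusters):
            rows = slf[labels == c]
            mean = rows.mean(axis=0)
            _, _, Vt = np.linalg.svd(rows - mean, full_matrices=False)   # PCA via SVD
            basis = Vt[:n_components]                        # principal directions
            coeffs = (rows - mean) @ basis.T                 # per-point projection coefficients
            recon[labels == c] = mean + coeffs @ basis       # low-rank reconstruction

        print('relative reconstruction error:',
              np.linalg.norm(slf - recon) / np.linalg.norm(slf))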

  • 10.
    Miandji, Ehsan
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Learning based compression for real-time rendering of surface light fields (2013). In: SIGGRAPH 2013 Posters, ACM Press, 2013. Conference paper (Other academic)
    Abstract [en]

    Photo-realistic image synthesis in real-time is a key challenge in computer graphics. A number of techniques where the light transport in a scene is pre-computed, compressed and used for real-time image synthesis have been proposed. In this work, we extend this idea and present a technique where the radiance distribution in a scene, including arbitrarily complex materials and light sources, is pre-computed using photo-realistic rendering techniques and stored as surface light fields (SLF) at each surface. An SLF describes the full appearance of each surface in a scene as a 4D function over the spatial and angular domains. An SLF is a complex data set with a large memory footprint often in the order of several GB per object in the scene. The key contribution in this work is a novel approach for compression of surface light fields that enables real-time rendering of complex scenes. Our learning-based compression technique is based on exemplar orthogonal bases (EOB), and trains a compact dictionary of full-rank orthogonal basis pairs with sparse coefficients. Our results outperform the widely used CPCA method in terms of storage cost, visual quality and rendering speed. Compared to PRT techniques for real-time global illumination, our approach is limited to static scenes but can represent high frequency materials and any type of light source in a unified framework.

  • 11.
    Miandji, Ehsan
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Learning Based Compression of Surface Light Fields for Real-time Rendering of Global Illumination Scenes (2013). In: Proceedings of ACM SIGGRAPH ASIA 2013, ACM Press, 2013. Conference paper (Refereed)
    Abstract [en]

    We present an algorithm for compression and real-time rendering of surface light fields (SLF) encoding the visual appearance of objects in static scenes with high frequency variations. We apply a non-local clustering in order to exploit spatial coherence in the SLF data. To efficiently encode the data in each cluster, we introduce a learning based approach, Clustered Exemplar Orthogonal Bases (CEOB), which trains a compact dictionary of orthogonal basis pairs, enabling efficient sparse projection of the SLF data. In addition, we discuss the application of the traditional Clustered Principal Component Analysis (CPCA) on SLF data, and show that in most cases, CEOB outperforms CPCA, K-SVD and spherical harmonics in terms of memory footprint, rendering performance and reconstruction quality. Our method enables efficient reconstruction and real-time rendering of scenes with complex materials and light sources, not possible to render in real-time using previous methods.
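    The rendering-time benefit claimed above comes from the sparsity of the per-point codes: shading one (surface sample, view direction) pair only touches the few basis columns with nonzero coefficients. The following toy sketch illustrates that evaluation with random stand-ins for the trained CEOB bases and codes; it is not the paper's renderer.

        # Toy sketch of the rendering-time evaluation implied above: a cluster stores a
        # pair of orthogonal bases, a surface point stores a handful of (row, column,
        # value) coefficient triplets, and shading one (surface sample, view direction)
        # pair is a k-term sum.  All data are random stand-ins, not a trained CEOB model.
        import numpy as np

        rng = np.random.default_rng(7)
        n_spatial, n_angular, k = 16, 32, 6        # hypothetical SLF patch size and sparsity

        # Per-cluster orthogonal basis pair (stand-ins for trained bases).
        U, _ = np.linalg.qr(rng.standard_normal((n_spatial, n_spatial)))
        V, _ = np.linalg.qr(rng.standard_normal((n_angular, n_angular)))

        # Sparse code of one SLF patch: k (row, column, value) triplets.
        rows = rng.integers(0, n_spatial, k)
        cols = rng.integers(0, n_angular, k)
        vals = rng.standard_normal(k)

        def shade(spatial_idx, angular_idx):
            # Radiance at one (surface sample, view direction) pair: only the k stored
            # basis columns are touched, which is what keeps evaluation real-time.
            return float(np.sum(vals * U[spatial_idx, rows] * V[angular_idx, cols]))

        print(shade(3, 17))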

  • 12.
    Miandji, Ehsan
    et al.
    Sharif University of Technology, Tehran, Iran.
    Sargazi Moghaddam, Mohammad Hadi
    Sharif University of Technology, Tehran, Iran.
    Samavati, Faramarz
    University of Calgary, Calgary, Alberta, Canada.
    Emadi, Mohammad
    Sharif University of Technology, Tehran, Iran.
    Real-time multi-band synthesis of ocean water with new iterative up-sampling technique (2009). In: The Visual Computer, ISSN 0178-2789, E-ISSN 1432-2315, Vol. 25, no 5-7, p. 697-705. Article in journal (Refereed)
    Abstract [en]

    Adapting natural phenomena rendering for real-time applications has become a common practice in computer graphics. We propose a GPU-based multi-band method for optimized synthesis of “far from coast” ocean waves using an empirical Fourier domain model. Instead of performing two independent syntheses for low- and high-band frequencies of ocean waves, we perform only low-band synthesis and employ the results to reproduce high frequency details of the ocean surface by an optimized iterative up-sampling stage. Our experimental results show that this approach greatly improves the performance of the original multi-band synthesis while maintaining image quality.
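    The two-stage structure described above can be illustrated with a small NumPy toy: synthesize only a low-resolution heightfield from a Fourier-domain spectrum, then reach the display resolution by repeated 2x up-sampling instead of a second full synthesis. The spectrum below is a generic stand-in rather than the paper's empirical model, and the plain spectral zero-padding used for up-sampling only interpolates; the paper's optimized iterative scheme additionally reproduces high-frequency detail.

        # Toy NumPy illustration of the two-stage idea: synthesize a low-resolution
        # heightfield from a Fourier-domain spectrum, then reach display resolution by
        # repeated 2x up-sampling instead of a second full synthesis.  The spectrum is a
        # generic stand-in (not the paper's empirical ocean model) and the zero-padding
        # up-sampler only interpolates, unlike the paper's optimized iterative scheme.
        import numpy as np

        rng = np.random.default_rng(8)
        low_res, up_steps = 64, 2                            # 64 -> 256 after two doublings

        kx, ky = np.meshgrid(np.fft.fftfreq(low_res), np.fft.fftfreq(low_res))
        k_mag = np.sqrt(kx ** 2 + ky ** 2)
        amplitude = np.where(k_mag > 0, 1.0 / (1.0 + (40 * k_mag) ** 2), 0.0)
        spectrum = amplitude * np.exp(2j * np.pi * rng.random((low_res, low_res)))

        height = np.fft.ifft2(spectrum).real                 # low-band heightfield

        for _ in range(up_steps):
            # Double the resolution by zero-padding the spectrum (band-limited interpolation).
            f = np.fft.fftshift(np.fft.fft2(height))
            s = f.shape[0]
            padded = np.zeros((2 * s, 2 * s), dtype=complex)
            padded[s // 2:s // 2 + s, s // 2:s // 2 + s] = f
            height = np.fft.ifft2(np.fft.ifftshift(padded)).real * 4   # keep amplitude scale

        print('final heightfield resolution:', height.shape)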

  • 13.
    Miandji, Ehsan
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    On Nonlocal Image Completion Using an Ensemble of Dictionaries (2016). In: 2016 IEEE International Conference on Image Processing (ICIP), IEEE, 2016, p. 2519-2523. Conference paper (Refereed)
    Abstract [en]

    In this paper we consider the problem of nonlocal image completion from random measurements and using an ensemble of dictionaries. Utilizing recent advances in the field of compressed sensing, we derive conditions under which one can uniquely recover an incomplete image with overwhelming probability. The theoretical results are complemented by numerical simulations using various ensembles of analytical and training-based dictionaries.
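    A tiny single-patch illustration of the completion problem is given below: recover missing pixels from a random subset of observations by sparse coding over a fixed 2D DCT dictionary, using only the observed rows of the dictionary. The paper's ensembles of trained dictionaries, nonlocal grouping and uniqueness analysis are not reproduced here; the DCT basis, patch and mask are arbitrary stand-ins.

        # Tiny single-patch illustration: recover missing pixels from a random subset of
        # observations by sparse coding over a fixed 2D DCT dictionary, using only the
        # observed rows of the dictionary.  The ensembles of trained dictionaries and the
        # nonlocal grouping of the paper are not reproduced; patch, mask and sparsity are
        # arbitrary stand-ins.
        import numpy as np

        def dct_matrix(N):
            # Orthonormal DCT-II matrix (rows are basis functions).
            T = np.cos(np.pi * np.outer(np.arange(N), 2 * np.arange(N) + 1) / (2 * N))
            T *= np.sqrt(2.0 / N)
            T[0] /= np.sqrt(2.0)
            return T

        def omp(A, y, k):
            # Plain OMP; for simplicity the masked columns are not re-normalized.
            residual, support = y.copy(), []
            for _ in range(k):
                support.append(int(np.argmax(np.abs(A.T @ residual))))
                coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ coeffs
            x = np.zeros(A.shape[1])
            x[support] = coeffs
            return x

        rng = np.random.default_rng(9)
        N, k = 8, 5
        T = dct_matrix(N)
        A = np.kron(T.T, T.T)                    # vec(X) = A @ vec(C) for X = T.T @ C @ T

        C = np.zeros((N, N))                     # ground-truth patch, exactly k-sparse in 2D DCT
        C.flat[rng.choice(N * N, k, replace=False)] = rng.standard_normal(k)
        patch = T.T @ C @ T

        mask = rng.random(N * N) < 0.6           # observe roughly 60% of the pixels
        y = patch.flatten(order='F')[mask]
        c_hat = omp(A[mask], y, k)
        patch_hat = (A @ c_hat).reshape((N, N), order='F')
        print('max abs completion error:', np.max(np.abs(patch - patch_hat)))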

  • 14.
    Mohseni, Sina
    et al.
    Noshirvani University of Technology, Iran.
    Zarei, Niloofar
    Amirkabir University of Technology, Iran.
    Miandji, Ehsan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Facial Expression Recognition Using Facial Graph (2015). In: Face and Facial Expression Recognition from Real World Videos, Springer-Verlag Berlin, 2015, Vol. 8912, p. 58-66. Conference paper (Refereed)
    Abstract [en]

    Automatic analysis of human facial expression is one of the challenging problems in machine vision systems. It has many applications in human-computer interaction, social robots, deceit detection, interactive video and behavior monitoring. In this paper, we develop a new method for automatic facial expression recognition based on verifying movable facial elements and tracking their nodes across sequential frames. The algorithm plots a face model graph in each frame and extracts features by measuring the ratios of the facial graph sides. Seven facial expressions, including the neutral pose, are classified in this study using support vector machines and other classifiers on the JAFFE database. The approach does not rely on action units, and therefore eliminates errors which are otherwise propagated to the final result due to incorrect initial identification of action units. Experimental results show that analyzing facial movements gives accurate and efficient information for identifying different facial expressions.
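    The feature extraction and classification pipeline described above can be sketched as follows: compute ratios of facial-graph edge lengths from tracked landmark coordinates and feed them to an SVM. In the sketch below the landmark positions, the edge list and the labels are random stand-ins (a real system would track landmarks on the JAFFE images), and scikit-learn is assumed for the classifier.

        # Illustrative pipeline: turn tracked facial landmark coordinates into features by
        # taking ratios of facial-graph edge lengths, then classify the expression with an
        # SVM.  Landmarks, the edge list and labels are random stand-ins (a real system
        # would track landmarks on the JAFFE images); scikit-learn is assumed.
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        rng = np.random.default_rng(10)
        n_samples, n_landmarks, n_expressions = 300, 20, 7

        landmarks = rng.random((n_samples, n_landmarks, 2))      # (x, y) node positions
        labels = rng.integers(0, n_expressions, n_samples)       # 7 expression classes

        # Hypothetical facial-graph edges (pairs of landmark indices).
        edges = [(0, 1), (1, 2), (2, 3), (4, 5), (6, 7), (8, 9), (10, 11), (12, 13)]

        def graph_ratio_features(pts):
            # Ratios of graph side lengths are scale-invariant, unlike raw distances.
            lengths = np.array([np.linalg.norm(pts[a] - pts[b]) for a, b in edges])
            ratios = lengths[:, None] / lengths[None, :]
            return ratios[np.triu_indices(len(edges), k=1)]

        X = np.array([graph_ratio_features(p) for p in landmarks])
        X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)
        clf = SVC(kernel='rbf').fit(X_train, y_train)
        print('held-out accuracy (random data, so roughly chance):', clf.score(X_test, y_test))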
