liu.se: Search for publications in DiVA
1 - 5 of 5
  • 1.
    Gogic, Ivan (Univ Zagreb, Croatia)
    Manhart, Martina (Univ Zagreb, Croatia)
    Pandzic, Igor S. (Univ Zagreb, Croatia)
    Ahlberg, Jörgen (Linköping University, Department of Electrical Engineering, Computer Vision; Linköping University, Faculty of Science & Engineering)
    Fast facial expression recognition using local binary features and shallow neural networks (2020)
    In: The Visual Computer, ISSN 0178-2789, E-ISSN 1432-2315, Vol. 36, no 1, p. 97-112. Article in journal (Refereed)
    Abstract [en]

    Facial expression recognition applications demand accurate and fast algorithms that can run in real time on platforms with limited computational resources. We propose an algorithm that bridges the gap between precise but slow methods and fast but less precise methods. The algorithm combines gentle boost decision trees and neural networks. The gentle boost decision trees are trained to extract highly discriminative feature vectors (local binary features) for each basic facial expression around distinct facial landmark points. These sparse binary features are concatenated and used to jointly optimize facial expression recognition through a shallow neural network architecture. The joint optimization improves the recognition rates of difficult expressions such as fear and sadness. Furthermore, extensive experiments in both within- and cross-database scenarios have been conducted on relevant benchmark data sets for facial expression recognition: CK+, MMI, JAFFE, and SFEW 2.0. The proposed method (LBF-NN) compares favorably with state-of-the-art algorithms while achieving an order of magnitude improvement in execution time.
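
    A minimal sketch of the concatenate-then-classify structure described in the abstract, assuming the per-landmark binary features have already been extracted (the paper learns them with gentle boost decision trees). The layer sizes, function name, and random weights below are illustrative stand-ins, not the trained LBF-NN model.

```python
import numpy as np

# Illustrative sizes; the paper's landmark count and feature dimensions differ.
N_LANDMARKS, BITS_PER_LANDMARK, N_HIDDEN, N_EXPRESSIONS = 51, 64, 128, 7

def predict_expression(local_binary_features, w1, b1, w2, b2):
    """local_binary_features: (N_LANDMARKS, BITS_PER_LANDMARK) array of 0/1 values."""
    x = local_binary_features.reshape(-1)        # concatenate the sparse binary features
    h = np.maximum(0.0, x @ w1 + b1)             # single (shallow) hidden layer
    logits = h @ w2 + b2
    p = np.exp(logits - logits.max())
    return p / p.sum()                           # probability per basic expression

rng = np.random.default_rng(0)
dim = N_LANDMARKS * BITS_PER_LANDMARK
w1, b1 = rng.normal(0.0, 0.01, (dim, N_HIDDEN)), np.zeros(N_HIDDEN)
w2, b2 = rng.normal(0.0, 0.01, (N_HIDDEN, N_EXPRESSIONS)), np.zeros(N_EXPRESSIONS)
features = (rng.random((N_LANDMARKS, BITS_PER_LANDMARK)) < 0.05).astype(float)  # sparse stand-in
print(predict_expression(features, w1, b1, w2, b2))
```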

  • 2.
    Günther, David (Saarbrücken, Germany)
    Reininghaus, Jan (Zuse Institute Berlin, Germany)
    Wagner, Hubert (Lojasiewicza 6, Krakow, Poland)
    Hotz, Ingrid (Zuse Institute Berlin, Germany)
    Efficient Computation of 3D Morse-Smale Complexes and Persistent Homology using Discrete Morse Theory (2012)
    In: The Visual Computer, ISSN 0178-2789, E-ISSN 1432-2315, Vol. 28, no 10, p. 959-969. Article in journal (Refereed)
    Abstract [en]

    We propose an efficient algorithm that computes the Morse–Smale complex for 3D gray-scale images. This complex allows for an efficient computation of persistent homology since it is, in general, much smaller than the input data but still contains all necessary information. Our method improves a recently proposed algorithm to extract the Morse–Smale complex in terms of memory consumption and running time. It also allows for a parallel computation of the complex. The computational complexity of the Morse–Smale complex extraction solely depends on the topological complexity of the input data. The persistence is then computed using the Morse–Smale complex by applying an existing algorithm with a good practical running time. We demonstrate that our method allows for the computation of persistent homology for large data on commodity hardware.
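
    As a toy illustration of the persistence bookkeeping mentioned in the abstract (not the authors' Morse-Smale pipeline, which works on 3D gray-scale images), the sketch below computes 0-dimensional persistence pairs of a 1D scalar field with a union-find sweep; the function name and example values are made up.

```python
def persistence_pairs_1d(values):
    """0-dimensional persistence (birth, death) pairs of a 1D scalar field via union-find."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    parent, birth, pairs = {}, {}, []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]         # path halving
            i = parent[i]
        return i

    for i in order:                               # add vertices in increasing order of value
        parent[i], birth[i] = i, values[i]
        for j in (i - 1, i + 1):
            if j in parent:                       # neighbour already present
                ri, rj = find(i), find(j)
                if ri != rj:
                    # elder rule: the younger component dies at the current level
                    young, old = (ri, rj) if birth[ri] > birth[rj] else (rj, ri)
                    if birth[young] < values[i]:  # skip zero-persistence pairs
                        pairs.append((birth[young], values[i]))
                    parent[young] = old
    return pairs                                  # the global minimum's component never dies

print(persistence_pairs_1d([0.0, 3.0, 1.0, 4.0, 0.5, 5.0]))  # [(1.0, 3.0), (0.5, 4.0)]
```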

  • 3.
    Miandji, Ehsan (Sharif University of Technology, Tehran, Iran)
    Sargazi Moghaddam, Mohammad Hadi (Sharif University of Technology, Tehran, Iran)
    Samavati, Faramarz (University of Calgary, Calgary, Alberta, Canada)
    Emadi, Mohammad (Sharif University of Technology, Tehran, Iran)
    Real-time multi-band synthesis of ocean water with new iterative up-sampling technique (2009)
    In: The Visual Computer, ISSN 0178-2789, E-ISSN 1432-2315, Vol. 25, no 5-7, p. 697-705. Article in journal (Refereed)
    Abstract [en]

    Adapting natural phenomena rendering for real-time applications has become a common practice in computer graphics. We propose a GPU-based multi-band method for optimized synthesis of “far from coast” ocean waves using an empirical Fourier domain model. Instead of performing two independent syntheses for low- and high-band frequencies of ocean waves, we perform only low-band synthesis and employ results to reproduce high frequency details of ocean surface by an optimized iterative up-sampling stage. Our experimental results show that this approach greatly improves the performance of original multi-band synthesis while maintaining image quality.
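
    A minimal NumPy sketch of the two stages described above, assuming a simplified Phillips-like spectrum for the low band and a plain bilinear refinement standing in for the paper's optimized GPU up-sampling; all constants and function names are illustrative.

```python
import numpy as np

def low_band_heightfield(n=64, wind=8.0, seed=0):
    """Synthesize a low-resolution heightfield from a simplified Phillips-like spectrum."""
    rng = np.random.default_rng(seed)
    k = np.fft.fftfreq(n) * 2 * np.pi * n         # angular wave numbers
    kx, ky = np.meshgrid(k, k)
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                                # avoid division by zero at DC
    L = wind**2 / 9.81
    phillips = np.exp(-1.0 / (k2 * L**2)) / k2**2
    phillips[0, 0] = 0.0
    amp = np.sqrt(phillips / 2)
    spectrum = amp * (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
    return np.real(np.fft.ifft2(spectrum))

def iterative_upsample(h, levels=2):
    """Double the resolution `levels` times with a simple periodic bilinear refinement."""
    for _ in range(levels):
        n = h.shape[0]
        fine = np.zeros((2 * n, 2 * n))
        fine[::2, ::2] = h
        fine[1::2, ::2] = 0.5 * (h + np.roll(h, -1, axis=0))
        fine[:, 1::2] = 0.5 * (fine[:, ::2] + np.roll(fine[:, ::2], -1, axis=1))
        h = fine
    return h

coarse = low_band_heightfield()
surface = iterative_upsample(coarse)              # 256x256 heightfield from a 64x64 synthesis
print(surface.shape)
```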

  • 4.
    Tongbuasirilai, Tanaboon (Linköping University, Department of Science and Technology, Media and Information Technology; Linköping University, Faculty of Science & Engineering)
    Unger, Jonas (Linköping University, Department of Science and Technology, Media and Information Technology; Linköping University, Faculty of Science & Engineering)
    Kronander, Joel (Linköping University, Department of Science and Technology, Media and Information Technology; Linköping University, Faculty of Science & Engineering)
    Kurt, Murat (International Computer Institute, Ege University, Izmir, Turkey)
    Compact and intuitive data-driven BRDF models (2020)
    In: The Visual Computer, ISSN 0178-2789, E-ISSN 1432-2315, Vol. 36, no 4, p. 855-872. Article in journal (Refereed)
    Abstract [en]

    Measured materials are rapidly becoming a core component in the photo-realistic image synthesis pipeline. The reason is that data-driven models can easily capture the underlying, fine details that represent the visual appearance of materials, which can be difficult or even impossible to model by hand. There are, however, a number of key challenges that need to be solved in order to enable efficient capture, representation and interaction with real materials. This paper presents two new data-driven BRDF models specifically designed for 1D separability. The proposed 3D and 2D BRDF representations can be factored into three or two 1D factors, respectively, while accurately representing the underlying BRDF data with only small approximation error. We evaluate the models using different parameterizations with different characteristics and show that both the BRDF data itself and the resulting renderings yield more accurate results in terms of both numerical errors and visual results compared to previous approaches. To demonstrate the benefit of the proposed factored models, we present a new Monte Carlo importance sampling scheme and give examples of how they can be used for efficient BRDF capture and intuitive editing of measured materials.
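
    A minimal sketch of the separability idea, assuming a tabulated 2D BRDF slice: the best rank-1 fit from an SVD yields two 1D factors whose outer product approximates the slice. This only illustrates the factored representation; it is not the paper's parameterization, fitting procedure, or importance sampling scheme, and the test data below is synthetic.

```python
import numpy as np

def rank1_factors(brdf_2d):
    """Return 1D factors f, g with brdf_2d ≈ outer(f, g), the best rank-1 fit via SVD."""
    u, s, vt = np.linalg.svd(brdf_2d, full_matrices=False)
    f = u[:, 0] * np.sqrt(s[0])
    g = vt[0, :] * np.sqrt(s[0])
    return f, g

# Synthetic, roughly separable data standing in for a measured BRDF slice.
theta_h = np.linspace(0, np.pi / 2, 90)
theta_d = np.linspace(0, np.pi / 2, 90)
data = np.outer(np.exp(-theta_h**2 / 0.05), 0.2 + np.cos(theta_d)) + 0.01

f, g = rank1_factors(data)
err = np.linalg.norm(data - np.outer(f, g)) / np.linalg.norm(data)
print(f"relative approximation error: {err:.3e}")
```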

  • 5.
    Unger, Jonas (Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA); Linköping University, The Institute of Technology)
    Gustavson, Stefan (Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA); Linköping University, The Institute of Technology)
    Ynnerman, Anders (Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA); Linköping University, The Institute of Technology)
    Spatially Varying Image Based Lighting by Light Probe Sequences, Capture, Processing and Rendering (2007)
    In: The Visual Computer, ISSN 0178-2789, E-ISSN 1432-2315, Vol. 23, no 7, p. 453-465. Article in journal (Refereed)
    Abstract [en]

    We present a novel technique for capturing spatially or temporally resolved light probe sequences, and using them for image based lighting. For this purpose we have designed and built a real-time light probe, a catadioptric imaging system that can capture the full dynamic range of the lighting incident at each point in space at video frame rates, while being moved through a scene. The real-time light probe uses a digital imaging system which we have programmed to capture high quality, photometrically accurate color images of 512×512 pixels with a dynamic range of 10000000:1 at 25 frames per second.

    By tracking the position and orientation of the light probe, it is possible to transform each light probe into a common frame of reference in world coordinates, and map each point and direction in space along the path of motion to a particular frame and pixel in the light probe sequence. We demonstrate our technique by rendering synthetic objects illuminated by complex real world lighting, first by using traditional image based lighting methods and temporally varying light probe illumination, and second an extension to handle spatially varying lighting conditions across large objects and object motion along an extended path.
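
    As a small illustration of the direction-to-pixel mapping mentioned above (the paper's catadioptric probe has its own calibrated mapping), the sketch below uses the standard angular-map parameterization common in image based lighting; the 512x512 resolution matches the text, everything else is illustrative.

```python
import numpy as np

def direction_to_pixel(d, size=512):
    """Angular-map lookup: unit direction d = (x, y, z), +z toward the probe centre."""
    x, y, z = d / np.linalg.norm(d)
    denom = np.hypot(x, y)
    r = 0.0 if denom < 1e-12 else np.arccos(np.clip(z, -1.0, 1.0)) / (np.pi * denom)
    u, v = x * r, y * r                            # both in [-1, 1]
    col = (u + 1.0) * 0.5 * (size - 1)
    row = (1.0 - (v + 1.0) * 0.5) * (size - 1)     # image rows grow downward
    return row, col

# Example: a direction 45 degrees off the probe axis.
print(direction_to_pixel(np.array([1.0, 0.0, 1.0])))
```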
