liu.se: Search for publications in DiVA
1 - 8 of 8
  • 1.
    Dornaika, Fadi
    et al.
    Computer Vision Centre, Autonomous University of Barcelona, Edifici O, Campus UAB, Bellaterra, Barcelona, Spain.
    Ahlberg, Jörgen
    Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Fitting 3D Face Models for Tracking and Active Appearance Model Training. 2006. In: Image and Vision Computing, ISSN 0262-8856, E-ISSN 1872-8138, Vol. 24, no. 9, p. 1010-1024. Article in journal (Refereed)
    Abstract [en]

    In this paper, we consider fitting a 3D deformable face model to continuous video sequences for the tasks of tracking and training. We propose two appearance-based methods that only require a simple statistical facial texture model and do not require any information about an empirical or analytical gradient matrix, since the best search directions are estimated on the fly. The first method computes the fitting using a locally exhaustive and directed search where the 3D head pose and the facial actions are simultaneously estimated. The second method decouples the estimation of these parameters. It computes the 3D head pose using a robust feature-based pose estimator incorporating a facial texture consistency measure. Then, it estimates the facial actions with an exhaustive and directed search. Fitting and tracking experiments demonstrate the feasibility and usefulness of the developed methods. A performance evaluation also shows that the proposed methods can outperform the fitting based on an active appearance model search adopting a pre-computed gradient matrix. Although the proposed schemes are not as fast as the schemes adopting a directed continuous search, they can tackle many disadvantages associated with such approaches.

  • 2.
    Fanani, Nolang
    et al.
    Goethe University, Germany.
    Stuerck, Alina
    Goethe University, Germany.
    Ochs, Matthias
    Goethe University, Germany.
    Bradler, Henry
    Goethe University, Germany.
    Mester, Rudolf
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Goethe University, Germany.
    Predictive monocular odometry (PMO): What is possible without RANSAC and multiframe bundle adjustment? 2017. In: Image and Vision Computing, ISSN 0262-8856, E-ISSN 1872-8138, Vol. 68. Article in journal (Refereed)
    Abstract [en]

    Visual odometry using only a monocular camera faces more algorithmic challenges than stereo odometry. We present a robust monocular visual odometry framework for automotive applications. An extended propagation-based tracking framework is proposed which yields highly accurate (unscaled) pose estimates. Scale is supplied by ground-plane pose estimation employing street-pixel labeling using a convolutional neural network (CNN). The proposed framework has been extensively tested on the KITTI dataset and achieves a higher rank than currently published state-of-the-art monocular methods in the KITTI odometry benchmark. Unlike other VO/SLAM methods, this result is achieved without a loop-closing mechanism, without RANSAC and also without multiframe bundle adjustment. Thus, we challenge the common belief that robust systems can only be built using iterative robustification tools like RANSAC.

  • 3.
    Felsberg, Michael
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Kalkan, Sinan
    University of Göttingen.
    Krüger, Norbert
    University of Southern Denmark.
    Continuous dimensionality characterization of image structures. 2009. In: Image and Vision Computing, ISSN 0262-8856, E-ISSN 1872-8138, Vol. 27, no. 6, p. 628-636. Article in journal (Refereed)
    Abstract [en]

    Intrinsic dimensionality is a concept introduced by statistics and later used in image processing to measure the dimensionality of a data set. In this paper, we introduce a continuous representation of the intrinsic dimension of an image patch in terms of its local spectrum or, equivalently, its gradient field. By making use of a cone structure and barycentric co-ordinates, we can associate three confidences to the three different ideal cases of intrinsic dimensions corresponding to homogeneous image patches, edge-like structures and junctions. The main novelty of our approach is the representation of confidences as prior probabilities which can be used within a probabilistic framework. To show the potential of our continuous representation, we highlight applications in various contexts such as image structure classification, feature detection and localisation, visual scene statistics and optic flow evaluation.

    Download full text (pdf)
    FULLTEXT01
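    The barycentric-confidence idea in the abstract above can be illustrated with a small sketch. This is not the authors' implementation: the use of a structure tensor, the saturation threshold `tau`, and the particular barycentric mapping below are illustrative assumptions.

```python
import numpy as np

def intrinsic_dim_confidences(gx, gy, tau=1.0):
    """Map a patch's gradient field to barycentric confidences
    (c0, c1, c2) for homogeneous / edge-like / junction structure.

    gx, gy: arrays of x- and y-gradients over the patch.
    tau:    illustrative energy level at which a patch counts as
            fully textured (an assumption, not from the paper).
    """
    # 2x2 structure tensor accumulated over the patch
    jxx, jyy, jxy = np.sum(gx * gx), np.sum(gy * gy), np.sum(gx * gy)
    tr = jxx + jyy                       # total gradient energy
    if tr == 0.0:
        return 1.0, 0.0, 0.0             # perfectly homogeneous patch
    disc = np.sqrt(max(tr * tr / 4.0 - (jxx * jyy - jxy * jxy), 0.0))
    l1, l2 = tr / 2.0 + disc, tr / 2.0 - disc  # eigenvalues, l1 >= l2
    s = min(tr / tau, 1.0)               # saturated texture confidence
    c1 = s * (l1 - l2) / tr              # one dominant orientation -> edge
    c2 = s * 2.0 * l2 / tr               # isotropic energy -> junction
    c0 = 1.0 - s                         # leftover -> homogeneous
    return c0, c1, c2                    # barycentric: sums to one
```

    By construction the three confidences are non-negative and sum to one, so each patch maps to a point in a triangle whose corners are the three ideal intrinsic dimensions, mirroring the cone-and-barycentric-coordinate construction described in the abstract.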
  • 4.
    Forssen, Per-Erik
    et al.
    University of British Columbia, Department of Computer Science, Vancouver, BC V6T 1Z4, Canada.
    Moe, Anders
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    View matching with blob features. 2009. In: Image and Vision Computing, ISSN 0262-8856, E-ISSN 1872-8138, Vol. 27, no. 1-2, p. 99-107. Article in journal (Refereed)
    Abstract [en]

    This article introduces a new region-based feature for object recognition and image matching. In contrast to many other region-based features, this one makes use of colour in the feature extraction stage. We perform experiments on the repeatability rate of the features across scale and inclination angle changes, and show that avoiding merging regions connected by only a few pixels improves the repeatability. We introduce two voting schemes that allow us to find correspondences automatically, and compare them with respect to the number of valid correspondences they give and their inlier ratios. We also demonstrate how the matching procedure can be applied to colour correction.

  • 5.
    Granlund, Gösta
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Special issue on Perception, Action and Learning. 2009. In: Image and Vision Computing, ISSN 0262-8856, E-ISSN 1872-8138, Vol. 27, no. 11, p. 1639-1640. Article in journal (Refereed)
  • 6.
    Granlund, Gösta H.
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Knutsson, Hans
    Linköping University, Department of Biomedical Engineering, Medical Informatics. Linköping University, The Institute of Technology.
    Westelius, Carl-Johan
    n/a.
    Wiklund, Johan
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Issues in Robot Vision. 1994. In: Image and Vision Computing, ISSN 0262-8856, E-ISSN 1872-8138, Vol. 12, no. 3, p. 131-148. Article in journal (Refereed)
    Abstract [en]

    In this paper, we discuss certain issues regarding robot vision. The main theme is the importance of the choice of information representation, and we will see its implications in different parts of a robot vision structure. We deal with aspects of pre-attentive versus attentive vision, control mechanisms for low-level focus of attention, and representation of motion as the orientation of hyperplanes in multidimensional time-space. Issues of scale are touched upon, and finally, a depth-from-stereo algorithm based on quadrature filter phase is presented.

  • 7.
    Larsson, Fredrik
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Jonsson, Erik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Simultaneously learning to recognize and control a low-cost robotic arm. 2009. In: Image and Vision Computing, ISSN 0262-8856, E-ISSN 1872-8138, Vol. 27, no. 11, p. 1729-1739. Article in journal (Refereed)
    Abstract [en]

    In this paper, we present a visual servoing method based on a learned mapping between feature space and control space. Using a suitable recognition algorithm, we present and evaluate a complete method that simultaneously learns the appearance and control of a low-cost robotic arm. The recognition part is trained using an "action precedes perception" approach. The novelty of this paper, apart from the visual servoing method per se, is the combination of visual servoing with gripper recognition. We show that we can achieve high-precision positioning without knowing in advance what the robotic arm looks like or how it is controlled.

    Download full text (pdf)
    FULLTEXT01
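    The core idea above, a learned mapping between feature space and control space bootstrapped by issuing actions first, can be sketched with a toy linear plant. This is only a sketch under strong assumptions (a linear plant, ordinary least squares); the paper's actual method also learns the gripper's appearance, which is omitted here, and the matrix `A` below is a hypothetical stand-in for the unknown robot-plus-camera system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy plant: 2 control dimensions produce 3 visual features.
A = rng.normal(size=(3, 2))

# "Action precedes perception": issue random controls, observe features.
U = rng.uniform(-1.0, 1.0, size=(100, 2))        # exploratory controls
F = U @ A.T + 1e-3 * rng.normal(size=(100, 3))   # observed noisy features

# Learn the feature-space -> control-space mapping by least squares.
W, *_ = np.linalg.lstsq(F, U, rcond=None)

def servo(target_features):
    """Predict the control expected to produce the target features."""
    return target_features @ W
```

    Once the mapping is learned, positioning reduces to looking up the control for a desired (reachable) feature vector; no kinematic model of the arm is ever written down, which is the point the abstract makes.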
  • 8.
    Zografos, Vasileios
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Lenz, Reiner
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    The Weibull manifold in low-level image processing: an application to automatic image focusing. 2013. In: Image and Vision Computing, ISSN 0262-8856, E-ISSN 1872-8138, Vol. 31, no. 5, p. 401-417. Article in journal (Refereed)
    Abstract [en]

    In this paper, we introduce a novel framework for low-level image processing and analysis. First, we process images with very simple, difference-based filter functions. Second, we fit the 2-parameter Weibull distribution to the filtered output. This maps each image to the 2D Weibull manifold. Third, we exploit the information geometry of this manifold and solve low-level image processing tasks as minimisation problems on point sets. For a proof-of-concept example, we examine the image autofocusing task. We propose appropriate cost functions together with a simple implicitly-constrained manifold optimisation algorithm and show that our framework compares very favourably against common autofocus methods from the literature. In particular, our approach exhibits the best overall performance in terms of combined speed and accuracy.

    Download full text (pdf)
    Weibull_IMAVIS
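    The first two steps of the framework above (difference filtering, then a two-parameter Weibull fit) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the choice of filter, the fixed-point MLE iteration, and the damping factor are assumptions.

```python
import numpy as np

def fit_weibull(x, iters=300):
    """Maximum-likelihood fit of a 2-parameter Weibull (shape k, scale lam)
    to positive samples x, via damped fixed-point iteration on the shape
    equation 1/k = sum(x^k ln x)/sum(x^k) - mean(ln x)."""
    x = np.asarray(x, dtype=float)
    lx = np.log(x)
    b = lx.mean()
    k = 1.0
    for _ in range(iters):
        xk = x ** k
        k_new = 1.0 / ((xk * lx).sum() / xk.sum() - b)
        k = 0.5 * k + 0.5 * k_new        # damping for stable convergence
    lam = ((x ** k).mean()) ** (1.0 / k)
    return k, lam

def filtered_weibull(image):
    """Apply a simple horizontal difference filter and fit a Weibull
    distribution to the nonzero filter magnitudes; the fitted (k, lam)
    pair is the image's point on the 2D Weibull manifold."""
    r = np.abs(np.diff(np.asarray(image, dtype=float), axis=1)).ravel()
    return fit_weibull(r[r > 0])
```

    An autofocus loop in this framework would sweep the lens, map each frame to its (k, lam) point, and minimise a cost defined on the manifold; the manifold cost functions themselves are beyond this sketch.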