Search for publications in DiVA
1 - 17 of 17
  • 1.
    Nordberg, Klas
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Viksten, Fredrik
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, The Institute of Technology.
    A local geometry based descriptor for 3D data: Addendum on rank and segment extraction (2010). Report (Other academic)
    Abstract [en]

    This document is an addendum to the main text in A local geometry-based descriptor for 3D data applied to object pose estimation by Fredrik Viksten and Klas Nordberg. This addendum gives proofs for propositions stated in the main document. This addendum also details how to extract information from the fourth order tensor referred to as S22 in the main document.

  • 2.
    Nordberg, Klas
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Viksten, Fredrik
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Estimation of a tensor based representation for geometrical 3D primitives based on motion stereo (2004). In: Swedish Symposium on Image Analysis (SSBA), 2004, p. 13-16. Conference paper (Other academic)
    Abstract [en]

    A novel method for estimating a second order scene tensor is described and results using that method on a synthetic image sequence are shown. It is shown that the tensors can be used to represent basic geometrical entities. A short discussion on what work needs to be done to extend the tensorial description herein into a framework for pose estimation is found at the end of the report.

  • 3.
    Nordberg, Klas
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Viksten, Fredrik
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Motion based estimation and representation of 3D surfaces and boundaries (2004). In: International Workshop on Complex Motion (IWCM) / [ed] Bernd Jähne, Rudolf Mester, Erhardt Barth, Hanno Scharr. Berlin/Heidelberg: Springer, 2004. Conference paper (Refereed)
    Abstract [en]

    This paper presents a novel representation for 3D shapes in terms of planar surface patches and their boundaries. The representation is based on a tensor formalism similar to the usual orientation tensor but extends this concept by using projective spaces and a fourth order tensor, even though the practical computations can be made in normal matrix algebra. This paper also discusses the possibility of estimating the proposed representation from motion fields generated by a calibrated camera moving in the scene. One method based on 3D spatio-temporal orientation tensors is presented and results from this method are included.

  • 4.
    Sommer, Gerald
    et al.
    Cognitive Systems Group, Christian-Albrechts-University, Kiel, Germany.
    Granlund, Gösta
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Granert, Oliver
    Cognitive Systems Group, Christian-Albrechts-University, Kiel, Germany.
    Krause, Martin
    Cognitive Systems Group, Christian-Albrechts-University, Kiel, Germany.
    Nordberg, Klas
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Perwass, Christian
    Cognitive Systems Group, Christian-Albrechts-University, Kiel, Germany.
    Söderberg, Robert
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Viksten, Fredrik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Chavarria, Marco
    Cognitive Systems Group, Christian-Albrechts-University, Kiel, Germany.
    Information Society Technologies (IST) programme: Final Report (2005). Report (Other academic)
    Abstract [en]

    To summarize, the VISATEC project was initiated to combine the specific scientific competencies of the research groups at CAU and LiU, together with the industrial view on vision applications, in order to develop novel, more robust algorithms for object localization and recognition. This goal was achieved by a two-fold strategy, whereby on the one hand more robust basic algorithms were developed and on the other hand a method for the combination of these algorithms was devised. In particular, the latter confirmed the consortium’s belief that an appropriate combination of a number of basic algorithms leads to more robust results than a single method could achieve.

    However, multi-cue integration is just one of many algorithms that were developed in the VISATEC project. All developed algorithms are described in some detail in the remainder of this report. An overview of the respective publications can be found in the appendix.

    Despite some difficulties that were encountered along the way, we as a consortium feel that the VISATEC project was a success. That this is not only our opinion is reflected in the outcome of the final review. We believe that the work done during the three years of the project not only furthered our understanding of the matter, but also added to the knowledge within the scientific community and showed new possibilities for industrial vision applications.

  • 5.
    Viksten, Fredrik
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, The Institute of Technology.
    Local Features for Range and Vision-Based Robotic Automation (2010). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Robotic automation has been a part of state-of-the-art manufacturing for many decades. Robotic manipulators are used for tasks such as welding, painting and pick-and-place operations. Robotic manipulators are quite flexible and adaptable to new tasks, but a typical robot-based production cell requires extensive specification of the robot motion and construction of tools and fixtures for material handling. This incurs a large effort in both time and monetary expense. The task of a vision system in this setting is to simplify the control and guidance of the robot and to reduce the need for supporting material handling machinery.

    This dissertation examines the performance and properties of current state-of-the-art local features within the setting of object pose estimation. This is done through an extensive set of experiments replicating various potential problems to which a vision system in a robotic cell could be subjected. The dissertation presents new local features which are shown to increase the performance of object pose estimation. A new local descriptor details how to use log-polar sampled image patches for truly rotation-invariant matching. This representation is also extended to use a scale-space interest point detector, which in turn makes it very competitive in our experiments. A number of variations of already available descriptors are constructed, resulting in new and competitive features, among them a scale-space based Patch-duplet.

    In this dissertation a successful vision-based object pose estimation system is extended for multi-cue integration, yielding increased robustness and accuracy. Robustness is increased through algorithmic multi-cue integration, combining the individual strengths of multiple local features. Increased accuracy is achieved by utilizing manipulator movement and applying temporal multi-cue integration. This is implemented using a real flexible robotic manipulator arm.

    Besides the work done on local features for ordinary image data, a number of local features for range data have also been developed. This dissertation describes the theory behind, and the application of, the scene tensor for the problem of object pose estimation. The scene tensor is a fourth order tensor representation using projective geometry. It is shown how to use the scene tensor as a detector as well as how to apply it to the task of object pose estimation. The object pose estimation system is extended to work with 3D data.

    A novel way of handling sampling of range data when constructing a detector is discussed. A volume rasterization method is presented and the classic Harris detector is adapted to it. Finally, a novel region detector, called Maximally Robust Range Regions, is presented. All developed detectors are compared in a detector repeatability test.

    List of papers
    1. A Local Single-Patch Feature for Pose Estimation Using the Log-Polar Transform: Revised Version
    (English) Manuscript (preprint) (Other academic)
    Abstract [en]

    This paper presents a local image feature, based on the log-polar transform, which together with the Fourier transform enables feature matching invariant to orientation and scale changes. It is shown that this feature can be used for pose estimation of 3D objects with unknown pose, with cluttered background and with occlusion. The proposed method is compared to a previously published one and the new feature is found to be about as good as or better than the old one for this task.

    National Category
    Computer Vision and Robotics (Autonomous Systems)
    Identifiers
    urn:nbn:se:liu:diva-57326 (URN)
    Note
    This is a revised version of http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-48200
    Available from: 2010-06-16 Created: 2010-06-16 Last updated: 2018-01-12 Bibliographically approved
    2. Increasing Pose Estimation Performance using Multi-cue Integration
    2006 (English) In: IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2006, p. 3760-3767. Conference paper, Published paper (Refereed)
    Abstract [en]

    We have developed a system which integrates the information output from several pose estimation algorithms and from several views of the scene. It is tested in a real setup with a robotic manipulator. It is shown that integrating pose estimates from several algorithms increases both the accuracy and the robustness of the pose estimation compared to using only a single algorithm. It is shown that increased robustness can be achieved by using pose estimation algorithms based on complementary features, so-called algorithmic multi-cue integration (AMC). Furthermore, it is also shown that increased accuracy can be achieved by integrating pose estimation results from different views of the scene, so-called temporal multi-cue integration (TMC). Temporal multi-cue integration is the most interesting aspect of this paper.

    Place, publisher, year, edition, pages
    IEEE, 2006
    Series
    Robotics and Automation, ISSN 1050-4729
    National Category
    Computer Vision and Robotics (Autonomous Systems)
    Identifiers
    urn:nbn:se:liu:diva-37180 (URN), 10.1109/ROBOT.2006.1642277 (DOI), 33871 (Local ID), 0-7803-9505-0 (ISBN), 33871 (Archive number), 33871 (OAI)
    Conference
    IEEE International Conference on Robotics and Automation (ICRA), May 15-19, Orlando, Florida, USA
    Projects
    VISATEC
    Available from: 2009-10-10 Created: 2009-10-10 Last updated: 2018-01-13
    3. A Local Geometry-Based Descriptor for 3D Data Applied to Object Pose Estimation
    2010 (English) Manuscript (preprint) (Other academic)
    Abstract [en]

    A local descriptor for 3D data, the scene tensor, is presented together with novel applications. It can describe multiple planar segments in a local 3D region; for the case of up to three segments it is possible to recover the geometry of the local region, in terms of the size, position and orientation of each of the segments, from the descriptor. In the setting of range data, this property makes the descriptor unique compared to other popular local descriptors, such as spin images or point signatures. The estimation of the descriptor can be based on 3D orientation tensors that, for example, can be computed directly from surface normals, but the representation itself does not depend on a specific estimation method and can also be applied to other types of 3D data, such as motion stereo. A series of experiments on both real and synthetic range data shows that the proposed representation can be used as an interest point detector with high repeatability. Further, the experiments show that, at such detected points, the local geometric structure can be robustly recovered, even in the presence of noise. Last, we expand a framework for object pose estimation, based on the scene tensor and previously applied successfully to 2D image data, to work also on range data. Pose estimation from real range data shows that there are advantages over similar descriptors in 2D and that the use of range data gives superior performance.

    Keywords
    3D analysis, local descriptor, tensor, range data
    National Category
    Computer Vision and Robotics (Autonomous Systems)
    Identifiers
    urn:nbn:se:liu:diva-57328 (URN), LiTH-ISY-R-2951 (ISRN)
    Note

    See also the addendum which is found at http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-57329

    Available from: 2010-06-16 Created: 2010-06-16 Last updated: 2018-01-12 Bibliographically approved
    4. Point-of-Interest Detection for Range Data
    2008 (English) In: International Conference on Pattern Recognition (ICPR), IEEE, 2008, p. 1-4. Conference paper, Published paper (Refereed)
    Abstract [en]

    Point-of-interest detection is a way of reducing the amount of data that needs to be processed in a certain application and is widely used in 2D image analysis. In 2D image analysis, point-of-interest detection is usually related to extraction of local descriptors for object recognition, classification, registration or pose estimation. In analysis of range data however, some local descriptors have been published in the last decade or so, but most of them do not mention any kind of point-of-interest detection. We here show how to use an extended Harris detector on range data and discuss variants of the Harris measure. All described variants of the Harris detector for 3D should also be usable in medical image analysis, but we focus on the range data case. We do present a performance evaluation of the described variants of the Harris detector on range data.

    Place, publisher, year, edition, pages
    IEEE, 2008
    Series
    Pattern Recognition, ISSN 1051-4651
    National Category
    Computer Vision and Robotics (Autonomous Systems)
    Identifiers
    urn:nbn:se:liu:diva-44928 (URN), 10.1109/ICPR.2008.4761179 (DOI), 78311 (Local ID), 978-1-4244-2175-6 (ISBN), 978-1-4244-2174-9 (ISBN), 78311 (Archive number), 78311 (OAI)
    Conference
    International Conference on Pattern Recognition (ICPR), December 8-11, Tampa, Florida, USA
    Available from: 2009-10-10 Created: 2009-10-10 Last updated: 2018-01-12
    5. Local Image Descriptors for Full 6 Degree-of-Freedom Object Pose Estimation and Recognition
    2010 (English) Article in journal (Refereed), Submitted
    Abstract [en]

    Recent years have seen advances in the estimation of full 6 degree-of-freedom object pose from a single 2D image. These advances have often been presented as a result of, or together with, a new local image feature type. This paper examines how the pose accuracy and recognition robustness of such a system vary with the choice of feature type. This is done by evaluating a full 6 degree-of-freedom pose estimation system for 17 different combinations of local descriptors and detectors. The evaluation is done on data sets with photos of challenging 3D objects with simple and complex backgrounds and varying illumination conditions. We examine the performance of the system under varying levels of object occlusion and find that many features tolerate considerable object occlusion. From the experiments we can conclude that duplet features, which use pairs of interest points, improve pose estimation accuracy compared to single point features. Interestingly, we can also show that many features previously used for recognition and wide-baseline stereo are unsuitable for pose estimation, one notable example being the affine covariant features, which have proven quite successful in other applications. The data sets and their ground truths are available on the web to allow future comparison with novel algorithms.

    Keywords
    bin picking, pose estimation, local features
    National Category
    Computer Vision and Robotics (Autonomous Systems)
    Identifiers
    urn:nbn:se:liu:diva-57330 (URN)
    Note
    This is an extension of http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-44894
    Available from: 2010-06-16 Created: 2010-06-16 Last updated: 2018-01-12
    6. Object Pose Estimation using Variants of Patch-Duplet and SIFT Descriptors
    2010 (English) Report (Other academic)
    Abstract [en]

    Recent years have seen a lot of work on local descriptors. In all published comparisons or evaluations, the now quite well-known SIFT descriptor has been one of the top performers. For the application of object pose estimation, one comparison showed a local descriptor, called the Patch-duplet, to be of equal or better performance than SIFT. This paper examines different properties of those two descriptors by constructing and evaluating hybrids of them. We also extend the object pose estimation experiments of the original Patch-duplet paper. All tests use real images. We also show what impact camera calibration and image rectification have on an application such as object pose estimation. A new feature based on the Patch-duplet descriptor and the DoG detector emerges as the feature of choice under illumination changes in a real-world application.

    Publisher
    p. 15
    Series
    LiTH-ISY-R, ISSN 1400-3902 ; 2950
    National Category
    Computer Vision and Robotics (Autonomous Systems)
    Identifiers
    urn:nbn:se:liu:diva-57331 (URN), LiTH-ISY-R-2950 (ISRN)
    Note
    This is an extension of work found in http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-56268
    Available from: 2010-06-16 Created: 2010-06-16 Last updated: 2018-01-12
    7. Maximally Robust Range Regions
    2010 (English)Report (Other academic)
    Abstract [en]

    In this work we present a region detector, an adaptation to range data of the popular Maximally Stable Extremal Regions (MSER) region detector. We call this new detector Maximally Robust Range Regions (MRRR). We apply the new detector to real range data captured by a commercially available laser range camera. Using these data we evaluate the repeatability of the new detector and compare it to some other recently published detectors. The presented detector shows repeatability that is better than or equal to that of the best of the other detectors. The MRRR detector also offers additional data on the detected regions, which could be crucial in applications such as registration or recognition.

    Publisher
    p. 8
    Series
    LiTH-ISY-R, ISSN 1400-3902 ; 2961
    National Category
    Computer Vision and Robotics (Autonomous Systems)
    Identifiers
    urn:nbn:se:liu:diva-57332 (URN), LiTH-ISY-R-2961 (ISRN)
    Available from: 2010-06-16 Created: 2010-06-16 Last updated: 2018-01-12
  • 6.
    Viksten, Fredrik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Methods for vision-based robotic automation (2005). Licentiate thesis, monograph (Other academic)
    Abstract [en]

    This thesis presents work done within the EC-funded project VISATEC. Due to the different directions of the VISATEC project, this thesis has a few different threads.

    A novel representation scheme for medium level vision features is presented and applied to range sensor data and to image sequences. Some estimation procedures for this representation have been implemented and tested. The representation is tensor based and uses higher order tensors in a projective space. The tensor can hold information on several local structures, including their relative position and orientation. This information can also be extracted from the tensor.

    A number of well-known techniques are combined in a novel way to perform object pose estimation under changes of the object in position, scale and rotation from a single 2D image. The local feature used is a patch which is resampled in a log-polar pattern. A number of local features are matched to a database and the k nearest neighbors vote on object state parameters. The most probable object states are then found through mean-shift clustering.

    A system using multi-cue integration as a means of reaching a higher level of system-level robustness and a higher level of accuracy is developed and evaluated in an industrial-like setting. The system is based around a robotic manipulator arm with an attached camera and is designed to solve parts of the bin-picking problem. The above-mentioned 2D technique for object pose estimation is also evaluated within this system.
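The voting scheme described in the abstract above (local features matched to a database, the k nearest neighbors voting on object state parameters, mean-shift locating the most probable states) can be pictured with a minimal sketch. This is not the thesis implementation; the function names, the brute-force matching and the treatment of the pose as a plain Euclidean vector are simplifying assumptions made for the example.

```python
# Minimal sketch (not the thesis implementation) of descriptor voting for pose
# estimation: every query descriptor finds its k nearest database descriptors,
# each of which votes with the object state (pose) parameters stored with it,
# and the densest cluster of votes is located with a simple mean-shift.
# Treating the pose as a plain Euclidean vector is a simplifying assumption;
# rotation parameters would need a proper parameterization in practice.
import numpy as np

def knn_pose_votes(query_desc, db_desc, db_pose, k=3):
    """query_desc: (Q, D), db_desc: (N, D), db_pose: (N, P). Returns (Q*k, P) votes."""
    votes = []
    for q in query_desc:
        dist = np.linalg.norm(db_desc - q, axis=1)      # brute-force matching
        votes.extend(db_pose[np.argsort(dist)[:k]])     # k nearest neighbors vote
    return np.asarray(votes, dtype=float)

def mean_shift_mode(votes, bandwidth=5.0, iters=50):
    """Gaussian-kernel mean-shift started from the vote closest to the mean."""
    x = votes[np.argmin(np.linalg.norm(votes - votes.mean(axis=0), axis=1))].copy()
    for _ in range(iters):
        w = np.exp(-np.sum((votes - x) ** 2, axis=1) / (2.0 * bandwidth ** 2))
        x_new = (w[:, None] * votes).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < 1e-6:
            break
        x = x_new
    return x
```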

  • 7.
    Viksten, Fredrik
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, The Institute of Technology.
    Object Pose Estimation using Patch-Duplet/SIFT Hybrids (2009). In: Proceedings of the 11th IAPR Conference on Machine Vision Applications / [ed] Hideo SAITO, Tokyo, Japan, 2009, p. 134-137. Conference paper (Refereed)
    Abstract [en]

    Recent years have seen a lot of work on local descriptors. In all published comparisons or evaluations, the now quite well-known SIFT descriptor has been one of the top performers. For the application of object pose estimation, one comparison showed a local descriptor, called the Patch-Duplet, to be of equal or better performance than SIFT. This paper examines different properties of those two descriptors by forming hybrids between them and extending the object pose tests of the original Patch-Duplet paper. All tests use real images.

  • 8.
    Viksten, Fredrik
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, The Institute of Technology.
    Object Pose Estimation using Variants of Patch-Duplet and SIFT Descriptors (2010). Report (Other academic)
    Abstract [en]

    Recent years have seen a lot of work on local descriptors. In all published comparisons or evaluations, the now quite well-known SIFT descriptor has been one of the top performers. For the application of object pose estimation, one comparison showed a local descriptor, called the Patch-duplet, to be of equal or better performance than SIFT. This paper examines different properties of those two descriptors by constructing and evaluating hybrids of them. We also extend the object pose estimation experiments of the original Patch-duplet paper. All tests use real images. We also show what impact camera calibration and image rectification have on an application such as object pose estimation. A new feature based on the Patch-duplet descriptor and the DoG detector emerges as the feature of choice under illumination changes in a real-world application.

  • 9.
    Viksten, Fredrik
    et al.
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, The Institute of Technology.
    Forssén, Per-Erik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Maximally Robust Range Regions (2010). Report (Other academic)
    Abstract [en]

    In this work we present a region detector, an adaptation to range data of the popular Maximally Stable Extremal Regions (MSER) region detector. We call this new detector Maximally Robust Range Regions (MRRR). We apply the new detector to real range data captured by a commercially available laser range camera. Using these data we evaluate the repeatability of the new detector and compare it to some other recently published detectors. The presented detector shows repeatability that is better than or equal to that of the best of the other detectors. The MRRR detector also offers additional data on the detected regions, which could be crucial in applications such as registration or recognition.
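The general extremal-region recipe behind a detector of this kind can be sketched as follows: sweep a threshold over the depth values and keep connected components whose area is nearly unchanged between consecutive levels. This is only an illustrative sketch under simplifying assumptions, not the MRRR algorithm or its robustness criterion; the function name and parameters are invented for the example.

```python
# Illustrative sketch of the general extremal-region recipe on a depth image
# (an assumption for illustration, not the MRRR algorithm or its robustness
# criterion): sweep a threshold over the depth values and keep connected
# components whose area changes little between consecutive levels.
import numpy as np
from scipy.ndimage import label

def stable_range_regions(depth, levels=32, max_rel_change=0.10):
    """depth: 2D array from a range camera. Returns (threshold, mask) pairs."""
    thresholds = np.linspace(depth.min(), depth.max(), levels)
    prev_lab, prev_area = None, None
    stable = []
    for t in thresholds:
        lab, n = label(depth <= t)                      # connected components
        area = np.bincount(lab.ravel(), minlength=n + 1)
        if prev_lab is not None:
            for idx in range(1, n + 1):
                mask = lab == idx
                prev_idx = int(prev_lab[mask][0])       # component one level earlier
                if prev_idx > 0:
                    rel = abs(int(area[idx]) - int(prev_area[prev_idx])) / float(area[idx])
                    if rel < max_rel_change:            # area nearly unchanged -> stable
                        stable.append((float(t), mask))
        prev_lab, prev_area = lab, area
    return stable
```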

  • 10.
    Viksten, Fredrik
    et al.
    Linköping University, Department of Electrical Engineering. Linköping University, The Institute of Technology.
    Forssén, Per-Erik
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Johansson, Björn
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering, Computer Vision.
    Moe, Anders
    SICK/IVP.
    Comparison of Local Image Descriptors for Full 6 Degree-of-Freedom Pose Estimation (2009). In: IEEE ICRA 2009, ISSN 1050-4729, Kobe: IEEE Robotics and Automation Society, 2009, p. 2779-2786. Conference paper (Refereed)
    Abstract [en]

    Recent years have seen advances in the estimation of full 6 degree-of-freedom object pose from a single 2D image. These advances have often been presented as a result of, or together with, a new local image descriptor. This paper examines how the performance of such a system varies with the choice of local descriptor. This is done by comparing the performance of a full 6 degree-of-freedom pose estimation system for fourteen types of local descriptors. The evaluation is done on a database with photos of complex objects with simple and complex backgrounds and varying lighting conditions. From the experiments we can conclude that duplet features, which use pairs of interest points, improve pose estimation accuracy, and that affine covariant features do not work well in current pose estimation frameworks. The data sets and their ground truth are available on the web to allow future comparison with novel algorithms.

  • 11.
    Viksten, Fredrik
    et al.
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, The Institute of Technology.
    Forssén, Per-Erik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Johansson, Björn
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Moe, Anders
    Linköping University, Department of Electrical Engineering. Linköping University, The Institute of Technology.
    Local Image Descriptors for Full 6 Degree-of-Freedom Object Pose Estimation and Recognition (2010). Article in journal (Refereed)
    Abstract [en]

    Recent years have seen advances in the estimation of full 6 degree-of-freedom object pose from a single 2D image. These advances have often been presented as a result of, or together with, a new local image feature type. This paper examines how the pose accuracy and recognition robustness of such a system vary with the choice of feature type. This is done by evaluating a full 6 degree-of-freedom pose estimation system for 17 different combinations of local descriptors and detectors. The evaluation is done on data sets with photos of challenging 3D objects with simple and complex backgrounds and varying illumination conditions. We examine the performance of the system under varying levels of object occlusion and find that many features tolerate considerable object occlusion. From the experiments we can conclude that duplet features, which use pairs of interest points, improve pose estimation accuracy compared to single point features. Interestingly, we can also show that many features previously used for recognition and wide-baseline stereo are unsuitable for pose estimation, one notable example being the affine covariant features, which have proven quite successful in other applications. The data sets and their ground truths are available on the web to allow future comparison with novel algorithms.

  • 12.
    Viksten, Fredrik
    et al.
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, The Institute of Technology.
    Moe, Anders
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    A Local Single-Patch Feature for Pose Estimation Using the Log-Polar Transform: Revised Version. Manuscript (preprint) (Other academic)
    Abstract [en]

    This paper presents a local image feature, based on the log-polar transform, which together with the Fourier transform enables feature matching invariant to orientation and scale changes. It is shown that this feature can be used for pose estimation of 3D objects with unknown pose, with cluttered background and with occlusion. The proposed method is compared to a previously published one and the new feature is found to be about as good as or better than the old one for this task.
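The idea behind such a feature can be illustrated with a minimal sketch: a patch is resampled on a log-polar grid around an interest point, so that rotation becomes a cyclic shift along the angular axis and scaling approximately a shift along the log-radial axis, and the magnitude of the 2D Fourier transform is therefore insensitive to both. This is not the authors' implementation; the sampling parameters and function names are assumptions made for the example.

```python
# Minimal sketch (not the authors' implementation) of a log-polar patch whose
# Fourier magnitude is insensitive to rotation and scale: rotation becomes a
# cyclic shift along the angular axis, scaling approximately a shift along the
# log-radial axis, and the DFT magnitude is invariant to such shifts.
import numpy as np

def log_polar_patch(image, center, n_radial=32, n_angular=32, r_min=2.0, r_max=32.0):
    """Nearest-neighbour resampling of a grayscale image on a log-polar grid."""
    cy, cx = center
    radii = np.exp(np.linspace(np.log(r_min), np.log(r_max), n_radial))
    angles = np.linspace(0.0, 2.0 * np.pi, n_angular, endpoint=False)
    rr, aa = np.meshgrid(radii, angles, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(aa)).astype(int), 0, image.shape[0] - 1)
    xs = np.clip(np.round(cx + rr * np.cos(aa)).astype(int), 0, image.shape[1] - 1)
    return image[ys, xs]                                  # (n_radial, n_angular)

def log_polar_descriptor(image, center):
    patch = log_polar_patch(np.asarray(image, dtype=float), center)
    mag = np.abs(np.fft.fft2(patch))                      # shift-invariant magnitude
    return (mag / (np.linalg.norm(mag) + 1e-12)).ravel()  # unit-length descriptor
```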

  • 13.
    Viksten, Fredrik
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering.
    Moe, Anders
    Linköping University, The Institute of Technology. Linköping University, Department of Electrical Engineering.
    Local single-patch features for pose estimation using the log-polar transform (2005). In: Pattern Recognition and Image Analysis: Second Iberian Conference, IbPRIA 2005, Estoril, Portugal, June 7-9, 2005, Proceedings, Part 1 / [ed] Jorge S. Marques, Nicolás Pérez de la Blanca, Pedro Pina. Berlin/Heidelberg: Springer, 2005, Vol. 3522, p. 44-51. Conference paper (Refereed)
    Abstract [en]

    Finding the geometrical state of an object from a single 2D image is of major importance for many future applications in industrial automation, such as bin picking and expert systems for augmented reality, as well as for a whole range of consumer products including toys and household appliances. Previous research in this field has shown that there are a number of steps that need to fulfill a minimum level of functionality to make the whole system operational all the way from image to pose estimate. Important properties of a real-world system for pose estimation are robustness against changes in scale, lighting conditions and occlusion. Robustness to scale is usually solved by some kind of scale-space approach [9], but there are so far no really good ways to achieve robustness to lighting changes and occlusion. Occlusion is usually handled by using local features, which is also done here. The local feature and the framework for pose estimation presented here have been tested in a setting that is constrained to the case of knowing what object to look for, but with no information on the state of the object. The inspiration for the work presented here comes from active vision and the idea of using steerable sensors with a foveal sampling around each point of interest [11]. Each point of interest detected in this work can be seen as a point of fixation for a steerable camera that then uses foveal sampling as a means of concentrating processing in the area close to that point.

  • 14.
    Viksten, Fredrik
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Nordberg, Klas
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    A Geometry-Based Local Descriptor for Range Data (2007). In: Proceedings of the 9th Biennial Conference of the Australian Pattern Recognition Society on Digital Image Computing Techniques and Applications, Rome, Italy, ACM, 2007, p. 210-217. Conference paper (Refereed)
    Abstract [en]

    We present a novel local descriptor for range data that can describe one or more planes or lines in a local region. It is possible to recover the geometry of the described local region and to extract the size, position and orientation of each local plane or line-like structure from the descriptor. This gives the descriptor a property that other popular local descriptors for range data, such as spin images or point signatures, do not have. The estimation of the descriptor depends on the estimation of surface normals but not on the specific normal estimation method used. It is shown that it is possible to extract how many planar surface regions the descriptor represents and that this could be used as a point-of-interest detector.
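The last point, counting the planar structures represented in a local region, can be illustrated with a much-simplified second-order analogue (an illustration only, not the descriptor of the paper): each point with a unit surface normal contributes the homogeneous vector of its tangent plane, and the rank of the accumulated outer products indicates how many distinct planes the region contains. The function name and tolerance below are assumptions.

```python
# Much-simplified second-order analogue (illustration only, not the paper's
# descriptor): each 3D point with a unit surface normal contributes the
# homogeneous 4-vector of its tangent plane; the rank of the accumulated
# outer products indicates the number of distinct planes in the region.
import numpy as np

def plane_count_estimate(points, normals, tol=1e-6):
    """points, normals: (N, 3) arrays, normals assumed unit length."""
    d = -np.sum(points * normals, axis=1, keepdims=True)  # plane offsets
    planes = np.hstack([normals, d])                       # (N, 4) homogeneous planes
    T = planes.T @ planes                                  # accumulated outer products
    eigvals = np.linalg.eigvalsh(T)
    return int(np.sum(eigvals > tol * eigvals.max()))      # numerical rank
```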

  • 15.
    Viksten, Fredrik
    et al.
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, The Institute of Technology.
    Nordberg, Klas
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    A Local Geometry-Based Descriptor for 3D Data Applied to Object Pose Estimation (2010). Manuscript (preprint) (Other academic)
    Abstract [en]

    A local descriptor for 3D data, the scene tensor, is presented together with novel applications. It can describe multiple planar segments in a local 3D region; for the case of up to three segments it is possible to recover the geometry of the local region, in terms of the size, position and orientation of each of the segments, from the descriptor. In the setting of range data, this property makes the descriptor unique compared to other popular local descriptors, such as spin images or point signatures. The estimation of the descriptor can be based on 3D orientation tensors that, for example, can be computed directly from surface normals, but the representation itself does not depend on a specific estimation method and can also be applied to other types of 3D data, such as motion stereo. A series of experiments on both real and synthetic range data shows that the proposed representation can be used as an interest point detector with high repeatability. Further, the experiments show that, at such detected points, the local geometric structure can be robustly recovered, even in the presence of noise. Last, we expand a framework for object pose estimation, based on the scene tensor and previously applied successfully to 2D image data, to work also on range data. Pose estimation from real range data shows that there are advantages over similar descriptors in 2D and that the use of range data gives superior performance.

  • 16.
    Viksten, Fredrik
    et al.
    Linköping University, Department of Electrical Engineering, Image Coding. Linköping University, The Institute of Technology.
    Nordberg, Klas
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Kalms, Mikael
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, The Institute of Technology.
    Point-of-Interest Detection for Range Data (2008). In: International Conference on Pattern Recognition (ICPR), IEEE, 2008, p. 1-4. Conference paper (Refereed)
    Abstract [en]

    Point-of-interest detection is a way of reducing the amount of data that needs to be processed in a certain application and is widely used in 2D image analysis. In 2D image analysis, point-of-interest detection is usually related to extraction of local descriptors for object recognition, classification, registration or pose estimation. In analysis of range data however, some local descriptors have been published in the last decade or so, but most of them do not mention any kind of point-of-interest detection. We here show how to use an extended Harris detector on range data and discuss variants of the Harris measure. All described variants of the Harris detector for 3D should also be usable in medical image analysis, but we focus on the range data case. We do present a performance evaluation of the described variants of the Harris detector on range data.
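A minimal sketch of a Harris-style measure extended to three dimensions, in the spirit of what the abstract describes, is given below. It assumes the range data have already been rasterized into a volume; the exact Harris variants evaluated in the paper are not reproduced here, and the parameter values are arbitrary.

```python
# Minimal sketch of a Harris-style measure in three dimensions, assuming the
# range data have been rasterized into a volume; not the paper's exact variants.
import numpy as np
from scipy.ndimage import gaussian_filter

def harris_3d(volume, sigma_grad=1.0, sigma_win=2.0, k=0.04):
    """Return a 3D Harris response det(M) - k * trace(M)**3 for the windowed
    structure tensor M built from Gaussian-derivative gradients."""
    vol = np.asarray(volume, dtype=float)
    grads = [gaussian_filter(vol, sigma_grad, order=[int(i == a) for i in range(3)])
             for a in range(3)]                           # first derivatives along x, y, z
    M = np.empty(vol.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            M[..., i, j] = gaussian_filter(grads[i] * grads[j], sigma_win)
    det = np.linalg.det(M)
    tr = np.trace(M, axis1=-2, axis2=-1)
    return det - k * tr ** 3
```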

  • 17.
    Viksten, Fredrik
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Söderberg, Robert
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Nordberg, Klas
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Perwass, Christian
    Increasing Pose Estimation Performance using Multi-cue Integration (2006). In: IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2006, p. 3760-3767. Conference paper (Refereed)
    Abstract [en]

    We have developed a system which integrates the information output from several pose estimation algorithms and from several views of the scene. It is tested in a real setup with a robotic manipulator. It is shown that integrating pose estimates from several algorithms increases both the accuracy and the robustness of the pose estimation compared to using only a single algorithm. It is shown that increased robustness can be achieved by using pose estimation algorithms based on complementary features, so-called algorithmic multi-cue integration (AMC). Furthermore, it is also shown that increased accuracy can be achieved by integrating pose estimation results from different views of the scene, so-called temporal multi-cue integration (TMC). Temporal multi-cue integration is the most interesting aspect of this paper.
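One way to picture the multi-cue integration described above is a simple confidence-weighted fusion of pose hypotheses coming from different algorithms or views, after discarding hypotheses far from the most confident one. This is a hypothetical sketch, not the integration scheme of the paper; the pose parameterization, the inlier threshold and the function name are assumptions.

```python
# Hypothetical sketch of multi-cue pose fusion (not the scheme of the paper):
# pose hypotheses from several algorithms or views are fused by
# confidence-weighted averaging after discarding hypotheses far from the most
# confident one. The pose parameterization and threshold are assumptions.
import numpy as np

def fuse_pose_hypotheses(poses, confidences, inlier_dist=10.0):
    """poses: (M, P) hypotheses from M cues, confidences: (M,)."""
    poses = np.asarray(poses, dtype=float)
    conf = np.asarray(confidences, dtype=float)
    ref = poses[np.argmax(conf)]                            # most confident cue
    inliers = np.linalg.norm(poses - ref, axis=1) < inlier_dist
    w = conf[inliers] / conf[inliers].sum()
    return (w[:, None] * poses[inliers]).sum(axis=0)        # fused estimate
```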
