Nordberg, Klas
Publications (10 of 57)
Zografos, V., Lenz, R., Ringaby, E., Felsberg, M. & Nordberg, K. (2015). Fast segmentation of sparse 3D point trajectories using group theoretical invariants. In: D. Cremers, I. Reid, H. Saito, M.-H. Yang (Eds.), Computer Vision - ACCV 2014, Part IV. Paper presented at the 12th Asian Conference on Computer Vision (ACCV), Singapore, November 1-5, 2014 (pp. 675-691). Springer, vol. 9006
2015 (English)In: Computer Vision - ACCV 2014, Part IV / [ed] D. Cremers, I. Reid, H. Saito, M.-H. Yang, Springer, 2015, Vol. 9006, p. 675-691. Conference paper, Published paper (Refereed)
Abstract [en]

We present a novel approach for segmenting different motions from 3D trajectories. Our approach uses the theory of transformation groups to derive a set of invariants of 3D points located on the same rigid object. These invariants are inexpensive to calculate, involving primarily QR factorizations of small matrices. The invariants are easily converted into a set of robust motion affinities and, with the use of a local sampling scheme and spectral clustering, they can be incorporated into a highly efficient motion segmentation algorithm. We have also captured a new multi-object 3D motion dataset, on which we have evaluated our approach and compared it against state-of-the-art methods from the literature. Our results show that our approach outperforms all of them while being robust to perspective distortions and degenerate configurations.
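The paper's invariants are derived with group theory and computed via QR factorizations of small matrices; as a much simpler stand-in, the sketch below uses one elementary invariant of rigid motion, namely that two points on the same rigid object keep a constant mutual distance, and turns the variance of that distance over time into a motion affinity. All names and the exp(-variance) affinity are illustrative assumptions, not the authors' actual formulation.

```python
import numpy as np

def rigid_affinity(traj_a, traj_b):
    """Affinity between two 3D point trajectories (F x 3 arrays).

    Points on the same rigid body keep a constant mutual distance,
    so low variance of that distance means high affinity.
    """
    dists = np.linalg.norm(traj_a - traj_b, axis=1)
    return float(np.exp(-np.var(dists)))

def random_rotation(rng):
    # QR of a random matrix gives an orthogonal (distance-preserving) map.
    q_mat, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    return q_mat

rng = np.random.default_rng(0)
p, q = np.array([1.0, 0.0, 0.0]), np.array([0.0, 2.0, 0.0])

frames_same, frames_indep = [], []
for _ in range(20):
    r, t = random_rotation(rng), rng.normal(size=3)
    frames_same.append((r @ p + t, r @ q + t))       # shared rigid motion
    r2, t2 = random_rotation(rng), rng.normal(size=3)
    frames_indep.append((r @ p + t, r2 @ q + t2))    # independent motions

same_a, same_b = map(np.array, zip(*frames_same))
ind_a, ind_b = map(np.array, zip(*frames_indep))

print(rigid_affinity(same_a, same_b))   # close to 1: same rigid body
print(rigid_affinity(ind_a, ind_b))     # smaller: different motions
```

In the paper, affinities of this kind feed a spectral clustering step; here they are only computed pairwise.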

Place, publisher, year, edition, pages
Springer, 2015
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 9006
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:liu:diva-114313 (URN); 10.1007/978-3-319-16817-3_44 (DOI); 000362444500044 (ISI); 978-3-319-16816-6 (ISBN); 978-3-319-16817-3 (ISBN)
Conference
12th Asian Conference on Computer Vision (ACCV), Singapore, November 1-5, 2014
Projects
VPS, CUAS, ETT
Available from: 2015-02-18 Created: 2015-02-18 Last updated: 2018-01-31
Piccini, T., Persson, M., Nordberg, K., Felsberg, M. & Mester, R. (2015). Good Edgels to Track: Beating the Aperture Problem with Epipolar Geometry. In: Agapito, Lourdes and Bronstein, Michael M. and Rother, Carsten (Eds.), Computer Vision - ECCV 2014 Workshops, Part II. Paper presented at the 13th European Conference on Computer Vision (ECCV) (pp. 652-664). Springer
2015 (English)In: Computer Vision - ECCV 2014 Workshops, Part II / [ed] Agapito, Lourdes and Bronstein, Michael M. and Rother, Carsten, Springer, 2015, p. 652-664. Conference paper, Published paper (Refereed)
Abstract [en]

An open issue in multiple view geometry and structure from motion, applied to real-life scenarios, is the sparsity of the matched key-points and of the reconstructed point cloud. We present an approach that can significantly improve the density of measured displacement vectors in a sparse matching or tracking setting, exploiting the partial information of the motion field provided by linear oriented image patches (edgels). Our approach assumes that the epipolar geometry of an image pair has already been computed, either in an earlier feature-based matching step or by a robustified differential tracker. We exploit key-points of a lower order, edgels, which cannot provide a unique 2D match on their own, but can be employed if a constraint on the motion is already given. We present a method to extract edgels that can be effectively tracked given a known camera motion scenario, and show how a constrained version of the Lucas-Kanade tracking procedure can efficiently exploit epipolar geometry to reduce the classical KLT optimization to a 1D search problem. The potential of the proposed methods is shown by experiments performed on real driving sequences.
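The constrained tracker described above reduces the 2D search to a 1D search along the epipolar line. The sketch below illustrates that reduction with a brute-force SSD search along a given line direction; the function name, patch size, and the exhaustive search (standing in for the paper's constrained Lucas-Kanade iteration) are all illustrative assumptions.

```python
import numpy as np

def match_along_line(img0, img1, pt, direction, half=3, search=10):
    """Match the patch around integer pixel `pt` of img0 in img1,
    searching only along `direction` (a unit 2D vector, e.g. the
    epipolar line direction).  Exhaustive SSD over a 1D range of
    displacements replaces the usual 2D search."""
    y, x = pt
    ref = img0[y - half:y + half + 1, x - half:x + half + 1]
    best_t, best_ssd = 0, np.inf
    for t in range(-search, search + 1):
        dy = int(round(t * direction[0]))
        dx = int(round(t * direction[1]))
        cand = img1[y + dy - half:y + dy + half + 1,
                    x + dx - half:x + dx + half + 1]
        ssd = float(np.sum((cand - ref) ** 2))
        if ssd < best_ssd:
            best_t, best_ssd = t, ssd
    return best_t

rng = np.random.default_rng(1)
img0 = rng.normal(size=(60, 60))
img1 = np.roll(img0, shift=4, axis=1)  # pure horizontal shift of 4 px

t = match_along_line(img0, img1, pt=(30, 30), direction=(0.0, 1.0))
print(t)  # 4: the displacement recovered by the 1D search
```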

Place, publisher, year, edition, pages
Springer, 2015
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 8926
Keywords
Densification; Tracking; Epipolar geometry; Lucas-Kanade; Feature extraction; Edgels; Edges
National Category
Other Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:liu:diva-121565 (URN); 10.1007/978-3-319-16181-5_50 (DOI); 000362495500050 (ISI); 978-3-319-16180-8 (ISBN)
Conference
13th European Conference on Computer Vision (ECCV)
Available from: 2015-09-25 Created: 2015-09-25 Last updated: 2018-01-23. Bibliographically approved
Meneghetti, G., Danelljan, M., Felsberg, M. & Nordberg, K. (2015). Image alignment for panorama stitching in sparsely structured environments. In: Paulsen, Rasmus R., Pedersen, Kim S. (Eds.). Paper presented at the 19th Scandinavian Conference, SCIA 2015, Copenhagen, Denmark, June 15-17, 2015.
2015 (English)In: / [ed] Paulsen, Rasmus R., Pedersen, Kim S., 2015. Conference paper, Published paper (Refereed)
Abstract [en]

Panorama stitching of sparsely structured scenes is an open research problem. In this setting, feature-based image alignment methods often fail due to a shortage of distinct image features. Instead, direct image alignment methods, such as those based on phase correlation, can be applied. In this paper we investigate correlation-based image alignment techniques for panorama stitching of sparsely structured scenes. We propose a novel image alignment approach based on discriminative correlation filters (DCF), which have recently been successfully applied to visual tracking. Two versions of the proposed DCF-based approach are evaluated on two real and one synthetic panorama dataset of sparsely structured indoor environments. All three datasets consist of images taken on a tripod rotating 360 degrees around the vertical axis through the optical center. We show that the proposed DCF-based methods outperform phase correlation-based approaches on these datasets.
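As background, the phase correlation baseline mentioned in the abstract estimates a pure (cyclic) translation from the cross-power spectrum of two images. A minimal sketch, assuming whole-pixel shifts and ignoring windowing and sub-pixel refinement (it does not implement the proposed DCF method):

```python
import numpy as np

def phase_correlation(img0, img1):
    """Estimate the cyclic translation taking img0 to img1 via the
    normalized cross-power spectrum (translation-only phase correlation)."""
    f0, f1 = np.fft.fft2(img0), np.fft.fft2(img1)
    cross = f1 * np.conj(f0)
    cross /= np.maximum(np.abs(cross), 1e-12)  # keep phase only
    corr = np.real(np.fft.ifft2(cross))        # impulse at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return dy, dx

rng = np.random.default_rng(2)
img0 = rng.normal(size=(64, 64))
img1 = np.roll(img0, shift=(5, 9), axis=(0, 1))  # known cyclic shift

print(phase_correlation(img0, img1))  # (5, 9)
```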

Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349
Keywords
Image alignment, Panorama stitching, Image registration, Phase correlation, Discriminative correlation filters
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:liu:diva-121566 (URN); 10.1007/978-3-319-19665-7_36 (DOI); 978-3-319-19664-0 (ISBN); 978-3-319-19665-7 (ISBN)
Conference
19th Scandinavian Conference, SCIA 2015, Copenhagen, Denmark, June 15-17, 2015
Projects
VPS
Available from: 2015-09-25 Created: 2015-09-25 Last updated: 2018-02-19
Zografos, V. & Nordberg, K. (2011). Fast and accurate motion segmentation using linear combination of views. In: BMVC 2011. Paper presented at the 22nd British Machine Vision Conference (BMVC 2011), Dundee, United Kingdom (pp. 12.1-12.11).
2011 (English)In: BMVC 2011, 2011, p. 12.1-12.11. Conference paper, Published paper (Refereed)
Abstract [en]

We introduce a simple and efficient procedure for the segmentation of rigidly moving objects, imaged under an affine camera model. For this purpose we revisit the theory of "linear combination of views" (LCV), proposed by Ullman and Basri [20], which states that the set of 2D views of an object undergoing 3D rigid transformations is embedded in a low-dimensional linear subspace that is spanned by a small number of basis views. Our work shows that one may use this theory for motion segmentation, and cluster the trajectories of 3D objects using only two 2D basis views. We therefore propose a practical motion segmentation method, built around LCV, that is very simple to implement and use, and in addition very fast, making it well suited for real-time SfM and tracking applications. We have experimented on real image sequences, where we show good segmentation results, comparable to the state of the art in the literature. If we also consider computational complexity, our proposed method is one of the best performers in combined speed and accuracy.
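The LCV property can be checked numerically: under an orthographic (affine) camera, each image coordinate of a novel view is a linear combination of the four coordinate rows of two basis views plus a constant. A minimal sketch with synthetic data; all names are of my own choosing and this is not the paper's segmentation method itself:

```python
import numpy as np

rng = np.random.default_rng(3)
pts3d = rng.normal(size=(3, 50))          # 50 points on one rigid object

def random_rotation(rng):
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    return q

def ortho_view(rng):
    """Orthographic projection of the rigidly rotated + translated points."""
    r, t = random_rotation(rng), rng.normal(size=2)
    return (r @ pts3d)[:2] + t[:, None]   # 2 x 50 image coordinates

x1y1 = ortho_view(rng)                    # basis view 1
x2y2 = ortho_view(rng)                    # basis view 2
x3y3 = ortho_view(rng)                    # novel view to predict

# LCV: each coordinate row of the novel view lies in the span of the
# four basis coordinate rows plus a constant row (for translation).
basis = np.vstack([x1y1, x2y2, np.ones(50)])          # 5 x 50
coef, *_ = np.linalg.lstsq(basis.T, x3y3.T, rcond=None)
pred = (basis.T @ coef).T
print(np.max(np.abs(pred - x3y3)))        # ~0: novel view reconstructed
```

Trajectories from a second, independently moving object would not fit this subspace, which is what makes the residual usable for segmentation.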

National Category
Computer Systems
Identifiers
urn:nbn:se:liu:diva-72921 (URN); 10.5244/C.25.12 (DOI); 1-901725-43-X (ISBN)
Conference
22nd British Machine Vision Conference (BMVC 2011), Dundee, United Kingdom
Available from: 2011-12-16 Created: 2011-12-10 Last updated: 2016-06-09. Bibliographically approved
Nordberg, K. (2011). The Key to Three-View Geometry. International Journal of Computer Vision, 94(3), 282-294
2011 (English)In: International Journal of Computer Vision, ISSN 0920-5691, E-ISSN 1573-1405, Vol. 94, no 3, p. 282-294. Article in journal (Refereed) Published
Abstract [en]

In this article we describe a set of canonical transformations of the image spaces that make the description of three-view geometry very simple. The transformations depend on the three-view geometry, and the canonically transformed trifocal tensor T' takes the form of a sparse array in which 17 elements in well-defined positions are zero. It has a linear relation to the camera matrices and to two of the fundamental matrices, a third order relation to the third fundamental matrix, a second order relation to the other two trifocal tensors, and first order relations to the 10 three-view all-point matching constraints. In this canonical form, it is also simple to determine whether the corresponding camera configuration is degenerate or co-linear. An important property of the three canonical transformations of the image spaces is that they are in SO(3). The 9 parameters needed to determine these transformations and the 9 parameters that determine the elements of T' together provide a minimal parameterization of the tensor. It does not suffer from the multiple maps or multiple solutions that affect other parameterizations, and is therefore simple to use. It also provides an implicit representation of the trifocal internal constraints: the sparse canonical representation of the trifocal tensor can be determined if and only if it is consistent with its internal constraints. In the non-ideal case, the canonical transformation can be determined by solving a minimization problem, and a simple algorithm for determining the solution is provided. This allows us to extend the standard linear method for estimation of the trifocal tensor to include a constraint enforcement as a final step, similar to the constraint enforcement of the fundamental matrix.

Experimental evaluation of this extended linear estimation method shows that it significantly reduces the geometric error of the resulting tensor, but on average the algebraic estimation method is even better. For a small percentage of cases, however, the extended linear method gives a smaller geometric error, implying that it can be used as a complement to the algebraic method for these cases.
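For reference, the three-view all-point matching constraint that the abstract refers to can be written, in the standard Hartley-Zisserman trifocal tensor notation (not this paper's canonical coordinates), as:

```latex
% Point-point-point incidence relation for corresponding image points
% x, x', x''; T_1, T_2, T_3 are the 3x3 slices of the trifocal tensor
% and [.]_x denotes the skew-symmetric cross-product matrix of a vector.
[\mathbf{x}']_{\times} \Bigl( \sum_{i=1}^{3} x_i \, T_i \Bigr) [\mathbf{x}'']_{\times} = 0_{3\times 3}
```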

Place, publisher, year, edition, pages
Springer Verlag, 2011
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-67285 (URN); 10.1007/s11263-011-0428-0 (DOI)
Projects
ELLIIT, GARNICS
Available from: 2011-04-07 Created: 2011-04-07 Last updated: 2017-12-11
Nordberg, K. & Viksten, F. (2010). A local geometry based descriptor for 3D data: Addendum on rank and segment extraction.
2010 (English)Report (Other academic)
Abstract [en]

This document is an addendum to the main text in A Local Geometry-Based Descriptor for 3D Data Applied to Object Pose Estimation by Fredrik Viksten and Klas Nordberg. This addendum gives proofs for propositions stated in the main document. It also details how to extract information from the fourth order tensor referred to as S22 in the main document.

Series
LiTH-ISY-R, ISSN 1400-3902 ; 2951
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:liu:diva-57329 (URN)
Available from: 2010-06-16 Created: 2010-06-16 Last updated: 2018-01-12. Bibliographically approved
Viksten, F. & Nordberg, K. (2010). A Local Geometry-Based Descriptor for 3D Data Applied to Object Pose Estimation.
2010 (English)Manuscript (preprint) (Other academic)
Abstract [en]

A local descriptor for 3D data, the scene tensor, is presented together with novel applications. It can describe multiple planar segments in a local 3D region; for the case of up to three segments it is possible to recover the geometry of the local region, in terms of the size, position and orientation of each of the segments, from the descriptor. In the setting of range data, this property makes the descriptor unique compared to other popular local descriptors, such as spin images or point signatures. The estimation of the descriptor can be based on 3D orientation tensors that, for example, can be computed directly from surface normals, but the representation itself does not depend on a specific estimation method and can also be applied to other types of 3D data, such as motion stereo. A series of experiments on both real and synthetic range data show that the proposed representation can be used as an interest point detector with high repeatability. Further, the experiments show that, at such detected points, the local geometric structure can be robustly recovered, even in the presence of noise. Last, we expand a framework for object pose estimation, based on the scene tensor and previously applied successfully to 2D image data, to work also on range data. Pose estimation from real range data shows that there are advantages over similar descriptors in 2D and that use of range data gives superior performance.
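The scene tensor itself is fourth order; as a simpler second-order illustration of the same idea, the 3D orientation (structure) tensor built from surface normals already encodes the number of distinct planar orientations in a neighborhood through its rank. The sketch below is a toy example under that assumption, not the paper's descriptor:

```python
import numpy as np

def orientation_tensor(normals):
    """Sum of outer products n n^T over unit surface normals (N x 3).
    The number of significant eigenvalues equals the number of
    distinct plane orientations contributing normals."""
    return normals.T @ normals

def significant_rank(t, tol=1e-8):
    return int(np.sum(np.linalg.eigvalsh(t) > tol))

one_plane = np.tile([0.0, 0.0, 1.0], (100, 1))              # all normals +z
two_planes = np.vstack([one_plane,
                        np.tile([1.0, 0.0, 0.0], (100, 1))])  # +z and +x

print(significant_rank(orientation_tensor(one_plane)))   # 1
print(significant_rank(orientation_tensor(two_planes)))  # 2
```

Unlike this second-order tensor, the fourth-order scene tensor additionally recovers the size and position of each segment, which is what the abstract highlights.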

Keywords
3D analysis, local descriptor, tensor, range data
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:liu:diva-57328 (URN); LiTH-ISY-R-2951 (ISRN)
Note

See also the addendum which is found at http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-57329

Available from: 2010-06-16 Created: 2010-06-16 Last updated: 2018-01-12. Bibliographically approved
Nordberg, K. & Zografos, V. (2010). Multibody motion segmentation using the geometry of 6 points in 2D images. In: International Conference on Pattern Recognition (ISSN 1051-4651). Paper presented at the 20th International Conference on Pattern Recognition (ICPR), 23-26 Aug. 2010 (pp. 1783-1787). Institute of Electrical and Electronics Engineers (IEEE)
2010 (English)In: International Conference on Pattern Recognition (ISSN 1051-4651), Institute of Electrical and Electronics Engineers (IEEE), 2010, p. 1783-1787. Conference paper, Published paper (Refereed)
Abstract [en]

We propose a method for segmenting an arbitrary number of moving objects using the geometry of 6 points in 2D images to infer motion consistency. This geometry allows us to determine whether or not observations of 6 points over several frames are consistent with a rigid 3D motion. The matching between observations of the 6 points and an estimated model of their configuration in 3D space is quantified in terms of a geometric error derived from distances between the points and 6 corresponding lines in the image. This leads to a simple motion inconsistency score, based on the geometric errors of 6 points, that in the ideal case should be zero when the motion of the points can be explained by a rigid 3D motion. Initial point clusters are determined in the spatial domain and merged in the motion trajectory domain based on this score. Each point is then assigned to the cluster that gives the lowest score. Our algorithm has been tested with real image sequences from the Hopkins155 database with very good results, competing with the state-of-the-art methods, particularly for degenerate motion sequences. In contrast to the motion segmentation methods based on multi-body factorization, which assume an affine camera model, the proposed method allows the mapping from 3D space to the 2D image to be fully projective.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2010
Series
International Conference on Pattern Recognition, ISSN 1051-4651
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-63166 (URN); 10.1109/ICPR.2010.440 (DOI); 978-1-4244-7542-1 (ISBN)
Conference
20th International Conference on Pattern Recognition (ICPR), 23-26 Aug. 2010
Available from: 2010-12-20 Created: 2010-12-13 Last updated: 2016-06-09. Bibliographically approved
Wiklund, J., Nordberg, K. & Felsberg, M. (2010). Software architecture and middleware for artificial cognitive systems. In: International Conference on Cognitive Systems.
2010 (English)In: International Conference on Cognitive Systems, 2010Conference paper, Published paper (Other academic)
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-58322 (URN)
Projects
DIPLECS
Available from: 2010-08-11 Created: 2010-08-11 Last updated: 2016-05-04
Zografos, V., Nordberg, K. & Ellis, L. (2010). Sparse motion segmentation using multiple six-point consistencies. In: The 2nd International Workshop on Video Event Categorization, Tagging and Retrieval (VECTaR 2010). Paper presented at the International Workshops on Computer Vision, ACCV 2010, Queenstown, New Zealand (pp. 338-348).
2010 (English)In: The 2nd International Workshop on Video Event Categorization, Tagging and Retrieval (VECTaR 2010), 2010, p. 338-348. Conference paper, Published paper (Refereed)
Abstract [en]

We present a method for segmenting an arbitrary number of moving objects in image sequences using the geometry of 6 points in 2D to infer motion consistency. The method has been evaluated on the Hopkins155 database and surpasses current state-of-the-art methods such as SSC, both in terms of overall performance on two and three motions and in terms of maximum errors. The method works by finding initial clusters in the spatial domain, and then classifying each remaining point as belonging to the cluster that minimizes a motion consistency score. In contrast to most other motion segmentation methods, which are based on an affine camera model, the proposed method is fully projective.

Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 6468
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-63170 (URN); 10.1007/978-3-642-22822-3_34 (DOI); 978-3-642-22821-6 (ISBN); 978-3-642-22822-3 (ISBN)
Conference
International Workshops on Computer Vision, ACCV 2010; Queenstown; New Zealand
Available from: 2010-12-20 Created: 2010-12-13 Last updated: 2018-01-30. Bibliographically approved