Adaptive Color Attributes for Real-Time Visual Tracking
Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology. ORCID iD: 0000-0002-6096-3648
Computer Vision Center, CS Dept. Universitat Autonoma de Barcelona, Spain.
2014 (English). In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2014, IEEE Computer Society, 2014, p. 1090-1097. Conference paper, Published paper (Refereed).
Abstract [en]

Visual tracking is a challenging problem in computer vision. Most state-of-the-art visual trackers either rely on luminance information or use simple color representations for image description. In contrast, for object recognition and detection, sophisticated color features combined with luminance have been shown to provide excellent performance. Due to the complexity of the tracking problem, the desired color feature should be computationally efficient and possess a certain amount of photometric invariance while maintaining high discriminative power.

This paper investigates the contribution of color in a tracking-by-detection framework. Our results suggest that color attributes provide superior performance for visual tracking. We further propose an adaptive low-dimensional variant of color attributes. Both quantitative and attribute-based evaluations are performed on 41 challenging benchmark color sequences. The proposed approach improves the baseline intensity-based tracker by 24% in median distance precision. Furthermore, we show that our approach outperforms state-of-the-art tracking methods while running at more than 100 frames per second.
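To make the two ingredients of the abstract concrete, the following is a minimal, hypothetical sketch, not the authors' implementation: RGB pixels are mapped to color-attribute (color-name) probabilities via a precomputed lookup table, and the resulting 11-dimensional representation is compressed with an adaptive low-dimensional projection. The table rgb_to_names, the exact binning, and the PCA-style projection are illustrative assumptions; the paper's adaptive scheme is more involved and updates the projection smoothly over time.

import numpy as np

# Hypothetical lookup-table mapping: each RGB pixel is mapped to a probability
# distribution over 11 linguistic color names (black, blue, brown, grey, green,
# orange, pink, purple, red, white, yellow). `rgb_to_names` is assumed to be a
# (32*32*32, 11) table learned offline.
def color_attributes(image, rgb_to_names):
    """image: H x W x 3 uint8 array -> H x W x 11 color-name probabilities."""
    idx = (image[..., 0].astype(int) // 8) * 1024 \
        + (image[..., 1].astype(int) // 8) * 32 \
        + (image[..., 2].astype(int) // 8)
    return rgb_to_names[idx]

# Sketch of an adaptive low-dimensional projection: keep only the dominant
# eigenvectors of the feature covariance, so the compressed representation
# adapts to the colors actually present on the target.
def adaptive_projection(features, num_dims=2):
    flat = features.reshape(-1, features.shape[-1])
    cov = np.cov(flat, rowvar=False)
    _, vecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    basis = vecs[:, -num_dims:]        # top eigenvectors
    return features @ basis            # H x W x num_dims compressed features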

Place, publisher, year, edition, pages
IEEE Computer Society, 2014. p. 1090-1097
Series
IEEE Conference on Computer Vision and Pattern Recognition. Proceedings, ISSN 1063-6919
National Category
Computer Engineering
Identifiers
URN: urn:nbn:se:liu:diva-105857
DOI: 10.1109/CVPR.2014.143
Scopus ID: 2-s2.0-84911362613
ISBN: 978-147995117-8 (print)
OAI: oai:DiVA.org:liu-105857
DiVA, id: diva2:711538
Conference
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, Ohio, USA, June 24-27, 2014
Note

Publication status: Accepted

Available from: 2014-04-10. Created: 2014-04-10. Last updated: 2018-04-25. Bibliographically approved.
In thesis
1. Learning Convolution Operators for Visual Tracking
2018 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

Visual tracking is one of the fundamental problems in computer vision. Its numerous applications include robotics, autonomous driving, augmented reality and 3D reconstruction. In essence, visual tracking can be described as the problem of estimating the trajectory of a target in a sequence of images. The target can be any image region or object of interest. While humans excel at this task, requiring little effort to perform accurate and robust visual tracking, it has proven difficult to automate. It has therefore remained one of the most active research topics in computer vision.

In its most general form, no prior knowledge about the object of interest or environment is given, except for the initial target location. This general form of tracking is known as generic visual tracking. The unconstrained nature of this problem makes it particularly difficult, yet applicable to a wider range of scenarios. As no prior knowledge is given, the tracker must learn an appearance model of the target on-the-fly. Cast as a machine learning problem, this poses several major challenges, which are addressed in this thesis.

The main purpose of this thesis is the study and advancement of the so-called Discriminative Correlation Filter (DCF) framework, as it has proven particularly suitable for the tracking application. By utilizing properties of the Fourier transform, a correlation filter is discriminatively learned by efficiently minimizing a least-squares objective. The resulting filter is then applied to a new image in order to estimate the target location.
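As a rough illustration of this formulation, the sketch below learns and applies such a filter under simplifying assumptions (a single grayscale feature channel, a precomputed desired Gaussian response, no cosine windowing or online update). The function names and the regularization weight are illustrative; this shows the closed-form Fourier-domain least-squares solution, not the full multi-channel trackers developed in the thesis.

import numpy as np

def learn_filter(x, y, reg=1e-2):
    """Learn a correlation filter from one training patch.
    x: target-centred image patch, y: desired (e.g. Gaussian-shaped) response,
    reg: ridge-regression regularization weight."""
    X = np.fft.fft2(x)
    Y = np.fft.fft2(y)
    # Closed-form least-squares solution in the Fourier domain:
    # H = (Y * conj(X)) / (X * conj(X) + reg)
    return (Y * np.conj(X)) / (X * np.conj(X) + reg)

def detect(H, z):
    """Apply the learned filter to a new patch z; the peak of the response
    map gives the estimated target translation."""
    response = np.real(np.fft.ifft2(H * np.fft.fft2(z)))
    return np.unravel_index(np.argmax(response), response.shape)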

This thesis contributes to the advancement of the DCF methodology in several aspects. The main contribution regards the learning of the appearance model: First, the problem of updating the appearance model with new training samples is covered. Efficient update rules and numerical solvers are investigated for this task. Second, the periodic assumption induced by the circular convolution in DCF is countered by proposing a spatial regularization component. Third, an adaptive model of the training set is proposed to alleviate the impact of corrupted or mislabeled training samples. Fourth, a continuous-space formulation of the DCF is introduced, enabling the fusion of multiresolution features and sub-pixel accurate predictions. Finally, the problems of computational complexity and overfitting are addressed by investigating dimensionality reduction techniques.
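As a concrete example of the first point, a common baseline update rule in DCF trackers keeps running numerator and denominator terms of the filter and blends in each new frame with a fixed learning rate. The sketch below illustrates this simple scheme only (the learning rate and single-channel setting are illustrative assumptions), which the update rules, numerical solvers and adaptive sample model investigated in the thesis aim to improve on.

import numpy as np

def update_model(num, den, x, y, lr=0.02, reg=1e-2):
    """Linear-interpolation model update for a single-channel DCF.
    num, den: running Fourier-domain numerator/denominator (may start as zeros),
    x: new training patch, y: desired response, lr: learning rate."""
    X = np.fft.fft2(x)
    Y = np.fft.fft2(y)
    num = (1 - lr) * num + lr * (Y * np.conj(X))
    den = (1 - lr) * den + lr * (X * np.conj(X) + reg)
    return num, den    # after the first update, the filter is H = num / den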

As a second contribution, different feature representations for tracking are investigated. A particular focus is put on the analysis of color features, which had been largely overlooked in prior tracking research. This thesis also studies the use of deep features in DCF-based tracking. While many vision problems have greatly benefited from the advent of deep learning, it has proven difficult to harness the power of such representations for tracking. In this thesis it is shown that both shallow and deep layers contribute positively. Furthermore, the problem of fusing their complementary properties is investigated.

The final major contribution of this thesis regards the prediction of the target scale. In many applications, it is essential to track the scale, or size, of the target since it is strongly related to the relative distance. A thorough analysis of how to integrate scale estimation into the DCF framework is performed. A one-dimensional scale filter is proposed, enabling efficient and accurate scale estimation.
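A rough sketch of applying such a one-dimensional scale filter at detection time is given below, assuming feature extraction over a scale pyramid has already produced one flattened feature vector per candidate scale. The array shapes and the example scale factors are illustrative assumptions, not the thesis implementation.

import numpy as np

def estimate_scale(scale_samples, H_scale, scale_factors):
    """scale_samples: (num_scales, feat_dim) array, one feature vector per
    candidate scale; H_scale: learned Fourier-domain filter of the same shape;
    scale_factors: relative scales, e.g. 1.02 ** np.arange(-16, 17)."""
    S = np.fft.fft(scale_samples, axis=0)      # FFT along the scale axis
    response = np.real(np.fft.ifft(np.sum(H_scale * S, axis=1)))
    return scale_factors[np.argmax(response)]  # best relative scale change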

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2018
Series
Linköping Studies in Science and Technology. Dissertations, ISSN 0345-7524 ; 1926
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:liu:diva-147543
ISBN: 9789176853320
Public defence
2018-06-11, Ada Lovelace, B-huset, Campus Valla, Linköping, 13:00 (English)
Available from: 2018-05-03. Created: 2018-04-25. Last updated: 2018-05-04. Bibliographically approved.

Open Access in DiVA

fulltext (1132 kB), 5267 downloads
File name: FULLTEXT01.pdf. File size: 1132 kB. Checksum (SHA-512):
060bc065f2eca80d1f7c380c7a78d8c307786f8996611092d3d3836116c48e9ef18b1fb7be91890cfacbf156179343d752e574ee6e45f146a36bdc15c7ccd605
Type: fulltext. Mimetype: application/pdf
software (5512 kB), 0 downloads
File name: SOFTWARE01.zip. File size: 5512 kB. Checksum (SHA-512):
d14991900b9ba26212d75f1d0573f14edcb346de0dd4d87d1b3454621004487383eace4d7aa9418c5a0353abdb89e1b0f8c4d190c45b40a1c915b7e9e075f1d6
Type: software. Mimetype: application/zip

Other links

Publisher's full text
Scopus
Link to software

Authority records

Danelljan, Martin; Shahbaz Khan, Fahad; Felsberg, Michael

