Beyond Correlation Filters: Learning Continuous Convolution Operators for Visual Tracking
Danelljan, Martin: Linköping University, Department of Electrical Engineering, Computer Vision; Linköping University, Faculty of Science & Engineering.
Robinson, Andreas: Linköping University, Department of Electrical Engineering, Computer Vision; Linköping University, Faculty of Science & Engineering.
Khan, Fahad: Linköping University, Department of Electrical Engineering, Computer Vision; Linköping University, Faculty of Science & Engineering.
Felsberg, Michael: Linköping University, Department of Electrical Engineering, Computer Vision; Linköping University, Faculty of Science & Engineering; Linköping University, Center for Medical Image Science and Visualization (CMIV). ORCID iD: 0000-0002-6096-3648
2016 (English). In: Computer Vision - ECCV 2016, Part V, Springer International Publishing, 2016, Vol. 9909, pp. 472-488. Conference paper, published paper (refereed).
Abstract [en]

Discriminative Correlation Filters (DCF) have demonstrated excellent performance for visual object tracking. The key to their success is the ability to efficiently exploit available negative data by including all shifted versions of a training sample. However, the underlying DCF formulation is restricted to single-resolution feature maps, significantly limiting its potential. In this paper, we go beyond the conventional DCF framework and introduce a novel formulation for training continuous convolution filters. We employ an implicit interpolation model to pose the learning problem in the continuous spatial domain. Our proposed formulation enables efficient integration of multi-resolution deep feature maps, leading to superior results on three object tracking benchmarks: OTB-2015 (+5.1% in mean OP), Temple-Color (+4.6% in mean OP), and VOT2015 (20% relative reduction in failure rate). Additionally, our approach is capable of sub-pixel localization, crucial for the task of accurate feature point tracking. We also demonstrate the effectiveness of our learning formulation in extensive feature point tracking experiments.
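The abstract's central idea, posing the learning problem in the continuous spatial domain via an implicit interpolation model, can be illustrated with a minimal sketch. This is a hypothetical 1-D simplification, not the authors' implementation: the cubic convolution kernel, the toy response values, and the dense grid search below are stand-ins for the paper's actual interpolation operator and optimization, chosen only to show how mapping a discrete response to a continuous function enables sub-pixel localization of the peak.

```python
import numpy as np

def cubic_kernel(t, a=-0.75):
    """Standard cubic convolution interpolation kernel b(t).

    One possible choice of interpolation kernel; the paper's operator
    is not restricted to this particular b.
    """
    t = np.abs(t)
    out = np.zeros_like(t)
    m1 = t <= 1
    m2 = (t > 1) & (t < 2)
    out[m1] = (a + 2) * t[m1] ** 3 - (a + 3) * t[m1] ** 2 + 1
    out[m2] = a * t[m2] ** 3 - 5 * a * t[m2] ** 2 + 8 * a * t[m2] - 4 * a
    return out

def interpolate(samples, t):
    """Continuous-domain value J{x}(t) = sum_n x[n] * b(t - n).

    Maps the discrete samples x[n] to a function defined for every
    real-valued t, so responses exist between pixel positions.
    """
    n = np.arange(len(samples))
    return np.sum(samples * cubic_kernel(t - n))

# Toy discrete response whose true maximum lies between indices 3 and 4.
resp = np.array([0.0, 0.1, 0.4, 0.9, 0.95, 0.3, 0.05])

# Dense evaluation on a sub-pixel grid localizes the continuous peak,
# which a purely discrete argmax (index 4) cannot resolve.
grid = np.linspace(0, len(resp) - 1, 601)
vals = np.array([interpolate(resp, t) for t in grid])
peak = grid[np.argmax(vals)]
print(f"continuous peak at t = {peak:.2f}")  # lies between 3 and 4
```

In the paper this continuous view is what allows feature maps of different resolutions (e.g. shallow and deep CNN layers) to be combined in a single domain and what yields the sub-pixel accuracy needed for feature point tracking; the sketch above shows only the interpolation mechanism itself.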

Place, publisher, year, edition, pages
Springer International Publishing, 2016. Vol. 9909, pp. 472-488.
Series
Lecture Notes in Computer Science, ISSN 0302-9743
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:liu:diva-133550
DOI: 10.1007/978-3-319-46454-1_29
ISI: 000389385400029
ISBN: 978-3-319-46454-1; 978-3-319-46453-4 (print)
OAI: oai:DiVA.org:liu-133550
DiVA: diva2:1060848
Conference
14th European Conference on Computer Vision (ECCV)
Available from: 2016-12-30. Created: 2016-12-29. Last updated: 2016-12-30.

Open Access in DiVA: No full text.
Other links: Publisher's full text.

By author/editor: Danelljan, Martin; Robinson, Andreas; Khan, Fahad; Felsberg, Michael
By organisation: Computer Vision; Faculty of Science & Engineering; Center for Medical Image Science and Visualization (CMIV)
