Deep motion features for visual tracking
Gladh, Susanna; Danelljan, Martin; Khan, Fahad Shahbaz; Felsberg, Michael (ORCID iD: 0000-0002-6096-3648)
All authors: Linköping University, The Institute of Technology; Department of Electrical Engineering, Computer Vision.
2016 (English). In: Proceedings of the 23rd International Conference on Pattern Recognition (ICPR 2016), Institute of Electrical and Electronics Engineers (IEEE), 2016, pp. 1243-1248. Conference paper, Published paper (Refereed)
Abstract [en]

Robust visual tracking is a challenging computer vision problem with many real-world applications. Most existing approaches employ hand-crafted appearance features, such as HOG or Color Names. Recently, deep RGB features extracted from convolutional neural networks have been successfully applied to tracking. Despite their success, these features only capture appearance information. On the other hand, motion cues provide discriminative and complementary information that can improve tracking performance. In contrast to visual tracking, deep motion features have been successfully applied to action recognition and video classification tasks. Typically, the motion features are learned by training a CNN on optical flow images extracted from large amounts of labeled videos. This paper presents an investigation of the impact of deep motion features in a tracking-by-detection framework. We further show that hand-crafted, deep RGB, and deep motion features contain complementary information. To the best of our knowledge, we are the first to propose fusing appearance information with deep motion features for visual tracking. Comprehensive experiments clearly suggest that our fusion approach with deep motion features outperforms standard methods relying on appearance information alone.
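The record itself contains no code, but the pipeline the abstract describes, encoding dense optical flow as an image that a motion CNN can consume and fusing the resulting features with appearance features, can be sketched briefly. The snippet below is an illustrative sketch, not the authors' implementation: `flow_image` and `fuse_features` are hypothetical helpers, the Farneback parameters are common defaults, and channel-wise concatenation stands in for whatever fusion the tracker actually uses.

```python
# Illustrative sketch (not the paper's code): encode dense optical flow as a
# 3-channel image, the kind of input motion CNNs are typically trained on,
# then fuse appearance and motion feature maps by channel-wise concatenation.
import cv2
import numpy as np

def flow_image(prev_gray, next_gray):
    """Map dense optical flow to a 3-channel uint8 image (direction -> hue,
    magnitude -> value), following the common HSV flow visualization."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hsv = np.zeros((*prev_gray.shape, 3), dtype=np.uint8)
    hsv[..., 0] = (ang * 180 / np.pi / 2).astype(np.uint8)  # hue in [0, 180)
    hsv[..., 1] = 255
    hsv[..., 2] = cv2.normalize(mag, None, 0, 255,
                                cv2.NORM_MINMAX).astype(np.uint8)
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)

def fuse_features(appearance_maps, motion_maps):
    """Stack (H, W, C_a) appearance maps and (H, W, C_m) motion maps along
    the channel axis; both must share the same spatial resolution."""
    assert appearance_maps.shape[:2] == motion_maps.shape[:2]
    return np.concatenate([appearance_maps, motion_maps], axis=-1)
```

In a tracking-by-detection setting, feature maps like these (e.g. HOG or deep RGB activations alongside motion-CNN activations computed on the flow image) would jointly train the detector; concatenation is simply the most direct way to expose both cues to it.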

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2016, pp. 1243-1248.
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:liu:diva-137896
DOI: 10.1109/ICPR.2016.7899807
ISI: 000406771301042
Scopus ID: 2-s2.0-85019098606
ISBN: 9781509048472 (electronic)
ISBN: 9781509048489 (print)
OAI: oai:DiVA.org:liu-137896
DiVA: diva2:1104308
Conference
The 23rd International Conference on Pattern Recognition (ICPR), Cancun, Mexico, 4-8 December 2016
Available from: 2017-05-31. Created: 2017-05-31. Last updated: 2017-10-05. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text
Scopus

