liu.se – Search for publications in DiVA
Change search
Learning the Model Update for Siamese Trackers
Univ Autonoma Barcelona, Spain.
Univ Autonoma Barcelona, Spain.
Univ Autonoma Barcelona, Spain.
Swiss Fed Inst Technol, Switzerland.
2019 (English). In: 2019 IEEE/CVF International Conference on Computer Vision (ICCV 2019), Seoul, South Korea, Oct 27 – Nov 2, 2019. IEEE Computer Society, 2019, pp. 4009-4018. Conference paper, published paper (refereed).
Abstract [en]

Siamese approaches address the visual tracking problem by extracting an appearance template from the current frame, which is used to localize the target in the next frame. In general, this template is linearly combined with the accumulated template from the previous frame, resulting in an exponential decay of information over time. While such an approach to updating has led to improved results, its simplicity limits the potential gain likely to be obtained by learning to update. Therefore, we propose to replace the handcrafted update function with a method which learns to update. We use a convolutional neural network, called UpdateNet, which given the initial template, the accumulated template and the template of the current frame aims to estimate the optimal template for the next frame. The UpdateNet is compact and can easily be integrated into existing Siamese trackers. We demonstrate the generality of the proposed approach by applying it to two Siamese trackers, SiamFC and DaSiamRPN. Extensive experiments on VOT2016, VOT2018, LaSOT, and TrackingNet datasets demonstrate that our UpdateNet effectively predicts the new target template, outperforming the standard linear update. On the large-scale TrackingNet dataset, our UpdateNet improves the results of DaSiamRPN with an absolute gain of 3.9% in terms of success score. Code and models are available at https://github.com/zhanglichao/updatenet.
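The handcrafted linear update that the abstract contrasts with can be sketched as follows. This is a minimal NumPy toy, not the paper's implementation: the 1-D "templates", the update rate `gamma`, and the function name are illustrative assumptions (real Siamese trackers update deep feature maps).

```python
import numpy as np

def linear_update(acc, current, gamma=0.1):
    """Handcrafted linear update: blend the accumulated template with
    the template extracted from the current frame."""
    return (1 - gamma) * acc + gamma * current

# Toy 1-D "templates"; real trackers operate on deep feature maps.
rng = np.random.default_rng(0)
acc = rng.standard_normal(8)          # template from the initial frame
for _ in range(30):
    acc = linear_update(acc, rng.standard_normal(8))

# After n updates the initial template's coefficient is (1 - gamma)^n,
# i.e. its contribution decays exponentially over time.
print(round((1 - 0.1) ** 30, 4))      # 0.0424
```

UpdateNet replaces this fixed blending rule with a learned function that takes the initial template, the accumulated template, and the current-frame template, and predicts the template for the next frame, so the network can decide how much of each source to retain instead of decaying the past at a fixed exponential rate.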

Place, publisher, year, edition, pages
IEEE Computer Society, 2019, pp. 4009-4018
Series
IEEE International Conference on Computer Vision, ISSN 1550-5499
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:liu:diva-168113
DOI: 10.1109/ICCV.2019.00411
ISI: 000531438104016
ISBN: 978-1-7281-4803-8 (print)
OAI: oai:DiVA.org:liu-168113
DiVA id: diva2:1458510
Conference
IEEE/CVF International Conference on Computer Vision (ICCV)
Note

Funding agencies: Generalitat de Catalunya CERCA Program; Spanish project TIN2016-79717-R

Available from: 2020-08-17. Created: 2020-08-17. Last updated: 2020-08-17.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text

Authority records

Khan, Fahad Shahbaz
