Accurate Tracking by Overlap Maximization
Linköping University, Department of Electrical Engineering, Computer Vision. (Computer Vision Laboratory)
2019 (English). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
Abstract [en]

Visual object tracking is one of the fundamental problems in computer vision, with a wide range of practical applications in e.g. robotics and surveillance. Given a video sequence and the target bounding box in the first frame, a tracker is required to find the target in all subsequent frames. It is a challenging problem due to the limited training data available. An object tracker is generally evaluated using two criteria, namely robustness and accuracy. Robustness refers to the ability of a tracker to track for long durations without losing the target. Accuracy, on the other hand, denotes how accurately a tracker can estimate the target bounding box.
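As a concrete illustration of the accuracy criterion described above (not part of the thesis itself): bounding-box accuracy is commonly measured as the intersection-over-union (IoU) between the predicted and ground-truth boxes. A minimal sketch in Python, assuming boxes are given as (x1, y1, x2, y2) corner coordinates:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

An IoU of 1.0 means a perfect prediction; tracking benchmarks typically average IoU over all frames to score accuracy.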

Recent years have seen significant improvements in tracking robustness. However, the problem of accurate tracking has received less attention. Most current state-of-the-art trackers resort to a naive multi-scale search strategy, which has fundamental limitations. Thus, in this thesis, we aim to develop a general target estimation component which can be used to determine an accurate bounding box for tracking. We will investigate how bounding box estimators used in object detection can be modified for object tracking. The key difference between detection and tracking is that in object detection, the classes to which the objects belong are known. In tracking, however, no prior information is available about the tracked object other than a single image provided in the first frame. We will thus investigate different architectures that utilize the first-frame information to provide target-specific bounding box predictions. We will also investigate how the bounding box predictors can be integrated into a state-of-the-art tracking method to obtain robust as well as accurate tracking.
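The naive multi-scale search strategy mentioned above can be sketched as follows. This is an illustrative simplification, not the method developed in the thesis, and `score_fn` is a hypothetical stand-in for the tracker's confidence score on a candidate box:

```python
def multi_scale_search(score_fn, box, scale_factors=(0.96, 1.0, 1.04)):
    """Evaluate a few scaled copies of the previous box and keep the best.

    box: previous-frame target box as (x1, y1, x2, y2).
    score_fn: callable returning a confidence score for a candidate box.
    """
    cx = (box[0] + box[2]) / 2.0
    cy = (box[1] + box[3]) / 2.0
    w = box[2] - box[0]
    h = box[3] - box[1]
    best_box, best_score = None, float("-inf")
    for s in scale_factors:
        # Rescale width and height jointly about the box center
        candidate = (cx - s * w / 2, cy - s * h / 2,
                     cx + s * w / 2, cy + s * h / 2)
        score = score_fn(candidate)
        if score > best_score:
            best_score, best_box = score, candidate
    return best_box
```

The fundamental limitation is visible in the code: the box can only grow or shrink by one of a few fixed factors, with width and height scaled together, so aspect-ratio changes and fine size variations of the target cannot be captured.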

Place, publisher, year, edition, pages
2019, p. 42
Keywords [en]
Tracking, Accurate Tracking, IoU Prediction, Computer Vision
National Category
Other Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
URN: urn:nbn:se:liu:diva-154653
ISRN: LiTH-ISY-EX--19/5189--SE
OAI: oai:DiVA.org:liu-154653
DiVA, id: diva2:1291564
Subject / course
Computer Vision Laboratory
Presentation
2019-02-15, Algoritmen, 14:42 (English)
Available from: 2019-03-07. Created: 2019-02-25. Last updated: 2019-03-07. Bibliographically approved.

Open Access in DiVA

fulltext (8140 kB), 218 downloads
File information
File name: FULLTEXT01.pdf
File size: 8140 kB
Checksum (SHA-512): 072915ea1a5a2e43d64d3d612da6fa35b24e4865eb723fd0ae78b7e16574d3727e8b379fb7fbf99d6e0f6d9c716f6ffd6d75c8bd927b025875d0e608d9032125
Type: fulltext
Mimetype: application/pdf

Search in DiVA

By author/editor: Bhat, Goutam
By organisation: Computer Vision

Total: 218 downloads
The number of downloads is the sum of all downloads of full texts. It may include e.g. previous versions that are now no longer available.

Total: 311 hits