Search for publications in DiVA (liu.se)
1 - 8 of 8
  • 1.
    Bhat, Goutam
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Danelljan, Martin
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Khan, Fahad Shahbaz
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Inception Institute of Artificial Intelligence, United Arab Emirates.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Combining Local and Global Models for Robust Re-detection (2018). In: Proceedings of AVSS 2018. 2018 IEEE International Conference on Advanced Video and Signal-based Surveillance, Auckland, New Zealand, 27-30 November 2018, Institute of Electrical and Electronics Engineers (IEEE), 2018, pp. 25-30. Conference paper (Refereed)
    Abstract [en]

    Discriminative Correlation Filters (DCF) have demonstrated excellent performance for visual tracking. However, these methods still struggle in occlusion and out-of-view scenarios due to the absence of a re-detection component. While such a component requires global knowledge of the scene to ensure robust re-detection of the target, the standard DCF is only trained on the local target neighborhood. In this paper, we augment the state-of-the-art DCF tracking framework with a re-detection component based on a global appearance model. First, we introduce a tracking confidence measure to detect target loss. Next, we propose a hard negative mining strategy to extract background distractor samples, which are used for training the global model. Finally, we propose a robust re-detection strategy that combines the global and local appearance model predictions. We perform comprehensive experiments on the challenging UAV123 and LTB35 datasets. Our approach shows consistent improvements over the baseline tracker, setting a new state-of-the-art on both datasets.
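
    The three steps named above (a confidence measure for target-loss detection, hard negative mining, and combined re-detection) suggest a simple control flow. Below is a minimal sketch of such confidence-gated re-detection; the peak-to-sidelobe-style confidence, the threshold tau, and the fusion weights are illustrative assumptions, not the paper's actual formulation.

    ```python
    import numpy as np

    def peak_confidence(score_map):
        """Peak-to-sidelobe-style confidence: how far the best response
        stands out from the rest of the map. A common DCF heuristic,
        used here as a stand-in for the paper's confidence measure."""
        peak = float(score_map.max())
        mean, std = float(score_map.mean()), float(score_map.std()) + 1e-8
        return (peak - mean) / std

    def track_step(local_score_map, global_score_map, tau=8.0):
        """Confidence-gated re-detection. tau and the fusion weights
        are illustrative assumptions, not values from the paper."""
        if peak_confidence(local_score_map) >= tau:
            # Confident: trust the local DCF response alone.
            fused = local_score_map
        else:
            # Suspected target loss: let the global appearance model
            # dominate and re-detect over the full search region.
            fused = 0.3 * local_score_map + 0.7 * global_score_map
        y, x = np.unravel_index(np.argmax(fused), fused.shape)
        return x, y
    ```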

  • 2.
    Bhat, Goutam
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Johnander, Joakim
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Danelljan, Martin
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Khan, Fahad Shahbaz
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Unveiling the power of deep tracking (2018). In: Computer Vision – ECCV 2018: 15th European Conference, Munich, Germany, September 8-14, 2018, Proceedings, Part II / [ed] Vittorio Ferrari, Martial Hebert, Cristian Sminchisescu and Yair Weiss, Cham: Springer Publishing Company, 2018, pp. 493-509. Conference paper (Refereed)
    Abstract [en]

    In the field of generic object tracking, numerous attempts have been made to exploit deep features. Despite all expectations, deep trackers are yet to reach an outstanding level of performance compared to methods solely based on handcrafted features. In this paper, we investigate this key issue and propose an approach to unlock the true potential of deep features for tracking. We systematically study the characteristics of both deep and shallow features, and their relation to tracking accuracy and robustness. We identify limited data and low spatial resolution as the main challenges, and propose strategies to counter these issues when integrating deep features for tracking. Furthermore, we propose a novel adaptive fusion approach that leverages the complementary properties of deep and shallow features to improve both robustness and accuracy. Extensive experiments are performed on four challenging datasets. On VOT2017, our approach significantly outperforms the top-performing tracker from the challenge with a relative gain of >17% in EAO.
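
    The adaptive fusion described here weighs the deep and shallow response maps against each other. A minimal sketch under the assumption that a simple sharpness measure serves as the per-map quality score (the paper derives its quality measure differently):

    ```python
    import numpy as np

    def quality(r):
        """Sharpness of a response map as a crude per-map quality score
        (an assumption, not the paper's quality measure)."""
        return float(r.max() - r.mean())

    def adaptive_fuse(r_shallow, r_deep):
        """Weight each response map by its own quality, exploiting that
        shallow features tend to give accurate responses and deep
        features robust ones."""
        q_s, q_d = quality(r_shallow), quality(r_deep)
        w = q_s / (q_s + q_d + 1e-8)
        return w * r_shallow + (1.0 - w) * r_deep
    ```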

  • 3.
    Danelljan, Martin
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Bhat, Goutam
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Gladh, Susanna
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Khan, Fahad Shahbaz
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Deep motion and appearance cues for visual tracking (2019). In: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 124, pp. 74-81. Journal article (Refereed)
    Abstract [en]

    Generic visual tracking is a challenging computer vision problem with numerous applications. Most existing approaches rely on appearance information by employing either hand-crafted features or deep RGB features extracted from convolutional neural networks. Despite their success, these approaches struggle in cases of ambiguous appearance information, leading to tracking failure. In such cases, we argue that motion cues provide discriminative and complementary information that can improve tracking performance. In contrast to visual tracking, deep motion features have been successfully applied for action recognition and video classification tasks. Typically, the motion features are learned by training a CNN on optical flow images extracted from large amounts of labeled videos. In this paper, we investigate the impact of deep motion features in a tracking-by-detection framework. We also evaluate the fusion of hand-crafted, deep RGB, and deep motion features and show that they contain complementary information. To the best of our knowledge, we are the first to propose fusing appearance information with deep motion features for visual tracking. Comprehensive experiments clearly demonstrate that our fusion approach with deep motion features outperforms standard methods relying on appearance information alone.
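
    A sketch of the fusion idea: hand-crafted, deep RGB, and deep motion feature maps combined channel-wise before they enter a tracking-by-detection framework. The plain concatenation and the common-resolution assumption are illustrative simplifications of the paper's multi-channel DCF learning:

    ```python
    import numpy as np

    def fuse_feature_maps(hand_crafted, deep_rgb, deep_motion):
        """Channel-wise concatenation of the three feature families the
        paper evaluates. Each input has shape (C_i, H, W); all maps are
        assumed resampled to a common spatial resolution beforehand.
        deep_motion would come from a CNN applied to optical-flow
        images, mirroring its use in action recognition."""
        assert hand_crafted.shape[1:] == deep_rgb.shape[1:] == deep_motion.shape[1:]
        return np.concatenate([hand_crafted, deep_rgb, deep_motion], axis=0)
    ```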

  • 4.
    Danelljan, Martin
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Bhat, Goutam
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Khan, Fahad Shahbaz
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    ATOM: Accurate tracking by overlap maximization (2019). Conference paper (Refereed)
    Abstract [en]

    While recent years have witnessed astonishing improvements in visual tracking robustness, the advancements in tracking accuracy have been limited. As the focus has been directed towards the development of powerful classifiers, the problem of accurate target state estimation has been largely overlooked. In fact, most trackers resort to a simple multi-scale search in order to estimate the target bounding box. We argue that this approach is fundamentally limited since target estimation is a complex task, requiring high-level knowledge about the object. We address this problem by proposing a novel tracking architecture, consisting of dedicated target estimation and classification components. High-level knowledge is incorporated into the target estimation through extensive offline learning. Our target estimation component is trained to predict the overlap between the target object and an estimated bounding box. By carefully integrating target-specific information, our approach achieves previously unseen bounding box accuracy. We further introduce a classification component that is trained online to guarantee high discriminative power in the presence of distractors. Our final tracking framework sets a new state-of-the-art on five challenging benchmarks. On the new large-scale TrackingNet dataset, our tracker ATOM achieves a relative gain of 15% over the previous best approach, while running at over 30 FPS. Code and models are available at https://github.com/visionml/pytracking.
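
    The target estimation component described above can be used at tracking time by ascending the gradient of the predicted overlap with respect to the box parameters. A hedged sketch of that refinement loop; `iou_predictor` is a hypothetical stand-in for the offline-trained network, and the real interface in the linked pytracking repository differs:

    ```python
    import torch

    def refine_box(iou_predictor, features, box, steps=5, lr=1.0):
        """Bounding-box refinement by gradient ascent on predicted
        overlap. `iou_predictor(features, box)` returns a scalar
        predicted IoU for the candidate box (assumed signature)."""
        box = box.clone().detach().requires_grad_(True)
        for _ in range(steps):
            iou = iou_predictor(features, box)   # scalar predicted overlap
            iou.backward()
            with torch.no_grad():
                box += lr * box.grad             # step uphill on IoU
                box.grad.zero_()
        return box.detach()
    ```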

  • 5.
    Danelljan, Martin
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Bhat, Goutam
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Khan, Fahad Shahbaz
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    ECO: Efficient Convolution Operators for Tracking (2017). In: Proceedings 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Institute of Electrical and Electronics Engineers (IEEE), 2017, pp. 6931-6939. Conference paper (Refereed)
    Abstract [en]

    In recent years, Discriminative Correlation Filter (DCF) based methods have significantly advanced the state-of-the-art in tracking. However, in the pursuit of ever-increasing tracking performance, their characteristic speed and real-time capability have gradually faded. Further, the increasingly complex models, with a massive number of trainable parameters, have introduced the risk of severe over-fitting. In this work, we tackle the key causes behind the problems of computational complexity and over-fitting, with the aim of simultaneously improving both speed and performance. We revisit the core DCF formulation and introduce: (i) a factorized convolution operator, which drastically reduces the number of parameters in the model; (ii) a compact generative model of the training sample distribution, which significantly reduces memory and time complexity, while providing better diversity of samples; (iii) a conservative model update strategy with improved robustness and reduced complexity. We perform comprehensive experiments on four benchmarks: VOT2016, UAV123, OTB-2015, and Temple-Color. When using expensive deep features, our tracker provides a 20-fold speedup and achieves a 13.0% relative gain in Expected Average Overlap compared to the top ranked method [12] in the VOT2016 challenge. Moreover, our fast variant, using hand-crafted features, operates at 60 Hz on a single CPU, while obtaining 65.0% AUC on OTB-2015.
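
    A sketch of the factorized convolution operator, contribution (i): a learned projection matrix P maps the D feature channels down to C << D channels before correlation with a compact filter, which is where the parameter reduction comes from. Spatial-domain shapes and the single-resolution setting are simplifications; the paper works with continuous-domain Fourier coefficients.

    ```python
    import numpy as np

    def factorized_response(x, P, f):
        """ECO-style factorized convolution (simplified).

        x: feature map, shape (D, H, W); P: learned projection, shape
        (D, C) with C << D; f: compact filter, shape (C, H, W). The
        factorization of the full filter through P is what cuts the
        parameter count."""
        z = np.einsum('dc,dhw->chw', P, x)   # project D channels down to C
        resp = np.zeros(x.shape[1:])
        for zc, fc in zip(z, f):
            # Sum of per-channel circular correlations via the FFT.
            resp += np.real(np.fft.ifft2(np.fft.fft2(zc) * np.conj(np.fft.fft2(fc))))
        return resp
    ```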

  • 6.
    Johnander, Joakim
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Bhat, Goutam
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Danelljan, Martin
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Khan, Fahad Shahbaz
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    On the Optimization of Advanced DCF-Trackers (2018). In: Computer Vision – ECCV 2018 Workshops: Munich, Germany, September 8-14, 2018, Proceedings, Part I / [ed] Laura Leal-Taixé and Stefan Roth, Cham: Springer Publishing Company, 2018, pp. 54-69. Conference paper (Refereed)
    Abstract [en]

    Trackers based on discriminative correlation filters (DCF) have recently seen widespread success, and in this work we dive into their numerical core. DCF-based trackers interleave learning of the target detector and target state inference based on this detector. Whereas the original formulation includes a closed-form solution for the filter learning, recently introduced improvements to the framework no longer have known closed-form solutions. Instead, a large-scale linear least-squares problem must be solved each time the detector is updated. We analyze the procedure used to optimize the detector and let the popular scheme introduced with ECO serve as a baseline. The ECO implementation is revisited in detail and several mechanisms are provided with alternatives. With comprehensive experiments, we show which configurations are superior in terms of tracking capabilities and optimization performance.
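
    The large-scale linear least-squares problem mentioned above is, in the ECO baseline, solved iteratively with the Conjugate Gradient method. A textbook CG sketch of that inner loop, with A supplied as a matrix-vector product so the (huge) system matrix never has to be materialized:

    ```python
    import numpy as np

    def conjugate_gradient(A, b, x0, iters=20, tol=1e-6):
        """Textbook CG for Ax = b with symmetric positive-definite A.

        A is a function x -> Ax; b and x0 are 1-D arrays. This is the
        generic algorithm, not ECO's exact implementation."""
        x = x0.copy()
        r = b - A(x)                 # residual
        p = r.copy()                 # search direction
        rs = r @ r
        for _ in range(iters):
            Ap = A(p)
            alpha = rs / (p @ Ap)    # exact line search along p
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p   # conjugate direction update
            rs = rs_new
        return x
    ```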

  • 7.
    Järemo-Lawin, Felix
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Danelljan, Martin
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Tosteberg, Patrik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Bhat, Goutam
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Khan, Fahad Shahbaz
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Deep Projective 3D Semantic Segmentation (2017). In: Computer Analysis of Images and Patterns: 17th International Conference, CAIP 2017, Ystad, Sweden, August 22-24, 2017, Proceedings, Part I / [ed] Michael Felsberg, Anders Heyden and Norbert Krüger, Springer, 2017, pp. 95-107. Conference paper (Refereed)
    Abstract [en]

    Semantic segmentation of 3D point clouds is a challenging problem with numerous real-world applications. While deep learning has revolutionized the field of image semantic segmentation, its impact on point cloud data has been limited so far. Recent attempts, based on 3D deep learning approaches (3D-CNNs), have produced results below expectations. Such methods require voxelizations of the underlying point cloud data, leading to decreased spatial resolution and increased memory consumption. Additionally, 3D-CNNs greatly suffer from the limited availability of annotated datasets.
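
    The abstract is cut off before the method itself, but, consistent with the title, a projective approach segments 2D renderings of the point cloud with an image network and transfers the labels back to the 3D points. A sketch of that label-transfer step under assumed pinhole-camera geometry (the paper's actual projection and multi-view fusion are more elaborate):

    ```python
    import numpy as np

    def labels_from_image(points, K, label_image):
        """Assign each 3D point the semantic label of the pixel it
        projects to, i.e. the re-projection a projective method needs.
        points: (N, 3) in camera coordinates; K: 3x3 intrinsics;
        label_image: (H, W) integer class map from a 2D segmenter.
        Points behind the camera or outside the image get label -1."""
        z = points[:, 2]
        uvw = points @ K.T                       # homogeneous pixel coords
        u = np.round(uvw[:, 0] / np.clip(z, 1e-6, None)).astype(int)
        v = np.round(uvw[:, 1] / np.clip(z, 1e-6, None)).astype(int)
        h, w = label_image.shape
        valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        labels = np.full(len(points), -1, dtype=int)
        labels[valid] = label_image[v[valid], u[valid]]
        return labels
    ```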

  • 8.
    Kristan, Matej
    University of Ljubljana, Slovenia.
    Leonardis, Aleš
    University of Birmingham, United Kingdom.
    Matas, Jirí
    Czech Technical University, Czech Republic.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Pflugfelder, Roman
    Austrian Institute of Technology, Austria / TU Wien, Austria.
    Zajc, Luka Čehovin
    University of Ljubljana, Slovenia.
    Vojíř, Tomáš
    Czech Technical University, Czech Republic.
    Bhat, Goutam
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Lukežič, Alan
    University of Ljubljana, Slovenia.
    Eldesokey, Abdelrahman
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Fernández, Gustavo
    García-Martín, Álvaro
    Iglesias-Arias, Álvaro
    Alatan, A. Aydin
    González-García, Abel
    Petrosino, Alfredo
    Memarmoghadam, Alireza
    Vedaldi, Andrea
    Muhič, Andrej
    He, Anfeng
    Smeulders, Arnold
    Perera, Asanka G.
    Li, Bo
    Chen, Boyu
    Kim, Changick
    Xu, Changsheng
    Xiong, Changzhen
    Tian, Cheng
    Luo, Chong
    Sun, Chong
    Hao, Cong
    Kim, Daijin
    Mishra, Deepak
    Chen, Deming
    Wang, Dong
    Wee, Dongyoon
    Gavves, Efstratios
    Gundogdu, Erhan
    Velasco-Salido, Erik
    Khan, Fahad Shahbaz
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Yang, Fan
    Zhao, Fei
    Li, Feng
    Battistone, Francesco
    De Ath, George
    Subrahmanyam, Gorthi R. K. S.
    Bastos, Guilherme
    Ling, Haibin
    Galoogahi, Hamed Kiani
    Lee, Hankyeol
    Li, Haojie
    Zhao, Haojie
    Fan, Heng
    Zhang, Honggang
    Possegger, Horst
    Li, Houqiang
    Lu, Huchuan
    Zhi, Hui
    Li, Huiyun
    Lee, Hyemin
    Chang, Hyung Jin
    Drummond, Isabela
    Valmadre, Jack
    Martin, Jaime Spencer
    Chahl, Javaan
    Choi, Jin Young
    Li, Jing
    Wang, Jinqiao
    Qi, Jinqing
    Sung, Jinyoung
    Johnander, Joakim
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Henriques, Joao
    Choi, Jongwon
    van de Weijer, Joost
    Herranz, Jorge Rodríguez
    Martínez, José M.
    Kittler, Josef
    Zhuang, Junfei
    Gao, Junyu
    Grm, Klemen
    Zhang, Lichao
    Wang, Lijun
    Yang, Lingxiao
    Rout, Litu
    Si, Liu
    Bertinetto, Luca
    Chu, Lutao
    Che, Manqiang
    Maresca, Mario Edoardo
    Danelljan, Martin
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Yang, Ming-Hsuan
    Abdelpakey, Mohamed
    Shehata, Mohamed
    Kang, Myunggu
    Lee, Namhoon
    Wang, Ning
    Miksik, Ondrej
    Moallem, P.
    Vicente-Moñivar, Pablo
    Senna, Pedro
    Li, Peixia
    Torr, Philip
    Raju, Priya Mariam
    Ruihe, Qian
    Wang, Qiang
    Zhou, Qin
    Guo, Qing
    Martín-Nieto, Rafael
    Gorthi, Rama Krishna
    Tao, Ran
    Bowden, Richard
    Everson, Richard
    Wang, Runling
    Yun, Sangdoo
    Choi, Seokeon
    Vivas, Sergio
    Bai, Shuai
    Huang, Shuangping
    Wu, Sihang
    Hadfield, Simon
    Wang, Siwen
    Golodetz, Stuart
    Ming, Tang
    Xu, Tianyang
    Zhang, Tianzhu
    Fischer, Tobias
    Santopietro, Vincenzo
    Štruc, Vitomir
    Wei, Wang
    Zuo, Wangmeng
    Feng, Wei
    Wu, Wei
    Zou, Wei
    Hu, Weiming
    Zhou, Wengang
    Zeng, Wenjun
    Zhang, Xiaofan
    Wu, Xiaohe
    Wu, Xiao-Jun
    Tian, Xinmei
    Li, Yan
    Lu, Yan
    Law, Yee Wei
    Wu, Yi
    Demiris, Yiannis
    Yang, Yicai
    Jiao, Yifan
    Li, Yuhong
    Zhang, Yunhua
    Sun, Yuxuan
    Zhang, Zheng
    Zhu, Zheng
    Feng, Zhen-Hua
    Wang, Zhihui
    He, Zhiqun
    The Sixth Visual Object Tracking VOT2018 Challenge Results (2018). In: Computer Vision – ECCV 2018 Workshops: Munich, Germany, September 8–14, 2018, Proceedings, Part I / [ed] Laura Leal-Taixé and Stefan Roth, Cham: Springer Publishing Company, 2018, pp. 3-53. Conference paper (Refereed)
    Abstract [en]

    The Visual Object Tracking challenge VOT2018 is the sixth annual tracker benchmarking activity organized by the VOT initiative. Results of over eighty trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The evaluation included the standard VOT and other popular methodologies for short-term tracking analysis, as well as a "real-time" experiment simulating a situation where a tracker processes images as if provided by a continuously running sensor. A long-term tracking sub-challenge has been introduced to the set of standard VOT sub-challenges. The new sub-challenge focuses on long-term tracking properties, namely coping with target disappearance and reappearance. A new dataset has been compiled and a performance evaluation methodology that focuses on long-term tracking capabilities has been adopted. The VOT toolkit has been updated to support both the standard short-term and the new long-term tracking sub-challenges. Performance of the tested trackers typically exceeds standard baselines by a large margin. The source code for most of the trackers is publicly available from the VOT page. The dataset, the evaluation kit and the results are publicly available at the challenge website (http://votchallenge.net).
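
    The "real-time" experiment mentioned above can be pictured as frames arriving at sensor rate while a slow tracker keeps repeating its last prediction. A simplified simulation of that idea; the timing model and the `tracker.update` interface are assumptions, and the VOT toolkit's actual protocol is more detailed:

    ```python
    import time

    def realtime_run(frames, tracker, fps=30.0):
        """Frames arrive every 1/fps seconds; while the tracker is
        still busy with an earlier frame, its previous box is
        repeated instead of being recomputed."""
        predictions, last_box, busy_until = [], None, 0.0
        for i, frame in enumerate(frames):
            arrival = i / fps
            if last_box is None or arrival >= busy_until:
                t0 = time.perf_counter()
                last_box = tracker.update(frame)       # hypothetical API
                busy_until = arrival + (time.perf_counter() - t0)
            predictions.append(last_box)
        return predictions
    ```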
