Multi-grained Temporal Prototype Learning for Few-shot Video Object Segmentation
Mohamed bin Zayed University of Artificial Intelligence, United Arab Emirates.
Northwestern Polytechnical University, Xi'an, China.
National University of Singapore, Singapore.
Northwestern Polytechnical University, Xi'an, China.
2023 (English). In: 2023 IEEE/CVF International Conference on Computer Vision (ICCV 2023), IEEE Computer Society, 2023, p. 18816-18825. Conference paper, published paper (refereed).
Abstract [en]

Few-Shot Video Object Segmentation (FSVOS) aims to segment objects in a query video that belong to the same category as a few annotated support images; this task, however, has seldom been explored. In this work, we build on IPMT, a state-of-the-art few-shot image segmentation method that combines external support guidance with adaptive query guidance cues, and propose to leverage multi-grained temporal guidance information to handle the temporally correlated nature of video data. We decompose the query video information into a clip prototype and a memory prototype, which capture local and long-term internal temporal guidance, respectively. Frame prototypes are further used for each frame independently to provide fine-grained adaptive guidance and to enable bidirectional clip-frame prototype communication. To reduce the influence of noisy memory, we leverage the structural similarity relation among different predicted regions and the support set to select reliable memory frames. Furthermore, a new segmentation loss is proposed to enhance the category discriminability of the learned prototypes. Experimental results demonstrate that our proposed video IPMT model significantly outperforms previous models on two benchmark datasets. Code is available at https://github.com/nankepan/VIPMT.
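To illustrate the prototype idea the abstract describes, the sketch below shows one common way such prototypes are computed in few-shot segmentation work: masked average pooling of frame features to get per-frame prototypes, averaging those into a clip-level prototype, and scoring query pixels by cosine similarity to a prototype. This is a hypothetical simplification for intuition only, not the authors' implementation (the actual model in the paper uses transformer-based prototype learning; all function names here are invented):

```python
import numpy as np

def masked_average_pool(features, mask):
    """Prototype vector: average of feature map inside a binary mask.
    features: (C, H, W) array, mask: (H, W) binary array."""
    denom = mask.sum() + 1e-6
    return (features * mask[None]).sum(axis=(1, 2)) / denom

def temporal_prototypes(clip_features, clip_masks):
    """Hypothetical multi-grained pooling: one prototype per frame
    (fine-grained guidance) plus a clip prototype averaged over frames
    (local temporal guidance).
    clip_features: (T, C, H, W), clip_masks: (T, H, W)."""
    frame_protos = np.stack([masked_average_pool(f, m)
                             for f, m in zip(clip_features, clip_masks)])
    clip_proto = frame_protos.mean(axis=0)
    return frame_protos, clip_proto

def cosine_sim_map(features, prototype):
    """Per-pixel cosine similarity to a prototype: a coarse
    segmentation score map for a query frame."""
    f = features / (np.linalg.norm(features, axis=0, keepdims=True) + 1e-6)
    p = prototype / (np.linalg.norm(prototype) + 1e-6)
    return np.einsum('chw,c->hw', f, p)
```

A memory prototype could be formed the same way by pooling over previously segmented frames; the paper's memory-frame selection via structural similarity is not sketched here.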

Place, publisher, year, edition, pages
IEEE Computer Society, 2023, p. 18816-18825
Series
IEEE International Conference on Computer Vision, ISSN 1550-5499, E-ISSN 2380-7504
National Category
Computer graphics and computer vision
Identifiers
URN: urn:nbn:se:liu:diva-203143
DOI: 10.1109/ICCV51070.2023.01729
ISI: 001169500503041
ISBN: 9798350307184 (electronic)
ISBN: 9798350307191 (print)
OAI: oai:DiVA.org:liu-203143
DiVA id: diva2:1855326
Conference
IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, October 2-6, 2023
Note

Funding agencies: National Natural Science Foundation of China [62071388, 62136007, U20B2065, 62036005]; Fundamental Research Funds for the Central Universities [D5000230057]

Available from: 2024-04-30 Created: 2024-04-30 Last updated: 2025-02-07

Open Access in DiVA

No full text in DiVA

Search in DiVA

By author/editor: Khan, Fahad
By organisation: Computer Vision, Faculty of Science & Engineering
National category: Computer graphics and computer vision
