Dense Gaussian Processes for Few-Shot Segmentation
Johnander, Joakim. Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. Zenseact AB, Sweden. ORCID iD: 0000-0003-2553-3367
Edstedt, Johan. Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. ORCID iD: 0000-0002-1019-8634
Felsberg, Michael. Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. ORCID iD: 0000-0002-6096-3648
Khan, Fahad. Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. Mohamed bin Zayed University of Artificial Intelligence, United Arab Emirates.
2022 (English). In: Computer Vision - ECCV 2022, Part XXIX, Springer International Publishing, 2022, Vol. 13689, pp. 217-234. Conference paper, published paper (refereed).
Abstract [en]

Few-shot segmentation is a challenging dense prediction task, which entails segmenting a novel query image given only a small annotated support set. The key problem is thus to design a method that aggregates detailed information from the support set, while being robust to large variations in appearance and context. To this end, we propose a few-shot segmentation method based on dense Gaussian process (GP) regression. Given the support set, our dense GP learns the mapping from local deep image features to mask values, capable of capturing complex appearance distributions. Furthermore, it provides a principled means of capturing uncertainty, which serves as another powerful cue for the final segmentation, obtained by a CNN decoder. Instead of a one-dimensional mask output, we further exploit the end-to-end learning capabilities of our approach to learn a high-dimensional output space for the GP. Our approach sets a new state-of-the-art on the PASCAL-5^i and COCO-20^i benchmarks, achieving an absolute gain of +8.4 mIoU in the COCO-20^i 5-shot setting. Furthermore, the segmentation quality of our approach scales gracefully when increasing the support set size, while achieving robust cross-dataset transfer.
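The GP regression step the abstract describes can be sketched compactly: given support-pixel features and their mask values, a GP posterior yields a predictive mean and variance at every query pixel, and the variance is the uncertainty cue the decoder can exploit. The following is a minimal NumPy illustration, not the paper's implementation; the RBF kernel, the `length_scale` and `noise` hyperparameters, and the function name are assumptions for this sketch (the actual method additionally learns the feature extractor and a high-dimensional mask encoding end-to-end).

```python
import numpy as np

def gp_regression(feat_s, y_s, feat_q, length_scale=1.0, noise=1e-2):
    """Toy dense GP regression from support features to mask values.

    feat_s: (N, D) support pixel features
    y_s:    (N, K) mask encodings at those pixels
    feat_q: (M, D) query pixel features
    Returns the posterior mean (M, K) and per-pixel variance (M,).
    """
    def rbf(a, b):
        # Squared Euclidean distances, then the RBF kernel.
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length_scale**2)

    K_ss = rbf(feat_s, feat_s) + noise * np.eye(len(feat_s))
    K_qs = rbf(feat_q, feat_s)
    k_qq_diag = np.ones(len(feat_q))  # rbf(x, x) = 1 for this kernel

    # Standard GP posterior via a Cholesky factorization of K_ss.
    L = np.linalg.cholesky(K_ss)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_s))
    mean = K_qs @ alpha
    v = np.linalg.solve(L, K_qs.T)
    var = k_qq_diag - (v ** 2).sum(0)
    return mean, var
```

At query pixels far from every support feature, the kernel row `K_qs` shrinks toward zero, so the mean falls back to the prior and the variance rises toward one; that rising variance is exactly the kind of uncertainty signal the abstract says serves as an additional cue for the decoder.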

Place, publisher, year, edition, pages
Springer International Publishing, 2022. Vol. 13689, pp. 217-234
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349
HSV category
Identifiers
URN: urn:nbn:se:liu:diva-191405
DOI: 10.1007/978-3-031-19818-2_13
ISI: 000903735000013
ISBN: 9783031198175 (print)
ISBN: 9783031198182 (electronic)
OAI: oai:DiVA.org:liu-191405
DiVA, id: diva2:1733506
Conference
17th European Conference on Computer Vision (ECCV), Tel Aviv, Israel, October 23-27, 2022
Note

Funding Agencies|Wallenberg Artificial Intelligence, Autonomous Systems and Software Program (WASP) - Knut and Alice Wallenberg Foundation; ELLIIT; ETH Future Computing Laboratory (EFCL) - Huawei Technologies; Swedish Research Council [2018-05973]

Available from: 2023-02-02. Created: 2023-02-02. Last updated: 2025-02-07.

Open Access in DiVA

No full text available in DiVA

Other links

Publisher's full text

Search in DiVA

By author/editor
Johnander, Joakim; Edstedt, Johan; Felsberg, Michael; Khan, Fahad
By organisation

Search outside of DiVA

Google / Google Scholar