liu.se – Search for publications in DiVA
Bridging the Gap between Object and Image-level Representations for Open-Vocabulary Detection
Mohamed Bin Zayed Univ, U Arab Emirates.
Mohamed Bin Zayed Univ, U Arab Emirates.
Mohamed Bin Zayed Univ, U Arab Emirates.
Mohamed Bin Zayed Univ, U Arab Emirates; Australian Natl Univ, Australia.
2022 (English). In: Advances in Neural Information Processing Systems 35 (NeurIPS 2022), Neural Information Processing Systems (NIPS), 2022. Conference paper, Published paper (Refereed)
Abstract [en]

Existing open-vocabulary object detectors typically enlarge their vocabulary sizes by leveraging different forms of weak supervision, which helps them generalize to novel objects at inference. Two popular forms of weak supervision used in open-vocabulary detection (OVD) are pretrained CLIP models and image-level supervision. We note that neither mode of supervision is optimally aligned with the detection task: CLIP is trained on image-text pairs and lacks precise object localization, while image-level supervision has been used with heuristics that do not accurately specify local object regions. In this work, we address this problem by performing object-centric alignment of the language embeddings from the CLIP model. Furthermore, we visually ground objects with only image-level supervision, using a pseudo-labeling process that provides high-quality object proposals and helps expand the vocabulary during training. We establish a bridge between these two object-alignment strategies via a novel weight transfer function that aggregates their complementary strengths. In essence, the proposed model seeks to minimize the gap between object- and image-centric representations in the OVD setting. On the COCO benchmark, our approach achieves 36.6 AP50 on novel classes, an absolute gain of 8.2 over the previous best performance. On LVIS, we surpass the state-of-the-art ViLD model by 5.0 mask AP on rare categories and 3.4 overall. Code: https://github.com/hanoonaR/object-centric-ovd.
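The core mechanism the abstract relies on — scoring detector region proposals against CLIP-style text embeddings of class names — can be illustrated with a minimal sketch. This is not the paper's implementation (see the linked repository for that); the random features, the 512-dimensional size, and the temperature value here are stand-ins for what a real detector backbone and CLIP text encoder would produce.

```python
import numpy as np

def classify_regions(region_feats, text_embeds, temperature=0.01):
    """Score each region proposal against each class-name embedding by
    cosine similarity, then softmax over classes. Illustrative only."""
    # L2-normalize both sides so the dot product is cosine similarity
    r = region_feats / np.linalg.norm(region_feats, axis=1, keepdims=True)
    t = text_embeds / np.linalg.norm(text_embeds, axis=1, keepdims=True)
    logits = r @ t.T / temperature               # (num_regions, num_classes)
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum(axis=1, keepdims=True)

# Hypothetical inputs: 5 region proposals, 3 class-name embeddings
# (novel classes need only a text embedding, not box-level labels).
rng = np.random.default_rng(0)
regions = rng.normal(size=(5, 512))
classes = rng.normal(size=(3, 512))
probs = classify_regions(regions, classes)
print(probs.shape)  # one probability row per region
```

Because class scores come from text embeddings rather than a fixed learned classifier head, the vocabulary can be extended at inference time by simply embedding new class names — the property that open-vocabulary detection exploits.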

Place, publisher, year, edition, pages
Neural Information Processing Systems (NIPS), 2022.
Series
Advances in Neural Information Processing Systems, ISSN 1049-5258
National Category
Probability Theory and Statistics
Identifiers
URN: urn:nbn:se:liu:diva-209537
ISI: 001215469501006
ISBN: 978-1-7138-7108-8 (print)
OAI: oai:DiVA.org:liu-209537
DiVA, id: diva2:1914084
Conference
36th Conference on Neural Information Processing Systems (NeurIPS), electronic/online, November 28 - December 9, 2022
Available from: 2024-11-18. Created: 2024-11-18. Last updated: 2024-11-18.

Open Access in DiVA

No full text in DiVA

Search in DiVA

By author/editor
Khan, Fahad
By organisation
Computer Vision, Faculty of Science & Engineering
Probability Theory and Statistics
