COCOA: Context-Conditional Adaptation for Recognizing Unseen Classes in Unseen Domains
Indian Institute of Technology, India.
Mohamed Bin Zayed University of Artificial Intelligence, United Arab Emirates.
Indian Institute of Technology, India.
Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Mohamed Bin Zayed University of Artificial Intelligence, United Arab Emirates.
2022 (English). In: 2022 IEEE Winter Conference on Applications of Computer Vision (WACV 2022), IEEE Computer Society, 2022, p. 1618-1627. Conference paper, Published paper (Refereed)
Abstract [en]

Recent progress towards designing models that can generalize to unseen domains (i.e., domain generalization) or unseen classes (i.e., zero-shot learning) has sparked interest in building models that can tackle both domain shift and semantic shift simultaneously (i.e., zero-shot domain generalization). For models to generalize to unseen classes in unseen domains, it is crucial to learn feature representations that preserve class-level (domain-invariant) as well as domain-specific information. Motivated by the success of generative zero-shot approaches, we propose a feature generative framework integrated with a COntext COnditional Adaptive (COCOA) Batch-Normalization layer to seamlessly integrate class-level semantic and domain-specific information. The generated visual features better capture the underlying data distribution, enabling us to generalize to unseen classes and domains at test time. We thoroughly evaluate our approach on established large-scale benchmarks - DomainNet and DomainNet-LS (Limited Sources) - as well as a new CUB-Corruptions benchmark, and demonstrate promising performance over baselines and state-of-the-art methods. We show detailed ablations and analysis to verify that our proposed approach indeed allows us to generate better-quality visual features relevant for zero-shot domain generalization.
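
Code sketch (not from the paper): the abstract describes a COntext COnditional Adaptive (COCOA) batch-normalization layer that modulates features using class-level semantic and domain-specific information. The exact architecture is not given in this record, so the following is only a minimal, hypothetical PyTorch sketch of a context-conditional batch norm whose per-feature scale and shift are predicted from a context vector (for example, class attributes concatenated with a domain embedding); all names and dimensions are illustrative assumptions.

import torch
import torch.nn as nn

class ContextConditionalBatchNorm(nn.Module):
    # Batch norm whose affine parameters are predicted from a context vector.
    # Hypothetical sketch; the paper's actual COCOA layer may differ in detail.
    def __init__(self, num_features, context_dim):
        super().__init__()
        self.bn = nn.BatchNorm1d(num_features, affine=False)  # normalization without fixed affine params
        self.gamma = nn.Linear(context_dim, num_features)     # context -> per-feature scale
        self.beta = nn.Linear(context_dim, num_features)      # context -> per-feature shift

    def forward(self, x, context):
        # x: (batch, num_features) feature vectors; context: (batch, context_dim),
        # e.g. a class-attribute embedding concatenated with a domain embedding.
        out = self.bn(x)
        return (1.0 + self.gamma(context)) * out + self.beta(context)

# Illustrative usage: 2048-d visual features conditioned on a 312-d class-attribute
# vector plus a 64-d domain embedding (dimensions are assumptions, not from the paper).
layer = ContextConditionalBatchNorm(num_features=2048, context_dim=312 + 64)
features, context = torch.randn(16, 2048), torch.randn(16, 376)
print(layer(features, context).shape)  # torch.Size([16, 2048])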

Place, publisher, year, edition, pages
IEEE Computer Society, 2022, p. 1618-1627
Series
IEEE Winter Conference on Applications of Computer Vision, ISSN 2472-6737
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:liu:diva-187601
DOI: 10.1109/WACV51458.2022.00168
ISI: 000800471201067
ISBN: 9781665409155 (electronic)
ISBN: 9781665409162 (print)
OAI: oai:DiVA.org:liu-187601
DiVA, id: diva2:1691012
Conference
22nd IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, January 4-8, 2022
Note

Funding agencies: DST through the IMPRINT program [IMP/2019/000250]

Available from: 2022-08-29. Created: 2022-08-29. Last updated: 2022-08-29.

Open Access in DiVA

No full text in DiVA

Search in DiVA

By author/editor
Khan, Fahad