Contradistinguisher: A Vapnik’s Imperative to Unsupervised Domain Adaptation
Balgi, Sourabh. Department of Computer Science and Automation, Indian Institute of Science, Bengaluru, Karnataka, India. (Causality) ORCID iD: 0000-0002-3329-5533
Dukkipati, Ambedkar. Department of Computer Science and Automation, Indian Institute of Science, Bengaluru, Karnataka, India. ORCID iD: 0000-0002-6352-6283
2022 (English). In: IEEE Transactions on Pattern Analysis and Machine Intelligence, ISSN 0162-8828, E-ISSN 1939-3539, Vol. 44, no 9, p. 4730-4747. Article in journal (Refereed). Published.
Abstract [en]

Recent domain adaptation works rely on an indirect approach: first aligning the source and target domain distributions, and then training a classifier on the labeled source domain to classify the target domain. The main drawback of this approach is that obtaining a near-perfect domain alignment may itself be difficult or impossible (e.g., for language domains). To address this, and inspired by how humans combine supervised and unsupervised learning to perform tasks seamlessly across multiple domains, we follow Vapnik’s imperative of statistical learning, which states that any desired problem should be solved in the most direct way rather than by solving a more general intermediate task, and we propose a direct approach to domain adaptation that does not require domain alignment. We propose a model, referred to as Contradistinguisher, that learns contrastive features and whose objective is to jointly contradistinguish the unlabeled target domain in an unsupervised way and classify the source domain in a supervised way. We achieve state-of-the-art results on the Office-31, Digits, and VisDA-2017 datasets in both single-source and multi-source settings. We demonstrate that data augmentation further improves performance over the vanilla approach. We also observe that the contradistinguish loss enhances performance by increasing shape bias.
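The abstract describes a joint objective: a supervised classification loss on the labeled source domain combined with an unsupervised "contradistinguish" term on the unlabeled target domain, optimized together so that no explicit distribution-alignment step is needed. The sketch below illustrates one plausible reading of that objective in PyTorch; the function name, the weighting factor `lam`, and the concrete form of the target-domain term (self-training on the model's own most probable labels) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def joint_loss(model, x_src, y_src, x_tgt, lam=1.0):
    """Hypothetical joint objective: supervised on source, unsupervised on target."""
    # Supervised term: standard cross-entropy on labeled source samples.
    logits_src = model(x_src)
    loss_src = F.cross_entropy(logits_src, y_src)

    # Unsupervised "contradistinguish" term (illustrative): push the model
    # toward confident predictions on unlabeled target samples by maximizing
    # the likelihood of its own most probable label for each sample.
    logits_tgt = model(x_tgt)
    pseudo_labels = logits_tgt.argmax(dim=1).detach()
    loss_tgt = F.cross_entropy(logits_tgt, pseudo_labels)

    # Both terms are minimized jointly, so the classifier is trained directly
    # on both domains rather than after a separate alignment stage.
    return loss_src + lam * loss_tgt
```

Minimizing cross-entropy against the model's own argmax labels is one common way to encourage confident target-domain predictions; the published contradistinguish loss differs in its details, so consult the paper for the exact formulation.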

Place, publisher, year, edition, pages
Piscataway, NJ, United States: Institute of Electrical and Electronics Engineers (IEEE), 2022. Vol. 44, no 9, p. 4730-4747
Keywords [en]
Contrastive feature learning, deep learning, domain adaptation, transfer learning, unsupervised learning
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:liu:diva-187136
DOI: 10.1109/TPAMI.2021.3071225
PubMedID: 33822721
Scopus ID: 2-s2.0-85103877879
OAI: oai:DiVA.org:liu-187136
DiVA, id: diva2:1685829
Note

Funding: The authors would like to thank the Ministry of Human Resource Development (MHRD), Government of India, for their generous funding towards this work through the UAY Project: IISc 001.

Available from: 2022-08-05. Created: 2022-08-05. Last updated: 2023-09-29. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
PubMed
Scopus

Authority records

Balgi, Sourabh
