CUDA: Contradistinguisher for Unsupervised Domain Adaptation
Balgi, Sourabh. Department of Computer Science and Automation, Indian Institute of Science, Bengaluru, India (Causality). ORCID iD: 0000-0002-3329-5533
Dukkipati, Ambedkar. Department of Computer Science and Automation, Indian Institute of Science, Bengaluru, India. ORCID iD: 0000-0002-6352-6283
2019 (English). In: 2019 IEEE International Conference on Data Mining (ICDM), New York, NY, United States: IEEE, 2019, p. 21-30. Conference paper, Published paper (Refereed).
Abstract [en]

Humans are highly adept at learning in a completely unknown domain because they can contradistinguish, i.e., distinguish by contrasting qualities. We learn in a new, unknown domain by jointly using unsupervised information taken directly from that domain and supervised knowledge previously acquired in some other domain. Motivated by this joint supervised-unsupervised learning, we propose a simple model, referred to as the Contradistinguisher (CTDR), for unsupervised domain adaptation. Its objective is to learn to contradistinguish on the unlabeled target domain in a fully unsupervised manner, jointly with prior knowledge acquired by supervised learning on an entirely different domain. Most recent works in domain adaptation take an indirect route: first align the source and target domain distributions, and then learn a classifier on the labeled source domain to classify the target domain. This indirect way of addressing the real task, classification of the unlabeled target domain, has three main drawbacks. (i) The sub-task of obtaining a perfect alignment of the domains may itself be impossible due to large domain shift (e.g., language domains). (ii) The use of multiple classifiers to align the distributions unnecessarily increases the complexity of the neural networks, leading to over-fitting in many cases. (iii) Because of distribution alignment, domain-specific information is lost as the domains get morphed. In this work, we propose a simple and direct approach that does not require domain alignment. We jointly learn the CTDR on both the source and target distributions for the unsupervised domain adaptation task, using a contradistinguish loss for the unlabeled target domain in conjunction with a supervised loss for the labeled source domain.
Our experiments show that avoiding domain alignment by directly addressing the task of unlabeled target domain classification using the CTDR achieves state-of-the-art results on eight visual and four language benchmark domain adaptation datasets.
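The joint objective described in the abstract (a supervised loss on the labeled source domain plus an unsupervised contradistinguish loss on the unlabeled target domain) can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the target-domain term here is a hypothetical self-labelling surrogate that rewards confident predictions, standing in for the contradistinguish loss defined in the paper, and the function names are the author's own for illustration.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def joint_loss(source_logits, source_labels, target_logits):
    # Supervised term: cross-entropy on the labeled source domain.
    p_src = softmax(source_logits)
    n = len(source_labels)
    sup = -np.mean(np.log(p_src[np.arange(n), source_labels] + 1e-12))
    # Unsupervised stand-in for the contradistinguish term: push each
    # unlabeled target sample toward its own most confident class.
    # (A self-labelling surrogate; the paper's actual loss differs.)
    p_tgt = softmax(target_logits)
    unsup = -np.mean(np.log(p_tgt.max(axis=-1) + 1e-12))
    return sup + unsup
```

Both terms are minimized jointly, so a single classifier is trained on the source and target distributions at once, with no distribution-alignment step.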

Place, publisher, year, edition, pages
New York, NY, United States: IEEE, 2019. p. 21-30
Keywords [en]
computer vision; contrastive feature learning; deep learning; domain adaptation; sentiment analysis; transfer learning; unsupervised learning
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:liu:diva-187135
DOI: 10.1109/ICDM.2019.00012
ISI: 000555729900003
OAI: oai:DiVA.org:liu-187135
DiVA, id: diva2:1685828
Conference
2019 IEEE International Conference on Data Mining (ICDM), Beijing, China, 8-11 November 2019
Note

Funding: The authors would like to thank the Ministry of Human Resource Development (MHRD), Government of India, for its generous funding of this work through UAY Project: IISc 001 and IISc 010.

Available from: 2022-08-05. Created: 2022-08-05. Last updated: 2022-08-11. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Authority records

Balgi, Sourabh

Search in DiVA

By author/editor
Balgi, Sourabh; Dukkipati, Ambedkar
Computer Sciences
