Humans are remarkably adept at learning new information in a completely unknown domain because humans can contradistinguish, i.e., distinguish by contrasting qualities. We learn in a new, unknown domain by jointly using unsupervised information drawn directly from the unknown domain and supervised knowledge previously acquired from some other domain. Motivated by this supervised-unsupervised joint learning, we propose a simple model, referred to as Contradistinguisher (CTDR), for unsupervised domain adaptation, whose objective is to jointly learn to contradistinguish the unlabeled target domain in a fully unsupervised manner while leveraging prior knowledge acquired through supervised learning on an entirely different domain. Most recent works in domain adaptation take an indirect approach: they first align the source and target domain distributions and then learn a classifier on the labeled source domain to classify the target domain. This indirect way of addressing the real task of unlabeled target domain classification has three main drawbacks. (i) The sub-task of obtaining a perfect alignment of the domains may in itself be impossible due to large domain shift (e.g., language domains). (ii) The use of multiple classifiers to align the distributions unnecessarily increases the complexity of the neural networks, leading to over-fitting in many cases. (iii) Distribution alignment morphs the domains, so domain-specific information is lost. In this work, we propose a simple and direct approach that does not require domain alignment. We jointly train CTDR on both the source and target distributions for the unsupervised domain adaptation task, using a contradistinguish loss on the unlabeled target domain in conjunction with a supervised loss on the labeled source domain. Our experiments show that avoiding domain alignment and directly addressing the task of unlabeled target domain classification using CTDR achieves state-of-the-art results on eight visual and four language benchmark domain adaptation datasets.
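To make the joint supervised-unsupervised objective concrete, the following is a minimal PyTorch-style sketch of one training step that combines a supervised loss on a labeled source batch with an unsupervised term on an unlabeled target batch. The names `model`, `xs`, `ys`, `xt`, and `optimizer` are hypothetical, and the pseudo-label formulation of the target term is only an illustrative stand-in for the contradistinguish loss, not the exact objective defined in the paper.

```python
import torch
import torch.nn.functional as F

def joint_step(model, xs, ys, xt, optimizer):
    """One joint training step: supervised on source, unsupervised on target.

    xs, ys: labeled source batch (inputs, class indices)
    xt:     unlabeled target batch (inputs only)
    """
    optimizer.zero_grad()

    # Supervised loss on the labeled source domain.
    source_loss = F.cross_entropy(model(xs), ys)

    # Unsupervised stand-in for the contradistinguish loss: reinforce the
    # label the model itself finds most likely for each target sample
    # (an illustrative approximation, not the paper's exact formulation).
    target_logits = model(xt)
    pseudo_labels = target_logits.argmax(dim=1).detach()
    target_loss = F.cross_entropy(target_logits, pseudo_labels)

    # Joint objective: note there is no domain-alignment term.
    loss = source_loss + target_loss
    loss.backward()
    optimizer.step()
    return loss.item()
```

The absence of any alignment term (e.g., an adversarial domain discriminator) in the combined loss reflects the direct approach advocated above: the same classifier is trained jointly on both distributions without morphing either domain.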
Funding: The authors would like to thank the Ministry of Human Resource Development (MHRD), Government of India, for its generous funding of this work through UAY Projects IISc 001 and IISc 010.