Recent domain adaptation works rely on an indirect approach of first aligning the source and target domain distributions and then training a classifier on the labeled source domain to classify the target domain. However, the main drawback of this approach is that obtaining a near-perfect domain alignment may itself be difficult or even impossible (e.g., for language domains). To address this, inspired by how humans seamlessly combine supervised and unsupervised learning to perform tasks across multiple domains or tasks, we follow Vapnik’s imperative of statistical learning, which states that any desired problem should be solved in the most direct way rather than by solving a more general intermediate task, and we propose a direct approach to domain adaptation that does not require domain alignment. We propose a model, referred to as Contradistinguisher, that learns contrastive features and whose objective is to jointly contradistinguish the unlabeled target domain in an unsupervised way and classify the source domain in a supervised way. We achieve state-of-the-art results on the Office-31, Digits, and VisDA-2017 datasets in both single-source and multi-source settings. We demonstrate that data augmentation improves performance over the vanilla approach. We also observe that the contradistinguish loss enhances performance by increasing shape bias.
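For concreteness, the joint objective described above can be sketched as a supervised loss on the labeled source domain combined with an unsupervised contradistinguish loss on the unlabeled target domain. The LaTeX sketch below is purely illustrative: the symbols $f_{\theta}$, $\ell_{\mathrm{sup}}$, $\ell_{\mathrm{ctd}}$, and the weight $\lambda$ are notational placeholders introduced here and do not reproduce the exact formulation in the paper.

$$
\min_{\theta} \;
\underbrace{\mathbb{E}_{(x_s,\, y_s) \sim \mathcal{D}_s}\!\left[\ell_{\mathrm{sup}}\!\left(f_{\theta}(x_s),\, y_s\right)\right]}_{\text{supervised classification on labeled source data}}
\;+\;
\lambda \,
\underbrace{\mathbb{E}_{x_t \sim \mathcal{D}_t}\!\left[\ell_{\mathrm{ctd}}\!\left(f_{\theta}(x_t)\right)\right]}_{\text{unsupervised contradistinguish loss on unlabeled target data}}
$$

Both terms share the same model parameters $\theta$, so no separate domain-alignment step is required before training the classifier.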
Funding: The authors would like to thank the Ministry of Human Resource Development (MHRD), Government of India, for their generous funding towards this work through the UAY Project: IISc 001.