MineGAN: Effective Knowledge Transfer From GANs to Target Domains With Few Images
Universitat Autonoma de Barcelona, Spain (and other affiliations).
2020 (English). In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, 2020, p. 9329-9338. Conference paper, Published paper (Refereed).
Abstract [en]

One of the attractive characteristics of deep neural networks is their ability to transfer knowledge obtained in one domain to other related domains. As a result, high-quality networks can be trained in domains with relatively little training data. This property has been extensively studied for discriminative networks but has received significantly less attention for generative models. Given the often enormous effort required to train GANs, both in computation and in dataset collection, the re-use of pretrained GANs is a desirable objective. We propose a novel knowledge transfer method for generative models based on mining the knowledge that is most beneficial to a specific target domain, from either a single pretrained GAN or multiple ones. This is done using a miner network that identifies which part of the generative distribution of each pretrained GAN outputs samples closest to the target domain. Mining effectively steers GAN sampling towards suitable regions of the latent space, which facilitates subsequent finetuning and avoids pathologies of other methods such as mode collapse and lack of flexibility. We perform experiments on several complex datasets using various GAN architectures (BigGAN, Progressive GAN) and show that the proposed method, called MineGAN, effectively transfers knowledge to domains with few target images, outperforming existing methods. In addition, MineGAN can successfully transfer knowledge from multiple pretrained GANs. Our code is available at: https://github.com/yaxingwang/MineGAN.

Place, publisher, year, edition, pages
IEEE, 2020. p. 9329-9338
Series
IEEE Computer Society Conference on Computer Vision and Pattern Recognition, ISSN 2575-7075
Keywords [en]
Generators; Generative adversarial networks; Training; Data mining; Knowledge transfer; Computational modeling
National Category
Computer graphics and computer vision
Identifiers
URN: urn:nbn:se:liu:diva-168124
DOI: 10.1109/CVPR42600.2020.00935
ISI: 001309199902020
ISBN: 978-1-7281-7168-5 (electronic)
OAI: oai:DiVA.org:liu-168124
DiVA id: diva2:1458547
Conference
Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13-19 June 2020
Available from: 2020-08-17. Created: 2020-08-17. Last updated: 2025-02-07.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text

Authority records

Khan, Fahad Shahbaz

Search in DiVA

By author/editor
Khan, Fahad Shahbaz
By organisation
Computer Vision, Faculty of Science & Engineering
Computer graphics and computer vision
