MineGAN++: Mining Generative Models for Efficient Knowledge Transfer to Limited Data Domains
Nankai University, People's Republic of China.
WRNCH, Canada.
Universitat Autònoma de Barcelona, Spain.
Universitat Autònoma de Barcelona, Spain.
2023 (English). In: International Journal of Computer Vision, ISSN 0920-5691, E-ISSN 1573-1405, Vol. 132, no. 2, p. 490-514. Article in journal (Refereed). Published.
Abstract [en]

Given the often enormous effort required to train GANs, both computationally and in dataset collection, the re-use of pretrained GANs greatly increases the potential impact of generative models. We therefore propose a novel knowledge-transfer method for generative models based on mining the knowledge that is most beneficial to a specific target domain, from either a single or multiple pretrained GANs. This is done with a miner network that identifies which part of the generative distribution of each pretrained GAN outputs samples closest to the target domain. Mining effectively steers GAN sampling towards suitable regions of the latent space, which facilitates subsequent finetuning and avoids pathologies of other methods, such as mode collapse and lack of flexibility. Furthermore, to prevent overfitting on small target domains, we introduce sparse subnetwork selection, which restricts the set of trainable neurons to those relevant for the target dataset. We perform comprehensive experiments on several challenging datasets using various GAN architectures (BigGAN, Progressive GAN, and StyleGAN) and show that the proposed method, called MineGAN, effectively transfers knowledge to domains with few target images, outperforming existing methods. In addition, MineGAN can successfully transfer knowledge from multiple pretrained GANs.
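The core idea of the abstract — a small trainable "miner" that remaps latent inputs so a frozen pretrained generator produces samples near a target domain — can be sketched with a toy example. This is not the authors' implementation: MineGAN uses an MLP miner trained adversarially against a critic on real target images, whereas the stand-ins below use a 1-D "generator", an affine miner, and simple moment matching to a hypothetical target mean as the loss.

```python
import numpy as np

# Toy sketch of the mining idea (assumptions: 1-D generator, affine miner,
# moment-matching loss instead of the adversarial critic used in MineGAN).
rng = np.random.default_rng(0)

def generator(z):
    # Stand-in for a frozen pretrained generator: its "knowledge" is fixed
    # inside this function and is never updated during mining.
    return np.tanh(z) * 3.0

# Miner: trainable affine map u -> z = w*u + b over the latent space.
w, b = 1.0, 0.0
target_mean = 2.0   # hypothetical target domain centered at output value 2
lr = 0.05

for _ in range(500):
    u = rng.standard_normal(256)
    z = w * u + b
    x = generator(z)
    err = x.mean() - target_mean            # scalar loss residual
    dz = 3.0 * (1.0 - np.tanh(z) ** 2) / len(u)  # d mean(x) / dz per sample
    # Gradient steps on the miner only; the generator stays frozen.
    w -= lr * err * np.sum(dz * u)
    b -= lr * err * np.sum(dz)

u = rng.standard_normal(10000)
mined = generator(w * u + b)
```

After training, `generator` alone still samples around 0, while the mined latents steer its output distribution toward the target region — the "steering towards suitable regions of the latent space" that the abstract describes, before any finetuning of the generator itself.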

Place, publisher, year, edition, pages
Springer, 2023. Vol. 132, no. 2, p. 490-514
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:liu:diva-198074
DOI: 10.1007/s11263-023-01882-y
ISI: 001062331300001
OAI: oai:DiVA.org:liu-198074
DiVA, id: diva2:1800106
Note

Funding Agencies|Huawei Kirin Solution; MCIN/AEI [PID2019-104174GB-I00, PID2021-128178OB-I00]; ERDF A way of making Europe; Ramon y Cajal fellowship - MCIN/AEI [RYC2019-027020-I]; CERCA Programme of Generalitat de Catalunya; Youth Foundation [62202243]

Available from: 2023-09-25. Created: 2023-09-25. Last updated: 2024-04-12.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text

Search in DiVA

By author/editor
Khan, Fahad
By organisation
Computer Vision, Faculty of Science & Engineering
In the same journal
International Journal of Computer Vision
Computer Sciences

Search outside of DiVA

Google, Google Scholar
