Self-Supervised Siamese Autoencoders
Leuphana University of Lüneburg, Lüneburg, Germany.
Uppsala University, Uppsala, Sweden. ORCID iD: 0000-0003-2949-8781
Linköping University, Department of Computer and Information Science, The Division of Statistics and Machine Learning; Linköping University, Faculty of Science & Engineering. ORCID iD: 0000-0002-4459-4336
2024 (English). In: Advances in Intelligent Data Analysis XXII: Part I / [ed] Ioanna Miliou, Nico Piatkowski, Panagiotis Papapetrou, Springer, 2024, Vol. 14641, p. 117-128. Conference paper, Published paper (Refereed).
Abstract [en]

In contrast to fully-supervised models, self-supervised representation learning only needs a fraction of data to be labeled and often achieves the same or even higher downstream performance. The goal is to pre-train deep neural networks on a self-supervised task, making them able to extract meaningful features from raw input data afterwards. Previously, autoencoders and Siamese networks have been successfully employed as feature extractors for tasks such as image classification. However, both have their individual shortcomings and benefits. In this paper, we combine their complementary strengths by proposing a new method called SidAE (Siamese denoising autoencoder). Using an image classification downstream task, we show that our model outperforms two self-supervised baselines across multiple data sets and scenarios. Crucially, this includes conditions in which only a small amount of labeled data is available. Empirically, the Siamese component has more impact, but the denoising autoencoder is nevertheless necessary to improve performance.
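As a rough illustration of the combination the abstract describes, the sketch below pairs a denoising-autoencoder reconstruction loss with a Siamese similarity loss over two noisy views encoded by a shared encoder. The layer sizes, Gaussian noise model, negative-cosine similarity, and unweighted sum of the two losses are all illustrative assumptions, not details taken from the paper's actual SidAE architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W):
    # Shared encoder (weights reused for both views): linear layer + tanh.
    return np.tanh(x @ W)

def decoder(z, V):
    # Decoder maps the latent code back to input space.
    return z @ V

# Toy dimensions: 8-dim inputs, 3-dim latent space, batch of 4.
W = rng.normal(scale=0.1, size=(8, 3))
V = rng.normal(scale=0.1, size=(3, 8))

x = rng.normal(size=(4, 8))                  # clean mini-batch
view1 = x + 0.1 * rng.normal(size=x.shape)   # two corrupted "views" of x
view2 = x + 0.1 * rng.normal(size=x.shape)

z1, z2 = encoder(view1, W), encoder(view2, W)

# Denoising-autoencoder part: reconstruct the *clean* input from each noisy view.
recon = np.mean((decoder(z1, V) - x) ** 2) + np.mean((decoder(z2, V) - x) ** 2)

def neg_cosine(a, b):
    # Siamese part: pull the two views' representations together.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return -np.mean(np.sum(a * b, axis=1))

# Combined self-supervised objective (equal weighting assumed here).
loss = recon + neg_cosine(z1, z2)
```

In a real pre-training setup both losses would be minimized jointly by gradient descent; afterwards the shared encoder is kept as the feature extractor for the downstream classifier.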

Place, publisher, year, edition, pages
Springer, 2024. Vol. 14641, p. 117-128
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349
Keywords [en]
Self-supervised learning, representation learning, Siamese networks, denoising autoencoder, pre-training, image classification
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:liu:diva-206922
DOI: 10.1007/978-3-031-58547-0_10
ISI: 001295919100010
Scopus ID: 2-s2.0-85192241043
ISBN: 9783031585470 (electronic)
ISBN: 9783031585463 (print)
OAI: oai:DiVA.org:liu-206922
DiVA id: diva2:1892416
Conference
22nd International Symposium on Intelligent Data Analysis (IDA), Stockholm, Sweden, April 24-26, 2024
Note

Funding agencies: Wallenberg AI, Autonomous Systems and Software Program (WASP) - Knut and Alice Wallenberg Foundation

Available from: 2024-08-26. Created: 2024-08-26. Last updated: 2025-03-04. Bibliographically approved.

Open Access in DiVA

fulltext (592 kB)
File information
File name: FULLTEXT01.pdf
File size: 592 kB
Checksum SHA-512: 0f3baac26de56fa3826cb4184a5aed7e5028abb8b511b05232b1c12e48b7c771122d78b357a9910a101860672d7637617b10a8b298d535d951ee04916478e05d
Type: fulltext
Mimetype: application/pdf

Other links

Publisher's full text
Scopus

Authority records

Mair, Sebastian
Fadel, Samuel G.
