Network Comparison with Interpretable Contrastive Network Representation Learning
University of California, Davis, United States. ORCID iD: 0000-0002-6382-2752
University of Waterloo, Canada.
Toyota Research Institute.
University of Waterloo, Canada.
2022 (English). In: Journal of Data Science, Statistics, and Visualisation, ISSN 2773-0689, Vol. 2, no. 5. Article in journal (Refereed). Published.
Abstract [en]

Identifying unique characteristics in a network through comparison with another network is an essential network analysis task. For example, with networks of protein interactions obtained from normal and cancer tissues, we can discover unique types of interactions in cancer tissues. This analysis task could be greatly assisted by contrastive learning, which is an emerging analysis approach to discover salient patterns in one dataset relative to another. However, existing contrastive learning methods cannot be directly applied to networks as they are designed only for high-dimensional data analysis. To address this problem, we introduce a new analysis approach called contrastive network representation learning (cNRL). By integrating two machine learning schemes, network representation learning and contrastive learning, cNRL enables embedding of network nodes into a low-dimensional representation that reveals the uniqueness of one network compared to another. Within this approach, we also design a method, named i-cNRL, which offers interpretability in the learned results, allowing for understanding which specific patterns are only found in one network. We demonstrate the effectiveness of i-cNRL for network comparison with multiple network models and real-world datasets. Furthermore, we compare i-cNRL and other potential cNRL algorithm designs through quantitative and qualitative evaluations.
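
The abstract describes a two-stage pipeline: node features are first obtained with network representation learning, and contrastive learning is then applied so that the low-dimensional embedding highlights what is unique to a target network relative to a background network. The sketch below only illustrates that idea under stated assumptions and is not the authors' i-cNRL implementation: simple structural measures (degree, clustering coefficient, core number) stand in for the network-representation-learning step, a contrastive-PCA-style eigendecomposition stands in for the contrastive step, and the contrast weight alpha is a hypothetical parameter.

    import networkx as nx
    import numpy as np

    def node_features(G):
        """Node-by-feature matrix from basic structural measures (a stand-in
        for a learned network representation)."""
        deg = dict(G.degree())
        clust = nx.clustering(G)
        core = nx.core_number(G)
        X = np.array([[deg[n], clust[n], core[n]] for n in G.nodes()], dtype=float)
        # Standardize features so the covariance contrast is scale-free.
        return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)

    def contrastive_embedding(X_target, X_background, alpha=1.0, n_components=2):
        """Project target nodes onto directions with high variance in the target
        but low variance in the background (a contrastive-PCA-style step)."""
        C_t = np.cov(X_target, rowvar=False)
        C_b = np.cov(X_background, rowvar=False)
        vals, vecs = np.linalg.eigh(C_t - alpha * C_b)
        top = np.argsort(vals)[::-1][:n_components]
        W = vecs[:, top]  # feature loadings: the source of interpretability
        return X_target @ W, W

    # Example: contrast a scale-free network against a comparable random network.
    G_target = nx.barabasi_albert_graph(300, 3, seed=0)
    G_background = nx.gnm_random_graph(300, 900, seed=0)
    embedding, loadings = contrastive_embedding(node_features(G_target),
                                                node_features(G_background))
    print(embedding.shape)   # (300, 2): low-dimensional node embedding
    print(loadings)          # which structural features drive the contrast

Under these assumptions, inspecting the loadings is what gives the contrastive step its interpretability: each embedding axis is a weighted combination of named node features, so large weights point to the structural patterns found in the target network but not in the background network.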

Place, publisher, year, edition, pages
International Association for Statistical Computing (IASC), 2022. Vol. 2, no. 5
Keywords [en]
contrastive learning, network representation learning, interpretability, network comparison, visualization
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:liu:diva-187950
DOI: 10.52933/jdssv.v2i5.56
OAI: oai:DiVA.org:liu-187950
DiVA id: diva2:1692056
Available from: 2022-08-31. Created: 2022-08-31. Last updated: 2023-09-19. Bibliographically approved.

Open Access in DiVA

No full text in DiVA
