Feature Learning for Nonlinear Dimensionality Reduction toward Maximal Extraction of Hidden Patterns
Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. ORCID iD: 0000-0002-6382-2752
University of California, Davis, CA, USA.
Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. ORCID iD: 0000-0002-9466-9826
University of California, Davis, CA, USA.
2023 (English). In: 2023 IEEE 16th Pacific Visualization Symposium (PacificVis), IEEE Computer Society, 2023, p. 122-131. Conference paper, Published paper (Refereed)
Abstract [en]

Dimensionality reduction (DR) plays a vital role in the visual analysis of high-dimensional data. One main aim of DR is to reveal hidden patterns that lie on intrinsic low-dimensional manifolds. However, DR often overlooks important patterns when the manifolds are distorted or masked by certain influential data attributes. This paper presents a feature learning framework, FEALM, designed to generate a set of optimized data projections for nonlinear DR in order to capture important patterns in the hidden manifolds. These projections produce maximally different nearest-neighbor graphs so that the resultant DR outcomes are significantly different. To achieve such a capability, we design an optimization algorithm and introduce a new graph dissimilarity measure, named neighbor-shape dissimilarity. Additionally, we develop interactive visualizations to assist comparison and interpretation of the obtained DR results. We demonstrate FEALM's effectiveness through experiments and case studies using synthetic and real-world datasets.
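
The record only summarizes FEALM at a high level, so the following Python sketch is illustrative rather than a reproduction of the paper's algorithm. It uses Nelder-Mead optimization (mentioned in the keywords) to search for a feature weighting whose k-nearest-neighbor graph differs maximally from that of the original data; a simple Jaccard edge-overlap dissimilarity stands in for the paper's neighbor-shape dissimilarity, whose definition is not given here. All function names, parameters, and the toy data are hypothetical.

import numpy as np
from scipy.optimize import minimize
from sklearn.neighbors import NearestNeighbors


def knn_edges(X, k=10):
    """Return the k-NN graph of X as a set of undirected edges."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)
    # row[0] is the point itself, so skip it
    return {frozenset((i, int(j))) for i, row in enumerate(idx) for j in row[1:]}


def graph_dissimilarity(edges_a, edges_b):
    """Jaccard edge-overlap dissimilarity (a stand-in, not the paper's measure)."""
    union = edges_a | edges_b
    return 1.0 - len(edges_a & edges_b) / len(union) if union else 0.0


def find_contrasting_weights(X, k=10, seed=0):
    """Nelder-Mead search for non-negative feature weights w that maximize the
    dissimilarity between the k-NN graphs of X and of the weighted data X * w."""
    base_edges = knn_edges(X, k)

    def objective(w):
        w = np.abs(w)                     # keep weights non-negative
        w = w / (w.sum() + 1e-12)         # normalize to avoid trivial scaling
        return -graph_dissimilarity(base_edges, knn_edges(X * w, k))

    rng = np.random.default_rng(seed)
    w0 = rng.random(X.shape[1])
    res = minimize(objective, w0, method="Nelder-Mead",
                   options={"maxiter": 200, "xatol": 1e-3, "fatol": 1e-3})
    w = np.abs(res.x)
    return w / (w.sum() + 1e-12)


if __name__ == "__main__":
    # Toy data: cluster structure hidden in the last two of six features.
    rng = np.random.default_rng(42)
    X = rng.normal(size=(300, 6))
    X[:150, 4:] += 3.0
    w = find_contrasting_weights(X, k=10)
    print("feature weights:", np.round(w, 3))
    # The weighted data X * w could then be passed to a nonlinear DR method
    # such as UMAP (e.g., umap.UMAP().fit_transform(X * w)) to obtain an
    # alternative embedding to compare against the embedding of X itself.

In the paper's framing, repeating such a search under constraints that keep successive solutions mutually different would yield a set of projections, each producing a distinct DR view of the data; the sketch above shows only a single contrast against the unweighted data.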

Place, publisher, year, edition, pages
IEEE Computer Society, 2023. p. 122-131
Series
IEEE Pacific Visualization Symposium, ISSN 2165-8765
Keywords [en]
Dimensionality reduction; feature learning; network comparison; Nelder-Mead optimization; UMAP; visual analytics
National Category
Other Engineering and Technologies
Identifiers
URN: urn:nbn:se:liu:diva-196945
DOI: 10.1109/PacificVis56936.2023.00021
ISI: 001016413500015
ISBN: 9798350321241 (electronic)
ISBN: 9798350321258 (print)
OAI: oai:DiVA.org:liu-196945
DiVA, id: diva2:1792397
Conference
IEEE 16th Pacific Visualization Symposium (IEEE PacificVis), Seoul, South Korea, April 18-21, 2023
Note

Funding Agencies: Knut and Alice Wallenberg Foundation [KAW 2019.0024]; U.S. National Science Foundation [ITE-2134901]; National Institutes of Health [1R01CA270454-01]

Available from: 2023-08-29 Created: 2023-08-29 Last updated: 2025-02-18

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text

Search in DiVA

By author/editor
Fujiwara, Takanori; Ynnerman, Anders
By organisation
Media and Information Technology; Faculty of Science & Engineering
Other Engineering and Technologies
