Parallel clustered low-rank approximation of graphs and its application to link prediction
University of Texas at Austin, USA.
University of Texas at Austin, USA.
University of Texas at Austin, USA.
Linköping University, Department of Mathematics, Scientific Computing. Linköping University, The Institute of Technology. ORCID iD: 0000-0002-1542-2690
2012 (English). In: Proceedings of the International Workshop on Languages and Compilers for Parallel Computing, Springer Berlin Heidelberg, 2012, pp. 76-95. Conference paper, Published paper (Other academic)
Abstract [en]

Social network analysis has become a major research area with impact in diverse applications ranging from search engines to product recommendation systems. A major problem in implementing social network analysis algorithms is the sheer size of many social networks; for example, the Facebook graph has more than 900 million vertices, and even small networks may have tens of millions of vertices. One approach to dealing with these large graphs is dimensionality reduction using spectral or SVD analysis of the adjacency matrix of the network, but these global techniques do not necessarily take into account local structures or clusters of the network that are critical in network analysis. A more promising approach is clustered low-rank approximation: instead of computing a global low-rank approximation, the adjacency matrix is first clustered, and then a low-rank approximation of each cluster (i.e., diagonal block) is computed. The resulting algorithm is challenging to parallelize not only because of the large size of the data sets in social network analysis, but also because it requires computing with very diverse data structures, ranging from extremely sparse matrices to dense matrices. In this paper, we describe the first parallel implementation of a clustered low-rank approximation algorithm for large social network graphs, and use it to perform link prediction in parallel. Experimental results show that this implementation scales well on large distributed-memory machines; for example, on a Twitter graph with roughly 11 million vertices and 63 million edges, our implementation scales by a factor of 86 on 128 processes and takes less than 2300 seconds, while on a much larger Twitter graph with 41 million vertices and 1.2 billion edges, our implementation scales by a factor of 203 on 256 processes with a running time of about 4800 seconds.
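The clustered low-rank approximation the abstract describes can be illustrated with a small serial sketch: cluster the vertices, then replace each diagonal block of the adjacency matrix with its truncated SVD. This is only an illustration under simplifying assumptions (a dense symmetric toy matrix and a precomputed clustering); the paper's actual contribution is a parallel, distributed-memory implementation operating on sparse matrices at Twitter scale.

```python
import numpy as np

def clustered_low_rank(A, labels, rank):
    """Approximate each diagonal block A[c, c] by a rank-`rank` truncated SVD.

    A      : (n, n) dense symmetric adjacency matrix (toy sizes only)
    labels : length-n array assigning each vertex to a cluster
    rank   : target rank per cluster
    Returns the block-diagonal low-rank approximation A_hat.
    """
    A_hat = np.zeros_like(A, dtype=float)
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]          # vertices in cluster c
        block = A[np.ix_(idx, idx)].astype(float)
        k = min(rank, len(idx))                 # rank cannot exceed block size
        U, s, Vt = np.linalg.svd(block)
        # Keep only the k largest singular triplets of this diagonal block.
        A_hat[np.ix_(idx, idx)] = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
    return A_hat

# Toy example: two 3-vertex clusters, each a clique, no cross-cluster edges.
A = np.zeros((6, 6))
A[:3, :3] = 1 - np.eye(3)
A[3:, 3:] = 1 - np.eye(3)
labels = np.array([0, 0, 0, 1, 1, 1])
A_hat = clustered_low_rank(A, labels, rank=2)
```

Link prediction then scores a candidate within-cluster pair (i, j) by the entry A_hat[i, j]; in the clustered setting this costs only a small dense product per cluster rather than a global SVD of the full graph.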

Place, publisher, year, edition, pages
Springer Berlin Heidelberg, 2012. pp. 76-95.
Series
Lecture Notes in Computer Science, ISSN 0302-9743 (print), 1611-3349 (online) ; 7760
National Category
Computer Science
Identifiers
URN: urn:nbn:se:liu:diva-86055
DOI: 10.1007/978-3-642-37658-0_6
ISBN: 978-3-642-37657-3 (print)
ISBN: 978-3-642-37658-0 (print)
OAI: oai:DiVA.org:liu-86055
DiVA: diva2:574704
Conference
25th International Workshop on Languages and Compilers for Parallel Computing (LCPC 2012), 11-13 September 2012, Tokyo, Japan
Available from: 2012-12-06. Created: 2012-12-06. Last updated: 2014-11-28

Open Access in DiVA

No full text

Authority records BETA

Savas, Berkant
