Sparse-Matrix support for the SkePU library for portable CPU/GPU programming
Linköping University, Department of Computer and Information Science. (SAS PELAB)
2016 (English). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
Abstract [en]

In this thesis we extend the SkePU framework with a new container data structure for representing generic two-dimensional sparse matrices. Computation on matrices is an integral part of many scientific and engineering problems, and performing costly operations on the zero entries of a matrix is often wasted work. When the number of zeroes is relatively large, a more space- and time-efficient data structure is needed. Beyond the sparse matrix representation, we propose an algorithm that decides at runtime whether computing on the sparse representation is more beneficial in terms of execution time for an ongoing computation, and adapts the matrix's state accordingly; this adaptive switching is the main concern of this thesis. We present and implement an approach that switches automatically between the two container types inside the SkePU framework on heterogeneous systems with multi-core CPUs and GPUs. The new sparse matrix container supports all SkePU skeletons and nearly all SkePU operations. We provide compression and decompression algorithms that convert between the dense and sparse representations on both CPU and GPU, implemented with SkePU data-parallel skeletons. We have also implemented a context-aware switching mechanism that chooses between the two container types on the CPU or the GPU. A multi-state matrix representation, with selection on demand, is also made possible.
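The abstract does not specify the internal layout of the sparse container, so as a minimal sketch, assuming the standard CSR (compressed sparse row) format, the dense-to-sparse compression step and the non-zero density that a switching heuristic would inspect might look like this; the type and function names (`CsrMatrix`, `compress`, `density`) are illustrative, not the thesis's actual API:

```cpp
#include <cstddef>
#include <vector>

// Illustrative CSR (compressed sparse row) layout; the thesis's actual
// SkePU container layout is not given in the abstract.
struct CsrMatrix {
    std::size_t rows = 0, cols = 0;
    std::vector<float> values;        // non-zero entries, stored row by row
    std::vector<std::size_t> colIdx;  // column index of each non-zero
    std::vector<std::size_t> rowPtr;  // rowPtr[r]..rowPtr[r+1] spans row r
};

// Compress a dense row-major matrix into CSR, dropping zero entries.
CsrMatrix compress(const std::vector<float>& dense,
                   std::size_t rows, std::size_t cols) {
    CsrMatrix m;
    m.rows = rows;
    m.cols = cols;
    m.rowPtr.push_back(0);
    for (std::size_t r = 0; r < rows; ++r) {
        for (std::size_t c = 0; c < cols; ++c) {
            float v = dense[r * cols + c];
            if (v != 0.0f) {
                m.values.push_back(v);
                m.colIdx.push_back(c);
            }
        }
        m.rowPtr.push_back(m.values.size());
    }
    return m;
}

// Fraction of non-zero entries: the quantity a density-based
// switching heuristic between dense and sparse states would inspect.
double density(const CsrMatrix& m) {
    return static_cast<double>(m.values.size()) / (m.rows * m.cols);
}
```

In the thesis, the corresponding conversions also run on the GPU via SkePU's data-parallel skeletons; this sequential version only shows the data-layout transformation itself.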

To evaluate the effectiveness and efficiency of our extension to the SkePU framework, we use matrix-vector multiplication as our benchmark, because iterative solvers such as Conjugate Gradient and Generalized Minimum Residual (GMRES) use sparse matrix-vector multiplication as their basic operation. With this benchmark we demonstrate adaptive switching between the two container types, implementation selection between CUDA and OpenMP, and conversion of the data structure depending on the density of non-zeroes in the matrix. Our experiments on GPU-based architectures show that the automatic switching mechanism selects the fastest SkePU implementation variant and has a limited training cost.
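The benchmark's core operation, sparse matrix-vector multiplication over a CSR-format matrix, can be sketched as follows. This is a plain sequential version for illustration (the `spmv` name and the raw-array signature are assumptions, not the thesis's SkePU interface); the CUDA and OpenMP variants that the selection mechanism chooses between would parallelise the outer row loop:

```cpp
#include <cstddef>
#include <vector>

// Sequential sparse matrix-vector multiplication, y = A * x, where A is
// stored in CSR form: values holds the non-zeroes row by row, colIdx the
// column of each non-zero, and rowPtr[r]..rowPtr[r+1] delimits row r.
std::vector<float> spmv(const std::vector<float>& values,
                        const std::vector<std::size_t>& colIdx,
                        const std::vector<std::size_t>& rowPtr,
                        const std::vector<float>& x) {
    std::vector<float> y(rowPtr.size() - 1, 0.0f);
    for (std::size_t r = 0; r + 1 < rowPtr.size(); ++r)
        for (std::size_t k = rowPtr[r]; k < rowPtr[r + 1]; ++k)
            y[r] += values[k] * x[colIdx[k]];
    return y;
}
```

Each output element depends only on one row of A, which is why the row loop maps naturally onto both OpenMP threads and CUDA thread blocks, and why the work done scales with the number of non-zeroes rather than with the full matrix size.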

Place, publisher, year, edition, pages
2016, 100 p.
Keyword [en]
Sparse Matrix, SkePU, auto tuning, CPU/GPU programming, conversion function, profile guided composition
National Category
Computer Science
URN: urn:nbn:se:liu:diva-129687
ISRN: LIU-IDA/LITH-EX-A--16/020--SE
OAI: diva2:942267
Subject / course
Computer science
2016-05-20, 10:00, Donald Knuth, S-58183 Linköping, Sweden (English)
Available from: 2016-06-27. Created: 2016-06-23. Last updated: 2016-06-27. Bibliographically approved.

Open Access in DiVA

ThesisReportVishistSharma (4976 kB)
File information
File name: FULLTEXT01.pdf. File size: 4976 kB. Checksum: SHA-512.
Type: fulltext. Mimetype: application/pdf.

By author/editor
Sharma, Vishist