Pruning strategies in adaptive off-line tuning for optimized composition of components on heterogeneous systems
Li, Lu (Linköpings universitet, Institutionen för datavetenskap, Programvara och system; Linköpings universitet, Tekniska högskolan; PELAB)
Dastgeer, Usman (Linköpings universitet, Institutionen för datavetenskap, Programvara och system; Linköpings universitet, Tekniska högskolan; PELAB)
Kessler, Christoph (Linköpings universitet, Institutionen för datavetenskap, Programvara och system; Linköpings universitet, Tekniska högskolan; PELAB) ORCID iD: 0000-0001-5241-0026
2014 (English) In: 2014 43rd International Conference on Parallel Processing Workshops (ICPPW), IEEE conference proceedings, 2014, p. 255-264. Conference paper, Published paper (Refereed)
Abstract [en]

Adaptive program optimizations, such as automatic selection of the expected fastest implementation variant of a computation component depending on the runtime context, are important especially for heterogeneous computing systems but require good performance models. Empirical performance models based on trial executions, which require little or no human effort, become practically feasible if the sampling and training cost can be reduced to a reasonable level. In previous work we proposed an early version of an adaptive pruning algorithm for efficient selection of training samples, a decision-tree based method for representing, predicting and selecting the fastest implementation variant for given run-time call context properties, and a composition tool for building the overall composed application from its components. For adaptive pruning we use a heuristic convexity assumption. In this paper we consolidate and improve the method with new pruning techniques that better support the convexity assumption and better control the trade-off between sampling time, prediction accuracy and runtime prediction overhead. Our results show that the training time can be reduced by up to 39 times without a noticeable decrease in prediction accuracy. Furthermore, we evaluate the effect of combinations of pruning strategies and compare our adaptive sampling method with random sampling. We also use our smart-sampling method as a preprocessor to a state-of-the-art decision-tree learning algorithm and compare the result to the predictor calculated directly by our method.
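The core idea sketched in the abstract can be illustrated as follows. This is a hypothetical minimal sketch, not the authors' implementation: the `measure` oracle, the 1-D context range, and all function names are illustrative assumptions. The convexity heuristic is applied as in the paper's general description: if the endpoints of a subrange prefer the same implementation variant, the interior of that subrange is pruned and not sampled.

```python
# Hypothetical sketch of adaptive off-line sampling with a convexity
# heuristic (illustrative only; names and the measure() oracle are
# assumptions, not the paper's actual code or API).

def fastest_variant(point, measure):
    """Run trial executions of every variant at this context point
    and return the index of the fastest one."""
    times = measure(point)  # list of runtimes, one per variant
    return min(range(len(times)), key=times.__getitem__)

def adaptive_sample(lo, hi, measure, samples, min_width=1):
    """Recursively sample a 1-D range of call-context sizes.

    Convexity heuristic: if both endpoints of a subrange prefer the
    same variant, assume the whole subrange does and prune its
    interior; otherwise split the subrange and recurse."""
    v_lo = samples.setdefault(lo, fastest_variant(lo, measure))
    v_hi = samples.setdefault(hi, fastest_variant(hi, measure))
    if v_lo == v_hi or hi - lo <= min_width:
        return  # prune: no interior samples needed
    mid = (lo + hi) // 2
    adaptive_sample(lo, mid, measure, samples, min_width)
    adaptive_sample(mid, hi, measure, samples, min_width)

# Toy runtime oracle: variant 0 scales linearly with the problem
# size, variant 1 has a constant cost, so the winner flips once.
def toy_measure(n):
    return [n * 1.0, 600.0]

samples = {}
adaptive_sample(1, 1024, toy_measure, samples)
print(len(samples))  # far fewer than the 1024 exhaustive samples
```

In this toy run the sampler measures only a handful of points near the crossover between the two variants and prunes the large subranges where one variant dominates at both endpoints, which is the kind of training-time reduction the abstract refers to.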

Place, publisher, year, edition, pages
IEEE conference proceedings, 2014. p. 255-264
Series
International Conference on Parallel Processing, Workshops, ISSN 1530-2016
Keywords [en]
empirical performance modeling, automated performance tuning, heterogeneous multicore system, GPU computing, adaptive program optimization, machine learning
HSV category
Identifiers
URN: urn:nbn:se:liu:diva-118514
DOI: 10.1109/ICPPW.2014.42
OAI: oai:DiVA.org:liu-118514
DiVA, id: diva2:815200
Conference
Seventh International Workshop on Parallel Programming Models and Systems Software for High-End Computing (P2S2) at the 43rd International Conference on Parallel Processing (ICPP), Minneapolis, USA, 9-12 Sep. 2014
Projects
EU FP7 EXCESS; SeRC-OpCoReS
Funder
EU, FP7, Seventh Framework Programme, 611183
Swedish e-Science Research Center, OpCoReS
Available from: 2015-05-29 Created: 2015-05-29 Last updated: 2018-01-11

Open Access in DiVA

Full text not available in DiVA

Other links

Publisher's full text

Author records

Li, Lu; Dastgeer, Usman; Kessler, Christoph
