dc.creator    Solano Quinde, Lizandro Damian
dc.date.accessioned    2018-01-11T16:47:50Z
dc.date.accessioned    2022-10-21T00:53:42Z
dc.date.available    2018-01-11T16:47:50Z
dc.date.available    2022-10-21T00:53:42Z
dc.date.created    2018-01-11T16:47:50Z
dc.date.issued    2015-07-14
dc.identifier    9781479975884
dc.identifier    https://www.scopus.com/inward/record.uri?eid=2-s2.0-84959361463&doi=10.1109%2fAPCASE.2015.56&partnerID=40&md5=d7c419381a7c08bab3a2f634f29bc02c
dc.identifier    http://dspace.ucuenca.edu.ec/handle/123456789/29244
dc.identifier    10.1109/APCASE.2015.56
dc.identifier.uri    https://repositorioslatinoamericanos.uchile.cl/handle/2250/4627481
dc.description.abstract    Graphics Processing Units (GPUs) have been successfully used to accelerate scientific applications thanks to their computational power and the availability of programming languages that make writing scientific applications for GPUs more approachable. However, since the GPU programming model requires offloading all data to GPU memory, the memory footprint of an application is limited by the size of the GPU memory. Multi-GPU systems can make memory-limited problems tractable by distributing the computation and data among the available GPUs. Applications written for single-GPU systems can be parallelized (i) at runtime, through an environment that captures memory operations and kernel calls and distributes them among the available GPUs, or (ii) at compile time, through a pre-compiler that transforms the application to decompose its data and computation among the available GPUs. In this paper we propose a framework and implement a tool that transforms an OpenCL application written for single-GPU systems into one that runs on multi-GPU systems. Based on data-dependency and data-usage analysis, the application is transformed to decompose data and computation among the available GPUs. To reduce the data transfer overhead, computation-communication overlapping techniques are used. We tested the tool on two applications with different data transfer requirements: for the application with no data transfer requirements a linear speedup is achieved, while for the application with data transfers the computation-communication overlap reduces the communication overhead by 40%. (A minimal host-code sketch of this decomposition pattern appears below, after the record.)
dc.language    en_US
dc.publisher    INSTITUTE OF ELECTRICAL AND ELECTRONICS ENGINEERS INC.
dc.source    Proceedings - 2015 Asia-Pacific Conference on Computer-Aided System Engineering, APCASE 2015
dc.subject    GPU
dc.subject    OpenCL
dc.subject    Program Transformation
dc.title    Automatic Parallelization of GPU Applications Using OpenCL
dc.type    Article
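
The abstract above describes decomposing an OpenCL application's data and computation across the available GPUs and overlapping computation with communication. The following is a minimal, hypothetical host-code sketch of that decomposition pattern, not the authors' tool or its generated code: it splits one buffer evenly across all GPUs of the first OpenCL platform and gives each device its own command queue, so transfers for one device can overlap with kernels running on another. The kernel, buffer layout, and lack of error handling are illustrative assumptions.

/* Hypothetical sketch: even data decomposition of a single-GPU OpenCL
 * workload across all GPUs of the first platform, with one command queue
 * per device so transfers and kernels on different GPUs can overlap. */
#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

static const char *src =
    "__kernel void scale(__global float *x, float a) {"
    "  size_t i = get_global_id(0);"
    "  x[i] = a * x[i];"
    "}";

int main(void) {
    enum { N = 1 << 20 };                       /* illustrative problem size */
    float *data = malloc(N * sizeof(float));
    for (size_t i = 0; i < N; ++i) data[i] = 1.0f;

    cl_platform_id plat;
    clGetPlatformIDs(1, &plat, NULL);
    cl_uint ngpu = 0;
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 0, NULL, &ngpu);
    if (ngpu == 0) { fprintf(stderr, "no GPU devices found\n"); return 1; }
    cl_device_id *devs = malloc(ngpu * sizeof(cl_device_id));
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, ngpu, devs, NULL);

    cl_context ctx = clCreateContext(NULL, ngpu, devs, NULL, NULL, NULL);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, ngpu, devs, "", NULL, NULL);

    size_t chunk = N / ngpu;                    /* assume N divisible by ngpu */
    cl_command_queue *q = malloc(ngpu * sizeof(cl_command_queue));
    cl_mem *buf = malloc(ngpu * sizeof(cl_mem));
    float a = 2.0f;

    for (cl_uint d = 0; d < ngpu; ++d) {
        /* clCreateCommandQueueWithProperties on OpenCL 2.0+ */
        q[d] = clCreateCommandQueue(ctx, devs[d], 0, NULL);
        buf[d] = clCreateBuffer(ctx, CL_MEM_READ_WRITE,
                                chunk * sizeof(float), NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "scale", NULL);
        clSetKernelArg(k, 0, sizeof(cl_mem), &buf[d]);
        clSetKernelArg(k, 1, sizeof(float), &a);

        /* Non-blocking write, kernel, and read are enqueued back to back on
         * this device's queue; each device owns its own slice of the data,
         * so the chunks proceed in parallel across GPUs. */
        clEnqueueWriteBuffer(q[d], buf[d], CL_FALSE, 0,
                             chunk * sizeof(float), data + d * chunk,
                             0, NULL, NULL);
        clEnqueueNDRangeKernel(q[d], k, 1, NULL, &chunk, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(q[d], buf[d], CL_FALSE, 0,
                            chunk * sizeof(float), data + d * chunk,
                            0, NULL, NULL);
    }
    for (cl_uint d = 0; d < ngpu; ++d) clFinish(q[d]);

    printf("data[0] = %f\n", data[0]);          /* expect 2.0 */
    return 0;
}

Per-device command queues are what make the overlap mentioned in the abstract possible: while one GPU is still receiving its chunk, another can already be executing its kernel or sending results back to the host.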

