Show simple item record

dc.contributor.author	López Castro, Roberto
dc.contributor.author	Andrade, Diego
dc.contributor.author	Fraguela, Basilio B.
dc.date.accessioned	2024-06-05T14:30:12Z
dc.date.available	2024-06-05T14:30:12Z
dc.date.issued	2024-05
dc.identifier.citation	R. L. Castro, D. Andrade and B. B. Fraguela, "STuning-DL: Model-Driven Autotuning of Sparse GPU Kernels for Deep Learning," in IEEE Access, vol. 12, pp. 70581-70599, 2024, doi: 10.1109/ACCESS.2024.3402326.
dc.identifier.issn	2169-3536
dc.identifier.uri	http://hdl.handle.net/2183/36810
dc.description.abstract	[Abstract]: The relentless growth of modern Machine Learning models has spurred the adoption of sparsification techniques to simplify their architectures and reduce their computational demands. Network pruning has demonstrated success in maintaining original network accuracy while shedding significant portions of the original weights. However, leveraging this sparsity efficiently remains challenging due to computational irregularities, particularly in GPU kernels. A new trend of template-based GPU kernels for semi-structured sparsity shows promise in efficiency but lacks autotuning capabilities to adapt to input dynamics, often underperforming in scenarios where they have not been meticulously hand-tuned. We present STuning-DL, the first pruning-aware autotuner for third-party template-based implementations, enabling efficient optimization of sparse kernels for Deep Learning, spanning from high-level aspects (CUDA C++ level) down to GPU-native instruction specifics (assembly level). STuning-DL tunes and optimizes sparse kernels' performance at run time for each input problem, yielding speedups of up to 5.42x on an NVIDIA T4-16GB GPU and up to 3.6x on an NVIDIA A100-40GB GPU on sparse matrices from real-world models, compared to existing heuristics from sparse libraries such as cuSparse and cuSparseLt.
dc.description.sponsorship	This work was supported by grant PID2022-136435NB-I00, funded by MCIN/AEI/10.13039/501100011033 and by "ERDF A way of making Europe", EU; also by Xunta de Galicia under the Consolidation Programme of Competitive Reference Groups, ref. ED431C 2021/30. The work of Roberto L. Castro was supported by a predoctoral grant from the Ministry of Science, Innovation and Universities, ref. FPU19/03974.
dc.description.sponsorship	Xunta de Galicia; ED431C 2021/30
dc.language.iso	eng
dc.publisher	Institute of Electrical and Electronics Engineers
dc.relation	info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2021-2023/PID2022-136435NB-I00/ES/ARQUITECTURAS, FRAMEWORKS Y APLICACIONES DE LA COMPUTACION DE ALTAS PRESTACIONES
dc.relation	info:eu-repo/grantAgreement//Plan Estatal de Investigación Científica y Técnica y de Innovación 2017-2020/FPU19%2F03974/ES/
dc.relation.uri	https://doi.org/10.1109/ACCESS.2024.3402326
dc.rights	Attribution 4.0 International
dc.rights.uri	http://creativecommons.org/licenses/by/3.0/es/
dc.subject	CUDA
dc.subject	GPU
dc.subject	Learning-based predictive model
dc.subject	Network pruning
dc.subject	Sparse computation
dc.subject	SpMM
dc.subject	Tensor Core
dc.title	STuning-DL: Model-Driven Autotuning of Sparse GPU Kernels for Deep Learning
dc.type	info:eu-repo/semantics/article
dc.rights.access	info:eu-repo/semantics/openAccess
UDC.journalTitle	IEEE Access
UDC.volume	12
UDC.startPage	70581
UDC.endPage	70599
dc.identifier.doi	10.1109/ACCESS.2024.3402326


Files in this item


This item appears in the following collection(s)
