Use this link to cite: http://hdl.handle.net/2183/26686
Implementación en CUDA dun método para realizar a operación de convolución en lotes (CUDA implementation of a method for performing the convolution operation in batches)
Author: Aguado Couselo, Sara
Academic degree: Grao en Enxeñaría Informática (Degree in Computer Engineering)
Abstract
In recent years, heterogeneous platforms such as graphics processing units (GPUs) have seen widespread adoption for solving problems in many fields. Performing algebraic operations in batches has already been explored successfully as a way to improve the performance of this class of operations. However, there are several ways to approach it: some seek an optimal placement of the data structures in memory that favours the characteristics of the platform on which the code will run, while others divide the work so as to increase the reuse of data processed by the same thread. This project explores all of these strategies as part of an implementation that uses CUDA to execute the convolution operation in batches. This algebraic operation is also the most time-consuming step in training deep learning networks. We therefore analyse the performance of the implementation both in isolation and in the context of deep learning networks.
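The batched convolution described in the abstract can be illustrated with a minimal single-threaded C++ reference (a sketch under stated assumptions, not the thesis implementation): "valid" mode, one channel, unit stride, and a batch-major memory layout in which image b starts at offset b*H*W. All names here are illustrative.

```cpp
#include <cstddef>
#include <vector>

// Reference batched 2D convolution: the same kernel is applied to every
// image in the batch. Layout assumption: inputs stored batch-major, so
// image b occupies in[b*H*W .. (b+1)*H*W).
std::vector<float> batched_conv2d(const std::vector<float>& in,
                                  const std::vector<float>& k,
                                  std::size_t batch, std::size_t H, std::size_t W,
                                  std::size_t kH, std::size_t kW) {
    const std::size_t oH = H - kH + 1, oW = W - kW + 1;  // "valid" output size
    std::vector<float> out(batch * oH * oW, 0.0f);
    for (std::size_t b = 0; b < batch; ++b)              // each image in the batch
        for (std::size_t y = 0; y < oH; ++y)
            for (std::size_t x = 0; x < oW; ++x) {
                float acc = 0.0f;
                for (std::size_t i = 0; i < kH; ++i)     // slide the kernel window
                    for (std::size_t j = 0; j < kW; ++j)
                        acc += in[b*H*W + (y + i)*W + (x + j)] * k[i*kW + j];
                out[b*oH*oW + y*oW + x] = acc;
            }
    return out;
}
```

On a GPU, the strategies the abstract mentions correspond to how these loops are mapped onto threads (which loop each thread owns determines how much data it can reuse) and to how the batch-major layout interacts with coalesced memory access.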
Rights: Atribución-NoComercial-SinDerivadas 3.0 España (CC BY-NC-ND 3.0 ES)