Multi-GPU Development of a Neural Networks Based Reconstructor for Adaptive Optics
Use this link to cite
http://hdl.handle.net/2183/37474
Except where otherwise noted, this item's license is described as Creative Commons Attribution License https://creativecommons.org/licenses/by/4.0/
Collections
- GI-CTC - Artigos [84]
Metadata
Title
Multi-GPU Development of a Neural Networks Based Reconstructor for Adaptive Optics
Author(s)
González-Gutiérrez, Carlos; Sánchez-Rodríguez, María Luisa; Calvo-Rolle, José Luis; de Cos Juez, Francisco Javier
Date
2018
Citation
González-Gutiérrez, Carlos; Sánchez-Rodríguez, María Luisa; Calvo-Rolle, José Luis; de Cos Juez, Francisco Javier. Multi-GPU Development of a Neural Networks Based Reconstructor for Adaptive Optics. Complexity, 2018, 5348265. https://doi.org/10.1155/2018/5348265
Abstract
[Abstract] Aberrations introduced by atmospheric turbulence in large telescopes are compensated for by adaptive optics systems, in which deformable mirrors and multiple sensors rely on complex control systems. Recently, the development of larger telescopes such as the E-ELT or TMT has created a computational challenge due to the increasing complexity of the new adaptive optics systems. The Complex Atmospheric Reconstructor based on Machine Learning (CARMEN) is an algorithm based on artificial neural networks, designed to compensate for atmospheric turbulence. In recent years, GPUs have proved to be an effective means of speeding up the training of neural networks, and different frameworks have been created to ease their development. This paper presents the implementation of CARMEN in different multi-GPU frameworks, along with its implementation in CUDA, a language designed specifically for GPU programming. The CUDA implementation offers the best performance in all the presented cases, although the advantage of using more than one GPU appears only for large networks.
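To illustrate the kind of multi-GPU, data-parallel computation the abstract describes, the sketch below splits the input batch of a single fully connected layer across all visible GPUs and runs the forward pass (a dense matrix multiply) on each device with cuBLAS. It is a minimal illustration, not the authors' CARMEN code: the layer sizes (IN, OUT, BATCH) and the helper forward_on_device are hypothetical, and the per-device work is launched sequentially for clarity rather than concurrently.

```cuda
// Minimal sketch (not the CARMEN implementation): data-parallel forward pass
// of one fully connected layer, with the batch split across all visible GPUs.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>
#include <cublas_v2.h>

#define IN    1024   // neurons in the input layer (illustrative size)
#define OUT   512    // neurons in the output layer (illustrative size)
#define BATCH 4096   // total samples processed per step (illustrative size)

// Computes Y = W * X for one slice of the batch on the given device.
// W is OUT x IN, X is IN x cols, Y is OUT x cols (column-major, as cuBLAS expects).
static void forward_on_device(int dev, const float* hW, const float* hX,
                              float* hY, int cols) {
    cudaSetDevice(dev);
    cublasHandle_t handle;
    cublasCreate(&handle);

    float *dW, *dX, *dY;
    cudaMalloc(&dW, sizeof(float) * OUT * IN);
    cudaMalloc(&dX, sizeof(float) * IN * cols);
    cudaMalloc(&dY, sizeof(float) * OUT * cols);
    cudaMemcpy(dW, hW, sizeof(float) * OUT * IN, cudaMemcpyHostToDevice);
    cudaMemcpy(dX, hX, sizeof(float) * IN * cols, cudaMemcpyHostToDevice);

    const float alpha = 1.0f, beta = 0.0f;
    // Y (OUT x cols) = W (OUT x IN) * X (IN x cols)
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                OUT, cols, IN, &alpha, dW, OUT, dX, IN, &beta, dY, OUT);

    cudaMemcpy(hY, dY, sizeof(float) * OUT * cols, cudaMemcpyDeviceToHost);
    cudaFree(dW); cudaFree(dX); cudaFree(dY);
    cublasDestroy(handle);
}

int main() {
    int nDevices = 1;
    cudaGetDeviceCount(&nDevices);

    std::vector<float> W(OUT * IN, 0.01f);   // weights, replicated on every GPU
    std::vector<float> X(IN * BATCH, 1.0f);  // input batch (e.g., sensor measurements)
    std::vector<float> Y(OUT * BATCH, 0.0f); // reconstructed outputs

    int per = BATCH / nDevices;              // samples (columns) handled per GPU
    for (int d = 0; d < nDevices; ++d) {
        int cols = (d == nDevices - 1) ? BATCH - per * d : per;
        forward_on_device(d, W.data(), X.data() + (size_t)IN * per * d,
                          Y.data() + (size_t)OUT * per * d, cols);
    }
    printf("Forward pass done on %d GPU(s), first output = %f\n", nDevices, Y[0]);
    return 0;
}
```

Compiled with `nvcc -lcublas`, this runs on however many GPUs are present; the multi-GPU benefit only outweighs the extra transfer and synchronization cost when the layers and batch are large, which is consistent with the abstract's observation that more than one GPU pays off only for large networks.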
Editor version
Rights
Creative Commons Attribution License https://creativecommons.org/licenses/by/4.0/
ISSN
1099-0526