
Multi-Adaptive Optimization for multi-task learning with deep neural networks

View/Open
Hervella_AlvaroS_2024_Multi_Adaptive_Optimization_for_multi_task_learning_with_deep_neural_networks.pdf (1.810Mb)
Use this link to cite
http://hdl.handle.net/2183/36191
Attribution-NonCommercial-NoDerivatives 3.0 Spain
Except where otherwise noted, the item's license is described as Attribution-NonCommercial-NoDerivatives 3.0 Spain
Collections
  • Investigación (FIC) [1678]
Metadata
Show full item record
Title
Multi-Adaptive Optimization for multi-task learning with deep neural networks
Author(s)
Hervella, Álvaro S.
Rouco, J.
Novo Buján, Jorge
Ortega Hortas, Marcos
Date
2024-02
Bibliographic citation
Á. S. Hervella, J. Rouco, J. Novo, and M. Ortega, "Multi-Adaptive Optimization for multi-task learning with deep neural networks," Neural Networks, vol. 170, pp. 254-265, Feb. 2024, doi: 10.1016/j.neunet.2023.11.038
Abstract
Multi-task learning is a promising paradigm to leverage task interrelations during the training of deep neural networks. A key challenge in the training of multi-task networks is to adequately balance the complementary supervisory signals of multiple tasks. In that regard, although several task-balancing approaches have been proposed, they are usually limited by the use of per-task weighting schemes and do not completely address the uneven contribution of the different tasks to the network training. In contrast to classical approaches, we propose a novel Multi-Adaptive Optimization (MAO) strategy that dynamically adjusts the contribution of each task to the training of each individual parameter in the network. This automatically produces balanced learning across tasks and across parameters, throughout the whole training and for any number of tasks. To validate our proposal, we perform comparative experiments on real-world datasets for computer vision, considering different experimental settings. These experiments allow us to analyze the performance obtained in several multi-task scenarios along with the learning balance across tasks, network layers and training steps. The results demonstrate that MAO outperforms previous task-balancing alternatives. Additionally, the performed analyses provide insights that allow us to comprehend the advantages of this novel approach for multi-task learning.
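The core idea described in the abstract, balancing each task's gradient contribution per individual parameter rather than with a single per-task weight, can be sketched roughly as follows. This is a hypothetical illustration only, not the update rule from the paper: the function name `mao_step`, the Adam-style running second moment `v`, and all hyperparameters are assumptions made for the sake of the example.

```python
import numpy as np

def mao_step(params, task_grads, state, lr=1e-3, beta=0.999, eps=1e-8):
    """One illustrative update combining per-task gradients with
    per-parameter balancing (a sketch, not the published MAO rule).

    params:     flat parameter vector, shape (n_params,)
    task_grads: list of per-task gradients, each shape (n_params,)
    state:      dict with "v", a per-task running mean of squared
                gradients, shape (n_tasks, n_params)
    """
    g = np.stack(task_grads)                      # (n_tasks, n_params)
    # Track each task's gradient magnitude for every parameter.
    state["v"] = beta * state["v"] + (1 - beta) * g ** 2
    # Rescale each task's gradient element-wise by its own adaptive
    # scale, so no single task dominates any individual parameter.
    balanced = g / (np.sqrt(state["v"]) + eps)
    # Average the balanced contributions across tasks and take a step.
    return params - lr * balanced.mean(axis=0)
```

Contrast this with classical per-task weighting, which multiplies each task's whole gradient by one scalar: here the normalization happens independently for every parameter, which is the distinction the abstract emphasizes.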
Keywords
Computer vision
Deep learning
Gradient descent
Multi-task learning
Neural networks
Optimization
Description
Funded for open-access publication: Universidade da Coruña/CISUG
Publisher's version
https://doi.org/10.1016/j.neunet.2023.11.038
Rights
Attribution-NonCommercial-NoDerivatives 3.0 Spain
