
Multi-Adaptive Optimization for multi-task learning with deep neural networks

Hervella_AlvaroS_2024_Multi_Adaptive_Optimization_for_multi_task_learning_with_deep_neural_networks.pdf (1.810Mb)
Use this link to cite
http://hdl.handle.net/2183/36191
Atribución-NoComercial-SinDerivadas 3.0 España (Attribution-NonCommercial-NoDerivatives 3.0 Spain)
Except where otherwise noted, this item's license is described as Atribución-NoComercial-SinDerivadas 3.0 España (Attribution-NonCommercial-NoDerivatives 3.0 Spain).
Collections
  • Investigación (FIC) [1679]
Metadata
Title
Multi-Adaptive Optimization for multi-task learning with deep neural networks
Author(s)
Hervella, Álvaro S.
Rouco, J.
Novo Buján, Jorge
Ortega Hortas, Marcos
Date
2024-02
Citation
Á. S. Hervella, J. Rouco, J. Novo, and M. Ortega, "Multi-Adaptive Optimization for multi-task learning with deep neural networks," Neural Networks, vol. 170, pp. 254-265, Feb. 2024, doi: 10.1016/j.neunet.2023.11.038
Abstract
[Abstract]: Multi-task learning is a promising paradigm to leverage task interrelations during the training of deep neural networks. A key challenge in the training of multi-task networks is to adequately balance the complementary supervisory signals of multiple tasks. In that regard, although several task-balancing approaches have been proposed, they are usually limited by the use of per-task weighting schemes and do not completely address the uneven contribution of the different tasks to the network training. In contrast to classical approaches, we propose a novel Multi-Adaptive Optimization (MAO) strategy that dynamically adjusts the contribution of each task to the training of each individual parameter in the network. This automatically produces a balanced learning across tasks and across parameters, throughout the whole training and for any number of tasks. To validate our proposal, we perform comparative experiments on real-world datasets for computer vision, considering different experimental settings. These experiments allow us to analyze the performance obtained in several multi-task scenarios along with the learning balance across tasks, network layers and training steps. The results demonstrate that MAO outperforms previous task-balancing alternatives. Additionally, the performed analyses provide insights that allow us to comprehend the advantages of this novel approach for multi-task learning.
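The abstract's key idea is that task balancing happens per parameter rather than per task: each network parameter receives its own blend of the task gradients. As a rough illustration of what such a per-parameter weighting can look like, the sketch below combines per-task gradients using inverse gradient magnitude normalized across tasks. This is an assumed illustrative scheme, not the paper's actual MAO update rule; the function name and the weighting formula are hypothetical.

```python
import numpy as np

def combine_task_gradients(task_grads, eps=1e-8):
    """Blend per-task gradients with a separate weight for every parameter.

    task_grads: list of T arrays, one gradient per task, all with the
    parameter's shape. Tasks whose gradient is large at a given parameter
    get a smaller weight there, so no single task dominates that entry.
    (Illustrative scheme only; MAO's actual rule is defined in the paper.)
    """
    grads = np.stack(task_grads)             # shape (T, *param_shape)
    scale = np.abs(grads) + eps              # per-task, per-parameter magnitude
    weights = 1.0 / scale                    # down-weight dominant tasks
    weights = weights / weights.sum(axis=0)  # normalize across tasks, per parameter
    return (weights * grads).sum(axis=0)     # combined update direction
```

Note the contrast with classical per-task weighting, where a single scalar multiplies a task's entire gradient: here the weight varies entry by entry, which is what allows the balance to differ across network layers.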
Keywords
Computer vision
Deep learning
Gradient descent
Multi-task learning
Neural networks
Optimization
Description
Funded for open-access publication: Universidade da Coruña/CISUG
Editor version
https://doi.org/10.1016/j.neunet.2023.11.038
Rights
Atribución-NoComercial-SinDerivadas 3.0 España (Attribution-NonCommercial-NoDerivatives 3.0 Spain)
UNIVERSIDADE DA CORUÑA. Library Service. DSpace Software Copyright © 2002-2013 Duraspace