Show simple item record

dc.contributor.author  Puente-Castro, Alejandro
dc.contributor.author  Rivero, Daniel
dc.contributor.author  Pedrosa, Eurico
dc.contributor.author  Pereira, Artur
dc.contributor.author  Lau, Nuno
dc.contributor.author  Fernández-Blanco, Enrique
dc.date.accessioned  2023-12-11T08:53:41Z
dc.date.available  2023-12-11T08:53:41Z
dc.date.issued  2023
dc.identifier.citation  Puente-Castro, A., Rivero, D., Pedrosa, E., Pereira, A., Lau, N., & Fernandez-Blanco, E. (2023). Q-Learning based system for Path Planning with Unmanned Aerial Vehicles swarms in obstacle environments. Expert Systems With Applications, 235, 121240. https://doi.org/10.1016/j.eswa.2023.121240  es_ES
dc.identifier.uri  http://hdl.handle.net/2183/34437
dc.description.abstract  [Abstract]: Path Planning methods for the autonomous control of Unmanned Aerial Vehicle (UAV) swarms are on the rise due to the numerous advantages they bring. There are increasingly more scenarios where autonomous control of multiple UAVs is required. Most of these scenarios involve a large number of obstacles, such as power lines or trees. Despite these challenges, there are also several advantages; if all UAVs can operate autonomously, personnel expenses can be reduced. Additionally, if their flight paths are optimized, energy consumption is reduced, leaving more battery time for other operations. In this paper, a Reinforcement Learning-based system is proposed to solve this problem in environments with obstacles by utilizing Q-Learning. This method allows a model, in this case, an Artificial Neural Network, to self-adjust by learning from its mistakes and successes. Regardless of the map’s size or the number of UAVs in the swarm, the goal of these paths is to ensure complete coverage of an area with fixed obstacles for tasks like field prospecting. Setting goals or having any prior information apart from the provided map is not required. During the experimentation phase, five maps of varying sizes were used, each with different obstacles and a varying number of UAVs. To evaluate the quality of the results, the number of actions taken by each UAV to complete the task in each experiment was considered. The results indicate that the system achieves solutions with fewer movements as the number of UAVs on a map increases. The results have been compared, and a statistical significance analysis has been conducted on the proposed model’s outcomes, demonstrating its capabilities. Thus, it is shown that a two-layer Artificial Neural Network used to implement a Q-Learning algorithm is sufficient to operate on maps with obstacles.  es_ES
dc.description.sponsorship  This project was supported by the FCT - Foundation for Science and Technology, Portugal, in the context of the project [grant number UIDB/00127/2020], and also POCI 2020, in the context of the Germirrad project [grant number POCI-01-0247-FEDER-072237]. It was also supported by the General Directorate of Culture, Education, and University Management of Xunta de Galicia [grant number ED431D 2017/16]. This work was also funded by the grant for the consolidation and structuring of competitive research units [grant number ED431C 2022/46] from the General Directorate of Culture, Education and University Management of Xunta de Galicia, and the CYTED network, Spain [grant number PCI2018_093284] funded by the Spanish Ministry of Innovation and Science. This project was also supported by the General Directorate of Culture, Education and University Management of Xunta de Galicia “PRACTICUM DIRECT” [grant number IN845D-2020/03].  es_ES
dc.description.sponsorship  Portugal. Fundação para a Ciência e a Tecnologia; UIDB/00127/2020  es_ES
dc.description.sponsorship  Portugal. Programa Operacional Competitividade e Internacionalização; POCI-01-0247-FEDER-072237  es_ES
dc.description.sponsorship  Xunta de Galicia; ED431D 2017/16  es_ES
dc.description.sponsorship  Xunta de Galicia; ED431C 2022/46  es_ES
dc.description.sponsorship  Xunta de Galicia; IN845D-2020/03  es_ES
dc.language.iso  eng  es_ES
dc.publisher  Elsevier  es_ES
dc.relation  info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2017-2020/PCI2018_093284/ES/OBESIDAD Y DIABETES EN IBEROAMERICA: FACTORES DE RIESGO Y NUEVOS BIOMARCADORES PATOGENICOS Y PREDICTIVOS  es_ES
dc.relation.uri  https://doi.org/10.1016/j.eswa.2023.121240  es_ES
dc.rights  Attribution-NonCommercial-NoDerivs 4.0 International (CC BY-NC-ND)  es_ES
dc.rights.uri  http://creativecommons.org/licenses/by-nc-nd/3.0/es/  *
dc.subject  UAV  es_ES
dc.subject  Artificial Neural Network  es_ES
dc.subject  Reinforcement learning  es_ES
dc.subject  Path Planning  es_ES
dc.subject  Obstacle  es_ES
dc.subject  Swarm  es_ES
dc.title  Q-Learning based system for Path Planning with Unmanned Aerial Vehicles swarms in obstacle environments  es_ES
dc.type  info:eu-repo/semantics/article  es_ES
dc.rights.access  info:eu-repo/semantics/openAccess  es_ES
UDC.journalTitle  Expert Systems with Applications  es_ES
UDC.volume  235  es_ES
UDC.issue  121240  es_ES
dc.identifier.doi  10.1016/j.eswa.2023.121240
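
The abstract above describes a Q-Learning based system, implemented with a two-layer Artificial Neural Network, for covering obstacle maps with UAV swarms. As a rough, generic illustration of the Q-Learning update rule the record refers to (not the paper's actual implementation, which uses a neural approximator and multiple UAVs), here is a minimal single-agent tabular sketch on a hypothetical grid; the grid size, obstacle positions, rewards, and hyperparameters are all assumptions made only for this example.

```python
import numpy as np

# Minimal, generic tabular Q-Learning sketch on a toy grid with obstacles.
# This is NOT the paper's implementation (which approximates Q with a
# two-layer Artificial Neural Network); all values here are illustrative.

rows, cols = 5, 5
obstacles = {(1, 1), (2, 3)}                    # hypothetical fixed obstacles
actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]    # up, down, left, right
alpha, gamma, epsilon = 0.1, 0.95, 0.1          # assumed hyperparameters

q = np.zeros((rows, cols, len(actions)))        # Q-table: state x action
visited = np.zeros((rows, cols), dtype=bool)    # coverage bookkeeping

def step(state, a):
    """Apply action a; stay in place on walls/obstacles; reward new cells."""
    r, c = state
    dr, dc = actions[a]
    nr, nc = r + dr, c + dc
    if not (0 <= nr < rows and 0 <= nc < cols) or (nr, nc) in obstacles:
        return state, -1.0                      # penalise invalid moves
    reward = 1.0 if not visited[nr, nc] else -0.1
    visited[nr, nc] = True
    return (nr, nc), reward

state = (0, 0)
visited[state] = True
for _ in range(10_000):
    # epsilon-greedy action selection
    if np.random.rand() < epsilon:
        a = np.random.randint(len(actions))
    else:
        a = int(np.argmax(q[state[0], state[1]]))
    next_state, reward = step(state, a)
    # standard Q-Learning update:
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = np.max(q[next_state[0], next_state[1]])
    q[state[0], state[1], a] += alpha * (reward + gamma * best_next
                                         - q[state[0], state[1], a])
    state = next_state
```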


Files in this item


This item appears in the following collection(s)
