Show simple item record

dc.contributor.author  Doncieux, Stephane
dc.contributor.author  Filliat, David
dc.contributor.author  Díaz-Rodríguez, Natalia
dc.contributor.author  Hospedales, Timothy
dc.contributor.author  Duro, Richard J.
dc.contributor.author  Coninx, Alexandre
dc.contributor.author  Roijers, Diederik M.
dc.contributor.author  Girard, Benoît
dc.contributor.author  Perrin, Nicolas
dc.contributor.author  Sigaud, Olivier
dc.date.accessioned  2024-06-18T15:50:56Z
dc.date.available  2024-06-18T15:50:56Z
dc.date.issued  2018
dc.identifier.citation  Doncieux, S., Filliat, D., Díaz-Rodríguez, N., Hospedales, T., Duro, R., Coninx, A., et al. (2018). Open-Ended Learning: A Conceptual Framework Based on Representational Redescription. Front. Neurorobot. 12, 59. doi: 10.3389/fnbot.2018.00059  es_ES
dc.identifier.issn  1662-5218
dc.identifier.uri  http://hdl.handle.net/2183/37100
dc.description.abstract  [Abstract]: Reinforcement learning (RL) aims at building a policy that maximizes a task-related reward within a given domain. When the domain is known, i.e., when its states, actions, and reward are defined, Markov Decision Processes (MDPs) provide a convenient theoretical framework to formalize RL. But in an open-ended learning process, an agent or robot must solve an unbounded sequence of tasks that are not known in advance, and the corresponding MDPs cannot be built at design time. This defines the main challenge of open-ended learning: how can the agent learn to behave appropriately when adequate state, action, and reward representations are not given? In this paper, we propose a conceptual framework to address this question. We assume an agent endowed with low-level perception and action capabilities. This agent receives an external reward when it faces a task. It must discover the state and action representations that will let it cast the tasks as MDPs in order to solve them by RL. The relevance of the action or state representation is critical for the agent to learn efficiently. Considering that the agent starts with low-level, task-agnostic state and action spaces based on its low-level perception and action capabilities, we describe open-ended learning as the challenge of building adequate representations of states and actions, i.e., of redescribing available representations. We suggest an iterative approach to this problem based on several successive Representational Redescription processes, and highlight the corresponding challenges, in which intrinsic motivations play a key role.  es_ES
dc.language.iso  eng  es_ES
dc.publisher  Frontiers Media  es_ES
dc.relation  info:eu-repo/grantAgreement/EC/H2020/640891  es_ES
dc.relation.uri  https://doi.org/10.3389/FNBOT.2018.00059  es_ES
dc.rights  Atribución 4.0 Internacional  es_ES
dc.rights.uri  https://creativecommons.org/licenses/by/4.0/  *
dc.subject  Developmental robotics  es_ES
dc.subject  Reinforcement learning  es_ES
dc.subject  State representation learning  es_ES
dc.subject  Representational redescription  es_ES
dc.subject  Actions and goals  es_ES
dc.subject  Skills  es_ES
dc.title  Open-ended learning: a conceptual framework based on representational redescription  es_ES
dc.type  info:eu-repo/semantics/article  es_ES
dc.rights.access  info:eu-repo/semantics/openAccess  es_ES
UDC.journalTitle  Frontiers in Neurorobotics  es_ES
UDC.volume  12  es_ES
UDC.issue  Sep  es_ES
dc.identifier.doi  https://doi.org/10.3389/FNBOT.2018.00059
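
The abstract rests on the standard MDP formalization of RL. As a point of reference, here is a minimal sketch of that formalization in LaTeX, using common textbook symbols; the notation is a standard convention, not taken from the paper itself:

  % Standard MDP tuple assumed by the abstract (textbook conventions):
  % states S, actions A, transition model T, reward R, discount gamma.
  \[
    \mathcal{M} = (S, A, T, R, \gamma), \qquad
    T(s' \mid s, a), \qquad
    R : S \times A \to \mathbb{R}
  \]
  % RL then seeks a policy maximizing the expected discounted return:
  \[
    \pi^{*} = \arg\max_{\pi} \;
    \mathbb{E}_{\pi}\!\left[ \sum_{t=0}^{\infty} \gamma^{t} \, R(s_t, a_t) \right]
  \]

The open-ended learning problem described in the abstract is precisely that S, A, and R are not given at design time and must be constructed by the agent through representational redescription.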


Files in this item


This item appears in the following collection(s)
