Open-ended learning: a conceptual framework based on representational redescription
Use this link to cite
http://hdl.handle.net/2183/37100
Collections
- GII-Artigos [19]
- OpenAIRE [332]
Metadata
Title
Open-ended learning: a conceptual framework based on representational redescription
Author(s)
Doncieux, S.; Filliat, D.; Díaz-Rodríguez, N.; Hospedales, T.; Duro, R.; Coninx, A.; et al.
Date
2018
Bibliographic citation
Doncieux, S., Filliat, D., Díaz-Rodríguez, N., Hospedales, T., Duro, R., Coninx, A., et al. (2018). Open-Ended Learning: A Conceptual Framework Based on Representational Redescription. Front. Neurorobot. 12, 59. doi: 10.3389/fnbot.2018.00059
Abstract
Reinforcement learning (RL) aims at building a policy that maximizes a task-related reward within a given domain. When the domain is known, i.e., when its states, actions, and reward are defined, Markov Decision Processes (MDPs) provide a convenient theoretical framework to formalize RL. But in an open-ended learning process, an agent or robot must solve an unbounded sequence of tasks that are not known in advance, and the corresponding MDPs cannot be built at design time. This defines the main challenge of open-ended learning: how can the agent learn to behave appropriately when adequate state, action, and reward representations are not given? In this paper, we propose a conceptual framework to address this question. We assume an agent endowed with low-level perception and action capabilities. This agent receives an external reward when it faces a task. It must discover the state and action representations that will let it cast the tasks as MDPs in order to solve them by RL. The relevance of the action or state representation is critical for the agent to learn efficiently. Considering that the agent starts with low-level, task-agnostic state and action spaces based on its low-level perception and action capabilities, we describe open-ended learning as the challenge of building adequate representations of states and actions, i.e., of redescribing the available representations. We suggest an iterative approach to this problem based on several successive Representational Redescription processes, and highlight the corresponding challenges, in which intrinsic motivations play a key role.
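As a concrete illustration of the abstract's central idea, that a task becomes solvable by standard RL once low-level observations are redescribed into an adequate state space, here is a minimal, self-contained sketch. All names (`phi`, `LineWorld`) and the toy domain are hypothetical and not from the paper: where the paper asks how a representation like `phi` can be discovered, this sketch simply hand-codes it as a discretization of a one-dimensional observation, so that tabular Q-learning applies.

```python
import random
from collections import defaultdict

# Hypothetical sketch (not the paper's implementation): once a state
# representation phi redescribes low-level observations as discrete states,
# the task can be cast as an MDP and solved with tabular Q-learning.

def phi(observation, n_bins=10):
    """Redescribed state: discretize a raw observation in [0, 1) into bins."""
    return min(int(observation * n_bins), n_bins - 1)

class LineWorld:
    """Toy domain: the agent moves on [0, 1); reward for reaching the right end."""
    ACTIONS = (-1, +1)  # low-level motor commands: step left or step right

    def reset(self):
        self.pos = 0.0
        return self.pos

    def step(self, action):
        self.pos = min(max(self.pos + 0.1 * action, 0.0), 0.99)
        reward = 1.0 if self.pos >= 0.9 else 0.0
        return self.pos, reward, reward > 0.0  # observation, reward, done

def greedy(q, s, actions):
    """Pick an action with maximal Q(s, a), breaking ties at random."""
    best = max(q[(s, a)] for a in actions)
    return random.choice([a for a in actions if q[(s, a)] == best])

def q_learning(env, episodes=300, alpha=0.5, gamma=0.95, eps=0.1, max_steps=500):
    """Tabular Q-learning over the redescribed (binned) state space."""
    q = defaultdict(float)  # (state, action) -> estimated return
    for _ in range(episodes):
        obs = env.reset()
        for _ in range(max_steps):
            s = phi(obs)
            # epsilon-greedy action selection in the redescribed state
            a = random.choice(env.ACTIONS) if random.random() < eps else greedy(q, s, env.ACTIONS)
            obs, r, done = env.step(a)
            target = r + (0.0 if done else gamma * max(q[(phi(obs), a2)] for a2 in env.ACTIONS))
            q[(s, a)] += alpha * (target - q[(s, a)])
            if done:
                break
    return q

if __name__ == "__main__":
    q = q_learning(LineWorld())
    policy = {s: greedy(q, s, LineWorld.ACTIONS) for s in range(10)}
    print(policy)  # expected: +1 (move right) in the bins the agent visits
```

The point of the sketch is the division of labor: the Q-learner only ever sees the states produced by `phi`, so the quality of the redescription directly bounds what it can achieve, which is exactly the gap the paper's framework targets by making the representation itself the object of learning.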
Keywords
Developmental robotics
Reinforcement learning
State representation learning
Representational redescription
Actions and goals
Skills
Publisher's version
https://doi.org/10.3389/fnbot.2018.00059
Rights
Attribution 4.0 International
ISSN
1662-5218