Abstract
Suppose an agent builds a policy that satisfactorily solves a decision problem; suppose further that some aspects of this policy are abstracted and used as a starting point in a new, different decision problem. How can the agent accrue the benefits of the abstract policy in the new concrete problem? In this paper we propose a framework for simultaneous reinforcement learning, in which the abstract policy helps bootstrap the policy for the concrete problem, and both policies are refined through exploration. We report experiments demonstrating that our framework is effective in speeding up policy construction for practical problems.
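To make the idea concrete, here is a minimal sketch of how an abstract policy can bias exploration in a new problem while both value tables are refined from the same experience. This is an illustrative assumption-laden example, not the paper's actual algorithm: it assumes tabular Q-learning, a state-abstraction function `phi`, an epsilon-greedy/abstract-reuse mixture controlled by `psi`, and an environment object with `reset`, `step`, and `actions`.

```python
import random
from collections import defaultdict

def simultaneous_q_learning(env, phi, abstract_q, episodes=500,
                            alpha=0.1, gamma=0.99, epsilon=0.1, psi=0.5):
    """Sketch only: env.reset() -> state, env.step(a) -> (next_state, reward, done),
    env.actions is a list of actions; phi maps a concrete state to an abstract state;
    abstract_q holds values carried over from the earlier (abstract) problem."""
    concrete_q = defaultdict(float)
    abstract_q = defaultdict(float, abstract_q)  # tolerate unseen abstract states

    def greedy(q, s):
        return max(env.actions, key=lambda a: q[(s, a)])

    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            if random.random() < epsilon:
                a = random.choice(env.actions)       # explore
            elif random.random() < psi:
                a = greedy(abstract_q, phi(s))       # reuse the abstract policy
            else:
                a = greedy(concrete_q, s)            # exploit the concrete policy
            s2, r, done = env.step(a)
            # Refine both policies simultaneously from the same transition.
            best_c = 0.0 if done else max(concrete_q[(s2, b)] for b in env.actions)
            concrete_q[(s, a)] += alpha * (r + gamma * best_c - concrete_q[(s, a)])
            best_a = 0.0 if done else max(abstract_q[(phi(s2), b)] for b in env.actions)
            abstract_q[(phi(s), a)] += alpha * (r + gamma * best_a - abstract_q[(phi(s), a)])
            s = s2
    return concrete_q
```

The mixing parameter `psi` (a hypothetical name) controls how strongly the carried-over abstract policy shapes early behavior; as the concrete table fills in, one would typically decay `psi` so the agent relies increasingly on the policy learned for the new problem.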
| Original language | American English |
| --- | --- |
| Pages | 82-89 |
| Number of pages | 8 |
| State | Published - 1 Dec 2011 |
| Externally published | Yes |
| Event | SARA 2011 - Proceedings of the 9th Symposium on Abstraction, Reformulation, and Approximation - Duration: 1 Dec 2011 → … |
Conference
| Conference | SARA 2011 - Proceedings of the 9th Symposium on Abstraction, Reformulation, and Approximation |
| --- | --- |
| Period | 1/12/11 → … |