      Reinforcement Learning: A Survey

      Journal of Artificial Intelligence Research
      AI Access Foundation


          Abstract

          This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word "reinforcement." The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning.
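          The core loop the abstract describes — an agent learning from trial-and-error interaction while trading off exploration and exploitation — can be illustrated with a minimal sketch. This is not code from the survey itself: the toy chain environment, reward values, and hyperparameters below are illustrative assumptions. It uses tabular Q-learning with epsilon-greedy exploration, one of the delayed-reinforcement methods the survey covers.

```python
import random

# Toy deterministic chain MDP: states 0..4, the agent starts at 0 and
# receives reward 1 only on reaching the goal state 4 (an assumption
# for illustration, not an example from the paper).
N_STATES = 5
ACTIONS = (-1, +1)              # move left or right along the chain
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate

def step(state, action):
    """One environment transition; episode ends at the goal state."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Exploration/exploitation tradeoff: with probability EPS
            # pick a random action, otherwise the greedy one.
            if rng.random() < EPS:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2, r, done = step(s, a)
            # Q-learning update: bootstrap from the best next-state value,
            # which propagates the delayed reward back along the chain.
            best_next = 0.0 if done else max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
            s = s2
    return q

q = train()
# Greedy policy recovered from the learned values.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
```

          After training, the greedy policy moves right (+1) in every non-terminal state, even though the reward is delayed until the end of the chain.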


          Author and article information

          Journal: Journal of Artificial Intelligence Research (jair)
          Publisher: AI Access Foundation
          ISSN: 1076-9757
          Publication dates: January 01 1996; May 01 1996
          Volume: 4
          Pages: 237-285
          DOI: 10.1613/jair.301
          ScienceOpen ID: 9413d7e6-cbb9-4ec7-99ae-8ff9b96a8945
          Copyright: © 1996
