Control Task for Reinforcement Learning with Known Optimal Solution for Discrete and Continuous Actions

PP. 28-41    DOI: 10.4236/jilsa.2009.11002


ABSTRACT

Research in Reinforcement Learning (RL) concentrates mainly on discrete sets of actions, but for certain real-world problems it is important to have methods which are able to find good strategies using actions drawn from continuous sets. This paper describes a simple control task called the direction finder and its known optimal solution for both discrete and continuous actions. It allows for a comparison of RL solution methods based on their value functions. In order to solve the control task for continuous actions, a simple idea for generalising them by means of feature vectors is presented. The resulting algorithm is applied using different choices of feature calculations. For comparing their performance, a simple measure is introduced.
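To make the idea of generalising over continuous actions with feature vectors concrete, the sketch below shows a generic Q-learning update with a linear function approximator over hand-crafted state-action features. The feature map, the candidate-action sampling, and all parameter values are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

# Minimal illustrative sketch (not the paper's algorithm):
# Q-learning with a linear approximator over feature vectors phi(s, a),
# so that continuous actions can be compared through their features.

def phi(state, action):
    """Hypothetical feature vector for a state-action pair:
    raw values, their product, and a bias term."""
    return np.array([state, action, state * action, 1.0])

def q_value(w, state, action):
    """Linear value estimate Q(s, a) = w . phi(s, a)."""
    return float(np.dot(w, phi(state, action)))

def greedy_action(w, state, candidates):
    """Approximate the continuous argmax by scanning sampled candidate actions."""
    return max(candidates, key=lambda a: q_value(w, state, a))

def q_update(w, s, a, r, s_next, candidates, alpha=0.1, gamma=0.95):
    """One TD(0)-style gradient update of the weight vector."""
    target = r + gamma * q_value(w, s_next, greedy_action(w, s_next, candidates))
    td_error = target - q_value(w, s, a)
    return w + alpha * td_error * phi(s, a)

# Example usage with made-up transition data
w = np.zeros(4)
candidates = np.linspace(-1.0, 1.0, 21)  # sampled points from a continuous action set
w = q_update(w, s=0.5, a=0.2, r=1.0, s_next=0.4, candidates=candidates)
```

Sampling candidate actions is only one possible way to realise the argmax over a continuous set; the quality of such an approach depends strongly on how the features are calculated, which is the kind of comparison the paper's performance measure is meant to support.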

Share and Cite:

M. Rottger and A. Liehr, "Control Task for Reinforcement Learning with Known Optimal Solution for Discrete and Continuous Actions," Journal of Intelligent Learning Systems and Applications, Vol. 1, No. 1, 2009, pp. 28-41. doi: 10.4236/jilsa.2009.11002.

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.