Editorial: Special Section on Reinforcement Learning and Approximate Dynamic Programming
X. Xu, "Editorial: Special Section on Reinforcement Learning and Approximate Dynamic Programming," Journal of Intelligent Learning Systems and Applications, Vol. 2, No. 2, 2010, pp. 55-56. doi: 10.4236/jilsa.2010.22008.
Conflicts of Interest
The author declares no conflicts of interest.