The results obtained by Pollack and Blair substantially underperform my 1992 TD learning results, as shown by directly benchmarking the 1992 TD nets against Pubeval. A plausible hypothesis for this underperformance is that, unlike TD learning, the hillclimbing algorithm fails to capture nonlinear structure inherent in the problem and, despite the presence of hidden units, obtains only a linear approximation to the optimal policy for backgammon. Two lines of evidence supporting this hypothesis are discussed: the first comes from the structure of the Pubeval benchmark program, and the second from experiments replicating the Pollack and Blair results. © 1998 Kluwer Academic Publishers.
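The abstract's central hypothesis, that the hillclimbed nets realize only a linear function despite their hidden units, suggests a simple diagnostic: fit an ordinary least-squares linear model to a trained net's outputs over sampled positions, and check how much variance it explains. The following is a minimal Python sketch under stated assumptions; the toy one-hidden-layer evaluator, the random stand-in "positions", and all sizes and names are illustrative, not taken from the paper or from either group's actual networks. An R^2 near 1 would indicate the net is effectively linear.

import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 198    # size of a raw backgammon board encoding (assumption)
N_HIDDEN = 40       # hypothetical hidden-layer width
N_POSITIONS = 5000  # sampled stand-in positions

# Toy one-hidden-layer evaluator with small random weights.
# With weights this small, tanh stays near its linear regime,
# mimicking a net that never escaped a linear approximation.
W1 = rng.normal(scale=0.05, size=(N_FEATURES, N_HIDDEN))
b1 = np.zeros(N_HIDDEN)
w2 = rng.normal(scale=0.05, size=N_HIDDEN)

def evaluate(x):
    # Scalar position score from the toy net.
    return np.tanh(x @ W1 + b1) @ w2

# Sample positions and fit the best linear model to the net's outputs.
X = rng.normal(size=(N_POSITIONS, N_FEATURES))
y = np.array([evaluate(x) for x in X])

X1 = np.hstack([X, np.ones((N_POSITIONS, 1))])  # append a bias column
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
resid = y - X1 @ coef
r2 = 1.0 - resid.var() / y.var()
print(f"linear-probe R^2 = {r2:.4f}")  # near 1.0 => effectively linear

The same probe applied to a strong TD-trained net would, under the abstract's hypothesis, yield a markedly lower R^2, since TD learning is claimed to capture nonlinear structure that no linear fit can reproduce.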