The following optimality principle is established for finite undiscounted or discounted Markov decision processes: if a policy is (gain, bias, or discounted) optimal in one state, it is also optimal in every state reachable from that state under this policy. The optimality principle is used constructively to demonstrate the existence of a policy that is optimal in every state, and then to derive the coupled functional equations satisfied by the optimal return vectors. This reverses the usual sequence, in which one first establishes (via policy iteration or linear programming) the solvability of the coupled functional equations, and then shows that the solution is indeed the optimal return vector and that the maximizing policy for the functional equations is optimal in every state.
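For concreteness, the coupled functional equations referred to above take the following standard form for a finite MDP; the symbols $r(i,a)$, $p(j\mid i,a)$, the action sets $A(i)$ and $B(i)$, and the discount factor $\beta$ are introduced here only for illustration and follow common usage rather than the paper's own notation. In the discounted case the optimal return vector $v$ satisfies the single optimality equation
\[
  v(i) \;=\; \max_{a \in A(i)} \Bigl[\, r(i,a) + \beta \sum_{j} p(j \mid i,a)\, v(j) \Bigr], \qquad i \in S,
\]
while in the undiscounted case the gain $g$ and bias $b$ satisfy the coupled pair
\[
  g(i) \;=\; \max_{a \in A(i)} \sum_{j} p(j \mid i,a)\, g(j), \qquad
  g(i) + b(i) \;=\; \max_{a \in B(i)} \Bigl[\, r(i,a) + \sum_{j} p(j \mid i,a)\, b(j) \Bigr],
\]
where $B(i) \subseteq A(i)$ denotes the set of actions attaining the first maximum.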