The following optimality principle is established for finite undiscounted or discounted Markov decision processes: if a policy is (gain, bias, or discounted) optimal in one state, it is also optimal in every state reachable from that state under this policy. The optimality principle is used constructively to demonstrate the existence of a policy that is optimal in every state, and then to derive the coupled functional equations satisfied by the optimal return vectors. This reverses the usual sequence, in which one first establishes (via policy iteration or linear programming) the solvability of the coupled functional equations, and then shows that the solution is indeed the optimal return vector and that the maximizing policy for the functional equations is optimal for every state.
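For readers unfamiliar with them, the coupled functional equations referred to above are, in the undiscounted (average-reward) case, commonly written in the following standard form; the notation below (state space S, action sets A(s), rewards r(s,a), transition probabilities p(j|s,a), gain g, bias b) is the conventional one and is assumed here rather than taken from the paper itself:

\[
g(s) = \max_{a \in A(s)} \sum_{j \in S} p(j \mid s, a)\, g(j),
\qquad
g(s) + b(s) = \max_{a \in B(s)} \Big\{ r(s, a) + \sum_{j \in S} p(j \mid s, a)\, b(j) \Big\},
\]

where B(s) is the set of actions attaining the maximum in the first equation. In the discounted case with factor \(\beta \in [0,1)\), the single equation \(v(s) = \max_{a \in A(s)} \{ r(s,a) + \beta \sum_{j \in S} p(j \mid s, a)\, v(j) \}\) plays the corresponding role.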