Michael Muller, Anna Kantosalo, et al.
CHI 2024
In this study, we analyze the baseline function used in estimating the natural policy gradient with respect to the variance of the estimate, and show a condition under which the optimal variance-reducing baseline function coincides with the state value function. Outside this condition, however, the state value can differ considerably from the optimal baseline. For such cases, we propose an extended version of the NTD algorithm in which an auxiliary function is estimated to adjust the baseline (the state value estimate in the original NTD algorithm) toward the optimal one. The proposed algorithm is applied to simple MDPs and a challenging pendulum swing-up problem. © International Symposium on Artificial Life and Robotics (ISAROB). 2008.
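The variance-reduction idea behind the abstract above can be sketched in a minimal form. The following is not the paper's NTD algorithm: it is a one-state bandit with a Gaussian policy, illustrating why subtracting a baseline close to the expected return from the reward lowers the variance of the score-function gradient estimate without changing its mean. All names (`mu`, `sigma`, `reward_fn`) are illustrative assumptions.

```python
import random
import statistics

# Score-function gradient estimate for a Gaussian policy N(mu, sigma):
#   g = (reward - baseline) * d/dmu log N(a; mu, sigma)
# Subtracting a constant baseline leaves the expectation unchanged but
# can greatly reduce the variance when rewards have a large offset.

random.seed(0)
mu, sigma = 0.0, 1.0

def reward_fn(a):
    # Large constant offset makes the baseline-free estimator noisy.
    return 10.0 + a

def grad_sample(baseline):
    a = random.gauss(mu, sigma)
    score = (a - mu) / sigma**2  # d/dmu of the Gaussian log-density
    return (reward_fn(a) - baseline) * score

samples_no_b = [grad_sample(0.0) for _ in range(10000)]
value = 10.0  # the state value E[reward] in this toy problem
samples_b = [grad_sample(value) for _ in range(10000)]

print(statistics.mean(samples_no_b), statistics.variance(samples_no_b))
print(statistics.mean(samples_b), statistics.variance(samples_b))
```

Both sample means estimate the same gradient (analytically 1 here), but the baselined estimator's variance is roughly 2 versus about 102 without the baseline, which is the effect an optimal baseline exploits.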
Els van Herreweghen, Uta Wille
USENIX Workshop on Smartcard Technology 1999
Vijay K. Naik, Sanjeev K. Setia, et al.
Journal of Parallel and Distributed Computing
Bing Zhang, Mikio Takeuchi, et al.
NAACL 2025