Counterexample to theorems of Cox and Fine
Joseph Y. Halpern
AAAI 1996
We extend the existing theory of stability (how much changes in the training data influence the estimated model) and of the generalization performance of deterministic learning algorithms to the case of randomized algorithms. We give formal definitions of stability for randomized algorithms and prove non-asymptotic bounds on the difference between the empirical and expected error, as well as between the leave-one-out and expected error, of such algorithms; the bounds depend on their random stability. The setup we develop for this purpose can also be used to study randomized learning algorithms more generally. We then use these general results to study the effect of bagging on the stability of a learning method and to prove non-asymptotic bounds on the predictive performance of bagging, which could not be obtained with the existing theory of stability for deterministic learning algorithms.
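To make the stability notion concrete, here is a minimal sketch, not taken from the paper, of how one might empirically probe the sensitivity of a randomized learner such as a bagged ensemble: replace a single training point and average the resulting change in predictions over the algorithm's internal randomness. The synthetic data, the use of scikit-learn's BaggingRegressor, and all parameter choices are illustrative assumptions.

    # Illustrative sketch only: the synthetic data and scikit-learn's
    # BaggingRegressor are assumptions, not the paper's construction.
    import numpy as np
    from sklearn.ensemble import BaggingRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=(200, 5))
    y = X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.standard_normal(200)

    def fit_predict(X_train, y_train, X_eval, seed):
        # A bagged tree ensemble is a randomized algorithm: its output depends
        # on the bootstrap resampling governed by `seed`.
        model = BaggingRegressor(n_estimators=50, random_state=seed)
        model.fit(X_train, y_train)
        return model.predict(X_eval)

    # Build a neighboring training set: replace the one example at position i.
    i = 0
    X_prime, y_prime = X.copy(), y.copy()
    X_prime[i] = rng.uniform(-1.0, 1.0, size=5)
    y_prime[i] = X_prime[i, 0] - 2.0 * X_prime[i, 1]

    X_eval = rng.uniform(-1.0, 1.0, size=(500, 5))

    # Average the worst-case prediction gap over the internal randomness
    # (different seeds), in the spirit of random stability: the sensitivity to
    # changing one training point is measured in expectation over the
    # algorithm's own randomization.
    gaps = [
        np.max(np.abs(fit_predict(X, y, X_eval, s)
                      - fit_predict(X_prime, y_prime, X_eval, s)))
        for s in range(10)
    ]
    print("estimated sensitivity to one replaced point:", float(np.mean(gaps)))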