A Counterexample to Theorems of Cox and Fine
Joseph Y. Halpern
AAAI 1996
We advance a knowledge-based learning method that allows prior domain knowledge to be effectively utilized by machine learning systems. The domain knowledge is not incorporated into the learning algorithm itself; instead, it affects only the training data. The domain knowledge is used to explain the actual training examples and then transform them into a more informative set of imaginary, or "phantom," examples. These phantom examples are added to the training set; the experienced examples are discarded. A new control policy is induced from the phantom training set. This policy is then exercised, yielding additional training points, and the process repeats. We investigate the performance of this method in a stylized air-hockey domain that demands a difficult nonlinear control policy. Our experiments show that, surprisingly, an accurate policy can be learned even when the domain theory is imprecise and approximate. We advance an interpretation indicating that the information available from a plausible qualitative domain theory is sufficient for robust, successful learning. This interpretation is used to make a number of predictions, which are tested in subsequent experiments. The outcomes confirm the interpretation and the robustness of the approach.
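The training loop described in this abstract can be summarized as explain, transform, retrain, and re-exercise. The following Python sketch illustrates that loop under stated assumptions; every helper name (exercise_policy, explain, generate_phantoms, induce_policy) is a hypothetical placeholder, not the authors' implementation or API.

```python
# Illustrative sketch of the phantom-example learning loop from the abstract.
# All helper functions are hypothetical placeholders supplied by the caller;
# the paper does not specify a concrete interface.

def train_with_phantom_examples(initial_policy, domain_theory, n_iterations,
                                exercise_policy, explain, generate_phantoms,
                                induce_policy):
    """Iteratively replace experienced examples with theory-derived phantoms."""
    policy = initial_policy
    for _ in range(n_iterations):
        # 1. Exercise the current policy to collect actual training examples.
        experienced = exercise_policy(policy)

        # 2. Use the (possibly imprecise and approximate) domain theory to
        #    explain each example and transform it into a more informative
        #    set of phantom examples.
        phantoms = []
        for example in experienced:
            explanation = explain(example, domain_theory)
            phantoms.extend(generate_phantoms(example, explanation))

        # 3. Discard the experienced examples and induce a new control
        #    policy from the phantom training set alone.
        policy = induce_policy(phantoms)
    return policy
```

The notable design point the abstract emphasizes is that the domain theory touches only step 2, the training data, so any off-the-shelf induction method can serve as induce_policy unchanged.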
Hagen Soltau, Lidia Mangu, et al.
ASRU 2011
Susan L. Spraragen
International Conference on Design and Emotion 2010
Hong-Linh Truong, Maja Vukovic, et al.
ICDH 2024