Within the machine learning community, theory of mind is commonly understood as an emergent property of models that can make predictions about the behavior of others. Within the HCI community, the notion of "mental models" incorporates information about the knowledge, skills, and intentions of an AI agent. In this technical position paper, we synthesize these two views and offer a single perspective on mutual theory of mind (MToM): what it is and how it can be achieved between one (or more) humans and one (or more) AI agents. Specifically, we argue that uni-directional, first-order models (e.g., a human's mental model of an AI agent) are insufficient for MToM; rather, at least second-order models (e.g., an AI agent explicitly modeling a human's understanding of the AI's knowledge and skills, in addition to the human's own knowledge and skills) are required to realize the full benefits of MToM. Our analysis aims to provide a roadmap for the design of MToM in human-AI collaborative scenarios and identifies the complexities of its implementation and evaluation.
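The distinction between first-order and second-order models can be made concrete as a nesting depth of mental models. The following sketch is purely illustrative and not from the paper; all class and field names (`MentalModel`, `knowledge`, `skills`, `model_of_other`) are hypothetical, and a first-order model is simply one whose representation of the other party contains no model of the modeler, while a second-order model does.

```python
from dataclasses import dataclass, field

@dataclass
class MentalModel:
    """Hypothetical representation of what one party believes
    about another party's knowledge and skills."""
    knowledge: set = field(default_factory=set)
    skills: set = field(default_factory=set)
    # Second-order component: what the modeled party is believed
    # to believe about the modeler. None for a first-order model.
    model_of_other: "MentalModel | None" = None

    def order(self) -> int:
        """Nesting depth: 1 = first-order, 2 = second-order, ..."""
        if self.model_of_other is None:
            return 1
        return 1 + self.model_of_other.order()

# First-order: a human's mental model of an AI agent.
human_view_of_ai = MentalModel(
    knowledge={"medical imaging"}, skills={"classification"}
)

# Second-order: the AI models the human's knowledge and skills
# *and* the human's understanding of the AI.
ai_view_of_human = MentalModel(
    knowledge={"radiology"},
    skills={"diagnosis"},
    model_of_other=human_view_of_ai,
)

print(human_view_of_ai.order())  # 1 (first-order)
print(ai_view_of_human.order())  # 2 (second-order)
```

In this framing, the paper's claim is that collaboration benefits appear only once at least one party maintains a model of `order() >= 2`, i.e., a model of the other's model of itself.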