AAAI-17 Workshop: Human-Aware Artificial Intelligence (HAAI-17) - Overview


As AI techniques and systems come into increasing contact with humans, and into the public consciousness at large, various research issues surrounding such interactions are coming to the fore. In particular, a key movement underway in the AI community, and in the world of technology at large, concerns the notion of humans and machines (AI systems) teaming up to understand data and make decisions.

The key premise of this workshop is that augmented intelligence – i.e., teams and systems that combine the skills of humans and AI techniques – can achieve better performance than either alone. However, to create such systems with augmented intelligence, humans must be accommodated as first-class citizens in the decision-making loop of existing AI systems. Far too often, traditional AI systems have excluded humans (and the problems that accompany interaction with them) and have instead focused on producing “optimal” artifacts that stand no significant chance of working in the real world.

To address this issue and produce truly human-aware augmented intelligence, systems must confront the interaction issues that accompany each unique application domain. These issues may broadly be divided into extraction (or interpretation) challenges and presentation (or steering) challenges. The former deals with understanding human input, whether in the form of knowledge or of specific directives and goals to achieve; the latter deals with how to present the system’s outputs to the team and solicit feedback. The specific interaction issues, however, may differ significantly depending on the application being addressed. For example, the interaction issues involved in automating parts of the data science pipeline for data scientists are quite different from those involved in intelligent control of crowdsourcing applications, both of which differ from those in decision-making systems for security and law enforcement.

Given the gamut of applications and associated interaction issues, more structure is required for the study of human-aware augmented intelligence systems. We thus propose problem pillars for human-aware AI and augmented intelligence. Contributions to the workshop (both talks and manuscripts) will be solicited along the following problem pillars:

1. Explainability of decisions
2. Interpretability of the decision process
3. Efficient and time-sensitive context transfer
4. Division of labor and skills
5. Legal and ethical issues

The aim of this workshop is to bring together researchers who are interested in advancing the state of the art not merely in their specific sub-field of AI, but who are also willing to engage in technically directed discussions on what their work currently lacks in order to become an augmented intelligence system that can gainfully interact with humans and the world at large. Work invited to the workshop – whether in the form of papers or talks – will be expected to address this central question and make some effort towards addressing one or more of the problem pillars outlined above.

The workshop will build on and continue themes explored in several recent workshops at the two major pan-AI conferences.