AAAI 2019 Reasoning for Complex Question Answering Workshop (WS13)

Date: January 28, 2019
Hilton Hawaiian Village, Honolulu

Workshop Information

Question Answering (QA) has become a crucial application problem in evaluating the progress of AI systems in the realm of natural language processing and understanding, and in measuring the progress of machine intelligence in general. The computational linguistics communities (ACL, NAACL, EMNLP, etc.) have devoted significant attention to the general problem of machine reading and question answering, as evidenced by the emergence of strong technical contributions and challenge datasets such as SQuAD. However, most of these advances have focused on "shallow" QA tasks that can be tackled very effectively by existing retrieval-based techniques. Instead of measuring the comprehension and understanding of the QA systems in question, these tasks merely test the capability of a technique to "attend" or focus attention on specific words and pieces of text.

To better align progress in the field of QA with the expectations that we have of human performance and behavior when solving such tasks, a new class of questions – known as "complex" or "challenge" questions – has been proposed. The definition of complex questions varies, but they can most generally be thought of as instances that require intelligent behavior and reasoning on the part of an agent to solve. Such reasoning may also include the systematic retrieval of knowledge from semi-structured and structured sources such as documents, webpages, tables, knowledge graphs, etc.; and the exploitation of domain models in generalized representations that are learned from available data. As the knowledge as well as the questions themselves become more complex and specialized, the process of understanding and answering these questions comes to resemble human expertise in specialized domains. Current examples of such complex question answering (CQA) tasks – where humans presently rule the roost – include customer service and support, standardized testing in education, and domain-specific consultancy services such as legal advice.

The main aim of this workshop is to bring together experts from the computational linguistics (CL) and AI communities to: (1) catalyze progress on the CQA problem, and create a vibrant test-bed of problems for various AI sub-fields; and (2) present a generalized task that can act as a harbinger of progress in AI.

We solicit submissions in the form of papers (short and long), posters, demos, panel ideas, and other suggestions. A submission site will be announced soon; in the meantime, suggestions on topics or programs to include in the workshop may be emailed to Kartik Talamadupula (


Organizer: Kartik Talamadupula, IBM Research AI (

Please see the Organization Details page for a full list of researchers involved with the workshop.

Submissions Due: November 5, 2018 (UTC -12, Anywhere on Earth)

Submission Website (Easychair):

Please see the Call for Papers for more submission details.

Submission Policy: In response to multiple questions, please note the following submission policy. The workshop is non-archival: this means that you can submit papers that have previously been submitted to other conferences (including AAAI), irrespective of the result. Submission to the workshop also does not preclude further submission of your papers to future conferences (unless otherwise prohibited by the other conference).

Notification: November 26, 2018