AAAI 2019 Reasoning and Complex QA Workshop - Call for Papers

Reasoning for Complex QA (AAAI-19)

AAAI-19 Workshop on Reasoning for Complex Question-Answering (RCQA), held at the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19)


January 28, 2019
Honolulu, Hawaii, USA



Submissions Due: November 5, 2018 (UTC -12, Anywhere on Earth)
Submission to the workshop is now closed

Submission Website (Easychair):

Notification: Nov 26, 2018

Question Answering (QA) has become a crucial application problem in evaluating the progress of AI systems in the realm of natural language processing and understanding, and in measuring the progress of machine intelligence in general. The computational linguistics communities (ACL, NAACL, EMNLP, and others) have devoted significant attention to the general problem of machine reading and question answering, as evidenced by the emergence of strong technical contributions and challenge datasets such as SQuAD. However, most of these advances have focused on "shallow" QA tasks that can be tackled very effectively by existing retrieval-based techniques. Instead of measuring the comprehension and understanding of the QA systems in question, these tasks merely test the capability of a technique to "attend" or focus attention on specific words and pieces of text.

To better align progress in the field of QA with the expectations that we have of human performance and behavior when solving such tasks, a new class of questions – known as “complex” or “challenge” questions – has been proposed. The definition of complex questions varies, but they can most generally be thought of as instances that require intelligent behavior and reasoning on the part of an agent to solve. Such reasoning may also include the systematic retrieval of knowledge from semi-structured and structured sources such as documents, webpages, tables, knowledge graphs etc.; and the exploitation of domain models in generalized representations that are learned from available data. As the knowledge as well as the questions themselves become more complex and specialized, the process of understanding and answering these questions comes to resemble human expertise in specialized domains. Current examples of such complex question answering (CQA) tasks – where humans presently rule the roost – include customer service and support, standardized testing in education, and domain-specific consultancy services such as legal advice, etc.

The main aim of this workshop is to bring together experts from the computational linguistics (CL) and AI communities to: (1) catalyze progress on the CQA problem, and create a vibrant test-bed of problems for various AI sub-fields; and (2) present a generalized task that can act as a harbinger of progress in AI.

Contributions are welcome in the following categories:

• Tasks: New kinds of CQA tasks from various real-world domains
• Datasets: Creation of CQA datasets and challenges that can be used to measure progress
• Metrics: Measures that indicate progress in CQA tasks
• Knowledge: Representation and utilization of external knowledge, as well as acquisition and learning of models from both experts and data
• Classification Schemes: To distinguish different kinds of questions, and the reasoning required to answer them
• Adoption of Existing AI Techniques: Adapting the state-of-the-art in various AI sub-fields to the CQA problem
• Transferability Across Domains: Evaluating the flexibility of proposed techniques and representations

Contributions on other topics that are related and relevant to the workshop's theme are also welcome.



We welcome submissions describing work that is relevant to the workshop and/or the topics above, as well as proposals for discussion topics that will be of interest to workshop attendees.

Submissions are accepted in PDF format, following the AAAI-19 formatting guidelines. Submissions may be no longer than 8 pages, with the last page (page 8) devoted only to references and figures.

Submissions are not anonymous; please include author information on your submission, and de-anonymize any references to past work that the submission builds upon.

Submissions are accepted via Easychair, at the following URL:

Submissions to the workshop will be lightly reviewed for their relevance to the workshop topics, scientific contribution, and novelty. The workshop is non-archival.

Resubmission Policy: In response to multiple questions on this topic, please note the following. The workshop is non-archival: this means that you can submit papers that have previously been submitted to other conferences (including AAAI), irrespective of the result. Submission to the workshop also does not preclude subsequent submission of your paper to future conferences (unless otherwise prohibited by that conference).

Questions about submissions or the workshop in general can be emailed to Kartik Talamadupula, at