PRL 2020 - Overview

ICAPS 2020 Workshop on Bridging the Gap Between AI Planning and Reinforcement Learning (PRL)
While the AI Planning and Reinforcement Learning communities focus on similar sequential decision-making problems, they remain somewhat unaware of each other's specific problems, techniques, methodologies, and evaluation practices. The reinforcement learning community has mostly relied on approximate dynamic programming and Monte-Carlo tree search as its workhorses for planning, while the planning community has developed a diverse set of representational formalisms and scalable algorithms that remain underexplored in learning approaches. Conversely, the planning community could benefit from the tools and algorithms developed by the machine learning community, for instance to automate the generation of planning problem descriptions.
The purpose of this workshop is to encourage discussion and collaboration between the planning and learning communities, in particular reinforcement learning. The workshop aims to bridge the gap between the AI Planning and Reinforcement Learning communities, facilitate the discussion of differences and similarities among existing techniques, and encourage collaboration across the fields. We solicit interest from agents and general AI researchers who work at the intersection of planning and learning, in particular those focused on intelligent decision making. As such, the joint workshop program is an excellent opportunity to gather a large and diverse group of interested researchers.


Workshop Topics:
The workshop solicits work at the intersection of the fields of machine learning and planning. We also welcome work solely in one area that can influence advances in the other, so long as the connections are clearly articulated in the submission. Submissions are invited on topics including, but not limited to:
  • Multi-agent planning and learning
  • Robust planning in uncertain (learned) models
  • Adaptive Monte Carlo planning
  • Learning search heuristics for planner guidance
  • Reinforcement learning (model-based, Bayesian, deep, etc.)
  • Model representation and learning for planning
  • Theoretical aspects of planning and reinforcement learning
  • Learning and planning competition(s)
  • Applications of planning and learning

Additional questions of interest include, but are not limited to:
  • How do concepts from each of these fields translate to the other?
  • How to define what constitutes a solution and how to evaluate its quality and guarantees? What is the relationship between solving a planning problem and an RL trial? How to perform a meaningful comparison of different methods?
  • What is the impact of whether the problem is expected to be repeatedly solved under different initial conditions in a practical application? Which distributions of behaviours are of interest in practice? Should RL methods care about single-instance/trial performance? Should planning methods care about performance over a distribution of trials, even if a goal is not always achieved?
  • How to evaluate and compare the computational efficiency of various ML/symbolic/hybrid methods? Is the methodology from the learning track of the IPC relevant for RL methods? How can planning benefit from the methodologies used for evaluating RL/DeepRL?
  • What are the challenges in one field that can be effectively tackled by methods from the other field?
  • Is there an appealing middle ground between capturing action dynamics symbolically and model-free RL? Are there new domains appealing to RL that exhibit a structure which can be partially captured with a symbolic model? Can RL algorithms benefit from planning techniques using learned state-representations? How can these middle grounds be exploited in practice? Are they relevant for industrial applications?
  • What are the settings that are both challenging and attractive for both communities?


Important Dates:
  • Submission deadline: March 20th, 2020 (UTC-12 timezone)
  • Notification date: April 15th, 2020
  • Camera-ready deadline: May 15th, 2020
  • Workshop date: June 15 or 16 (TBD), 2020

Submission Procedure:
We solicit workshop paper submissions of the following types, relevant to the above call:
  • Long papers -- up to 8 pages + unlimited references / appendices
  • Short papers -- up to 4 pages + unlimited references / appendices
  • Extended abstracts -- up to 2 pages + unlimited references / appendices
Please format submissions in AAAI style (see instructions in the AAAI Author Kit) and keep them to at most 9 pages including references. Authors considering submitting papers rejected from the main conference should do their utmost to address the comments given by the ICAPS reviewers. Please do not submit papers that have already been accepted for the main conference.
Some accepted long papers will be selected for contributed talks. All accepted long papers, short papers, and extended abstracts will be given a slot in the poster presentation session. Extended abstracts are intended as brief summaries of already published papers, preliminary work, position papers, or challenges that might help bridge the gap.
Paper submissions should be made through EasyChair:
Workshop Organizers:
Alan Fern, Oregon State University, USA
Vicenç Gómez, Universitat Pompeu Fabra, Barcelona, Spain
Anders Jonsson, Universitat Pompeu Fabra, Barcelona, Spain
Michael Katz, IBM T.J. Watson Research Center, NY, USA
Hector Palacios, Element AI, Montreal, Canada
Scott Sanner, University of Toronto, Toronto, Canada