AAAI-17 Workshop: Human-Aware Artificial Intelligence (HAAI-17) - Speakers

The workshop features a very exciting speaker program, with experts from the field expressing their views on the subject of human-aware AI techniques, domains, and systems. 

  • Murray S. Campbell
    IBM Research

    The Role of Dialog in Augmented Intelligence

    Teamwork is largely about communication. Sharing of goals, capabilities and knowledge is essential for effective teaming. Interactive dialog is perhaps the most natural approach to such sharing, but there are many technical challenges that need to be addressed before AI systems can effectively partner through dialog. In this talk I will review some of these challenges, and describe some of the progress that has been made toward dialog-based augmented intelligence.

    Bio: Murray Campbell is a Distinguished Research Staff Member at the IBM Thomas J. Watson Research Center in Yorktown Heights, NY. His research is currently focused on the development of task-oriented conversational systems. He was a member of the team that developed Deep Blue, which was the first computer to defeat the human world chess champion in a match. Campbell received numerous awards for Deep Blue, including the Allen Newell Medal for Research Excellence and the Fredkin Prize. He has applied his experience in artificial intelligence to a number of areas, including finance, public health, and workforce. He received his Ph.D. in Computer Science from Carnegie Mellon University, and is an ACM Distinguished Scientist and a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI).
  • Subbarao Kambhampati
    Arizona State University

    Planning Challenges in Human-Aware AI Systems

    Like much of AI, research into automated planning has, for the most part, focused on planning a course of actions for autonomous agents acting in isolation. Humans, if allowed in the loop at all, were mostly used as a crutch to improve planning efficiency.

    The significant current interest in human-aware AI systems brings with it a fresh set of planning challenges for a planning agent, including the need to model and reason about the capabilities of the humans in the loop, the need to recognize their intentions so as to provide proactive support, the need to project its own intentions so that its behavior is explainable to the humans in the loop, and finally the need for evaluation metrics that are sensitive to human factors. These challenges are complicated by the fact that the agent has at best highly incomplete models of the intentions and capabilities of the humans.

    In this talk, I will discuss these challenges in adapting/extending planning technology to support teaming and cohabitation between humans and automated agents. I will then describe our recent research efforts to address these challenges, including novel planning models that, while incomplete, are easier to learn; and planning and plan recognition techniques that can leverage these incomplete models to provide stigmergic and proactive assistance while exhibiting "explainable" behaviors. I will conclude with an evaluation of these techniques within human-robot teaming scenarios.

    Bio: Subbarao Kambhampati is a professor at Arizona State University. His current research interests are in human-aware AI, with an emphasis on collaborative planning and decision support. He is a fellow and the president of the Association for the Advancement of Artificial Intelligence (AAAI). He was the program chair for IJCAI 2016, for which the special theme was "Human-Aware AI".

  • Craig Knoblock
    University of Southern California (USC) / Information Sciences Institute (ISI)

    Lessons Learned in Building Human-Aware Systems for Data Science

    Researchers like to build complete solutions to data science problems with the assumption that human users will accept the results and use them. In practice, however, automated results often need to be refined, and even when they are accurate, a human user needs to be convinced of their correctness. In this talk, I will present work on three problems: learning source models, linking data across sources, and building knowledge graphs, and I will describe the lessons learned in developing solutions that effectively interact with humans.

    Bio: Craig Knoblock is a Research Professor of both Computer Science and Spatial Sciences at the University of Southern California (USC), Research Director of Information Integration at the Information Sciences Institute, and Associate Director of the Informatics Program at USC. He received his Bachelor of Science degree from Syracuse University and his Master's and Ph.D. in Computer Science from Carnegie Mellon University. His research focuses on techniques for describing, acquiring, and exploiting the semantics of data. He has worked extensively on source modeling, schema and ontology alignment, entity and record linkage, data cleaning and normalization, extracting data from the Web, and combining these techniques to build knowledge graphs. Dr. Knoblock is a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), a Distinguished Scientist of the Association for Computing Machinery (ACM), a Senior Member of IEEE, past President and Trustee of the International Joint Conference on Artificial Intelligence (IJCAI), and winner of the 2014 Robert S. Engelmore Award.

  • Karen Myers
    SRI International

    Artificial Intelligence for Augmenting Human Capabilities

    The field of AI was motivated originally by the objective of automating tasks performed by humans. While advances in machine learning have enabled impressive capabilities such as self-driving vehicles, more cognitive tasks such as planning or design have resisted full automation because of the vast amounts of knowledge and commonsense reasoning that they require. This talk describes a line of research aimed at developing AI systems that are designed to augment rather than replace human capabilities, leveraging automated planning, machine learning, and natural language understanding technologies. It also presents several successful applications of the research in deployed systems.

    Bio: Dr. Karen Myers is a program director and principal scientist in SRI International's Artificial Intelligence Center, where she leads a team focused on developing intelligent systems that facilitate man-machine collaboration. Her research interests include autonomy, multi-agent systems, automated planning and scheduling, personalization technologies, and mixed-initiative problem solving. In addition to being widely published, she has overseen several successful transitions of her research into use both by the U.S. Government and within commercial settings. She has been an associate editor for the journal Artificial Intelligence, a member of the editorial boards for the Journal of Artificial Intelligence Research and the ACM Transactions on Intelligent Systems and Technology, and a member of the executive council for the Association for the Advancement of Artificial Intelligence. Dr. Myers has a Ph.D. in computer science from Stanford University, a B.Sc. in mathematics and computer science from the University of Toronto, and a degree in piano performance from the Royal Conservatory of Music.

  • Francesca Rossi
    IBM Research / University of Padova

    Ethical Issues in AI

    The recent surge of AI applications, due mainly to the enhanced perception capabilities provided by machine learning techniques, has shown the huge potential of AI to solve some of the world’s most enduring problems. AI can potentially help cure diseases, solve planetary problems, manage the global economy, and revolutionise many sectors such as education, retailing, finance, and manufacturing.
    However, such a pervasive and powerful deployment of AI systems raises several ethical and sociological issues that need to be addressed in the most effective way in order to make AI as beneficial as possible for as many people as possible. This holds both for autonomous systems, which will need to discriminate between good and bad decisions, including on ethical and moral grounds, and for systems that work in tight collaboration with humans, where the correct level of trust needs to be built.

    In this talk I will discuss some of the ethical issues and considerations that need attention and responses, as well as some technical challenges that must be addressed to make AI ethical and trustworthy.

    Bio: Francesca Rossi is a research scientist at the IBM T.J. Watson Research Center and a professor of computer science at the University of Padova, Italy, currently on leave. Her research interests focus on artificial intelligence; specifically, they include constraint reasoning, preferences, multi-agent systems, computational social choice, and collective decision making. She is also interested in ethical issues in the development and behaviour of AI systems, in particular for decision support systems for group decision making. She is an AAAI and a EurAI fellow, and a 2015 Radcliffe fellow. She has been president of IJCAI and an executive councillor of AAAI. She is Editor-in-Chief of JAIR and a member of the editorial boards of Constraints, Artificial Intelligence, AMAI, and KAIS. She co-chairs the AAAI committee on AI and ethics and is a member of the scientific advisory board of the Future of Life Institute. She serves on the executive committee of the IEEE global initiative on ethical considerations in the development of autonomous and intelligent systems, and she belongs to the World Economic Forum Council on AI and robotics.

    She has given several media interviews about the future of AI and AI ethics (including to the Wall Street Journal, the Washington Post, Motherboard, Science, The Economist, CNBC, Eurovision, Corriere della Sera, and Repubblica) and she has delivered three TEDx talks on these topics.

  • Milind Tambe
    University of Southern California (USC)

    Human-aware AI for Social Good

    Discussions about the future negative consequences of AI sometimes drown out discussions of the current accomplishments and future potential of AI in helping us solve complex societal problems.  At the USC Center for AI in Society, CAIS, our focus is on exploring AI research in tackling wicked problems in society. This talk will highlight the goals of CAIS and three areas of ongoing work that have led to decision aids for humans in the field. First, we will focus on the use of AI for assisting low-resource sections of society, such as homeless youth. Harnessing the social networks of such youth, we will illustrate the use of AI algorithms to help more effectively spread health information, such as for reducing risk of HIV infections. These algorithms have been piloted in homeless shelters in Los Angeles, and have shown significant improvements over traditional methods. This will be the major portion of the talk. Second, we will outline the use of AI for protection of forests, fish, and wildlife; learning models of adversary behavior allows us to predict poaching activities and plan effective patrols to deter them; we discuss concrete results from tests in a national park in Uganda. Finally, we will briefly review our earlier work on the challenge of AI for public safety and security, specifically for effective security resource allocation. We will discuss our "security games" framework that has led to decision aids that are in actual daily use by agencies such as the US Coast Guard, the US Federal Air Marshals Service and local law enforcement agencies to assist the protection of ports, airports, flights, and other critical infrastructure.

    These are just a few of the projects at CAIS, and we expect these and future projects at CAIS to continue to illustrate the significant potential that AI has for social good. We will draw lessons for deploying decision aids in the field to assist humans. 

    Bio: Milind Tambe is Founding Co-Director of CAIS, the USC Center for AI in Society, and Helen N. and Emmett H. Jones Professor in Engineering at the University of Southern California (USC). He is a fellow of AAAI and ACM, as well as a recipient of the ACM/SIGART Autonomous Agents Research Award, the Christopher Columbus Fellowship Foundation Homeland Security Award, the INFORMS Wagner Prize for excellence in Operations Research practice, the Rist Prize of the Military Operations Research Society, an IBM Faculty Award, the Okawa Foundation Faculty Research Award, the RoboCup Scientific Challenge Award, and other local awards such as the Orange County Engineering Council Outstanding Project Achievement Award, the USC Associates Award for Creativity in Research, and the USC Viterbi Use-Inspired Research Award.
  • Lokesh Johri
    Panelist: Regulations & Conventions for Deploying AI Systems for General Use

    Bio: Lokesh Johri is founder and CEO of Tantiv4, a Silicon Valley-based IoT startup. He is working with his team to develop smart connected products that fit into the daily lives of people. These products provide location-aware, in-context, cloud-based decision-making support to consumers within their homes. For Tantiv4, the mantra is that technology and AI systems should integrate very unobtrusively into people's daily lives. In the past, he was a principal at a design startup, Tallika, which had a successful exit. His experience includes engineering and management positions at Schlumberger, Calient Networks, and Agilent.