Enabling Trusted AI Workshop 2019       


Enabling Trusted AI Workshop 2019 - overview

Date: Wed, September 18, 2019

Location: The MIT Samberg Conference Center 


AI has made significant advances in the past decade, leading to its use in high-stakes decision-making scenarios such as credit scoring, criminal justice, and job recruiting. This increased use in consequential domains has underscored the need to ensure trust in AI systems.

This workshop will focus on ensuring trust in AI systems. It will include a tutorial, invited talks, and a panel. The morning will feature a tutorial on the pillars of Trusted AI and how they connect to the AI lifecycle, focusing in detail on two open-source toolkits from IBM Research, AI Fairness 360 and AI Explainability 360, which enable researchers to explore open research questions and practitioners to improve trust in the AI systems they build. The afternoon will focus on trustworthy generation of data and models, featuring invited talks and a panel discussion.


09:00 - 09:15 - Introduction and Trusted AI Overview

09:15 - 10:45 - AI Fairness 360 Tutorial: Prasanna Sattigeri

10:45 - 11:15 - Coffee Break

11:15 - 12:30 - AI Explainability 360 Tutorial: Amit Dhurandhar

12:30 - 1:30 - Lunch (outside)

1:30 - 1:50 - Payel Das (opening remarks on trustworthy generation)

1:55 - 2:10 - Philip Isola (Title: GANalyze: Toward Visual Definitions of Cognitive Image Properties)

2:10 - 2:40 - Tommi Jaakkola (TBA)

2:40 - 3:10 - Rose Yu (Title: Physics Guided AI for Large-Scale Spatiotemporal Learning)

3:10 - 3:25 - Coffee Break

3:25 - 3:45 - Hendrik Strobelt (TBA)

3:45 - 4:05 - Tianxiao Shen (Title: Mixture Models for Diverse Machine Translation)

4:10 - 4:55 - Panel on "Prospects and challenges of generative models for enabling trust"

Panelists: Tommi Jaakkola, Prasanna Sattigeri, Philip Isola, Rose Yu, Sebastian Gehrmann, Murat Kocaoglu

Moderators: Hendrik Strobelt and Sam Hoffman

4:55 - 5:00 - Closing remarks and wrap-up