I am a Research Staff Member at IBM Research and the MIT-IBM Watson AI Lab. My main research goal is to build reliable AI solutions. My research interests span several areas of machine learning and artificial intelligence, including Bayesian inference, deep generative modeling, uncertainty quantification, and learning with limited data. My current work focuses on developing theory and practical systems for machine learning applications that must satisfy constraints such as reliability, fairness, and interpretability. I am a core contributor to several open-source trustworthy AI toolkits: AI Fairness 360, AI Explainability 360, and Uncertainty Quantification 360.