Amit Dhurandhar

contact information

Research Scientist - machine learning, data mining
Thomas J. Watson Research Center, Yorktown Heights, NY USA


More information:  Resume  |  Research Statement


Welcome to Amit Dhurandhar's Webpage

I am originally from Pune, India. I am now a research staff member at the IBM T.J. Watson Research Center in Yorktown Heights, NY. I completed my Ph.D. in the Department of Computer and Information Science and Engineering at the University of Florida (UF), Gainesville, where my advisor was Dr. Alin Dobra. My primary research areas are machine learning and data mining.
I admire originality and brilliance but believe that having the right attitude is more important in life. 
What's new?
  • Invited to attend a Schloss Dagstuhl Seminar in 2021.
  • Paper on IRM for ITE accepted to ICASSP, 2021.
  • Paper on OoD generalization accepted to AISTATS, 2021.
  • Paper theoretically comparing ERM to IRM accepted to ICLR, 2021.
  • Gave industry keynote at ACM CODS-COMAD, 2021 (Talk link).
  • Paper on explaining anomalies accepted to AAAI, 2021.
  • Paper on model agnostic PU learning was selected as Best of ICDM, 2020.
  • Our blog on counterfactual vs. contrastive explanations in Towards Data Science, 2020.
    • This led to co-organizing an event on algorithmic recourse, whose recording is available here.
  • My blog on XAI in KDnuggets, 2020.
  • Invited to be on the Scientific Advisory Board for Beyond Explainable Artificial Intelligence initiative led by Andreas Holzinger and Wojciech Samek, 2020.
  • Slides for knowledge transfer to simple models talk given at Harvard, 2020.
  • Two papers on explainable AI accepted to NeurIPS, 2020.
  • Tutorial on Human-Centered Explainability in Healthcare presented at KDD, 2020.
  • Paper on AIX360 explainability toolkit accepted to JMLR, 2020.
  • Two papers (1 explainability and 1 causality) accepted to ICML, 2020.
  • Workshop on Human Interpretability in Machine Learning (WHI) accepted to ICML, 2020.
  • Hands-on tutorial on AI Explainability 360 presented at FAccT, 2020.
  • Two papers mentioned as recent breakthroughs in olfaction using machine learning, 2019.
  • Tutorial on AI Explainability 360 given at MIT, 2019. (Video link)
  • Co-Lead in creation of open source AI Explainability 360 Toolkit, 2019. (Covered by VentureBeat, BetaNews, ZDNet)
  • Paper on selecting prototypical examples accepted to ICDM as regular paper, 2019.
  • Paper on teaching explanations accepted for an oral presentation at AIES, 2019.
  • Invited to attend a Schloss Dagstuhl Seminar in 2019.
  • Our work on improving simple models and contrastive explanations was featured in PC magazine, 2018.
  • Paper on predicting smells using natural language and interpretable methods accepted to Nature Communications, 2018. (Featured in Quartz)
  • Two papers on explainable AI accepted to NeurIPS, 2018.
  • Invited talk on formalizing interpretability given in the interpretability session at the European Conference on Data Analysis, 2018.
  • Our paper on contrastive explanations for deep learning models featured in Forbes, 2018.
  • Paper on predicting human olfactory perception from chemical features of odor molecules accepted to Science, 2017. (New Yorker, Atlantic, Science News, The Biological Scene)
    • It was highlighted at the annual AAAS meeting as one of the breakthroughs published in Science, and is regarded as the field's most significant advance in over three decades.
  • Paper on a new clustering paradigm accepted to SDM 2017.
  • NSF-SBIR Grant Panelist, 2016-2017.