Amit Dhurandhar

contact information

Research Scientist - machine learning, data mining
Thomas J. Watson Research Center, Yorktown Heights, NY USA
  +1-914-945-1325

more information

Resume  |  Research Statement

profile


Welcome to Amit Dhurandhar's webpage.

I am originally from Pune, India, and am now a research staff member at IBM T.J. Watson in Yorktown Heights, NY. I completed my Ph.D. in the Department of Computer and Information Science and Engineering at the University of Florida (UF), Gainesville, where my advisor was Dr. Alin Dobra. My primary research areas are machine learning and data mining.
 
I admire originality and brilliance but believe that having the right attitude is more important in life. 
 
 
What's new?
 
  • My blog on XAI in KDnuggets, 2020.
  • Invited to be on the Scientific Advisory Board for Beyond Explainable Artificial Intelligence initiative led by Andreas Holzinger and Wojciech Samek, 2020.
  • Slides for knowledge transfer to simple models talk given at Harvard, 2020.
  • Two papers on explainable AI accepted to NeurIPS, 2020.
  • Tutorial on Human-Centered Explainability in Healthcare presented at KDD, 2020.
  • Paper on model agnostic PU learning accepted to ICDM as regular paper, 2020.
  • Paper on AIX360 explainability toolkit accepted to JMLR, 2020.
  • Two papers (1 explainability and 1 causality) accepted to ICML, 2020.
  • Workshop on Human Interpretability in Machine Learning (WHI) accepted to ICML, 2020.
  • Hands-on tutorial on AI Explainability 360 presented at FAccT, 2020.
  • Two papers mentioned as recent breakthroughs in olfaction using machine learning, 2019.
  • Tutorial on AI Explainability 360 given at MIT, 2019. (Video link)
  • Co-Lead in creation of open source AI Explainability 360 Toolkit, 2019. (Covered by VentureBeat, BetaNews, ZDNet)
  • Paper on selecting prototypical examples accepted to ICDM as regular paper, 2019.
  • Paper on teaching explanations accepted for an oral presentation to AIES, 2019.
  • Invited to attend a Schloss Dagstuhl Seminar in 2019.
  • Our work on improving simple models and contrastive explanations was featured in PC magazine, 2018.
  • Paper on predicting smells using natural language and interpretable methods accepted to Nature Communications, 2018. (Featured in Quartz)
  • Two first author papers on interpretable AI accepted to NeurIPS, 2018.
  • Invited talk on formalizing interpretability given in the interpretability session at the European Conference on Data Analysis, 2018.
  • Our paper on contrastive explanations for deep learning models featured in Forbes, 2018.
  • Predicting Human Olfactory Perception from Chemical Features of Odor Molecules Paper accepted to Science, 2017. (New Yorker, Atlantic, Science News, The Biological Scene)
    • It was highlighted at the annual AAAS meeting as one of the breakthroughs published by Science, and is considered the field's biggest advance in the past three decades.
  • First author paper on a new clustering paradigm accepted to SDM 2017.
  • NSF-SBIR Grant Panelist, 2016-2017.
  • Appeared in IBM Journal of Eminence, 2016.
  • Paper published in KAIS Journal, 2016.
  • ICDM paper was selected as Best of ICDM, 2015.
  • AAAI paper won Deployed Application Award, 2015.