Professional Associations: AAAI | INFORMS | Society for Industrial and Applied Mathematics
More information: Personal Website
I am a Research Staff Member (Research Scientist) at IBM Research, Thomas J. Watson Research Center, working at the intersection of Optimization and Machine Learning / Deep Learning. I am also the Principal Investigator of ongoing MIT-IBM Watson AI Lab projects. I proposed a new algorithm for machine learning problems called SARAH (named after my daughter, Sarah H. Nguyen) for solving convex and nonconvex large-scale optimization problems. The paper was published at The 34th International Conference on Machine Learning (ICML 2017). At IBM Research, my work on "Stochastic Gradient Methods: Theory and Applications" was selected as a 2021 IBM Research Accomplishment.
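SARAH is a stochastic recursive gradient method: each outer loop starts from a full gradient, and inner iterations update the gradient estimate recursively from consecutive iterates. Below is a minimal NumPy sketch of that update; the toy least-squares problem, step size, and loop lengths are illustrative choices for this page, not settings from the paper.

```python
import numpy as np

def sarah(grad_i, n, w0, step=0.02, outer=20, inner=None, rng=None):
    """Minimal SARAH sketch. grad_i(w, i) returns the gradient of the
    i-th component function f_i at w; n is the number of components."""
    rng = np.random.default_rng(rng)
    inner = inner if inner is not None else n
    w = w0.copy()
    for _ in range(outer):
        # Start each outer loop from the full gradient.
        v = np.mean([grad_i(w, i) for i in range(n)], axis=0)
        w_prev, w = w, w - step * v
        for _ in range(inner):
            i = rng.integers(n)
            # Recursive estimator: correct v by the gradient difference
            # at the two most recent iterates (no stored snapshot point).
            v = grad_i(w, i) - grad_i(w_prev, i) + v
            w_prev, w = w, w - step * v
    return w

# Toy problem: f_i(w) = 0.5 * (a_i @ w - b_i)**2, noiseless targets.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5))
w_star = rng.standard_normal(5)
b = A @ w_star
g = lambda w, i: (A[i] @ w - b[i]) * A[i]
w_hat = sarah(g, n=50, w0=np.zeros(5))
```

Unlike SVRG, the correction term uses the two most recent iterates rather than a fixed snapshot, so only the running estimate `v` needs to be kept between inner steps.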
I currently serve as an Action Editor for the Journal of Machine Learning Research, Machine Learning, and Neural Networks journals; an Associate Editor for the IEEE Transactions on Neural Networks and Learning Systems and Journal of Optimization Theory and Applications journals; and an Area Chair for the ICML, NeurIPS, ICLR, AAAI, UAI, and AISTATS conferences. I also serve as a Panelist for the National Science Foundation (NSF).
I was born in Hanoi, Vietnam, but grew up in Moscow, Russia. I received my Bachelor's degree in Applied Mathematics and Computer Science from the Faculty (Department) of Computational Mathematics and Cybernetics at Lomonosov Moscow State University in 2008, under the supervision of Prof. Vladimir I. Dmitriev, and my M.B.A. degree from McNeese State University, Louisiana, in 2013. I received my Ph.D. from the Department of Industrial and Systems Engineering at Lehigh University in 2018, where I worked with Dr. Katya Scheinberg and Dr. Martin Takáč in the area of Large-Scale Optimization for Machine Learning and Stochastic Optimization. During my Ph.D., I also worked with Dr. Alexander Stolyar in the area of Applied Probability, Stochastic Models, and Optimal Control. I won the 2019 P.C. Rossin College of Engineering and Applied Science Elizabeth V. Stout Dissertation Award.
I am very open to collaboration with highly motivated researchers and students. Please feel free to contact me if you would like to collaborate. Here is my CV.
Fields of interest:
- Design and Analysis of Learning Algorithms
- Optimization for Representation Learning
- Federated Learning
- Reinforcement Learning
- Time Series
- Trustworthy / Explainable AI
- Action Editor / Associate Editor: Journal of Machine Learning Research (2022 - Present), Machine Learning (2021 - Present), Neural Networks (2022 - Present), IEEE Transactions on Neural Networks and Learning Systems (2022 - Present), Journal of Optimization Theory and Applications (2022 - Present).
- Area Chair / Meta-Reviewer / Senior Program Committee: ICML (2020, 2021, 2022), NeurIPS (2022), ICLR (2021, 2022), AISTATS (2021, 2022), UAI (2022), AAAI (2022).
- Grant Reviewer: National Science Foundation, AI Singapore Research Programme.
- Reviewer / Program Committee: ICML, NIPS/NeurIPS, ICLR, AISTATS, COLT, UAI, AAAI, IJCAI, CVPR, ICCV, ECCV.
- Reviewer: Journal of Machine Learning Research, Mathematical Programming, SIAM Journal on Optimization, SIAM Journal on Numerical Analysis, IEEE Transactions on Neural Networks and Learning Systems, IEEE Transactions on Signal Processing, Artificial Intelligence, Optimization Methods and Software, SIAM Journal on Mathematics of Data Science.
- Lam M. Nguyen, Jie Liu, Katya Scheinberg, Martin Takáč. "SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient". The 34th International Conference on Machine Learning (ICML 2017).
- Lam M. Nguyen, Phuong Ha Nguyen, Marten van Dijk, Peter Richtárik, Katya Scheinberg, Martin Takáč. "SGD and Hogwild! Convergence Without the Bounded Gradients Assumption". The 35th International Conference on Machine Learning (ICML 2018).
- Lam M. Nguyen, Quoc Tran-Dinh, Dzung T. Phan, Phuong Ha Nguyen, Marten van Dijk. "A Unified Convergence Analysis for Shuffling-Type Gradient Methods". Journal of Machine Learning Research, 2021.