AI Security & Privacy Solutions       


Team: Annie K Abay, Nathalie Baracaldo, Ebube Chuba, Shashank Rajamoni, Ambrish Rawat, Lei Yu, Yi Zhou



Overview

The AI Security & Privacy Solutions group at IBM Research-Almaden is part of the AI Platforms organization and is based in San Jose, CA. Led by Nathalie Baracaldo, the group works to make AI platforms safe and private for all stakeholders.

Adopting machine learning exposes enterprises to novel security and privacy risks, and addressing these risks is essential for successful, secure deployments. Our group develops solutions to detect and mitigate vulnerabilities and risks inherent to machine learning systems. In particular, our team targets risks related to adversarial machine learning attacks and mitigates the risk of privacy exposure through federated learning.

  • Our research on adversarial machine learning focuses on identifying threats to the training and deployment of learned systems and on developing corresponding defense strategies. Learn more about our efforts here.
  • Our group also focuses on federated learning to prevent privacy leakage: models are trained collaboratively without ever transmitting data to a central place, and our techniques guard against inference attacks both during training and on the final trained model. A minimal sketch of the idea follows this list.
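
To make the collaboration pattern concrete, here is a minimal federated-averaging sketch in plain NumPy. It is purely illustrative and is not the API of our IBM federated learning library: the three parties, their synthetic local datasets, the linear model, and the number of rounds are all hypothetical placeholders.

    import numpy as np

    # Illustrative only: each "party" trains on its own data and shares just
    # model weights with an aggregator; raw data never leaves a party.
    rng = np.random.default_rng(0)

    def local_update(weights, X, y, lr=0.1, epochs=5):
        # One party's local step: a few epochs of gradient descent for
        # linear regression, starting from the current global weights.
        w = weights.copy()
        for _ in range(epochs):
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= lr * grad
        return w

    # Hypothetical local datasets for three parties (kept local).
    parties = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]

    global_w = np.zeros(3)
    for _ in range(10):                           # federated training rounds
        local_ws = [local_update(global_w, X, y) for X, y in parties]
        global_w = np.mean(local_ws, axis=0)      # aggregator averages weights only

    print("global model weights:", global_w)

In a real deployment the local update, aggregation rule, and privacy protections (for example, secure aggregation or differential privacy) are configurable; the averaging loop above only shows why no raw data needs to move.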

Please find more information about our work in the interactive experiences and publications linked below, and on our website!

Federated-learning-lib

Check out our IBM federated learning Git repo and learn how to use it with our tutorials. It is an industry-ready framework. Also, take a look at our white paper!

FFL Podcast

Data Science Podcast - Federated learning, special guest Nathalie Baracaldo

FFL Demo

Demo: understanding federated learning

Fool the AI Game

Want to learn about neural networks' backdoors and how to defend against them? Play our interactive game!


Articles, Blog Posts, and Other Resources

Check out our talk "What is federated learning and why it matters?", where we explain some of our work on training neural networks in federated learning settings and the capabilities of IBM federated learning (May 2020)

We presented our research on federated decision trees and gradient boosting: integrating multiple federated models (June 2020)

Our research on Federated Learning was highlighted at Think2020 (minute 42): IBM Think Digital Event Experience

Demo: How does federated learning work? 

Play our game: Fool the AI

Private federated learning: Learn together without sharing data (Nov. 2019)

Use cases for federated learning (Jan. 2020)

Slides: USENIX 2019 paper

Slides: HybridAlpha (AISec 2019)

SAFEAI 2019 - Best Paper Award

Some of our work has been contributed to the Adversarial Robustness Toolbox (ART): The Adversarial Robustness Toolbox v0.3.0: Closing the Backdoor in AI Security. A sketch of the underlying activation-clustering idea appears below.
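
For intuition about the activation-clustering defense behind that blog post, the sketch below (a rough illustration, not the toolbox's implementation) clusters the activations of a single class into two groups; an unusually small cluster is a typical signature of backdoored training data. The synthetic activations, the choice of two clusters, and the 15% threshold are hypothetical stand-ins.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA

    # Illustrative only: synthetic stand-ins for the penultimate-layer
    # activations of one class; a real analysis would extract these from
    # the trained network.
    rng = np.random.default_rng(1)
    clean = rng.normal(loc=0.0, scale=1.0, size=(950, 64))    # clean samples
    poisoned = rng.normal(loc=4.0, scale=1.0, size=(50, 64))  # backdoored samples cluster apart
    activations = np.vstack([clean, poisoned])

    # Reduce dimensionality, then split the class's activations into two clusters.
    reduced = PCA(n_components=10, random_state=0).fit_transform(activations)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(reduced)

    # A very small minority cluster is a red flag for poisoned training data.
    sizes = np.bincount(labels)
    minority_fraction = sizes.min() / sizes.sum()
    print("cluster sizes:", sizes, "minority fraction:", round(minority_fraction, 3))
    if minority_fraction < 0.15:                  # hypothetical threshold
        print("this class looks suspicious: it likely contains backdoored samples")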


Publications

2020

IBM Federated Learning: an Enterprise Framework White Paper V0.1
Heiko Ludwig, Nathalie Baracaldo, Gegi Thomas, Yi Zhou, Ali Anwar, Shashank Rajamoni, Yuya Ong, Jayaram Radhakrishnan, Ashish Verma, Mathieu Sinn, et al.

Technical Report, 2020

TiFL: A Tier-based Federated Learning System
Zheng Chai, Ahsan Ali, Syed Zawad, Stacey Truex, Ali Anwar, Nathalie Baracaldo, Yi Zhou, Heiko Ludwig, Feng Yan, Yue Cheng. ACM Symposium on High-Performance Parallel and Distributed Computing (HPDC), 2020

Position: The Case for Benchmarking Control Operations in Cloud Native Storage
Alex Merenstein, Vasily Tarasov, Ali Anwar, Deepavali Bhagwat, Lukas Rupprecht, Dimitris Skourtis, Erez Zadok. 12th USENIX Workshop on Hot Topics in Storage and File Systems (HotStorage), 2020

Position: Can Microservices Drive a Renaissance in Workload-Aware Storage Management? (Poster)
Pranav Bhandari, Avani Wildani, Dimitris Skourtis, Vasily Tarasov, Deepavali Bhagwat, Lukas Rupprecht, Ali Anwar. 12th USENIX Workshop on Hot Topics in Storage and File Systems (HotStorage), 2020

DupHunter: Flexible High-Performance Deduplication for Docker Registries
Nannan Zhao, Hadeel Albahar, Subil Abraham, Keren Chen, Vasily Tarasov, Dimitrios Skourtis, Lukas Rupprecht, Ali Anwar, Ali R. Butt. USENIX Annual Technical Conference (USENIX ATC), 2020
 
InfiniCache: Exploiting Ephemeral Serverless Functions to Build a Cost-Effective Memory Cache
Ao Wang, Jingyuan Zhang, Xiaolong Ma, Ali Anwar, Lukas Rupprecht, Dimitrios Skourtis, Vasily Tarasov, Feng Yan, Yue Cheng. 18th USENIX Conference on File and Storage Technologies (USENIX FAST), 2020

Customizable Scale-Out Key-Value Stores
Ali Anwar, Yue Cheng, Hai Huang, Jingoo Han, Hyogi Sim, Dongyoon Lee, Fred Douglis, Ali R. Butt. Transactions on Parallel and Distributed Systems (TPDS), 2020
 

2019

Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering
Bryant Chen, Wilka Carvalho, Nathalie Baracaldo, Heiko Ludwig, Benjamin Edwards, Taesung Lee, Ian Molloy, Biplav Srivastava. AAAI Collocated: AAAI Workshop on Artificial Intelligence Safety (SafeAI), 2019

HybridAlpha: An Efficient Approach for Privacy-Preserving Federated Learning
Runhua Xu, Nathalie Baracaldo, Yi Zhou, Ali Anwar, Heiko Ludwig
The 12th ACM Workshop on Artificial Intelligence and Security (AISec 2019). [AISec slides]
 
A Hybrid Approach to Privacy-Preserving Federated Learning
Stacey Truex, Nathalie Baracaldo, Ali Anwar, Thomas Steinke, Heiko Ludwig, Rui Zhang, Yi Zhou
The 12th ACM Workshop on Artificial Intelligence and Security (AISec 2019)
An arXiv preprint version can be found at https://ibm.ent.box.com/file/514426847528
 
Guanghui Lan, Zhize Li, Yi Zhou. The 33rd Conference on Neural Information Processing Systems (NeurIPS 2019)
 
Towards Federated Graph Learning Platform for Anti-Money Laundering
Toyotaro Suzumura, Yi Zhou, Nathalie Baracaldo, Guangann Ye, Keith Houck, Ryo Kawahara, Ali Anwar, Lucia Larise Stavarache, Daniel Klyashtorny, Heiko Ludwig, and Kumar Bhaskaran 
NeurIPS FSS workshop, 2019
 
Towards Taming the Resource and Data Heterogeneity in Federated Learning
Chai, Z., Fayyaz, H., Fayyaz, Z., Anwar, A., Zhou, Y., Baracaldo, N., Ludwig, H. and Cheng, Y.
2019 USENIX Conference on Operational Machine Learning (OpML 19), pp. 19-21

Privacy-Preserving Process Mining 
Felix Mannhardt, Agnes Koschmider, Nathalie Baracaldo, Matthias Weidlich, Judith Michael 
Business & Information Systems Engineering, 2019
 
Confidentiality of Data in the Cloud 
N Baracaldo, J Glider 
Security, Privacy, and Digital Forensics in the Cloud, John Wiley & Sons, 2019
 
F Mannhardt, A Koschmider, N Baracaldo, M Weidlich, J Michael. Informatik Spektrum, 1-3
 
Michael, J., Koschmider, A., Mannhardt, F., Baracaldo, N., Rumpe, B. Informatik Spektrum, 42, pp. 347-348, 2019

2018

"Game for Detecting Backdoor Attacks on Deep Neural Networks using Activation Clustering" 
Casey Dugan, Werner Geyer, Aabhas Sharma, Ingrid Lange, Dustin Ramsey Torres, Bryant Chen, Nathalie Baracaldo Angel, Heiko Ludwig 
Thirty-second Conference on Neural Information Processing Systems (NIPS), 2018 

Adversarial Robustness Toolbox v0.3.0 
Maria-Irina Nicolae, Mathieu Sinn, Minh Ngoc Tran, Ambrish Rawat, Martin Wistuba, Valentina Zantedeschi, Nathalie Baracaldo, Bryant Chen, Heiko Ludwig, Ian M. Molloy, Ben Edwards

Complex Collaborative Physical Process Management: A Position on the Trinity of BPM, IoT and DA
Paul Grefen, Heiko Ludwig, Samir Tata, Remco Dijkman, Nathalie Baracaldo, Anna Wilbik and Tim D'Hondt 
Proceedings 19th IFIP/SOCOLNET Working Conference on Virtual Enterprises, Springer, 2018

Detecting Poisoning Attacks on Machine Learning in IoT Environments (Best paper award) 
Nathalie Baracaldo, Bryant Chen, Heiko Ludwig, Amir Safavi, Rui Zhang 
IEEE International Congress on Internet of Things (ICIOT), 2018

2017

Mitigating Poisoning Attacks on Machine Learning Models: A Data Provenance Based Approach 
Nathalie Baracaldo, Bryant Chen, Heiko Ludwig, Jaehoon Amir Safavi 
CCS Collocated: Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pp. 103--110, ACM, 2017

"Detecting Causative Attacks using Data Provenance"
Nathalie Baracaldo, Bryant Chen and Heiko Ludwig 
ICML Workshop: Private and Secure Machine Learning 2017

Securing Data Provenance in Internet of Things (IoT) Systems 
Baracaldo, Angel and Engel, Robert and Tata, Samir and Ludwig, Heiko 
Service-Oriented Computing--ICSOC 2016 Workshops: ASOCA, ISyCC, BSCI, and Satellite Events, Banff, AB, Canada, October 10--13, 2016, Revised Selected Papers, pp. 92, 2017
