AI Security & Privacy Solutions       


Team: Annie K Abay, Nathalie Baracaldo, Ebube Chuba, Shashank Rajamoni, Yi Zhou

Overview

The AI Security & Privacy Solutions group at IBM Research-Almaden, based in San Jose, CA, is part of the AI Platforms organization and is led by Nathalie Baracaldo. The group works to make AI platforms safe and private for all stakeholders.

Adopting machine learning exposes enterprises to novel security and privacy risks, and these risks must be addressed for an implementation to succeed and remain secure. Our group develops solutions that detect and mitigate vulnerabilities inherent to machine learning systems. In particular, we target the risk of adversarial machine learning attacks and the risk of privacy exposure, which we address through federated learning.

  • Our research on adversarial machine learning focuses on identifying threats to the training and deployment of learned systems and on developing corresponding defense strategies.
  • Our work on federated learning prevents privacy leakage by enabling models to be trained collaboratively without transmitting data to a central place, while guarding against inference attacks both during training and against the final trained model.

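The collaborative training scheme described above can be sketched as a minimal federated averaging loop. This simulation, including the linear model, client setup, and all parameter choices, is purely illustrative and not the group's implementation: each client takes a few gradient steps on its own private data, and only the resulting model weights, never the raw data, travel back to be averaged.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps on its private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient for a linear model
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """One round of federated averaging: clients train locally on data that
    never leaves them; only their updated weights are aggregated."""
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(local_ws, axis=0, weights=sizes)  # weighted by dataset size

# Three simulated clients, each holding a private shard drawn from the same model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(30):
    w = federated_round(w, clients)
```

After a few dozen rounds the aggregated weights recover the underlying model even though no client ever shared its data. Real deployments add protections on top of this loop, such as differential privacy or secure aggregation, precisely because the exchanged weights themselves can still leak information through inference attacks.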
Please find more information about our work in the interactive experiences and publications linked below.

  • Fool the AI Game
  • FFL Demo


Articles and Blog Posts

Private federated learning: Learn together without sharing data (Nov. 2019)

SAFEAI 2019 - Best Paper Award

The Adversarial Robustness Toolbox v0.3.0: Closing the Backdoor in AI Security


Publications

2019

"HybridAlpha: An Efficient Approach for Privacy-Preserving Federated Learning"
Runhua Xu, Nathalie Baracaldo, Yi Zhou, Ali Anwar and Heiko Ludwig
The 12th ACM Workshop on Artificial Intelligence and Security (AISec 2019). [AISec slides]
 
"A Hybrid Approach to Privacy-Preserving Federated Learning"
Stacey Truex, Nathalie Baracaldo, Ali Anwar, Thomas Steinke, Heiko Ludwig, Rui Zhang and Yi Zhou
The 12th ACM Workshop on Artificial Intelligence and Security (AISec 2019)
A preprint version can be found at https://ibm.ent.box.com/file/514426847528
 
Towards Federated Graph Learning Platform for Anti-Money Laundering
Toyotaro Suzumura, Yi Zhou, Nathalie Baracaldo, Guangann Ye, Keith Houck, Ryo Kawahara, Ali Anwar, Lucia Larise Stavarache, Daniel Klyashtorny, Heiko Ludwig, and Kumar Bhaskaran 
NeurIPS FSS workshop, 2019
 
Towards Taming the Resource and Data Heterogeneity in Federated Learning
Chai, Z., Fayyaz, H., Fayyaz, Z., Anwar, A., Zhou, Y., Baracaldo, N., Ludwig, H. and Cheng, Y.
2019 USENIX Conference on Operational Machine Learning (OpML 19), pp. 19-21
 
Privacy-Preserving Process Mining 
Felix Mannhardt, Agnes Koschmider, Nathalie Baracaldo, Matthias Weidlich, Judith Michael 
Business & Information Systems Engineering, 2019
 
Confidentiality of Data in the Cloud 
N Baracaldo, J Glider 
Security, Privacy, and Digital Forensics in the Cloud, John Wiley & Sons, 2019
 
Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering (Best paper award) 
Bryant Chen, Wilka Carvalho, Nathalie Baracaldo, Heiko Ludwig, Benjamin Edwards, Taesung Lee, Ian Molloy and Biplav Srivastava 
AAAI Workshop on Artificial Intelligence Safety (SafeAI), 2019
 

2018

"Game for Detecting Backdoor Attacks on Deep Neural Networks using Activation Clustering" 
Casey Dugan, Werner Geyer, Aabhas Sharma, Ingrid Lange, Dustin Ramsey Torres, Bryant Chen, Nathalie Baracaldo Angel, Heiko Ludwig 
Thirty-second Conference on Neural Information Processing Systems (NIPS), 2018 

Adversarial Robustness Toolbox v0.3.0 
Maria-Irina Nicolae, Mathieu Sinn, Minh Ngoc Tran, Ambrish Rawat, Martin Wistuba, Valentina Zantedeschi, Nathalie Baracaldo, Bryant Chen, Heiko Ludwig, Ian M. Molloy, Ben Edwards 
arXiv preprint, 2018

Complex Collaborative Physical Process Management: A Position on the Trinity of BPM, IoT and DA
Paul Grefen, Heiko Ludwig, Samir Tata, Remco Dijkman, Nathalie Baracaldo, Anna Wilbik and Tim D'Hondt 
Proceedings 19th IFIP/SOCOLNET Working Conference on Virtual Enterprises, Springer, 2018

Detecting Poisoning Attacks on Machine Learning in IoT Environments (Best paper award) 
Nathalie Baracaldo, Bryant Chen, Heiko Ludwig, Amir Safavi, Rui Zhang 
IEEE International Congress on Internet of Things (ICIOT), 2018

2017

Mitigating Poisoning Attacks on Machine Learning Models: A Data Provenance Based Approach 
Nathalie Baracaldo, Bryant Chen, Heiko Ludwig, Jaehoon Amir Safavi 
CCS Collocated: Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pp. 103--110, ACM, 2017

"Detecting Causative Attacks using Data Provenance"
Nathalie Baracaldo, Bryant Chen and Heiko Ludwig 
ICML Workshop: Private and Secure Machine Learning 2017

Securing Data Provenance in Internet of Things (IoT) Systems 
Nathalie Baracaldo Angel, Robert Engel, Samir Tata and Heiko Ludwig 
Service-Oriented Computing--ICSOC 2016 Workshops: ASOCA, ISyCC, BSCI, and Satellite Events, Banff, AB, Canada, October 10--13, 2016, Revised Selected Papers, pp. 92, 2017
