AI Meets Security Symposium '19 - overview
The AI Meets Security Symposium '19 is being held in conjunction with the IBM Research AI Week at the MIT-IBM Watson AI Lab in September 2019.
As cyber security threats to enterprises and the cloud continue to become more sophisticated, stealthy, and devastating (some even weaponizing AI technologies), security operations teams struggle to keep up with detecting, managing, and countering cyber attacks, as well as proactively deploying protective measures. The security industry and practitioners are experimenting with AI and machine learning technologies in different areas of security operations, including the identification of security-relevant (mis)behaviors and malware, the extraction and consolidation of threat intelligence, reasoning over security alerts, and the recommendation of countermeasures and protective measures.
At the same time, adversarial attacks on machine learning systems have become an indisputable threat. Attackers can compromise the training of machine learning models by injecting malicious data into the training set (so-called poisoning attacks), or craft adversarial samples that exploit the blind spots of machine learning models at test time (so-called evasion attacks). Adversarial attacks have been demonstrated in a number of application domains, including malware detection, spam filtering, visual recognition, speech-to-text conversion, and natural language understanding. Devising comprehensive defenses against poisoning and evasion attacks by adaptive adversaries remains an open challenge. Gaining a better understanding of the threat posed by adversarial attacks, and developing more effective defense systems and methods, is therefore paramount for the adoption of machine learning systems in security-critical real-world applications.
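To make the evasion attacks mentioned above concrete, the following minimal sketch crafts a Fast Gradient Sign Method (FGSM)-style adversarial perturbation against a toy logistic-regression classifier. All weights, inputs, and function names here are illustrative assumptions for exposition, not material from the symposium:

```python
import numpy as np

# Illustrative only: a tiny logistic-regression "model" with fixed weights.
w = np.array([2.0, -3.0, 1.5])
b = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Probability that input x belongs to class 1."""
    return sigmoid(np.dot(w, x) + b)

def fgsm_perturb(x, y, eps=0.3):
    """FGSM evasion sketch: nudge x by eps in the sign of the
    loss gradient, so the model's confidence in true label y drops."""
    p = predict(x)
    # Gradient of the binary cross-entropy loss w.r.t. the input: (p - y) * w
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

x = np.array([1.0, 1.0, 1.0])   # benign input, true label 1
x_adv = fgsm_perturb(x, y=1)

print(predict(x))      # ~0.73: classified as class 1
print(predict(x_adv))  # ~0.28: small perturbation flips the decision
```

The design point this sketch illustrates: an evasion attack needs only the gradient direction, not large input changes, which is why imperceptibly perturbed inputs can cross a model's decision boundary.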
Organizers:
- Ian Molloy (IBM Research)
- JR Rao (IBM Research)
- Mathieu Sinn (IBM Research)
- Marc Ph. Stoecklin (IBM Research)
Venue: Stratton Student Center, 84 Massachusetts Ave., Cambridge, MA 02139