Adversarial Machine Learning: Are We Playing the Wrong Game?

David Evans
CISPA Distinguished Lecture
Center for IT-Security, Privacy and Accountability, Universität des Saarlandes
Saarbrücken, Germany
10 July 2017

Abstract

Machine learning classifiers are increasingly popular for security applications, and often achieve outstanding performance in testing. When deployed, however, classifiers can be thwarted by motivated adversaries who adaptively construct adversarial examples that exploit flaws in the classifier's model. Much work on adversarial examples has focused on finding small distortions to inputs that fool a classifier, and previous defenses against such examples have been both ineffective and expensive in practice. In this talk, I'll describe a new and very simple strategy, feature squeezing, that can be used to harden classifiers by detecting adversarial examples. Feature squeezing reduces the search space available to an adversary by coalescing samples that correspond to many different inputs in the original space into a single sample. Adversarial examples can then be detected by comparing the model's predictions on the original and squeezed samples. In practice, of course, adversaries are not limited to small distortions in a particular metric space. Indeed, in security applications like malware detection, it may be possible to make large changes to an input without disrupting its intended malicious behavior. I'll report on an evolutionary framework we have developed to search for such adversarial examples; it can automatically find evasive variants against state-of-the-art classifiers. These results suggest that work on adversarial machine learning needs a better definition of adversarial examples, and a better understanding of how classifiers and oracles perceive samples differently.
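
To illustrate the detection idea, here is a minimal Python sketch, assuming a classifier object `model` (for instance, a Keras model) whose predict() returns softmax probabilities for images with pixel values in [0, 1]. The bit-depth squeezer and the L1 threshold below are illustrative choices, not the exact configuration from the talk.

    import numpy as np

    def squeeze_bit_depth(x, bits=4):
        # Coalesce nearby inputs: round each [0, 1] pixel to 2**bits levels,
        # so many slightly different inputs map to the same squeezed sample.
        levels = 2 ** bits - 1
        return np.round(x * levels) / levels

    def looks_adversarial(model, x, threshold=1.0):
        # Compare the model's predictions on the original and squeezed input.
        # A large L1 gap suggests the input relies on fine-grained features
        # that squeezing removes, which is characteristic of adversarial
        # examples. `model` and the threshold value are assumptions here.
        p_orig = model.predict(x[np.newaxis])[0]
        p_squeezed = model.predict(squeeze_bit_depth(x)[np.newaxis])[0]
        return np.abs(p_orig - p_squeezed).sum() > threshold

In this sketch, legitimate inputs should yield nearly identical predictions before and after squeezing, while adversarial perturbations, which exploit the extra resolution of the original feature space, are disrupted by it.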
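The evolutionary search for evasive variants can be sketched in a similar spirit. The helpers below are assumptions for illustration only: mutate(sample) applies a random structural change to a sample, classifier_score(sample) returns the target classifier's maliciousness score, and oracle_is_malicious(sample) runs the variant (for example, in a sandbox) to confirm it still exhibits the intended malicious behavior.

    import random

    # Assumed helpers (not defined here): mutate(sample),
    # classifier_score(sample) in [0, 1], and oracle_is_malicious(sample).
    def search_evasive(seed, population_size=50, generations=100):
        population = [seed]
        for _ in range(generations):
            # Generate mutated variants and keep only those whose malicious
            # behavior is preserved according to the oracle.
            candidates = [mutate(random.choice(population))
                          for _ in range(population_size)]
            candidates = [c for c in candidates if oracle_is_malicious(c)]
            if not candidates:
                continue
            # Fitness favors variants the classifier scores as least malicious.
            candidates.sort(key=classifier_score)
            if classifier_score(candidates[0]) < 0.5:  # assumed threshold
                return candidates[0]  # evasive variant found
            # Select the fittest fraction to seed the next generation.
            population = candidates[:max(1, population_size // 5)]
        return None

The key point of the design is that the oracle, not any distance metric, defines which variants are valid: arbitrarily large changes are acceptable so long as the malicious behavior survives.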

Speaker's Bio

David Evans is a Professor of Computer Science at the University of Virginia, where he leads the Security Research Group. He is the author of an open computer science textbook and a children's book on combinatorics and computability. He is Program Co-Chair for the ACM Conference on Computer and Communications Security (CCS) 2017, and previously was Program Co-Chair for the 30th (2009) and 31st (2010) IEEE Symposia on Security and Privacy (where he initiated the Systematization of Knowledge (SoK) papers). He has SB, SM, and PhD degrees in Computer Science from MIT and has been a faculty member at the University of Virginia since 1999.

Links

Project site: evadeML.org