Security

Are adversarial examples inevitable?

A pattern has emerged in which most adversarial defenses are quickly broken by new attacks. Given this lack of success at building robust defenses, we are led to ask a fundamental question: are adversarial examples inevitable?

Poison Frogs! Targeted Poisoning Attacks on Neural Networks

Data poisoning is an adversarial attack in which maliciously crafted examples are inserted into a classifier's training set to manipulate the model's behavior at test time. We propose a new poisoning attack that is effective against neural networks and can be executed by an outsider with no control over the training process.
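To make the threat model concrete, below is a minimal PyTorch sketch of a feature-collision style poisoning attack: a poison image is optimized to land near a chosen target in the feature space of a fixed, pretrained network while staying visually close to a clean base image, so it can be added to the training set without the attacker controlling training. The toy feature extractor, hyperparameters, and the `craft_poison` helper are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

# Toy stand-in for the penultimate layer of a fixed, pretrained network.
# (Illustrative only; a real attack would use the victim's feature extractor.)
feature_extractor = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 128),
    nn.ReLU(),
    nn.Linear(128, 64),
)
feature_extractor.eval()
for p in feature_extractor.parameters():
    p.requires_grad_(False)

def craft_poison(base, target, beta=0.1, lr=0.01, steps=500):
    """Optimize a poison that matches `target` in feature space while
    staying close to `base` in input space (feature collision)."""
    poison = base.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([poison], lr=lr)
    target_feat = feature_extractor(target)  # fixed target features
    for _ in range(steps):
        optimizer.zero_grad()
        loss = (torch.norm(feature_extractor(poison) - target_feat) ** 2
                + beta * torch.norm(poison - base) ** 2)
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            poison.clamp_(0.0, 1.0)  # keep the poison a valid image
    return poison.detach()

# Hypothetical usage: `base` comes from the class the attacker wants the
# target misclassified as; the finished poison is labeled with `base`'s
# (correct) class and slipped into the victim's training data.
base = torch.rand(1, 3, 32, 32)    # clean image from the attacker's chosen class
target = torch.rand(1, 3, 32, 32)  # test-time instance to be misclassified
poison = craft_poison(base, target)
```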
