Invisibility cloak
We construct clothing that makes the wearer invisible to common object detectors.
We show that content control systems are vulnerable to adversarial attacks. Using small perturbations, we can fool important industrial systems like YouTube’s Content ID.
Adversarial training hardens neural nets against attacks, but it costs 10-100X more than regular training. We show how to do adversarial training with no added cost, and train a robust ImageNet model on a desktop computer in just a day.
A pattern has emerged in which the majority of adversarial defenses are quickly broken by new attacks. Given the lack of success at generating robust defenses, we are led to ask a fundamental question: Are adversarial attacks inevitable?
Data poisoning is an adversarial attack in which examples are added to the training set of a classifier to manipulate the behavior of the model at test time. We propose a new poisoning attack that is effective on neural nets, and can be executed by an outsider with no control over the training process.