Robust Physical Adversarial Attack on Faster R-CNN Object Detector


  • Mon 16 April 2018
  • Carter Yagemann


We have released a new code repository for physically attacking Faster R-CNN.

In this work, we tackle the more challenging problem of crafting physical adversarial perturbations that fool image-based object detectors such as Faster R-CNN. Attacking an object detector is harder than attacking an image classifier: the attack must mislead the classification results in many bounding boxes at different positions and scales, not just a single prediction. Our approach generates perturbed stop signs that Faster R-CNN consistently mis-detects as other objects, posing a potential threat to autonomous vehicles and other safety-critical computer vision systems.
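To give a flavor of what "misleading many boxes at once" means in practice, here is a minimal sketch of an Expectation-over-Transformation style attack objective, which the paper builds on. The `detector` callable and `transforms` list are hypothetical stand-ins, not the repository's API: we assume `detector` exposes Faster R-CNN's second-stage per-proposal class logits as a `(num_proposals, num_classes)` tensor.

```python
import torch
import torch.nn.functional as F

def adversarial_objective(detector, perturbed_image, target_class, transforms):
    """Sketch of an Expectation-over-Transformation attack loss.

    `detector` is a hypothetical callable returning Faster R-CNN's
    per-proposal class logits, shape (num_proposals, num_classes).
    `transforms` is a list of random physical transformations
    (e.g. rotation, scaling, lighting changes) applied to the image.
    """
    loss = 0.0
    for t in transforms:
        logits = detector(t(perturbed_image))  # (P, C) proposal logits
        # Push *every* proposal toward the attacker's target class,
        # since the detector classifies many overlapping boxes at
        # different scales -- fooling one box is not enough.
        target = torch.full((logits.shape[0],), target_class, dtype=torch.long)
        loss = loss + F.cross_entropy(logits, target)
    # Averaging over random transformations encourages a perturbation
    # that survives real-world viewing angles, distances, and lighting.
    return loss / len(transforms)
```

Minimizing this expected loss over many simulated viewing conditions is what makes the perturbation *physical*: it must work from a range of angles and distances, not just for one fixed camera frame.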

The arXiv paper is available here.