Invisibility Cloak
Overview

This paper studies the art and science of creating adversarial attacks on object detectors. Most work on real-world adversarial attacks has focused on classifiers, which assign a holistic label to an entire image, rather than detectors, which localize objects within an image.
Detectors work by considering thousands of "priors" (potential bounding boxes) within the image, spanning different locations, sizes, and aspect ratios. In this work, we present a systematic study of adversarial attacks on state-of-the-art object detection frameworks. The cloak itself features a stay-dry microfleece lining, a modern fit, and adversarial patterns that evade most common object detectors.
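To make the "thousands of priors" concrete, here is a minimal sketch (assumed grid size, scales, and aspect ratios, not the paper's code) of how an SSD/YOLO-style detector enumerates candidate anchor boxes over a feature-map grid:

```python
def make_priors(grid_w, grid_h, scales, aspect_ratios):
    """Return (cx, cy, w, h) anchor boxes in normalized image coordinates,
    one set of shapes per grid cell."""
    priors = []
    for gy in range(grid_h):
        for gx in range(grid_w):
            # Center of this grid cell, normalized to [0, 1]
            cx, cy = (gx + 0.5) / grid_w, (gy + 0.5) / grid_h
            for s in scales:
                for ar in aspect_ratios:
                    # Aspect ratio stretches width and shrinks height
                    # (or vice versa) while preserving box area s**2
                    w = s * ar ** 0.5
                    h = s / ar ** 0.5
                    priors.append((cx, cy, w, h))
    return priors

# Even one coarse 38x38 feature map with a few box shapes
# already yields thousands of priors for the detector to score.
boxes = make_priors(38, 38, scales=[0.1, 0.2], aspect_ratios=[0.5, 1.0, 2.0])
print(len(boxes))  # 38 * 38 * 2 * 3 = 8664
```

An attack that merely suppresses one box is not enough: the adversarial pattern must push the objectness/class scores down across all of these overlapping priors simultaneously, which is what makes attacking detectors harder than attacking classifiers.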