Isn't that kind of the same thing? Adversarial examples work by matching what the neural net 'remembers' about the target class, rather than by being a picture of a thing in that class. Neural nets just find different features salient.
I've wondered in the past if we could use black box adversarial methods with Mechanical Turk to generate adversarial examples that work on humans. Maybe they'd end up looking like cartoons?
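A minimal sketch of what that might look like, with a toy scoring function standing in for the Mechanical Turk oracle (in a real pipeline each `oracle` call would be a batch of HITs asking raters "how dog-like is this image?"; the greedy random search here is just one simple black-box method, and every name below is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for "what humans find dog-like": a fixed direction in image space.
target_direction = rng.normal(size=64)

def oracle(img):
    # Returns a scalar "dog-likeness" rating in [0, 1].
    # In the real experiment this would be an average of Turker ratings.
    return 1 / (1 + np.exp(-img @ target_direction))

def black_box_attack(img, steps=200, sigma=0.1):
    # Greedy random search: propose a small perturbation, keep it only
    # if the oracle's rating improves. No gradients needed, which is
    # the point when the "classifier" is a crowd of humans.
    best, best_score = img.copy(), oracle(img)
    for _ in range(steps):
        candidate = best + sigma * rng.normal(size=img.shape)
        score = oracle(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

start = rng.normal(size=64)
adv, score = black_box_attack(start)
```

The interesting question is what `adv` would look like when the oracle is human perception rather than a toy function; exaggerated, cartoon-like features seem plausible.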
(Also agreed, some cartoon animals are just informed likeness - for instance Goofy doesn't look anything like a dog, at least to me.)