
I show my 19-month-old daughter like three cartoon drawings of owls and she recognises a live one at the bird park instantly, unprompted. We have a way to go.


I believe cartoons are our equivalent of adversarial images. They typically look nothing like (photos of) their namesake, and yet we usually recognise them without prompting.


It is my understanding (although I sure don't have any evidence on me) that cartoons and such (at least, the ones where we haven't simply learned that this cartoon means this animal) work by being a picture of what we remember about an animal. Akin to a caricature, the cartoon contains the most salient features. It doesn't work by looking like the actual animal; it works by resonating with how we remember the animal.


Isn't that kind of the same thing? Adversarial examples work by matching what the neural net 'remembers' about the target classification, rather than being a picture of a thing in that class. Neural nets just find different features salient.

I've wondered in the past if we could use black box adversarial methods with Mechanical Turk to generate adversarial examples that work on humans. Maybe they'd end up looking like cartoons?

(Also agreed, some cartoon animals are just informed likeness - for instance Goofy doesn't look anything like a dog, at least to me.)
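To make the black-box idea concrete: the defining constraint is that you only get classification decisions (or scores) back from the oracle, with no access to gradients, which is exactly the constraint a Mechanical Turk setup with human labellers would impose. Here's a minimal sketch of a query-only random-search attack; the toy linear model, the function names, and all parameters are illustrative assumptions, not from any real attack library.

```python
# Sketch of a black-box (query-only) adversarial search.
# The "model" is a toy linear classifier; in the Mechanical Turk
# analogy, `scores` would be replaced by human judgments.
import numpy as np

rng = np.random.default_rng(0)

# Toy model: linear scores for 3 classes over a 16-dim input.
W = rng.normal(size=(3, 16))

def scores(x):
    """Black-box oracle: we only see output scores, never gradients."""
    return W @ x

def black_box_attack(x0, target, steps=500, eps=0.05, budget=0.5):
    """Random search: keep any small perturbation that raises the
    target-class score, staying within an L-infinity `budget` of x0."""
    best = x0.copy()
    best_score = scores(best)[target]
    for _ in range(steps):
        candidate = best + eps * rng.normal(size=x0.shape)
        candidate = np.clip(candidate, x0 - budget, x0 + budget)
        s = scores(candidate)[target]
        if s > best_score:
            best, best_score = candidate, s
    return best

x0 = rng.normal(size=16)
adv = black_box_attack(x0, target=2)
print("target score before/after:", scores(x0)[2], scores(adv)[2])
```

The interesting question the Turk version raises is what the `budget` constraint would look like for humans: with a loose enough budget, the search might well converge on caricature-like images that exaggerate the features people actually key on.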


>> Akin to a caricature; the cartoon contains the most salient features

The question is: how do we know what the salient features are? How do we figure out that if we make _this_ drawing, it will "remind of" an owl, and if we make _that_ drawing it will "remind of" a dog (or not, as the case may be)? I mean, if we knew how humans extract salient or relevant features from their environment, we'd be way ahead on the path to AI.



