
Actually, NNs (specifically autoencoders) are more like a nonlinear PCA. PCA can be seen as an "information bottleneck" where you try to represent an N-dimensional space as a K-dimensional space (K < N) such that your ability to reconstruct the original space is maximized (that is, the "reconstruction error" is minimized).
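A minimal sketch of the bottleneck view, assuming numpy and synthetic data: project centered data onto the top-K principal directions (via SVD) and reconstruct, which minimizes squared reconstruction error among all rank-K linear maps.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))   # 100 samples, N = 5 dimensions
Xc = X - X.mean(axis=0)         # PCA assumes centered data

K = 2
# Rows of Vt are the principal directions (ordered by singular value)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
W = Vt[:K]                      # (K, N) "encoder" matrix

Z = Xc @ W.T                    # K-dimensional codes (the bottleneck)
X_hat = Z @ W                   # reconstruction back in N dimensions

err = np.mean((Xc - X_hat) ** 2)
```

Here encoding and decoding are both plain matrix multiplications; an autoencoder replaces them with learned, nonlinear maps.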

Autoencoders are precisely the same idea, except that you use nonlinear transformations (e.g. sigmoid or tanh) in addition to linear transformations (matrix multiplications). You can also use multiple layers in an autoencoder, just like any neural net. (The linearity of PCA makes multiple layers useless: you can always replace a stack of linear layers with a single one.)
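The parenthetical is easy to verify numerically, assuming numpy and arbitrary weight matrices: two stacked linear layers are exactly one linear layer whose matrix is the product, while inserting a tanh in between breaks that equivalence.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(3,))
W1 = rng.normal(size=(4, 3))    # first linear layer
W2 = rng.normal(size=(2, 4))    # second linear layer

# Two stacked linear layers...
two_layer = W2 @ (W1 @ x)
# ...collapse to a single linear layer with weight matrix W2 @ W1.
one_layer = (W2 @ W1) @ x

# With a nonlinearity between the layers, no single matrix reproduces
# the map in general -- that's what depth buys a nonlinear autoencoder.
nonlinear = W2 @ np.tanh(W1 @ x)
```

This is why a purely linear autoencoder, no matter how deep, can do no better than PCA's rank-K subspace.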


