I believe our divergence lies somewhere in the definition of 'intelligence'. The word is very ill-defined, to the point that we are often reduced to describing it functionally, as with the Turing test.

>Human intelligence is the intellectual capacity of humans, which is characterized by perception, consciousness, self-awareness, and volition.

That's a very normal definition of intelligence. AI is defined (among many other ways, naturally) as "the study of intelligent agents"; 'agency' is a requirement for an artificial intelligence.

Since we started with the context of 'paperclip maximizers', I've been talking about intelligence in that context: to be studied as an intelligence, a thing must have 'agency', the ability to act intentionally. A thing without agency can still have behavior, and that behavior can still be studied, but the behavior is emergent, not intentional. A system does not have agency unless it is capable of having goals, and of making decisions to achieve or progress toward them.

In particular, capitalism isn't 'trying to maximize capital' - that's an effect it (supposedly) has, but not an intentional one. It's a fairly clear emergent effect: if many of the actors in a system are each trying to maximize their personal capital, then the system as a whole turning out to maximize net capital should surprise nobody. It's closely analogous to calling a diffusion chamber a 'maximizer' of entropy - a diffusion chamber does maximize entropy, but by calling it a 'maximizer' (in the sense of 'paperclip maximizer') you would be ascribing agency to the chamber; it isn't trying to maximize entropy, that's just something it happens to do.
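The emergent-vs-intentional distinction can be made concrete with a toy simulation. This is a minimal sketch under assumed, made-up parameters (the actor count, capital values, and gain distribution are all illustrative, not from the comment): each actor acts only on its own capital, yet the aggregate rises, with no goal of 'maximize total capital' encoded anywhere in the system.

```python
import random

random.seed(0)

# Hypothetical toy model: each "actor" acts only to increase its OWN
# capital; no actor, and no system-level rule, aims at total capital.
actors = [{"capital": 100.0} for _ in range(10)]

def act(actor):
    # The actor greedily picks the personally better of two random
    # options; system-wide effects are never considered.
    gain = max(random.uniform(-1, 3), random.uniform(-1, 3))
    actor["capital"] += gain

total_before = sum(a["capital"] for a in actors)
for _ in range(1000):
    for a in actors:
        act(a)
total_after = sum(a["capital"] for a in actors)

# The aggregate grows even though "maximize total capital" appears
# nowhere above - the system-level trend is emergent, not anyone's goal.
print(total_after > total_before)
```

The point of the sketch is that calling this system a 'capital maximizer' in the paperclip-maximizer sense would misattribute agency: the upward trend in the total is a statistical consequence of local greed, not a decision the system makes.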
