I don't get this AI scare. Imagine you are a genius with an IQ of 200. You have studied hard your entire life, and you are now an expert in several fields. One day you decide to conquer the world. Will you succeed or not?
On your specific point, humans take decades to (very roughly) copy themselves. An AI could do it in minutes with far more precision and control.
Assuming you have 200 IQ, if you could make thousands or millions of clones of yourself quite cheaply, you still might not succeed in taking over the world, but it is no longer a laughable idea.
Overall, I’m somewhat ambivalent about the possibility of x-risk, especially if we work reasonably hard to prevent it.
But it shouldn’t be ignored. AI is moving very quickly and it is unclear how powerful its capabilities will be in the next 10-20 years. Of course, there are many other risks presented by AI that we need to stay on top of as well.
> On your specific point, humans take decades to (very roughly) copy themselves. An AI could do it in minutes with far more precision and control.
Copying AIs just results in effectively higher intelligence (a bigger brain). Things would be different with self-replicating robots, of course, but duplicating robots is not that fast, and one could argue that the robots themselves are the danger rather than the AI controlling them, because they would be dangerous even if controlled by a stupid non-AI program.
If you are very smart, chances are you will be able to obtain some power and influence; the same is expected of AIs. However, such effects can be undone, the same way you can undo the effects of an explosion at a refinery (while punishing those responsible). The question is whether intelligence alone lets you expand power without bounds until you conquer the world. I just don't see how that would be possible.
>The question is whether intelligence alone lets you expand power without bounds until you conquer the world. I just don't see how that would be possible.
Name the next most intelligent animal after humans...
Have humans completely and utterly conquered them?
The intelligence explosion already happened, and humans were it. Now you're questioning whether it's possible with intelligence explosion 2.0?
The intelligence explosion was useful to humans because humans can make more humans and command more resources. An AI can clone itself only onto hardware that humans make.
We’re F’ed because people can’t imagine anything more intelligent than 200 IQ…
How about 500 IQ with a direct line into the entire information retrieval and processing system of 7 billion humans, most of whom are already easily manipulated by relatively rudimentary social media ML?
Imagine you are a jungle monkey. One day some other small hairless monkeys want you to go extinct. Will they succeed?
It's not about 200 IQ; it's about when we reach bigger gaps than that. Though yes, I suspect that if the 200 IQ guy can copy himself at will, get hardware to do years of 200 IQ thinking in an hour, etc., he might succeed.
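For a rough sense of scale, here's the arithmetic behind "years of thinking in an hour" as a minimal sketch. The copy count and speedup are made-up illustrative assumptions, not measurements of any system:

```python
# Back-of-envelope: subjective thinking time a copyable AI accumulates
# per wall-clock hour. Both inputs are illustrative assumptions.

copies = 1_000    # assumed number of cheap parallel copies
speedup = 10      # assumed per-copy thinking speed vs. one human

subjective_hours = copies * speedup          # per wall-clock hour
subjective_years = subjective_hours / (24 * 365)

print(f"{subjective_hours:,} human-hours of thought per hour")
print(f"~{subjective_years:.1f} human-years of thought per hour")
# -> 10,000 human-hours, roughly 1.1 human-years, per wall-clock hour
```

Even with modest assumptions, the copyability is doing most of the work in that claim.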
> One day some other small hairless monkeys want you to go extinct. Will they succeed?

The hairless monkeys were embodied and violent, and they still took millions of years to dominate. The question is whether intelligence on its own provides a sufficient advantage to be immediately dangerous. How can you conquer the world by doing "years of 200 IQ thinking in an hour"?
Dude, humans started from scratch; we're trying to design these things to outperform ourselves in every way possible. They're not starting from scratch: robots come out of the factory with the ability to walk. Humans never had that advantage; we evolved it over a long time.
I'm pretty neutral about it at the moment, but it's not inconceivable that something that has an IQ of 400, can think 30x faster than you, has no need to sleep, and has 500x your working memory would be dangerous if it didn't like you.
In fact, your comment and others like it make me wonder if people are just in denial, because, as I said, it doesn't seem impossible?
I've literally heard people claim that things like ChatGPT will soon become "smart enough" to iteratively improve themselves so rapidly that they will become "digital gods".
Right now, LLMs are already rapidly improving themselves through LLM-generated code proxied by human engineers. It's not a huge stretch to imagine needing fewer and fewer engineers for further improvement.
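As a minimal sketch of the loop being described, assuming nothing about any real system — every function below is hypothetical, invented purely to make the feedback cycle explicit:

```python
# Hypothetical sketch of "LLM-generated code proxied by human engineers".
# No function here corresponds to a real API; they are toy stand-ins.

def llm_propose_patch(codebase: str) -> str:
    """Stand-in for an LLM suggesting a change to its own tooling."""
    return codebase + "\n# optimization suggested by the model"

def human_review(patch: str) -> bool:
    """Stand-in for the engineer who currently gates every change."""
    return "optimization" in patch  # trivially accepts our toy patch

codebase = "# model training/serving code"
for generation in range(3):
    patch = llm_propose_patch(codebase)
    if human_review(patch):   # the human "proxy" in the loop
        codebase = patch      # accepted changes compound
print(codebase)
```

The worry in the comment amounts to the human_review gate getting thinner with each generation.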
I'm not so sure about that. Actual gods wouldn't have poisoned the water, broken the oceans, started wars with each other and with the climate, made themselves obsolete, and potentially obliterated themselves and their children.
Only if the body is duplicated too. If I could just make my brain bigger, I don't think my odds would improve much. Quite the contrary: an AI with a higher dependency on compute power seems more vulnerable.