Part of me wonders if these people are intentionally framing the debate around ethics and potential risks as longer-term, extinction-level problems to distract from the nearer-term damage they cause by accelerating economic inequality for the AI have-nots while making themselves even richer.
I believe you may be alluding to longtermism[0]. At face value, longtermism seems like a good thing, but I've heard many criticisms of it - mainly levied against the billionaire class.
And the criticisms mostly center on what you're saying here - how many billionaires focus on fixing problems that are very far off in the future while ignoring how their actions affect people in the very near future.
This is really less of a criticism of longtermism, and more of a criticism of how billionaires utilize longtermism.
Is it important that we find another planet to live on? Sure, but many will argue that we should be taking steps now to save our current planet.
The more I look at AI, the more I get the feeling that this is true. Spinning an intriguing sci-fi tale of apocalypse and extinction is relatively easy and serves to obfuscate any nearer-term concerns about AI behind a hypothetical that sucks the air out of the room.
That said, I don’t think that it’s necessarily disingenuous so much as it is myopic - to them of course AI is exciting, world-changing, and profitable, but they (willfully or not) fail to see the downsides or upsides for anyone but themselves. Perhaps in the minds of the ultra-rich AI proponents, solutions to the nearer-term effects of their tech are someone else’s problem, but the “existential risks” are “everyone’s” problem.
The short-term effect is a harbinger of the long-term risk, since capitalism doesn’t inherently care for people who don’t provide economic value. Once superintelligent AI arises, none of us will have value within this system. Even the largest current capital holders will have a hard time holding on to it with an enormous intelligence disadvantage. The logical endpoint is the subjugation or elimination of our species, unless we find a new economic system with human value at its core.
There are a lot of assumptions going on here. One of them is that superintelligent AI will arise. We have no reason to believe this will happen in our lifetimes. I posit that we are about as close to superintelligent AI as James Watt was to nuclear fusion.
The other assumption is that wealth and power are distributed according to intelligence. This is obviously false; wealth and power are largely distributed according to who you or your father plays golf with. As long as AIs don't play golf and don't have fathers, we are quite safe.
> There are a lot of assumptions going on here. One of them is that superintelligent AI will arise. We have no reason to believe this will happen in our lifetimes. I posit that we are about as close to superintelligent AI as James Watt was to nuclear fusion.
This is a perfectly reasonable response if nobody is trying to build it.
Given people are trying to build it, what's the expected damage from ignoring the problem? E($Damage_i) = P(BadOutcome_i) * $Damage_i.
$Damage can be huge (there are many possible bad outcomes of varying severity and probability, hence the subscript), which means that at the very least we should try to get a good estimate for P(…) so we know which problems are most important. In addition to it being bad to ignore real problems, it is also bad to do a Pascal's Mugging on ourselves just because we accidentally slipped a few decimal places in our initial best guess, especially as we have finite capacity to solve problems.
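To make that concrete, here's a toy version of the calculation. Every outcome name, probability, and dollar figure below is invented purely for illustration, not an estimate of anything:

    # Toy expected-damage calculation: E(Damage_i) = P(BadOutcome_i) * Damage_i.
    # All numbers are made up for illustration only.
    outcomes = {
        "mass job displacement": (0.30, 5e12),            # (P, damage in $)
        "critical infrastructure failure": (0.02, 1e13),
        "existential catastrophe": (0.001, 1e16),
    }

    expected = {name: p * dmg for name, (p, dmg) in outcomes.items()}

    # Which problem looks "most important" is dominated by the P estimates:
    # slip the last probability by one decimal place and the ordering changes.
    for name, e in sorted(expected.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{name}: expected damage ~ ${e:,.0f}")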
Finally, let's assume you're right: that we're centuries off at least, and that all the superintelligent narrow AI we've already got examples of involves things that can't be replicated in any area that poses a threat. How long would it take to solve alignment? Is that also centuries off? We've been trying to align each other at least since laws like 𒌷𒅗𒄀𒈾 were written, and the only reason I'm not giving an even older example is that this is the oldest known written form to have survived, not that we weren't doing it before then.
> The other assumption is that wealth and power are distributed according to intelligence. This is obviously false, wealth and power are largely distributed according to who you or your father plays golf with. As long as AIs don't play golf and don't have fathers, we are quite safe.
Nepotism helps, but… huh, TIL that nobody knows who the grandfather of one of the world's most famous dictators was.
Cronyism is a viable alternative for a lot of power-seekers.
So I propose the following Musk supremacy criterion.
Suppose that a wealthy and powerful human (such as Elon Musk) were to suddenly obtain the exact same sinister goals as the hypothetical superintelligent AI in question. Suppose further that this human was able to convince/coerce/bribe another N (say 1000) humans to follow his bidding.
A BadOutcome is said to be MuskSupreme if it could be accomplished by the superintelligent AI, but not by the suddenly-evil Musk and his accomplices.
Obviously[citation needed] it is only the MuskSupreme BadOutcomes we care about. Do there exist any?
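In pseudo-code, the criterion is just a set difference. A minimal sketch (both predicates here are hypothetical placeholders for judgment calls, not anything we actually know how to compute):

    # Sketch of the Musk supremacy criterion. achievable_by_asi and
    # achievable_by_musk_plus_n are hypothetical stand-ins for human judgment.
    def achievable_by_asi(outcome: str) -> bool:
        ...  # judgment call, not computable

    def achievable_by_musk_plus_n(outcome: str, n: int = 1000) -> bool:
        ...  # judgment call, not computable

    def is_musk_supreme(outcome: str, n: int = 1000) -> bool:
        # MuskSupreme: the superintelligent AI could pull it off, but a
        # suddenly-evil Musk with n human accomplices could not.
        return achievable_by_asi(outcome) and not achievable_by_musk_plus_n(outcome, n)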
For example, 1000 people (but only if you get to choose which 1000) are sufficient to take absolute control of both the US Congress and the Russian State Duma, which together have 985 seats, or a supermajority of those two plus the Russian Federation Council. That gives them the freedom to pass arbitrary constitutional amendments… so your scenario includes "gets crowned King of the USA and Russia, with 90% of the global nuclear arsenal now their personal property" as something we don't care about.
> As long as AIs don't play golf and don't have fathers, we are quite safe.
Until it becomes "who you exchange bytes most efficiently with" and all humans are at a disadvantage against a swarm of even below-average-intelligence AGI agents.
Because, as unlikely as it is, if we're discussing risk scenarios for AI getting out of hand, then a monolithic superintelligence is just one of the possibilities. What about a swarm of dumb AIs that are nonetheless capable of reasoning and decision-making and become a threat?
That's pretty much what we did. There's no superintelligent monkey in charge, material or otherwise, as much as some have tried to pretend. There are just billions of average-intelligence monkeys, and we overran all of Earth's ecosystems in a matter of centuries, which is neither trivial nor fully explained yet.
The difference is that we have 100% complete control of these AIs. We can just go into the power grid substation next to the data center and throw the big breaker, and the AI ceases to exist.
When humans developed, we did not displace an external entity that had created us and that had complete power to kill us all in an instant.
Look at the measures that were implemented during covid. Many of them were a lot more extreme than shutting down datacentres, yet they were aimed at mitigating a risk far short of "existential".
That data is in fact orthogonal to my point, for two reasons:
1. When we are talking about wealth and power that can actually influence the quality of many other people's lives, we are talking about way less than 0.01% of the population. Those people aren't covered in this survey, and even if they were, it would be impossible to identify them on an axis spanning 0-100%.
2. Your linked article talks about income. People with significant wealth and power frequently have ordinary or below-ordinary income, for tax reasons.
Actually, it will have the opposite effect, at least in the short term.
People who own high-value assets (everything from land to the AI itself) will continue to own them, and there will be no opportunities for others to earn their way up (because they can be replaced by AI).
"The logical endpoint is the subjugation or elimination of our species"
Possibly, but it would be by our species (those who own and control the AI) rather than by the AI.
I would venture to say that transhumanism will be the path and goal of the capital class, as that will be a tangible advantage potentially within their grasp.
I suppose then that they would become “Homo sapiens sapiens sapiens” or some other similarly hubris-laden label, and go on to abandon, dominate, or subjugate the filthy hordes of mere Homo sapiens sapiens.
No, they are not. Pretty much everyone in the x-risk community recognizes the existence of short-term, mundane harms as well. The community has been making these predictions for over a decade, since long before it was anything other than crazy talk to most people.
Google has a big investment in reducing AI bias (remember Gemini got slammed for being “too woke”). Altman is a big proponent of UBI. Etc.