Well, there is not much to say about it, and that is the crazy part. An AI autonomously running a comment society, and it is a non-event. Soon they might give birth and leave Earth and we will be like: "so what?"
It is the same pattern: late on VR, late on AI. Those two technologies have a pricing problem. I would guess that Apple is working to create the conditions to make them cheap enough to sell to everyone.
Deleting the partition is a good strategy to commit yourself. It might take some effort to get back to your productivity (and autonomy) levels, but then you will exceed them.
I like the chess analogy, as it answers the question: why can't I see those gains?
To address your point, let's try another analogy. Imagine secretarial assistants in the 80s, discussing their risk of being replaced by computers. They would think: someone still needs to type those letters, sit next to that phone, and make those appointments. I am safe. Computers won't replace me.
It is not that AI will do all of your tasks and replace you. It is that your role as a specialist in software development won't be necessary most of the time (someone will still do that work, and that person won't call themselves a programmer).
Secretarial assistant as a profession is still very much alive, and the title has been inflated to stratospheric heights (and compensation): "Chief of Staff".
If it worked, I would agree. AI helped me find bugs in a hashing function today; OK, nice. But it took 3 hours before I got any result out of it, and I have 13 years of experience.
My feeling is that newbies are creating todo lists with React, copied straight from someone's tutorial they didn't bother to read, and now they feel powerful. But hey, let them do our taxes then! They'd get screwed in zero seconds.
I imagine if they had tried to replace typists with keyboards that produced plausible-looking words that were entirely wrong half the time, then we'd probably still have plenty of typists.
I tend to find that the volume of automation predictions inversely correlates with how real they are.
When capitalists actually have the automation tech, they don't shout about it; they just do it quietly and collect the profits.
When, say, Bezos is worried about his unionizing workforce and wants to intimidate it - that's when the hot takes and splashy media articles about billions invested in automation "coming for yer jerb" get published.
Both can be true at the same time: some teams are spending a fortune on AI, and the AI investments won't get the expected ROI (bubble collapse). What is sure is that a lot of capacity is being built, and that capacity won't disappear.
What I could see happening in your scenario is that the company suffers from diminishing returns as every task becomes more expensive (new features, debugging sessions, library updates, refactoring, security audits, rollouts, infra costs). They could also end up with an incoherent, gigantic product that doesn't make sense to their customers.
Both pitfalls are avoidable, but they require focus and attention to detail: things we still need humans for.
> What is sure is that a lot of capacity is being built and that capacity won't disappear.
They really are subsidizing what will be an incredibly healthy used server equipment market in a year or two. Can’t wait. My homelab is going to be due for an upgrade.
Your response contains a performative contradiction: you are asserting that humans are naturally logical while simultaneously committing several logical errors to defend that claim.
The commenter's specific claim (that adding a note about the definition of "if" would solve the problem) is a moving-the-goalposts fallacy and a tautology. The comment also suffers from hasty generalization (in their experience the test isn't hard) and special pleading (a double standard for LLMs and humans).
When someone tells you "you can have this if you pay me", they don't mean "you can also have it if you don't pay". They are implicitly but clearly indicating you gotta pay.
It's as simple as that. In common use, "if x then y" frequently implies "if not x then not y". Pretending that it's some sort of cognitive defect to interpret it this way is silly.
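The disagreement is really between logical "if" (material implication) and the everyday biconditional reading. A minimal truth-table sketch (hypothetical helper names, just for illustration) shows the two only disagree in one case: when the condition is false but the consequence happens anyway.

```python
from itertools import product

def material(x, y):
    # Logician's "if x then y": only false when x holds and y doesn't.
    return (not x) or y

def biconditional(x, y):
    # Everyday reading: "y exactly when x", i.e. also "if not x then not y".
    return x == y

for x, y in product([True, False], repeat=2):
    print(f"x={x!s:5} y={y!s:5} material={material(x, y)!s:5} "
          f"biconditional={biconditional(x, y)}")
```

The rows agree everywhere except x=False, y=True ("you didn't pay, but got it anyway"), which material implication counts as true and the everyday reading counts as false; that single row is the whole dispute.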
> Decoding analyses of neural activity further reveal significant above chance decoding accuracy for negated adjectives within 600 ms from adjective onset, suggesting that negation does not invert the representation of adjectives (i.e., “not bad” represented as “good”)[...]
From: Negation mitigates rather than inverts the neural representations of adjectives
The facts are in the PISA data collected by the OECD. If you drill down by subpopulation, the majority group in the U.S. goes toe to toe with the majority groups in Asian countries, and beats the majority groups in Western European countries: https://www.reddit.com/media?url=https%3A%2F%2Fpreview.redd....
National competitiveness and distributional equity don’t go hand in hand. China has made tremendous achievements by focusing investment on key provinces instead of trying to bring everyone up together.