A new M4 Air is now $799 at Amazon, and a new M1 Air is $599 at Walmart. So $999 isn't really the starting price if you spend a minute searching outside Apple's Online Store.
This rounded-corner change feels very off. Since Apple uses that same radius across all its products (software and hardware), it could be signaling a broader upcoming shift in their hardware, perhaps driven by industrial-design needs for future AR/VR/MR glasses.
Not OpenAI, but Anthropic CPO Mike Krieger said in response to a question about how much of Claude Code is written by Claude Code: "At this point, I would be shocked if it wasn't 95% plus. I'd have to ask Boris and the other tech leads on there."
> During take-home assessments: Complete these without Claude unless we indicate otherwise. We’d like to assess your unique skills and strengths. We'll be clear when AI is allowed (example: "You may use Claude for this coding challenge").
> During live interviews: This is all you - no AI assistance unless we indicate otherwise. We’re curious to see how you think through problems in real time. If you require any accommodations for your interviews, please let your recruiter know early in the process.
He'd have to ask yet did not ask? A CPO of an AI company?
TFA says "How Anthropic uses AI to write 90-95% of code for some products and the surprising new bottlenecks this creates".
for some products.
If it were 95% of anything useful, Anthropic would not still have >1000 employees, and the rest of the economy would be collapsing, and governments would be taking some kind of action.
> If it were 95% of anything useful, Anthropic would not still have >1000 employees
I don't think firing people follows logically from 95% of code being written by Claude Code. There is a big difference between (1) AI autonomously writing code and (2) developers just finding it easier to prompt changes rather than typing them out manually.
In case (1), you have an automated software engineer and may be able to reduce your headcount. In case (2), developers may just be slightly more productive, or may simply enjoy writing code with AI more, but the coding is still very much driven by the developers themselves. Right now, Claude Code shows signs of (1) for simple cases, but mostly falls into the (2) bucket.
I don't doubt it, especially in an organization focused on building the most effective tooling possible. I'd imagine they use AI even when it isn't optimal, because they are trying to build experiences that will let everyone else do the same.
So let's take it at face value and say 95% is written by AI. When you free one bottleneck you expose the next. You still need developers to review the code to make sure it's doing the right thing. You still need developers to translate the business context into instructions that make the right product. You have to engage with the product. And you still need to architect the system: limited context windows mean tasks can't just be handed off to AI wholesale.
So the role of the programmer changes - you still need technical competence, but in service of the judgement call of "what is right for the product?" Perhaps there's a world where developer and product-management roles merge, but I think we will still need the people.
Been using Claude Code almost daily for over a month. It is the smartest junior developer I've ever seen: it can spew high-quality advanced code and, with the same confidence, spew utter garbage or over-engineered crap; it can confidently tell you a task is done and passing tests while there are glaring bugs in it; it will happily introduce security bugs if that's a shortcut to finishing something. And sometimes it will just tell you "not gonna do it, it takes too much time, so here's a TODO comment". In short, it requires constant supervision and careful code review - you still need experienced developers for this.
Weasel words. No different than Nadella claiming 50%.
When you drill in you find out the real claims distill into something like "95% of the code, in some of the projects, was written by humans who sometimes use AI in their coding tasks."
If they don't produce data, show a study, or offer other compelling evidence, don't believe the claims; it's just marketing, and marketing can never be trusted, because marketing is inherently manipulative.
It could be true; the primary issue here is that it's the wrong metric. I mean, you could write 100% of your code with AI if you were basically telling it exactly what to write...
If we assume it isn't a lie, then given current AI capabilities we should assume that AI isn't being used in a maximally efficient way.
However, developer efficiency isn't the only metric a company like Anthropic would care about; after all, they're trying to build the best coding assistant with Claude Code. So for them, understanding the failure cases, and the prompting needed to recover from those failures, is likely more important than the raw lines of code their developers produce per hour.
So my guess (assuming the claim is true) is that Anthropic are forcing their employees to use Claude Code to write as much code as possible to collect data on how to improve it.
This is classic marketing speak. Plant the idea of 95+% while in actuality making no hard claim about the percentage. It could just as well be 0 or 5%.
It’s worth pointing out that the statement is about how much of Claude Code is written with it and not how much of the codebase of the whole company. In the more critical parts of the codebase where bugs can cause bigger problems, I expect a lot less code to be fully AI generated.
Standard CxO mentality. “I think the facts about our product might be true, but I won’t say it, because the shareholders and the SEC will hang me when they find out it’s bullshit.” Then defer to the next monkey in the circus. By which time the tech press, which seems to have a serious problem with literacy and honesty (gotta get those clicks), extrapolates it for them. Then analysts summarise those things as projections. Urgh.
The other tactic is saying two unrelated things in a sentence and hoping you think it’s causal, not a fuck up and some marketing at the same time.
In the year 2025 the primary job function of all C-level execs is marketing. Which is to say, he probably doesn't know the actual number, doesn't care, and is just saying what he knows the "right" answer should be.
This guy is so full of shit. Anthropic’s leadership are all talk and hype at this point. And they’re not the only ones guilty of this in this hype cycle by far.
I don't really think Meta ever had a vision beyond "Facebook is a social network to connect people". Since then, their strategy has primarily been driven by fear of being left behind, or of losing the next platform war. Instagram, WhatsApp, Threads, VR, AR, and now AI: none of these were driven by vision so much as by fear of someone else opening the door to a new market that renders Meta obsolete. They are good at executing and capturing the first wins, but not at innovating, redefining a market, or pushing the frontier forward, which is why they eventually get stuck, lose direction, and fall behind (TikTok, Apple Vision Pro, AI).
Yes, but they’ve definitely made a big contribution to AI / LLMs. I just don’t understand how they plan to monetize it, apart from “better AI integration inside their own products”.
Are they planning to launch a ChatGPT competitor?
It seems like this acquisition is focused on technology, but what’s the product vision?
Who will be responsible for figuring out what AI features to build? I think it is reasonable to look into it seriously, from the point of view of "can we disrupt ourselves before being disrupted?" This doesn't mean putting a significant engineering team behind it, but it does mean putting significant effort into figuring out what you could build, and what the ROI on that would be.
You have two outcomes from this: either you find a disruptive AI angle and move a sufficiently large part of your team to it, or you don't, but you figure out the minimal effort that satisfies the "investor positioning" angle. The third option, doing nothing or aggressively pushing back against AI and the CEO's desire, could lead to no Series C, or a down round, which is something that you, your CEO, and your customers would not like.
In each of the seven 2024 swing states, the winner led by <1% on average. So what good are these polls if the results are going to be within their margin of error?
They need to either find a more accurate way, or... give up!
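(For a sense of the scale of that noise, here is a back-of-envelope sketch using the textbook simple-random-sample formula; real pollsters layer weighting and house effects on top of this, so treat the numbers as illustrative only. The 95% margin of error on a single candidate's share is about z·sqrt(p(1-p)/n), and the error on the gap between two candidates is roughly twice that.)

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error on one candidate's share,
    for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (500, 1000, 2000):
    print(f"n={n}: share +/-{margin_of_error(n):.1%}, "
          f"gap +/-{2 * margin_of_error(n):.1%}")
# n=500: share +/-4.4%, gap +/-8.8%
# n=1000: share +/-3.1%, gap +/-6.2%
# n=2000: share +/-2.2%, gap +/-4.4%
```

(So a <1% lead in any single poll genuinely is statistically indistinguishable from a tie; the question is what you do with that information.)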
What they're good for is telling you that things are close. A tied poll or a 50-50 model tells you that if you believe it's 99% to go one way, you're probably overconfident, and should be more prepared for it to go the other way.
I cared about the result, because it was going to decide whether I settled down in the US or whether I wanted to find a different place to live. And because I paid attention to those polls, I knew that what happened was not particularly unlikely. I prepared early.
A lot of people I know thought it couldn't happen. They ignored the evidence in front of them, because it was distasteful to them (just as it was to me). And they were caught flat-footed in a way that I wasn't.
That's not the benefit of hindsight: I brought receipts. You can see the 5,000 equally-likely outcomes I had at the start of the night (and how they evolved as I added the vote coming in) here: https://docs.google.com/spreadsheets/d/11nn9y9fusd-6LQKCof3_... .
We had a pretty weird year in general. Harris did badly across most safe states but seemed to do much better than her average in swing states (not enough to win them, but much better than she did in non-competitive states).
Many election models rely heavily on historical correlation. States like OH and IN might vote quite differently but their swings tend to be in the same direction.
The weirdness this year (possibly caused by the Harris campaign having a particularly strong ground game in swing states) definitely challenged a lot of the baked-in assumptions of forecasts.
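(To make the correlation point concrete, here is a toy Monte Carlo sketch; every number is invented for illustration and is not any real model's input. When state-level polling errors are assumed to be correlated, states tend to miss in the same direction, so split outcomes become much rarer than under independence.)

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy inputs (made up): polled Dem margins in two historically
# correlated states, and a per-state polling error of 3 points.
polled_margin = np.array([1.0, 0.5])  # percentage points
error_sd = 3.0

for corr in (0.0, 0.8):
    cov = error_sd**2 * np.array([[1.0, corr],
                                  [corr, 1.0]])
    sims = rng.multivariate_normal(polled_margin, cov, size=100_000)
    wins = sims > 0                    # True where Dem carries the state
    split = np.mean(wins.sum(axis=1) == 1)
    print(f"corr={corr}: split-outcome probability {split:.0%}")
# Roughly: corr=0.0 gives ~48% splits, corr=0.8 gives ~20% splits.
```

(A year where the swing states decouple from the safe-state trend breaks exactly this kind of baked-in assumption, which is part of why 2024 was hard to model.)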
I see this as a combination of three forces at play: AI, WFH, and skillset--all adding downward pressure on hiring talent in the U.S.:
1) While A.I. may now be adding only 10-20% in productivity gains, the rapid pace of improvement leaves open the possibility that the gains will soon be much larger. So, instead of scaling your company now, if you can afford to, wait a bit and see where this goes.
2) Even though much of BigTech is clawing back WFH, startups aren't as much. And once you introduce WFH into your culture and processes, it is hard to justify paying $200K/year for an engineer when it can cost you a fraction of that (possibly 20-50%) to hire remotely from another country, especially now that most of these remote employees are more than willing to work in EST/PST timezones. This was true before COVID too, but now many more startups have accepted and adapted to the idea of WFH.
3) While advanced skillsets and deep experience are necessary in many (but not most) startups, and while these skills are harder to find in India or Pakistan, the reality is that for many, many tech companies, most of the work doesn't require top-notch skills. You don't need 99th-percentile frontend engineering skills for a 1-year-old "name whatever category" app. And with the recent focus on profitability and frugality, and the difficulty of fund-raising, being cognizant of cost per hire is now a thing.
I think Elon and Vivek's comments are more nuanced than they are given credit for. Elon, given he's at the cutting edge of engineering, must be having difficulty hiring 99.9th-percentile talent against BigTech, and wants to open up the pool of that kind of talent from elsewhere. I don't think he wants H1Bs for React Native engineers. I interpret his comments as "I want to suck all the A.I. researchers into America".
H1B has been around for a while now. It takes no more than a moment of original research to see that it's largely used for junior roles, and by consulting/outsourcing houses that charge a lot, pay little, and deliver nothing.
> I think Elon and Vivek's comments are more nuanced than they are given credit for.
If they are, they have the platform to provide that nuance. Take a look at the public H1B data for Tesla (disclaimer: it doesn't tell the full story); it does not look like they are vying for the 99.9th percentile.
It seems odd we're giving billionaires the benefit of the doubt.
They are positioning themselves to win, and that's totally fine in the system we're in, but let's not assume they are friends of the working class.
> 3) While advanced skillsets and deep experience are necessary in many (but not most) startups, and while these skills are harder to find in India or Pakistan, the reality is that for many, many tech companies, most of the work doesn't require top-notch skills. You don't need 99th-percentile frontend engineering skills for a 1-year-old "name whatever category" app. And with the recent focus on profitability and frugality, and the difficulty of fund-raising, being cognizant of cost per hire is now a thing.
a. Note that "outside of the US" covers more than India and Pakistan. Google, Microsoft, Meta, etc. all have sizeable research or R&D centers in France, Germany, Switzerland, Ireland, UK, etc. Most of these countries have engineers of a level comparable (better by some metrics, worse by others) to US engineers.
b. I've known several top-notch programmers from India. One of them is an important contributor to the Linux kernel, another to the core of Firefox. I have no clue how common that is, but be wary of stereotypes.
Tesla wasn't paying as much as the big tech companies, which meant he didn't have access to that top 1%. By opening the door to more H-1B visas, he could ideally flood the market with international candidates and attract higher skills at a lower cost.
While this approach is self-serving, it makes sense. He could acquire that top talent today if he was willing to pay for it—people would leave their current jobs for a pay upgrade. But he's not willing to do that. So, he needs more candidates.
If someone is good, then they are able to compete for more highly paid positions and therefore aren't working for 20% of the salary.
So in the end you shoot yourself in the foot, especially in startups, where crappy code slows your team to a snail's pace as the codebase becomes a tangled mess of spaghetti. Then, once it does, you end up hiring the expensive guys as consultants to claw back to what you could have avoided in the first place. And in the meantime you have to hope you haven't had any major security issues...
> Well, there are quite a lot of rumors and stigma surrounding COBOL. This intrigued me to find out more about this language, which is best done with some sort of project, in my opinion. You heard right - I had no prior COBOL experience going into this.
I hope they write an article about any insights they gained. Like them, I've heard these rumors and stigma, and would be intrigued to learn what a newcomer to COBOL encountered while implementing this rather complex first project.
> One of the rumoured stigmas is that the object-oriented flavour of COBOL goes by the unwieldy name of ADD ONE TO COBOL YIELDING COBOL.
Which is a joke. OO support is not an extension; it has been part of the COBOL standard itself since COBOL 2002. The COBOL standards committee began work on the object-oriented features in the early 1990s, and by the mid-1990s some vendors (Micro Focus, Fujitsu, IBM) were already shipping OO support based on drafts of the COBOL 2002 standard. Unfortunately, one problem with all the COBOL standards since COBOL 85 (2002, 2014 and 2023) is that no vendor ever fully implements them. In part that is due to lack of market demand; in part it is because NIST stopped funding its freely available test suite after COBOL 85, which removed a lot of the pressure on vendors to conform to the standard.
Algol 68 actually isn't too bad a language to work with, and a modern interpreter is readily available. Unfortunately, it lacks any support for reading and manipulating binary data, so I think a Minecraft server would be nearly impossible.
And then there is the whole DoD security assessment of Multics versus UNIX, in which PL/I played a major role versus C, so the compiler evidently worked correctly enough.
Just this week we're discussing a VC++ miscompilation on Reddit.
IBM is still building and maintaining its PL/I compiler for z/OS today, though it is only compliant with the specs up to 1979; the '87 ISO standard is only partially adopted.
I get the distinct feeling it's been a long time since IBM wrote PL/I compilers with anyone but IBM in mind. So 'correct' here might mean 'what IBM needs'. YMMV.
The title and discussion in the linked article describe this as "worse after firmware update", but the title here suggests Apple removed the feature entirely.
On my end, I've recently found myself having to keep checking whether it was on or not. So I'm one more person experiencing some issue here.
Having been involved in such projects at the corporate level, I do agree that driving efficiency and reducing costs, even when everyone wants and tries to do so, is rather difficult and very slow to implement. However, once you go down that track and commit to it, a major benefit is that it raises awareness of the importance of frugality and efficiency, and future projects do become more efficient. So even if you cannot make a huge dent like $2T, it could help slow the growth of the deficit.
While mostly true on the spending side, the deficit arises because we lack a rule requiring revenue to match spending, and because we can't agree on what constitutes a fair distribution of responsibility for raising that revenue.