The point of the supply chain risk designation was not just to have the DoD stop using Anthropic; they could have done that by simply cancelling the contract. The intended effect was to force every company that sells to the US government, no matter how indirectly, to stop using Anthropic in any way, which would effectively destroy Anthropic because almost every company is in the supply chain (mine is, for example: https://calaveras.ai/, because we sell to AI companies who in turn sell to the DoD).
The military is using Palantir's Maven Smart System, which uses Claude, to identify targets to attack.
From here[1]:
> The targets for Operation Epic Fury were identified with the aid of the National Geospatial-Intelligence Agency’s Maven Smart System, which folds in data from surveillance and intelligence, among other data points, and can lay out the information on a dashboard to support officials in their decision-making.
> Maven, created by Palantir, has been coupled with Anthropic’s Claude, a large language model that can vastly speed up that processing.
And here[2], it's still being used despite being "banned":
> But given the government’s extensive use of the company’s chatbot Claude during its deadly offensive in Iran, it’s clearly having trouble making do without it. As The Washington Post reports, the US military is extensively using Palantir’s Maven Smart System in the conflict, which has had Anthropic’s Claude chatbot integrated since 2024.
> Last week, the Wall Street Journal first reported on the Pentagon’s use of Claude to select attack targets in Iran, hours after the White House announced its ban.
> According to WaPo‘s sources, the system spits out precise location coordinates for missile strikes and prioritizes them by importance. Maven was also used during the US military’s invasion of Venezuela and the kidnapping of its president, Nicolás Maduro.
> Central Command is “heavily using” the Maven system, Navy admiral Liam Hulin told WaPo.
> Military commanders told the newspaper that the military will continue using Anthropic’s tech, regardless of the president ordering them not to, until a viable replacement emerges.
> Fun fact: Palantir is powered entirely by Claude
Haha what? OpenAI has been in bed with Palantir, and Palantir has used their models, since before Anthropic was even a thing. Claude was presumably just picked because they considered it the strongest at the task at that point in time.
The pushback isn't that they use Anthropic; it's that you stated they use it "entirely", which is not true.
Yes, Anthropic is a priority model provider in their ecosystem, and they are deeply embedded with both tech and staff, but they are not the only one, as indicated and sourced in my reply above.
This is not really possible. My guess is that the government is not willing to spend the necessary quantity of money to get e.g. Amazon or Google to divest of Anthropic and stop providing them computing resources.
The point is that if DoD's supply chain restriction does what Hegseth seems to want, all contractors involved with Anthropic would have to divest. That includes Amazon and Google, who are both DoD contractors who provide massive quantities of capital and compute to Anthropic. It's irrelevant that Anthropic provides Claude through Palantir.
I'm not sure that's how the supply chain risk thing works. AFAIK, it has to be part of the supply chain for the products delivered to the DoD to count. I don't think just because Amazon is unrelatedly involved with Anthropic, this forces them to sever that relationship. I'm not sure if Hegseth thinks otherwise, but it's entirely possible that he is wrong or that being wrong is expedient to his threats.
I believe you are correct, but they could still weaponize it by requiring contractors to document proof that they aren't using Anthropic products, and they can drag that out as long as they want.
How would they implement such a policy? Amazon, Google, etc. aren't realistically going to terminate all business with Anthropic based on an informal policy that the DoD won't write down.
Same way they already pressure these companies: remove their access to the administration, thus giving them unfavorable terms on other issues compared to their rivals, and tell them as much in private, along with what they can do to rectify it. That's this admin's whole modus operandi, is it not? There's a reason all the CEOs clamor to go to the relevant WH events.
A CEO's time isn't that valuable. Even if you count an amortized fraction of their total compensation, sending them to a White House event for an evening is orders of magnitude less costly than giving up access to the best software development tools.
I think you have to add in the cost of the PR toxicity of being so closely associated with Trump, though. Most of these guys are from the prime liberal subculture of America, even if in private they lean another way. Traditionally they never expressed so much praise or support for one president over another, but with Trump it seems to be the price of entry to get in on e.g. AI discussions around regulation or funding. Musk is arguably a player in the space but wasn't involved, due to some falling out with Trump.
No you don’t understand, they can’t accomplish the same by an informal policy.
Both Google and Amazon are government contractors. With the designation, they might have had to divest their positions in Anthropic and be unable to serve their models.
> I'm not sure that's how the supply chain risk thing works. AFAIK, it has to be part of the supply chain for the products delivered to the DoD to count. I don't think just because Amazon is unrelatedly involved with Anthropic, this forces them to sever that relationship. I'm not sure if Hegseth thinks otherwise, but it's entirely possible that he is wrong or that being wrong is expedient to his threats.
This whole event was precipitated by Palantir using Claude in the Maduro raid. News of this surprised Anthropic, leading them to ask questions and perhaps suggest in private discussions that they took issue with it and wanted to introduce more posttraining limits on the ways the department used their model. This has been widely reported, and I don’t think anyone is really disputing it.
If that’s true, then what you’re suggesting is absurd, because it’s not enough for the Pentagon to merely stop contracting with Anthropic; that was never the problem in the first place under their risk model. Their problem was that they had a prime contract with Palantir for its wargaming service, and Palantir subcontracted with Anthropic as an LLM provider. So if the DoD ceased to contract with Anthropic directly, it would have no impact on the risk that Anthropic’s new posttraining limits potentially posed to their mission, insofar as they rely on Palantir and its services, and there would be nothing preventing Palantir from continuing to contract with Anthropic.
I have to ask: what other tool do you think they have to protect themselves from this? You can argue that these guardrails from Anthropic are useful and important and the DoD should just accept them, and that’s fine, but it really is (and ought to be) the department’s decision whether they’re comfortable with that or not. It’s their call. They have access to information on our adversaries that the public doesn’t, and they’re the ones responsible when lives are lost. And if they’re not comfortable trusting service members’ lives to a specific post-trained Opus 4.6 model, I’m not sure what other avenue they have to solve that problem across their entire prime contracting space other than a supply chain risk designation.
Any sort of backroom dealing where they whisper off the record to defense CTOs that they have a problem with Anthropic’s leadership and would prefer that they sub out to OpenAI or Gemini instead for LLM services would be totally illegal and a violation of procurement law. So they definitely can’t do that. A supply chain risk designation is the only real tool they have to single out one company.
One thing worth noting: Anthropic is a PBC, a corporate structure that makes it relatively unaccountable to traditional profit motives. But those traditional profit motives are precisely the carrot the DoD relies on dangling in front of the industry to motivate companies toward its mission. Traditional for-profit companies are led by people who have a fiduciary responsibility to maximize profit by serving the government. The entire procurement process relies on companies being motivated by profit and competing through bids. But PBCs are specifically designed to remove that incentive structure from their decision making, which makes them entirely unlike the other major defense contractors, which are publicly traded and can be held legally responsible by shareholders for putting personal beliefs above increasing shareholder value. That sounds like… exactly the kind of thing you don’t want in your military supply chain.
> Any sort of backroom dealings where they whisper off the record to defense CTOs that they have a problem with anthropics leadership and would prefer that they sub out to OpenAI or Gemini instead for LLM services would be totally illegal and a violation of procurement law. So they definitely can’t do that.
It doesn't seem to me that they'd be subject to any kind of effective enforcement.
The entire article talks about “guessing” the bucket name as being the attack enabler, not the leaking of it. What does the landscape look like once you start doing the basics like hashing your bucket names? Is this still a problem worth engineering for?
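For illustration, here's a minimal sketch of what "hashing your bucket names" could look like; the prefix, project label, and secret below are all hypothetical, not anything from the article:

```python
import hashlib
import hmac

def bucket_name(project: str, secret: str) -> str:
    """Derive a hard-to-guess bucket name from a project label and a
    deployment-wide secret. Using HMAC (rather than a plain hash) means
    the name can't be brute-forced from the project label alone.
    Truncated so the result stays within S3's 63-character limit."""
    digest = hmac.new(secret.encode(), project.encode(), hashlib.sha256).hexdigest()
    return f"assets-{digest[:32]}"
```

The name stays deterministic, so infrastructure code can re-derive it, but it's unguessable without the secret. Of course, bucket policies should remain the real access control, since names can still leak through logs and URLs.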
This seems like a really poorly thought out article. You should take more care to make sure your understanding is correct before publishing in the future.
Taking the Amazon example in Part 2:
For e-books (simpler), Amazon gets 30% for running the store, doing advertising, etc. and then authors get 70% [1].
For print books I'm a little less clear, but it appears Amazon buys the books for roughly 50% of list[2], which for Hachette in 2025 is $26.50, so Amazon pays $13.25 to the publisher and then retails the book for $14.84. So for every $100 of books sold on Amazon, $89 goes to the publisher and $11 goes to Amazon. It appears that the cost to produce these books is maybe $2/book (though I'm very unsure; this is a guesstimate from public data) and the rest flows back to authors, advances, etc.
Amazon.com (not AWS) has a 7% profit margin in North America (FY25), so of that $11 they get in revenue they get $0.77 in operating profit.
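Sanity-checking the split above with the quoted figures (the ~50%-of-list wholesale rate is the rough estimate from [2]):

```python
list_price = 26.50             # Hachette list price, 2025
wholesale = 0.50 * list_price  # what Amazon reportedly pays the publisher
retail = 14.84                 # Amazon's selling price

# Split of each $100 of books sold at Amazon's retail price:
publisher_cut = 100 * wholesale / retail
amazon_cut = 100 - publisher_cut
print(round(publisher_cut), round(amazon_cut))  # prints 89 11
```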
OK, and this also annoyed me: you say $1.7T/y is $10.5k/worker, which is accurate, but then you say for the average household it's $26k/y. This is not true. There are 134M households in the US [3], so it's about $12.6k/y for the average household. Maybe you meant something else, like the median household, but it seems more likely you assumed ~2.6 people/household and multiplied the cost/worker by the number of people/household. This is obviously wrong, and you should have caught errors like that before publishing.
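The arithmetic, as a quick check (numbers from the figures quoted above):

```python
total = 1.7e12        # $1.7T per year
per_worker = 10.5e3   # claimed $10.5k/worker
households = 134e6    # ~134M US households [3]

implied_workers = total / per_worker
per_household = total / households
print(round(implied_workers / 1e6))  # prints 162 (million workers, a plausible labor force)
print(round(per_household))          # prints 12687, i.e. ~$12.6k/household, not $26k
```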
I think Cosmo's refutations were mostly not very useful and based on misunderstandings of what I was trying to say. This is fine and we discussed it prior to their article being published.
The point I was trying to make with "RL is only necessary once" is that you can embark on a single self-play loop getting better and better, and this will get you to something close to the frontier. Once you're at the frontier, the frontier doesn't move very much, so you have quite a while (decade?) where it's totally fine to distill from the RL games.
On correction histories -- imo I correctly described what they do. Cosmo was annoyed by the word "adapt" but what I described was the adaptation.
On SPSA -- you don't have a gradient! You don't do backprop! That's what I was trying to get at.
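To make that concrete, here is a minimal SPSA sketch (my own toy illustration, not code from either article): each update needs only two loss evaluations along a shared random sign perturbation, with no analytic gradient and no backprop anywhere.

```python
import random

def spsa_step(theta, loss, a=0.1, c=0.1):
    # Simultaneous Perturbation Stochastic Approximation: probe the loss
    # at theta +/- c*delta for one random sign vector delta, and use the
    # finite difference as a gradient estimate for all coordinates at once.
    delta = [random.choice([-1.0, 1.0]) for _ in theta]
    plus = loss([t + c * d for t, d in zip(theta, delta)])
    minus = loss([t - c * d for t, d in zip(theta, delta)])
    g = (plus - minus) / (2 * c)  # shared scalar; per-coordinate sign comes from delta
    return [t - a * g * d for t, d in zip(theta, delta)]

# Toy usage: minimize a quadratic without ever computing its gradient.
random.seed(0)
f = lambda v: sum(x * x for x in v)
theta = [1.0, -2.0]
for _ in range(200):
    theta = spsa_step(theta, f)
```

Note the contrast with backprop: the loss is treated as a black box, evaluated twice per step, which is why SPSA works even when no gradient exists or is accessible.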
Iceland is a tiny country with unusual amounts of energy. Not all renewable sources are the same -- hydropower is fairly reliable too, for example -- but Iceland is just not a useful example for the whole world. The largest geothermal plant in the world by far is in California, but it's a small portion of our total energy use so no one cares. https://en.wikipedia.org/wiki/The_Geysers
You can locate an aluminum plant pretty much anywhere you want, as the energy required to make aluminum is large compared to the cost of mining/shipping bauxite. This solves the main problem with geothermal, which is that it's in random locations around the world that don't necessarily have many people living there.
Any place with significant volcanic activity (e.g. Hawaii) could probably do geothermal power if they wanted to.
Hawaii did do geothermal, but in fact it's so geothermally active that their main geothermal plant went offline for a while because lava got shot up their boreholes: https://en.wikipedia.org/wiki/Puna_Geothermal_Venture
I'm not really sure but my recollection from talking to them in 2019 was that it was quite difficult to get features shipped because of e.g. hacking risk.
It's certainly true that iOS's strict sandboxing and aggressive resource management probably made life harder for them, but that doesn't excuse the lack of deep integration for 1p automation. That's the kind of stuff AppleScript allowed two decades prior without any background runtime.
Taking into account all the impacts on society, Uber is a substantial improvement on what came before. Sometimes laws are bad, and it is good when you break them.