Companies are getting desperate to show AI adoption, because right now the numbers just don’t add up.
Not surprisingly, companies are willing to get into bed with more and more questionable use cases if it helps show some desperately needed AI adoption revenue.
“Demand” is mostly their training of models, which they’ve yet to demonstrate is a profitable business.
Just because you’re struggling to get raw materials for your business doesn’t make it a good business. Without strong enterprise adoption ASAP (which is what’s seriously suffering) things are going to hit the fan real quick.
With respect, I don't think you've used the latest models or seen Anthropic's hockey-stick-like enterprise revenue numbers. They are so busy outfitting the Fortune 500 that you can't even get someone in sales to respond to emails. I've been waiting for months, and so have others.
This will sound snarky, so forgive me, but I honestly don't know the answer. Is this actually true? Is there a reliable source containing statistics on LLM compute usage that includes training vs inference for the whole market?
I don’t understand why people don’t just use Gemini or some other AI web search to get an answer to these kinds of questions quickly. (I’ve excluded the sources; you can get them if you ask the same question.)
> While AI training is often the most intense and expensive process for a single model, the majority of total AI compute usage (approximately 90%) is used for inference.
> Here is the breakdown of why this is the case:
> Inference as a High-Volume Activity: Inference occurs every time a user interacts with an AI model (e.g., asking ChatGPT a question, using image recognition, or generating code). While a model is trained once (or updated infrequently), it runs millions or billions of inferences continuously.
> Cost Scaling: Training is a massive, one-time upfront cost, while inference is an ongoing, daily operational cost. As the number of AI users grows, the demand for inference compute scales faster than the need for training new, large models.
> The Shift to Efficiency: While early AI hype focused on the immense compute needed for training, the industry has shifted toward making inference cheaper and faster through specialized hardware and techniques like optimization, quantization, and small language models (SLMs).
And I finally figured out how to get links to answers instead of just inlining the content as before. Anyway, there it is. We live in a time where questions like "Does inference or training use more compute?" can be answered quickly by just pasting them into a search box.
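For what it's worth, the cost-scaling point in that answer is easy to sanity-check with a toy model. Every number below is hypothetical and chosen only for illustration (rough orders of magnitude for a frontier-scale model), not a claim about any real provider:

```python
# Toy model: training is a one-time cost, inference grows with usage.
# All constants are hypothetical, for illustration only.
TRAINING_FLOPS = 2e25       # assumed one-time training budget
FLOPS_PER_QUERY = 2e14      # assumed ~100B-param model, ~1k tokens (2 * params * tokens)
QUERIES_PER_USER_PER_DAY = 20

def inference_share(users: int, days: int) -> float:
    """Fraction of total compute spent on inference after `days` of serving."""
    inference = users * QUERIES_PER_USER_PER_DAY * days * FLOPS_PER_QUERY
    return inference / (inference + TRAINING_FLOPS)

# Small user base: training still dominates.
print(f"{inference_share(1_000_000, 365):.0%}")    # ~7%
# Large user base: inference dominates.
print(f"{inference_share(100_000_000, 365):.0%}")  # ~88%
```

Under these made-up assumptions, inference overtakes the one-time training cost once the user base gets large, which is the whole argument in a nutshell.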
The revenue numbers are public for the major AI companies. That's probably the best estimate for "inference for the whole market" we have, since most of that inference is billed in either API usage or subscriptions, and it won't include any in-house usage such as training.
Google has enough money, is still profitable, and still invests in AI + DeepMind.
Google doesn't need to do anything to make any other numbers work.
Gemini 3.1 Pro is really good; Meta just signed a deal with Google for their TPUs.
Nano Banana 2 Pro is also very good.
OpenAI's numbers might not add up, Anthropic might burn through cash, but not Google.
And it doesn't matter anyway, because as long as Google can afford it, Microsoft HAS TO do this too, and Microsoft can also afford it. The same with Amazon.
Microsoft invests in OpenAI and Amazon invests in Anthropic.
Amazon isn’t broke — but their AI ambitions are now bigger than even a highly profitable company can self-fund. That’s not a nothing-burger. When you’re structurally dependent on debt markets to execute your core strategy, ‘we make plenty of profit’ stops being a complete answer. With all things AI looking a bit shaky at the moment, taking out massive loans to keep running faster is raising eyebrows.
But if you look at their numbers, they still can pay it back in a short period of time. Like 1-3 years.
And their core business is super stable even with AI.
So the only real risk is increasing operational costs, but once they pay off the investment they could literally just stop running the hardware and reduce operational costs if there is no demand.
"Not surprisingly companies are willing to get into bed with more and more questionable use cases…"
But not all companies as we have seen over the last week or so.
Regardless, all companies doing so will have to balance the ethics of their choices against the public perception of their company, as all of us are free to make choices that align with our own personal ethics.
(In short, they don't get to hide behind "everyone else is doing it".)
Questionable use cases like hyperscalers housing confidential data of military operations? Use case is the same, private companies supporting military operations, as they have for ages.
Considering the DOD's stance on using AI for questionable means with Anthropic, we can very safely assume it.
This is further compounded by the fact the DOD, and the administration at large, is headed by some of the most incompetent and morally bankrupt individuals imaginable. I wouldn't trust Hegseth to change a goddamn light bulb, let alone run the DOD.
I certainly prefer the US administration's morals over those of the Islamic Republic whom this war is against. I do not perceive the US administration to be either incompetent or morally bankrupt, as you say.
I do think those who bully those who defend the US online are the morally bankrupt ones.
No, there are targeted attacks online aimed at discrediting US institutions. People are afraid of defending the US and Israel online due to downvoting brigades.
Sounds sketchy as hell, but the article suggests it's for unclassified work, like "drafting meeting notes, creating action items, and breaking large projects into step-by-step plans".
I think I'd be more annoyed if my government weren't using tools to make BS work more efficient.
>The DOD’s workforce of more than 3 million people will now be able to use a no-code or low-code tool called Agent Designer to create their own digital assistants for repetitive administrative tasks.
As someone who moved from software companies to IT management, it's going to be interesting to see how this plays out: a full embrace of 'everything in Excel', meaning undefined business use cases and processes moved into software ad hoc and without validation. Especially for companies that have outsourced IT and expect software to mean defined, tested business processes in supported systems.
In-house IT is going to be huge in a couple of years sorting out this mess. I would never have guessed the future would be all custom Excel spreadsheets, except instead of Excel it's random code in random languages with random data stores.
So the problem is that filling out forms is too onerous, but rather than fix the process, create a device that fills the form with slop and then another device that approves or rejects the slop form.
I could have sworn I signed up for the other future, the one without quite this much stupid.
Everyone’s scared that it would be used for war but how would they break the alignment on llm models? They don’t even allow me to generate black people on AI. How the hell will it work for war related tasks? Or would there be a separate model fine tuned for government that allows being used to kill people?
You don’t say “find people to kill and kill them” you say, “given this list of locations, which ones could be harboring terrorists or hidden military bases?” Etc. Or even more abstract constructs based on domain aliases where AI assists in pattern matching and automation but isn’t really thinking in terms of moral domains.
War is a racket. It always has been. It is possibly the oldest, easily the most profitable, surely the most vicious. It is the only one international in scope. It is the only one in which the profits are reckoned in dollars and the losses in lives. A racket is best described, I believe, as something that is not what it seems to the majority of the people. Only a small "inside" group knows what it is about. It is conducted for the benefit of the very few, at the expense of the very many. Out of war a few people make huge fortunes - Smedley D. Butler
Theory: selling half-baked AI options to the government is plan B. It's an alternative to bailing out these financially failing AI companies. This is a delay tactic to prevent a collapse scenario.
This should surprise no one. A CIA-backed VC was one of the first investors of Google. Big tech will always serve the powers that be. Employees that think their letters of appeal will do anything live in a fantasy land. That’s not how the real world works.
Engineering Ethics is a standard required class in any engineering discipline and a whole field of discussion. The ethics of working on military stuff (or even just government stuff) is nowhere near as cut and dried as your question seems to imply.
For example:
- What if the country asked you to develop technology to track and hack journalists or political rivals the administration doesn't like?
- What if the country asked you to develop chemical weapons? Is it different if the weapons would be used on their own population or only on external "enemies"?
- What if the country asked you to personally assassinate a civilian of another country? What if they asked you to create a program that would do that? What if they asked you to simply create a list of targets, and you knew they'd be assassinated?
- What if the country asked you to build something in an unsafe way that you're pretty certain will cause harm to people?
- What if the country asked you to make a public statement lying about the purpose behind what you're building?
The country in question is the United States of America. You know, the one that Iranian Islamic Republic officials lead chants of "Death to America" about.
The US is not perfect, but this disparagement of the US for the benefit of the Islamic Republic is disgusting. As is the online bullying of people who stand up for the US.
Just because there are one or maybe several bad/worse countries in the world, that doesn't mean anything goes ethically.
That's a dangerous line of reasoning.
If (IF!) the U.S. government is a corrupt authoritarian regime does it matter what services Google was providing?
When is the point we see that boycotting these companies that are helping kill, lets say 100 little girls with a tomahawk missiles, is the very least we can do?
"“We’re starting with unclassified because that’s where most of the users are, and then we’ll get to classified and top secret,” Michael said in an interview, adding that talks with Google over using the agents on the classified cloud are underway."
You're making many bold assumptions in this single sarcastic post, and they are a credit to your optimistic belief in the candidness of both man and AI.
Can you name even a single large company that wasn't created by the state? And yes, maybe created means "picked up a tiny company and made it big"; I'm treating that as the same (i.e. Amazon).
Also the whole internet started as a military project. The big reason, especially when it comes to Silicon Valley's tech is that people just don't want it until they can see what it does.
Well... We're kind of saying the same thing, I just said it from another perspective. I meant to say that the military created it, so the military will stay around to reap the dividends.
I don't know exactly how I would feel if the software I created selected a school to bomb and then suggested bombing the rescue parties trying to find / save any unexploded children 40 minutes later (double tap strategy to kill rescue parties and/or medics).
That 'let claude wing it, then send for review' approach that your lazy coworker uses is now how the largest military in the world operates. No big drama.
Do you support the current Iran war and the way it's being handled?
Opposition and criticism (normally done by the independent press and the party not in power) are there to keep power in check. With Trump you have 'deals': rich people doing other rich people favours. They do not care about human lives.
> 9/11 was one reaction of the middle east to americas behaviour and it will be interesting to see what terrorism will happen in the USA after this war.
Thank you for being so clear about which side you support. I stand with America and hope to see you terrorists and terrorist supporters rendered no longer a threat.
That your simple view of this war doesn't align with my broader view of the war?
But to be very, very clear: fu USA. I assumed, as a young person, that we (Germany and the USA) were allies (you know, WW2 etc.), and then I learned how the USA sees itself as the world police.
And how the USA controls military stations around the world, how the CIA was responsible for the drug lords in Mexico, how the USA overthrew Iran's government and replaced it with a US-friendly ruler on purpose.
How the FBI? or CIA created the Unabomber.
How the USA dropped nuclear weapons on Japan (I was there, I saw what that did).
And now how the USA throws a tantrum by voting in the most narcissistic president in the world and just fucks over the world.
Feel free to stand with America; I do not. I do not stand with Iran either though, and especially not with Israel.
At this point, the employees at Google who signed that open letter might as well call it quits and leave. Google already had military contracts with the Pentagon previously, so this is not surprising at all.
Let's all boycott Google folks, I want all of HN to band together and in solidarity just not use Google for anything...
Let's see if anyone here has the guts to even switch away from GCP. Scratch that, can folks even move away from Apple (Apple pays for Gemini too) and Android?
I do think OpenAI deserves the boycott, but people talking about Anthropic as if it were taking some kind of ethical stand, when it was just ego-tripping for everyone involved, is insane.