You might be right, but that's terrible

Like this, for sure not. And Sam has not, even with that article, done anything to warrant violence.

He killed a few tens of thousands of people, doesn't that count?

The real issue is expecting an LLM to be deterministic when it's not.

Language models are deterministic unless you add random input. Most inference tools do add random input (seeded sampling) because it makes for a more interesting user experience, but that is not a fundamental property of LLMs. I suspect determinism is not the issue you mean to highlight.
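
To make that concrete, here's a toy sketch (hypothetical logits, not a real model): greedy decoding is a fixed function of the context, and randomness only enters through sampling we add ourselves.

    import math, random

    # Hypothetical next-token logits for some fixed context.
    logits = {"cat": 2.1, "dog": 1.9, "fish": 0.3}

    # Greedy decoding (temperature 0): same context -> same token, every time.
    greedy = max(logits, key=logits.get)

    # Sampling: convert logits to probabilities, then draw with an RNG.
    # The RNG is the *only* source of randomness; fix the seed and even
    # sampled decoding becomes reproducible.
    m = max(logits.values())
    exps = {t: math.exp(v - m) for t, v in logits.items()}
    z = sum(exps.values())
    probs = {t: e / z for t, e in exps.items()}
    rng = random.Random(42)
    sampled = rng.choices(list(probs), weights=list(probs.values()))[0]

    print(greedy, sampled)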

Sort of. They are deterministic in the same way that flipping a coin is deterministic - predictable in principle, but in practice too chaotic to predict. Yes, you get the same predicted token every time for a given context. But why that token and not a different one? Too many factors to reliably abstract.

>Yes, you get the same predicted token every time for a given context. But why that token and not a different one? Too many factors to reliably abstract.

Fixed input-to-output mapping is determinism. Prompt instability is not determinism by any definition of the word. Too many people confuse the two for some reason. Also, determinism is a pretty niche thing that is only necessary for reproducibility, and prompt instability/unpredictability is irrelevant for practical usage, for the same reason as in humans - if the model or human misunderstands the input, you keep correcting the result until it's right by your criteria. You never need to reroll the result, so you never see the stochastic side of LLMs.


>Fixed input-to-output mapping is determinism. Prompt instability is not determinism by any definition of this word

It really depends on your perspective.

In the real world, everything runs on physics, so short of invoking quantum indeterminacy, everything is deterministic - especially software, including things like /dev/random and programs with nasty race conditions. That makes the term useless.

The way we use "determinism" in practice depends contextually on how abstracted our view of the system is, how precise our description of our "inputs" can be, and whether a chunked model can predict the output. Many systems, while technically a fixed input/output mapping, exhibit an extreme and chaotic sensitivity to initial conditions. If the relevant features of those initial conditions are also difficult to measure, or cannot be described at our preferred level of abstraction, then actually predicting ("determining") the output is rendered impractical and we call it "non-deterministic". Coin tosses, race conditions, /dev/random - all fit this description.

And arguably so do LLMs. At the "token" level of abstraction, LLMs are indeed deterministic - given context C, you will always get token T. But at the "semantic" level they are chaotic, unstable - a single token changed in the input, perhaps even as minor as an extra space after a period, can entirely change the course of the output. You understand this, of course. You call it "prompt instability" and compare it to human performance. But no one would call humans deterministic either!

That is what people mean when they say LLMs are not deterministic. They are not misusing the word. It just depends on your perspective.
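
You can illustrate "deterministic at the token level, chaotic at the semantic level" with a toy stand-in for a model - a pure function of the context where one extra space sends the continuation somewhere completely different:

    import hashlib

    def next_token(context):
        # Toy "model": the next token is a pure function of the context.
        return hashlib.sha256(context.encode()).hexdigest()[0]

    def generate(prompt, n=8):
        out = prompt
        for _ in range(n):
            out += next_token(out)
        return out

    print(generate("The answer is:"))   # identical on every run...
    print(generate("The answer is: "))  # ...but one extra space diverges entirely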


But there is no fixed input-to-output mapping in current popular LLMs.

You mean "corporate inference infrastructure", not LLMs. The reason for different outputs at t=0 is mostly batching optimization. LLMs themselves are indifferent to that, you can run them in a deterministic manner any time if you don't care about optimal batching and lowest possible inference cost. And even then, e.g. Gemini Flash is deterministic in practice even with batching, although DeepMind doesn't strictly guarantee it.

This is all currently irrelevant, making it work well is a much bigger problem. As soon as there's paying demand for reproducibility, solutions will appear. This is a matter of business need, not a technical issue.


It always feels like I just have to figure out and type the correct magical incantation, and that will finally make LLMs behave deterministically. Like, I have to get the right combination of IMPORTANT, ALWAYS, DON'T DEVIATE, CAREFUL, THOROUGH and suddenly this thing will behave like an actual computer program and not a distracted intern.

Like the brain

Actually, at a hardware level, floating point operations are not associative. So even with a temperature of 0 you’re not mathematically guaranteed the same response. Hence, not deterministic.
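
The non-associativity is easy to see even in plain Python (IEEE 754 doubles):

    a, b, c = 1e20, -1e20, 1.0
    print((a + b) + c)  # 1.0
    print(a + (b + c))  # 0.0 -- c is swallowed by the huge intermediate

If a GPU kernel happens to reduce the same values in a different order from one run to the next, the low-order bits of the logits differ, and at a close sampling boundary that can flip which token wins.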

You are right that, as commonly implemented, the evaluation of an LLM may be non-deterministic even when explicit randomization is eliminated, due to various race conditions in concurrent evaluation.

However, if you evaluate the LLM core function carefully, i.e. in a fixed order, you will obtain perfectly deterministic results (except on some consumer GPUs where, due to memory overclocking, memory errors are frequent, which causes slightly erroneous results with non-deterministic errors).

So if you want deterministic LLM results, you must audit the programs that you are using and eliminate the causes of non-determinism, and you must use good hardware.

This may require some work, but it can be done, similarly to the work that must be done if you want to deterministically build a software package, instead of obtaining different executable files at each recompilation from the same sources.
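
For a PyTorch-based stack, the usual knobs look something like this (a sketch - exactly what's needed depends on which kernels your model hits):

    import os
    import torch

    # Some cuBLAS kernels need this set before any CUDA work starts.
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

    torch.manual_seed(0)                      # fix explicit randomness
    torch.use_deterministic_algorithms(True)  # error on nondeterministic ops
    torch.backends.cudnn.benchmark = False    # no timing-based kernel choice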


Only that one is built to be deterministic and one is built to be probabilistic. Sure, you can technically force determinism but it is going to be very hard. Even just making sure your GPU is indeed doing what it should be doing is going to be hard. Much like debugging a CPU, but again, one is built for determinism and one is built for concurrency.

GPUs are deterministic. It's not that hard to ensure determinism when running the exact same program every time. Floating point isn't magic: execute the same sequence of instructions on the same values and you'll get the same output. The issue is that you're typically not executing the same sequence of instructions every time, because it's more efficient to run different sequences depending on load.

This is a good overview of why LLMs are nondeterministic in practice: https://thinkingmachines.ai/blog/defeating-nondeterminism-in...
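
You can mimic the batching effect on a CPU: accumulate the same float32 values in two different orders, the way differently-sized batches would, and the results usually differ in the low bits:

    import numpy as np

    x = np.random.default_rng(0).standard_normal(1_000_000).astype(np.float32)

    seq = np.float32(0.0)
    for chunk in np.split(x, 8):  # one reduction order (8 chunks, then combine)
        seq += chunk.sum()

    alt = x.sum()  # another order (numpy's internal pairwise summation)
    print(seq == alt, float(seq) - float(alt))  # usually False, tiny difference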


If you want a deterministic LLM, just build 'Plain old software'.

It's not even hard, just slow. You could do that on a single cheap server (compared to a rack full of GPUs). Run a CPU LLM inference engine and limit it to a single thread.
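
For instance, with the llama-cpp-python bindings (model path hypothetical), something like:

    from llama_cpp import Llama

    llm = Llama(
        model_path="model.gguf",  # hypothetical local GGUF model
        n_threads=1,              # single CPU thread: fixed evaluation order
        seed=0,                   # pin the sampler's RNG too
    )

    out = llm("Write a haiku about determinism.",
              max_tokens=64, temperature=0.0)
    print(out["choices"][0]["text"])  # should be identical across runs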

Oh how I wish people understood the word "deterministic"

LLMs are deterministic in the sense that a fixed linear regression model is deterministic. Like linear regression, however, they encode a statistical model of whatever they're trying to describe -- natural language, in the case of LLMs.

they are deterministic, open a dev console and run the same prompt two times w/ temperature = 0
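
The equivalent test against an API, with the OpenAI Python client (model name is just an example) - though note that hosted endpoints only promise best-effort reproducibility even with a fixed seed:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    for _ in range(2):
        r = client.chat.completions.create(
            model="gpt-4o-mini",  # example model name
            messages=[{"role": "user", "content": "Name one prime number."}],
            temperature=0,
            seed=0,  # best-effort reproducibility only
        )
        print(r.choices[0].message.content)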

And then the 3rd time the output comes out differently, leaving you puzzled about why that happened.

The determinism comes with a lot of 'terms and conditions' that apply depending on how it's executing on the underlying hardware.


So why don’t we all use LLMs with temperature 0? If we separate models (incl. parameters) into two classes, c1: temp=0, c2: temp>0, why is c2 so widely used vs c1? The nondeterminism must be viewed as a feature more than an anti-feature, making your point about temperature irrelevant (and pedantic) in practice.
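
For reference, temperature just rescales the logits before sampling; as it approaches 0 the distribution collapses onto the argmax (a minimal numpy sketch with made-up logits):

    import numpy as np

    def token_probs(logits, temperature):
        z = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-9)
        z -= z.max()              # subtract max for numerical stability
        p = np.exp(z)
        return p / p.sum()

    logits = [2.0, 1.5, 0.2]
    print(token_probs(logits, 1.0))   # spread out: varied outputs
    print(token_probs(logits, 0.01))  # ~one-hot: effectively greedy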

LLMs are essentially pure functions.

That's not the point...

Decision makers do pay attention to US internal affairs as it affects the rest of the world directly.

It doesn't "pick" anything. It produces the most likely number after this question, based on the data it has been trained with! Reasoning models might "pick" in the sense that they will come up with rules (like the grandparent post shows), but they will still produce the "most likely" number after the reasoning.


They can't be random, that's not how a stochastic model produces tokens. Unless the models in question are using a tool call for it, the result will very likely carry bias
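
A toy illustration of that bias (the weights are made up, but models are famously skewed toward answers like 7): sampling from a learned distribution inherits its bias, while a real RNG is flat.

    import random
    from collections import Counter

    # Hypothetical weights a model might assign to "pick a number from 1-10".
    weights = {1: 1, 2: 1, 3: 2, 4: 2, 5: 2, 6: 2, 7: 10, 8: 2, 9: 2, 10: 1}

    rng = random.Random(0)
    sampled = [rng.choices(list(weights), weights=list(weights.values()))[0]
               for _ in range(1000)]
    print(Counter(sampled).most_common(3))  # "7" dominates: biased

    uniform = [rng.randint(1, 10) for _ in range(1000)]
    print(Counter(uniform).most_common(3))  # roughly flat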


You just went and created the worst example. The model knows how to create an RNG; that's not its weakness. In fact, if you give it a random-number MCP tool, it won't do that.


Well, yeah! It's a probabilistic model, and extremely biased - it has to be, so that it can predict the correct token.


There's no "just" in RL. Fine-tuning is very important and could make a lot of difference.


Indeed, this is quite obvious with Claude models vs Gemini. I fully believe Gemini is the more powerful model, but the post-training process is nowhere near what Anthropic does, which results in Gemini being horrible at coding sessions while Claude is excellent.


apparently GPT-5 uses the same pretrain as 4o did, hah

