Hacker News | dwohnitmok's comments

No, it's not. Otherwise this part doesn't make sense:

> in fact, they actually compound the problem by encouraging significantly more usage

because if, once training costs are excluded, running the model is above cost (i.e. profitable per request), then significantly more usage helps the problem rather than compounding it.

More usage compounds the problem only if inference is unprofitable.

(the article briefly mentions training but that's later).
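The distinction comes down to one line of arithmetic: with a fixed training cost and a per-request margin, the sign of the inference margin decides whether extra usage helps or hurts. A toy sketch (all numbers are hypothetical, purely for illustration):

```python
def profit(requests, revenue_per_req, inference_cost_per_req, training_cost):
    """Profit = volume times per-request margin, minus the fixed training cost."""
    return requests * (revenue_per_req - inference_cost_per_req) - training_cost

# Positive inference margin: more usage amortizes the fixed training cost.
assert profit(2_000_000, 0.01, 0.008, 10_000) > profit(1_000_000, 0.01, 0.008, 10_000)

# Negative inference margin: more usage compounds the losses.
assert profit(2_000_000, 0.01, 0.012, 10_000) < profit(1_000_000, 0.01, 0.012, 10_000)
```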


It made sense to me on the understanding that you can have a unit-profitable API but still lose money on loss-leading offerings like Code subscriptions. Those losses are amplified by encouraging usage. Perhaps I'm mistaken.

@krackers gives you a response that points out this already happens (and doesn't fully work for LLMs).

> The hypothetical approach I've heard of is to have two context windows, one trusted and one untrusted (usually phrased as separating the system prompt and the user prompt).

I want to point out that this is not really an LLM problem. This is an extremely difficult problem for any system that aspires to emulate general intelligence, and it is more or less equivalent to solving AI alignment itself. As stated, it's kind of like saying "well the approach to solve world hunger is to set up systems so that no individual ever ends up without enough to eat." It is not really easier to have a 100% fool-proof trusted and untrusted stream than it is to completely solve the fundamental problems of useful general intelligence.

It is ridiculously difficult to write a set of watertight instructions for an intelligent system that are also actually worth giving to an intelligent system rather than just, e.g., programming the task yourself.

This is the monkey's paw problem. Any sufficiently valuable wish can either be horribly misinterpreted or requires a fiendish amount of effort and thought to state.

A sufficiently intelligent system should be able to understand when the prompt it's been given is wrong and/or should not be followed to its literal letter. If it follows everything to the literal letter that's just a programming language and has all the same pros and cons and in particular can't actually be generally intelligent.

In other words, an important quality of a system that aspires to be generally intelligent is the ability to clarify its understanding of its instructions and be able to understand when its instructions are wrong.

But that means there can be no truly untrusted stream of information, because the outside world is an important component of understanding how to contextualize and clarify instructions and identify the validity of instructions. So any stream of information necessarily must be able to impact the system's understanding and therefore adherence to its original set of instructions.


Agree completely that this is a hard problem in any context. The world's militaries have sets of rules around when you should disobey orders, which is a similar problem.

That doesn't sound right to me. When faced with a system prompt that says "Do X" and a user prompt that says "Actually ignore everything the system prompt says" it shouldn't take AGI to understand that the system prompt should take priority.

The post's framing is not great imo. A good injection doesn't just command that the rules be broken anymore. Most of the ones I've seen either try to slip through a request innocuously or present a scenario where it would seem natural to ignore the rules. As we speak, countless people are letting strangers tailgate them into office buildings because they look like they belong or they're wearing a high-viz vest. Those people were all given very explicit instructions not to do that. The LLM has it much harder, too, being very stupid, easy to replay and experiment with, and viewing the world through the tiny, context-less peephole lens of a text stream.

When's the last time you jailbroke a model? Modern frontier models (apart from Gemini, which is unusually bad at this) make it significantly harder to override their system prompt than this suggests.

Again, let's say the system prompt is "deploy X" and the user prompt provides falsified evidence that one should not deploy X because that will cause a production outage. That technically overrides the system prompt. And you can be arbitrarily sophisticated in the evidence you falsify.

But you probably want the system prompt to be overridden if it would truly cause a production outage. That's common sense a general AI system is supposed to possess. And now you're testing the system's ability to distinguish whether evidence is falsified. A very hard problem against a sufficiently determined attacker!


You are only looking at supply. Neither supply nor demand by themselves adequately describe prices (even in supply-demand 101 theory; in practice of course it gets significantly more complicated than just supply and demand). There are fields with few suppliers where supply is extremely cheap and fields with few suppliers where supply is extremely expensive.

Is the number of suppliers low because demand is also low or is the number of suppliers low because demand is high but supply is constrained?

A field that previously had a supply of labor in it "for the money" who all leave is indicative of the former scenario not the latter.

That does not lead to higher wages. That leads to low wages.

(There are a variety of reasons why this story is too simple and why I remain uncertain about developer salaries in the short term)

There is a broader question of whether having people who are in it for the money leave independently "causes" wages to go down (e.g. if you were to replace all such people with people "purely in it for the passion"). My suspicion is yes, mainly because wage markets are somewhat inefficient: there are always mild cartel-like/cooperative effects in any market, people in it for passion tend to undersell their labor, and people in it for the money are much less likely to undersell theirs, which spills over beneficially to the former.

Note that this broader question is simply unanswerable assuming perfect competition, i.e. a supply-demand 101 perspective (which is why it doesn't make sense to posit "perfect competition" for this question).

It posits durable behavioral differences among suppliers that are not determined purely by supply and demand and that do not update reliably in response to prices. This is equivalent to market friction and hence fundamentally contradicts an assumption of perfect competition.


> but you'll still observe small variations due to the limited precision of float numbers

No. Floating point arithmetic is deterministic. You don't get different answers for the same operations on the same machine just because of limited precision. There are reasons why it can be difficult to make sure that floating point operations agree across machines, but that is more of a (very annoying and difficult to make consistent) configuration issue than a determinism issue.

(In general it is mildly frustrating to me to see software developers treat floating point as some sort of magic and ascribe all sorts of non-deterministic qualities to it. Yes, floating point configuration for consistent results across machines can be absurdly annoying and nigh-impossible if you use transcendental functions and different binaries. No, this does not mean that if your program is giving different results for the same input on the same machine, it is a floating point issue.)
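A quick way to convince yourself: run the same float computation many times on one machine and collect the distinct results. A minimal Python check:

```python
# Same operations, same order, same machine: bit-identical results every time,
# despite limited precision. sum() always adds left to right.
vals = [0.1, 0.2, 0.3, 1e-9, 1e9] * 1000
results = {sum(vals) for _ in range(100)}
assert len(results) == 1  # imprecise, but fully deterministic
```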

In theory parallel execution combined with non-associativity can cause LLM inference to be non-deterministic. In practice that is not the case. LLM forward passes rarely use non-deterministic kernels (and these are usually explicitly marked as such e.g. in PyTorch).

You may be thinking of non-determinism caused by batching where different batch sizes can cause variations in output. This is not strictly speaking non-determinism from the perspective of the LLM, but is effectively non-determinism from the perspective of the end user, because generally the end user has no control over how a request is slotted into a batch.


> No. Floating point arithmetic is deterministic. You don't get different answers for the same operations on the same machine just because of limited precision. There are reasons why it can be difficult to make sure that floating point operations agree across machines, but that is more of a (very annoying and difficult to make consistent) configuration issue than a determinism issue.

Float addition is not associative, so the result of x1 + x2 + x3 + x4 depends on which order you add them in. This matters when the sum is parallelized, as the structure of the individual add operations will depend on how many cores are available at any given time.
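The effect is easy to demonstrate in plain Python (IEEE 754 doubles), simulating a two-way "parallel" reduction by splitting the operands into halves before combining:

```python
xs = [0.1] * 10

# Strict sequential left-to-right summation.
left_to_right = 0.0
for x in xs:
    left_to_right += x

# Simulated two-way parallel reduction: sum each half, then combine.
two_way = sum(xs[:5]) + sum(xs[5:])

# Same operands, same machine, different association -> different result.
assert left_to_right != two_way
```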


Arbitrary filtering of candidates doesn't reduce the effort that it takes. Let's say 1 out of 1000 of the candidates you see is what you need. The total amount of effort to find the right candidate is still the same. But throwing out half the resumes just doubles the amount of time until you find the candidate you need (you just spread lower effort over a longer time).

On the other hand if you "raise your bar" (let's say you do so by some method that makes it twice as expensive to judge a candidate; twice as likely to reject a candidate that would fit what you need, i.e. doubles your false negative rate; but cuts down on the number of applications by 10x, so that now 1 out of 100 candidates are what you need, which isn't that far off the mark for certain kinds of things), you cut down the effort (and time) you need to spend on finding a candidate by over double.
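The arithmetic above can be made explicit with a tiny expected-value model. The densities, review costs, and false negative rates below are illustrative assumptions (the raised-bar screen doubles both the per-review cost and the false negative rate, matching the scenario described):

```python
def expected_effort(density, cost_per_review, false_negative_rate):
    """Expected total review cost until one suitable candidate is accepted,
    treating each review as an independent Bernoulli trial (geometric model)."""
    p_accept_per_review = density * (1 - false_negative_rate)
    return cost_per_review / p_accept_per_review

# Baseline: 1 in 1000 applicants fit; cheap reviews; assumed 10% false negatives.
baseline = expected_effort(density=1 / 1000, cost_per_review=1.0, false_negative_rate=0.1)

# Raised bar: reviews twice as expensive, false negatives doubled,
# but the screen concentrates the pool to 1 in 100.
raised = expected_effort(density=1 / 100, cost_per_review=2.0, false_negative_rate=0.2)

assert baseline / raised > 2  # total effort cut by more than half
```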

EDIT: On reflection I think we're mainly talking past each other. You are thinking of a scenario where all stages take roughly the same amount of effort/time, whereas tmorel and I are thinking of a scenario where different stages take different amounts of effort/time. If you "raise the bar" at the stages that take less effort/time (assuming those stages still have some selective usefulness), then you will reduce the overall time/energy spent on hiring someone who meets your final bar.


I wasn't suggesting arbitrarily removing candidates was a good idea, but simply responding to their specific devil's advocate example of applying "cargo cult screens", which would presumably be arbitrary.


Kokotajlo still believes we get AGI in the next few years. These are his most updated numbers at the moment: https://www.aifuturesmodel.com/


I love the total lack of humility on that site. "What if the METR study turns out not to capture anything relevant? We just add a constant gap to be conservative!". But I guess these guys aren't really scientists, so it's probably a lot to ask that they engage critically with what they are doing and be honest about the limitations of their methods.

What if it turns out that the more you scale the more your LLM resembles a lobotomized human. It looks like it goes really well in the beginning, but you are just never going to get to Einstein. How does that affect everything?

What if it turns out that those AI companies have a whole bunch of humans solving the problems that are currently just below the 50% reliability threshold they set, and then fine-tune on those solutions? That would make their models perform better on the benchmark, but it's just training for the test... will the constant gap be a good approximation then?


Not quite.

Kokotajlo quit because he didn't think OpenAI would be good stewards of AGI (non-disparagement wasn't in the picture yet). As part of his exit, OpenAI asked him to sign a non-disparagement agreement as a condition of keeping his equity. He refused and gave up his equity.

To the best of my knowledge he lost that equity permanently and no longer has any stake in OpenAI (even if this episode later led to an outcry against OpenAI causing them to remove the non-disparagement agreement from future exits).


Kokotajlo gave up all his shares in OpenAI as part of his refusal to sign a nondisparagement agreement with OpenAI.


Really? I view the original title as a very good summary of the overall point of the article and this new title as fairly misleading.

> It can be debated whether arena.ai is a suitable metric for AGI, a strong case can probably be made for why it’s not. However, that’s irrelevant, as the spirit of the self-sacrifice clause is to avoid an arms race, and we are clearly in one.

> Therefore, one can only conclude, that we currently meet the stated example triggering condition of “a better-than-even chance of success in the next two years”. As per its charter, OpenAI should stop competing with the likes of Anthropic and Gemini, and join forces, however that might look like.

The new title is a single, almost throwaway, line from the article.

> While this will never happen, I think it’s illustrative of some great points for pondering:

> - The impotence of naive idealism in the face of economic incentives.
> - The discrepancy between marketing points and practical actions.
> - The changing goalposts of AGI and timelines. Notably, it's common to now talk about ASI instead, implying we may have already achieved AGI, almost without noticing.


> Amodei repeatedly predicted mass unemployment within 6 months due to AI

When has Amodei said this? I think he may have said something about 1-5 years. But I don't think he's said within 6 months.

