Hacker Times | new | past | comments | ask | show | jobs | submit | tptacek's comments | login

Is there an MBV set in this archive? I see the GYBE sets.

later

Never mind, there they are. I was at one of those 2018 shows!


I read this comment as saying that you (100-k)% do not support violence against Sam Altman, for some positive real number k.

This is obviously true, but you're just inviting the rebuttals. Arguments that civil violence is unproductive are boring and obvious. Normal people have been acculturated to understand the point already. The only way to have an "interesting" conversation about this is to take the other side.

All of those arguments will be vile, as they have to be given the context.

I'm not criticizing you, and I guess I'm glad someone wrote this comment quickly. You're right. But I would caution people against reading too much into the countervailing sentiment here. It's not trolling, but it is something adjacent to it.


In high school in the 90s, I learned about what the founding fathers said about violence. But, I guess that's too 18th century now.

Except they only won because the UK was too busy spending money on a way to stop the French.

Like in 1812: once the Brits weren't busy with the French, they easily came in and burnt the US capital as punishment for burning the Canadian one. It's not that the British army suddenly got a lot stronger; they just weren't busy fighting on two continents.

That said, civil disobedience is largely pointless. We're in a capitalistic society so money is the name of the game. Rosa Parks did shit-all; it was the boycott of the bus system for 9 months that made the buses cave.


I meant more that we wouldn't have the Bill of Rights if it wasn't for Patrick Henry.

There is a super interesting and complicated discussion to have about the pragmatics and morality of concerted military action versus stochastic civil violence. Unfortunately, thread conditions on HN aren't conducive to it; the discussion will instantly devolve (via people joining in) to valence arguments about the cause of this or that campaign of violence. I genuinely think you'd need a moderation regime designed from the ground up to support a productive conversation about this topic, which, for good reasons, HN doesn't provide.

Honestly, it's not really that complicated. Americans (at least Pennsylvanians) born before, say, 2000 were explicitly taught that violence is ok if it's against tyranny. Apparently, they stopped teaching that after 2010, so we're now in a post-natural-rights era.

I went to high school in Pennsylvania.


We went to different high schools in the 1990s, because that isn't at all what I was taught.

While I typically avoid touching non-technical topics, I have the opportunity to chime in as another PA high schooler from the '90s: we absolutely were taught that, down to details in AP courses such as the impact of individuals like John Brown. While I'm not sure I'd have worded it precisely like the parent, the concept of "the four boxes of liberty", and the progression thereof, was certainly understood and conveyed. (There was substantial study of the labor rights movements and the conflicts/resistance therein as well.)

I went to Jesuit high school in Chicago in the early 1990s. There's a lot more to say about all of this stuff and nothing wrong with what you just said, but to hash it out any further, we'd have to attempt a philosophical discussion about violence in a forum that (unavoidably, and to the consternation of its moderators) has reward circuits wired around hyping up action.

“The tree of liberty must be refreshed from time to time with the blood of patriots and tyrants” has been a popular quote in the US for a long time.

You've basically just said anyone who doesn't hold the "approved" opinion is wrong and then you called them names. But you wrapped it in extra words so that it's less flagrant.

Did you ever think that maybe people do in fact believe what they say they believe?


Everybody who believes civil violence is a productive solution to any problems we have in 2026 is wrong. I don't see myself as having called anyone names; rather, I said that the point was so banal that the only conversation you're likely to see is from people who get dopamine hits from taking the edgy other side of the argument.

>Everybody who believes civil violence is a productive solution to any problems we have in 2026 is wrong.

Hilarious joke, Mr. Fukuyama. You have masked goons running around, detaining and even killing people without probable cause. If the results of the 2026 midterms are not to the liking of the current POTUS, it isn't unthinkable that he would try to overturn them, even by force. Would you be hand-wringing on HN about how violence is always bad, then?

But I digress. Firebombing Sam Altman is very bad; there is a multitude of good points against it, from the moral to the pragmatic. "Violence is fundamentally evil" is just a lazy and evidently false argument that does you a disservice.


You said they were "abnormal" and "trolls" but you dressed it up in the sort of snooty language that HN expects you to dress it up in.

Civil violence is the backstop of literally every societal system. While it would be better if the systems work, civil violence is what happens if they don't and tends to increase until they do.


Our premises are too far apart for it to be productive to discuss this.

[flagged]


I'm walking away because there's nothing more to be said. The idea that there has to be a last word in all these threads that satisfies everybody, including random people who weren't even participating, is part of what makes these threads so awful. I'm not going to keep a slapfight going just to entertain you. Deal with it.

I have never once seen someone on HN express happiness that someone was killed in a drive-by gang shooting.

I saw this all the time when ICE was doing their business in Minneapolis. That was only a few months ago and it doesn't take too long to dig and find some truly odious posts.

Well for one, nobody was killed here. But second of all, sure -- because Hacker News users are not the class of people involved in drive-by gang shootings; to most of us they are essentially abstractions, barely more real than the trolley problem. If you went around asking people who knew a guy that was shot, you'd eventually find someone who said he had it coming -- he got involved with the wrong guys, he shot at one of them first, he did something he shouldn't have (a common thread: the livelihood of the people involved). This is obviously atrocious: nobody should go around shooting people on the streets. But we can recognize that both are playing with fire, and understand the violence in that context -- such that the solution to gang violence is not, "moralize at the gang members until they stop shooting each other", but rather "improve socio-economic conditions until they stop wanting to". So yes, there are elements of HN's population that will cheer these events on. But this should not be surprising -- the ruling class is playing with fire.

[flagged]


I think the point was that people are willing to be happy about this happening to tech CEOs but would not express the same about a gang shooting.

This is a brochure site from "The Alliance for Secure AI", which I am unfamiliar with, but whose site gives off "AGI weirdo" vibes. Am I misreading?

https://secureainow.org/


I don't think so... nothing about these folks' backgrounds screams "understands LLMs": https://secureainow.org/staff/. Which, to be clear, doesn't mean they can't effectively pull together publicly available layoff data in a website.

No, they didn't. They distinguished it, when presented with it. Wildly different problem.

Yeah. And it is totally depressing that this article got voted to the top of the front page. It means people aren’t capable of this most basic reasoning so they jumped on the “aha! so the mythos announcement was just marketing!!”

Yeah. Extremely disappointing.

There isn't one (much as I might think there should be). Threads about Mangione were also uncivil and activating.

HN isn't a "science and technology" site.

You're being nice about it but I think you're inadvertently expressing literally the sentiment Dan was referring to.

I am not speaking for the parent, but my personal interpretation is that they are trying to add perspectives/thoughts, not denying what Dan said (i.e. it's not "inadvertent" in as few words).

By that I meant it didn't read like they were trying to push back on him.

On the contrary, not justifying nor condoning anything of the sort.

The main point I was trying to make was in highlighting the perceptual and emotional disconnect between knowing and working with someone personally, versus those who haven't (myself included).

Most people's perception of Sam was shaped in recent years, by press coverage that tends to treat him as the face of AI, with sentiment that usually goes something like: "hey, this guy's stealing all your water so he can take your job too, and by the way he lies a lot."

A couple follow-on points there were:

a) Dan shouldn't take it personally for not being able to control a tidal wave of negative sentiment stemming from that dynamic playing out.

b) I don't think it does anyone any good to dismiss the negative sentiment driving that as mere mob mentality. Even Sam appears to understand this quite well, in the very blog post the submission links to.

To echo another comment[0]:

>... while the vast majority of us think "holy crap, that's horrible" but aren't adding it because of course that's already been said and there just isn't any more nuance needed.

I agree; explicit condemnation just felt performative and hollow.

For what it's worth, I'm actually rooting for Sam assuming his words ultimately line up with his actions, and my opinion of him is neutral or slightly positive. I don't think it's widely appreciated just how crazy a position the guy is in; there's no way he can make everybody happy.

To touch on the hollow part: this is someone pg once described, in so many words, as more than capable of handling himself. [1]

I recall reading that years ago he insisted offices be swept for bugs after a visit by Musk, and he hangs out with similarly powerful people.

In other words, you don't operate in that world without your security already being excellent, and it's probably going to get even better now. Give it a couple years and he'll probably have a humanoid robot perimeter that'll smoke anyone on sight with a level of efficiency that is comical.

So, in that context, taking a thoughts-and-prayers tone felt a little unnecessary.

[0] https://qht.co/item?id=47732594

[1] https://qht.co/item?id=7280124


It shouldn't matter how many lies a guy tells, or how he runs his business. People shouldn't throw molotov cocktails at his house, and people shouldn't act like his behavior is potentially justification for people throwing molotov cocktails at his house.

Anybody whose perception of Sam Altman was "he deserves for me to throw a molotov cocktail at his house" is a horrible person. I don't care if Paul Graham says he's a tough guy.

Explicit condemnation is only hollow if you don't mean it.


To be clear I'm not saying any of it is justified and generally agree with everything you wrote. The fact that happened to Sam and his family is indeed horrible.

That said, please don't twist my words. I think there's utility in understanding why people feel and act the way they do.

Otherwise, everybody just takes the de facto stance of "those people are intrinsically bad people, and not good people like us!" which is pretty useless and typically just leads to more escalation.

You could also spare me the one-line zinger at the end.


I didn't mean it as a zinger; I meant it as a rebuttal of the line from your comment. If you felt zinged by it, maybe it's worth considering why.

You keep writing comments where you try to wiggle between it being really important to think about the context in which people commit crimes and the context in which people are OK with crimes being committed based on not liking the victim, but also you keep clarifying that you don't condone what they're doing or saying.

What is your actual point? The best I can try to pluck out, the summation of the above is that the people throwing molotov cocktails, and the people saying it's justified, are bad people but they're bad for understandable reasons?


>I didn't mean it as a zinger; I meant it as a rebuttal of the line from your comment.

Fair enough.

>If you felt zinged by it, maybe it's worth considering why.

Conditioned response from years of defending comments against immediate pedantry, which I'm probably guilty of myself. Not saying that you were being pedantic.

>What is your actual point?

Originally dang seemed pretty burnt out from moderating this thread, so I just wanted to pitch in with my two cents saying that he's dealing with a tidal wave of larger negative public sentiment that's perhaps beyond his control.

I think there's an important distinction to be had between whoever threw the cocktail (fuck them), and the folks expressing what I termed callous indifference.

People are allowed to not give a shit and say as much, and while that might be bannable I don't think it's particularly productive to take that route.

Moreover, I thought it was important to note that some people here (like dang, presumably) actually know Sam personally, so it might not be appreciated just how ghoulish said callous comments come off to them.

At the same time, if your only source of information about the guy is recent press, it's easy to understand how someone arrives at that position; anti-AI sentiment is gaining popularity rapidly.

That's it. That's my point or stance if you will, I don't think it's that unreasonable; just trying to highlight what I see as a disconnect.


This is the waffling again. You made the pitch earlier that explicit condemnation felt hollow. Your comments here (and the many from other people saying similar things) are what look hollow to me.

When you say things like "it's easy to understand how someone arrives at that position", you're laying the groundwork to justify why what you class as "callous indifference" is just a logical and natural state that we should accept.

We shouldn't. The people who are celebrating or ok with molotov cocktails being thrown are also bad people. To borrow your language: fuck them, too.


>When you say things like "it's easy to understand how someone arrives at that position", you're laying the groundwork to justify why what you class as "callous indifference" is just a logical and natural state that we should accept.

I didn't say it should be accepted nor was I laying groundwork for justification, be it implicit or explicit.

Rather, only stating that such indifference does logically follow in those circumstances.

Quoting my prior comment:

>>Most people's perception of Sam was shaped in recent years, by press coverage that tends to treat him as the face of AI, with sentiment that usually goes something like: "hey, this guy's stealing all your water so he can take your job too, and by the way he lies a lot."

People's reaction here isn't exactly shocking when taken in that context.

>To borrow your language: fuck them, too.

Yeah, agreed.


> Rather, only stating that such indifference does logically follow in those circumstances.

This is exactly what I’m talking about.


>>Rather, only stating that such indifference does logically follow in those circumstances.

>This is exactly what I’m talking about.

In other words: There's a lot of people angry about AI right now, and it isn't much of a surprise that indifference and insensitivity follows.


There were a lot of people angry about secret pedophilia rings run out of the basements of pizza parlors, and violence unspooled from that too.

This feels like a pointless semantic trap. Everything is "waffling" or "wiggling". I don't see the parent saying anything in a disguised manner. It's just that reality is complicated. In the immediate wake of violence, it's exceedingly easy to paint any sentiment aside from "this is horrible" as disrespectful or weasel-worded. That's cheap (as I mentioned elsewhere, it's like the way conservatives refuse to talk about guns in the wake of gun violence).

I disagree with almost all of this but I'm not here to single you out.

Appreciated, but I would hope that it at least changes your initial read.

If you cut out the vulnerable code from Heartbleed and just put it in front of a C programmer, they will immediately flag it. It's obvious. But it took Neel Mehta to discover it. What's difficult about finding vulnerabilities isn't properly identifying whether code is mishandling buffers or holding references after freeing something; it's spotting that in the context of a large, complex program, and working out how attacker-controlled data hits that code.
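The Heartbleed pattern the comment refers to can be sketched in a few lines. This is an illustrative simplification, not the actual OpenSSL source: a heartbeat record carries an attacker-supplied 16-bit payload length, and the vulnerable code copied that many bytes back out without checking it against the record's real size.

```c
#include <stdint.h>
#include <string.h>

/* Simplified sketch of the Heartbleed bug class (CVE-2014-0160).
   Record layout, as in RFC 6520: 1 type byte, 2-byte big-endian
   payload length, then the payload itself. */
static int heartbeat_reply(const uint8_t *record, size_t record_len,
                           uint8_t *out, size_t out_cap)
{
    if (record_len < 3)
        return -1;
    size_t payload_len = ((size_t)record[1] << 8) | record[2];

    /* The missing check: the claimed payload must fit inside the
       record. Dropping this line is, in essence, the whole bug --
       the memcpy below would then read past the record into
       adjacent heap memory. */
    if (payload_len > record_len - 3)
        return -1;

    if (payload_len > out_cap)
        return -1;
    memcpy(out, record + 3, payload_len);
    return (int)payload_len;
}
```

In isolation, the absent bounds check is exactly the kind of thing a C programmer flags on sight; the hard part was knowing to look at this function at all.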

It's weird that Aisle wrote this.


> It's weird that Aisle wrote this.

No, writing an advertisement is not weird. What's weird is that it's top of HN. Or really, no, this isn't weird either if you think about it -- people looking for a gotcha "Oh see, that new model really isn't that good/it's surely hitting a wall/plateau any day now" upvoted it.


Nah, Saturday post. Less news, less content.

It's not weird. Top of HN is worthless as a barometer at this point; people downvote for calling out AI slop.

Can you downvote submissions?

It's weird, because when working on a big project, taking a break for a week or two, and returning to it, I will find a bug and will see hundreds of lines of code that are absolutely terrible, and I will tell myself "Tom you know better than to do this, this is a rookie mistake".

I think people forget that it's hard to be clever and tidy 100% of the time. Big programs take a lot of discipline and an understanding of the context that can be really hard to maintain. This is one of several reasons that my second draft or third draft of code is almost always considerably better than the first draft.


> I think people forget that it's hard to be clever and tidy 100% of the time

People on the outside with imposter syndrome also need to remember this.

Any mature codebase is a bit messy.


It's also that humans are very bad at repetitive detailed tasks. Sitting down with a code base and looking at each function for integer overflow comparison bugs gets boring really fast. It's a rare person who can do that for as long as it takes to find a bug that they don't already have some clues about.

It's the flaw in the "given enough eyeballs, all bugs are shallow" argument. Because eyeballs grow tired of looking at endless lines of code.

Machines on the other hand are excellent at this. They don't get bored, they just keep doing what they are told to do with no drop-off in attention or focus.
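For concreteness, a hypothetical example of the integer-overflow comparison bug class mentioned above; the function names are made up for illustration. The buggy form looks correct on every individual line, which is why scanning for it by eye is so tiring:

```c
#include <stddef.h>

/* Looks right, but if off + len wraps around SIZE_MAX, the sum is
   small, the check passes, and a later copy runs off the end. */
static int in_bounds_buggy(size_t off, size_t len, size_t cap)
{
    return off + len <= cap;    /* unsigned arithmetic can wrap */
}

/* Rearranged so no intermediate value can overflow. */
static int in_bounds_fixed(size_t off, size_t len, size_t cap)
{
    return len <= cap && off <= cap - len;
}
```

Both functions agree on every sane input; they only diverge when `off + len` wraps, which is precisely the case a bored reviewer stops checking for by hour three.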


idk man, pay me enough money and I’ll look at as much code as you want looking for integer overflows

Would it be cheaper than Claude Mythos doing it? No idea. Maybe, maybe not.

But it’s weird how we’re willing to throw money at a megacorp to do it with “automation” for potentially as much as, if not more than, it would cost to just have a big bounty program, or to hire someone for nearly the same cost and do it “normally”.

It would really have to be substantially less cost for me to even consider doing it with a bot.


> idk man, pay me enough money and I’ll look at as much code as you want looking for integer overflows

So would I, but it doesn't negate that we, humans, are bad at this. We will get bored and our focus will begin to drift. We might not notice it, we might not want to admit it, but after a few continuous hours we will start missing things.


And there aren't enough security researchers in the world to review ALL the files from OpenBSD.

And if there were, the cost would be more like $20M than $20K.

Having all code reviewed for security, by some level of LLM, should be standard at this point.


If it’s obvious when you look close, then automate looking close. Seems simple to write tools that spider thru a code base, finding logical groupings and feeding them into an LLM with prompts like “there is a vulnerability in this code, find it”.

The thesis is, the tooling is what matters - the tools (what they call the harness) can turn a dumb llm into a smart llm.


Hold on, I misread your comment because I'm knee-jerk about code scanners, which were the bane of my existence for a while. Reworking... and: done. The original comment was just the first graf without the LLM qualification. Sorry about that.

The general approach without LLMs doesn't work. 50 companies have built products to do exactly what you propose here; they're called static application security testing (SAST) tools, or, colloquially, code scanners. In practice, getting every "suspicious" code pattern in a repository pointed out isn't highly valuable, because every codebase is awash in them, and few of them pan out as actual vulnerabilities (because attacker-controlled data never hits them, or because the missing security constraint is enforced somewhere else in the call chain).

Could it work with LLMs? Maybe? But there's a big open question right now about whether hyperspecific prompts make agents more effective at finding vulnerabilities (by sparing context and priming with likely problems) or less effective (by introducing path dependent attractors and also eliminating the likelihood of spotting vulnerabilities not directly in the SAST pattern book).


I have long said that static checkers max out at ten false positives. Note that the size of the code is not a consideration: it doesn't matter whether it's the four-line 'hello world' or the 10-million-line monster some of us work on; ten false positives is the max.

Right, but they didn't actually test that, did they?

What's weird is that Google, Anthropic and OpenAI are claiming the model is the powerhouse, when what Aisle is stating is very much not the case.

It almost seems like a coordinated effort (Google in January, Anthropic and OAI in April) building out gated models that will eventually be very expensive. Yet, here we are: Aisle is saying that's not required to get there.

I don't think it's weird at all. It seems to me the Frontier providers are just trying to find, still unsuccessfully, a moat to make their unsustainable business model... Well. Sustainable.


I agree that the apocalyptic messaging about Mythos is eye-rolling, but the thesis of the article, that "the moat is the system, not the model", is weird, because the point is that the model is the whole system. A little Bash loop that just tells the model to "look at this file" for every file is clearly not a "moat" of a system.

Is it, though? In a way: yes. But look at where the focus of LLMs has gone: agentic frameworks. Yet, we see all of the models continually being compared against benchmarks that can easily be gamed by the model itself [0].

There's no great way to gauge the quality / efficacy of something non-deterministic that you can't trust, at least not currently. And I wouldn't be surprised if the providers have known that their LLMs could be cheating for a while now.

On one hand they're saying: these models are so apocalyptic if everyone had them, and then on the other hand showcasing how their models are sweeping the floor on benchmarks. So which is it? Personally I don't believe any of these companies at this point, especially when they make claims that are non-public and wrapped in NDAs that benefit their bottom line.

[0] https://rdi.berkeley.edu/blog/trustworthy-benchmarks-cont/


While I agree this is true of coding, there are other domains and paradigms in which the loop is more involved than a bash loop.

Realizing this fact explains:

1. why software development is first to get disrupted by AI

2. other domains that are easily loopable like contract review are also quite easy to deploy AI into, so you get all these "AI for Law" running around doing essentially the same thing

3. domains that are not easily loopable are much harder to figure out leading people to believe AI can't be useful, when in fact it's a failure of the application layer


Yea, I think if you read the actual design of the test they are presenting as evidence, it shows that what these small models are doing is not the same as what Mythos did. They isolated the vulnerable code down to the vulnerable subset of the function and provided hints in the prompt about all of the key contextual factors that matter to finding the vulnerability. That makes the problem significantly easier.

I realize they are trying to prove that an agentic harness running small models can ultimately achieve the same thing as what Mythos did, but they are handwaving away the steps it takes to construct the context Mythos handled in-model, and using a misleading test result to prove small models can handle the key step.

Poor evidence for a premise that logically wouldn't even be proven if their evidence was valid. If they could find these types of vulnerabilities with the same effectiveness, they would have done it already.


People really lack imagination. The point here is that a dedicated attacker with a good harness and really cheap models can run the attack regardless. It's like portscan/url search attacks. They could run all of these against all codebases and clients. However, on the flip side, this also means we could run cheap models against every PR made, and do a thorough red-team security review.

None of this requires Mythos. If anything, we just need Opus 4.5+ that is not lobotomised.


That is a point. It might even be true. But showing a small model an example of vulnerable code and asking to confirm that it is vulnerable code isn't evidence for that point!

No, it is evidence for that point. You could just rattle off every possible vulnerability and have the cheap model scan for it in the harness through a loop.

Note that I say cheap, not small, because small models may lack the reasoning needed, but some models are cheap enough while retaining enough reasoning (à la Sonnet 3.7+).


They could write a post demonstrating that you can do that and surface the same bugs in the same codebases.

It would be way more informative than this one, which didn't do that.


That's not what they did.

It’s like not differentiating between solving and verifying.

“PKI is easy to break if someone gives us the prime factors to start with!”


>If you cut out the vulnerable code from Heartbleed and just put it in front of a C programmer, they will immediately flag it. It's obvious.

Genuinely curious - why couldn't a static analyzer also find the issue then? Those have been worked on for 50+ years at this point, maybe longer.


So it follows that the most efficient time to discover bugs is when you first write them.

... or maybe when you see them triggered or exploited reproducibly, then the underlying bug will also be pretty easy to discover. But at that point, it's already too late. :)

I really like your original point, I never thought about it this way.


Off-topic but is there an effort to test AI models against code versions with major historic bugs (Heartbleed, GHOST, log4j, etc)? Seems like the kind of thing that would be relevant in security-related AI benchmarks.

The point of contention is whether Mythos is the product of its intelligence or its harness; results like this, and other similar testimonies, call the too-dangerous-to-release marketing into question, and for good reason, too, because it is powerful marketing. Aisle merely says the intelligence is there in the small models. I say it's already clear that competent defenders could viably mimic, or perhaps even eclipse, what Mythos does, by (a) making a better harness, (b) simply spending more on batch jobs, bootstrapping, caching better, etc. You may not be doing this yourself, but you probably should.

Aisle and Anthropic are literally talking about two different problem spaces.
