Hacker News new | past | comments | ask | show | jobs | submit | SubiculumCode's comments | login

As an aside: I personally have no use for Unicode in bash commands, and the potential for sneaky maliciousness worries me. Does anyone know of a way to automatically strip (e.g. with tr) all non-ASCII characters when pasting into a terminal?
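One possible approach, since the comment mentions tr: a minimal sketch that keeps only printable ASCII plus tab/newline/CR, assuming POSIX tr and that byte-wise filtering is acceptable (the function name `ascii_only` is made up for illustration).

```shell
# Keep only printable ASCII plus TAB/LF/CR; drop everything else.
# Octal ranges: \11=TAB, \12=LF, \15=CR, \40-\176 = space..'~'.
# Note: tr operates on bytes, so multi-byte UTF-8 sequences are
# removed wholesale (LC_ALL=C avoids multibyte surprises).
ascii_only() {
  LC_ALL=C tr -cd '\11\12\15\40-\176'
}

# Example: the Cyrillic 'е' (\320\265) is stripped out entirely.
printf 'rm -rf /tmp/t\320\265st\n' | ascii_only   # prints: rm -rf /tmp/tst
```

You could pipe your clipboard tool (e.g. `xclip -o` or `pbpaste`) through this before pasting; many terminals also support bracketed paste, which is a complementary mitigation.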

That's a fairly common human error as well, btw: source attribution failures.

Yes. There is a ton of Russian propaganda against the Catholic Church claiming the current and previous popes are "anti-popes" and spawns of Satan, and all that. It pushes exactly this progression, from the Catholic Church to the Russian Orthodox Church, which is under Putin's thumb.

:) Coffee is good

Yes, I can see this being non-releasable for national security reasons in the geopolitical competition with China: securing our software against threats while holding immense infiltration capability against enemy cybersecurity targets. Not to mention the ability to implant new, even more subtle vulnerabilities into open software, not generally detectable by current AI, to enable covert action.

Anthropic and OpenAI have very different cultures and ethos. Point to other times when Anthropic has gone the way of cheap marketing tricks. Now look at OpenAI. Not even close.

Anthropic has done plenty of cheap marketing tricks of late; see their recent non-functional C compiler, which relied on a harness built around GCC's entire test suite.

It is functional. You can try it yourself or find third-party tests of it, even. Why do you think that it's a "cheap marketing trick" to test it on the GCC test suites?

Not surprising, given that they don't even know why claude-code works as before or doesn't [1]; i.e., there is no known theory of operation. That explains why they are afraid of it.

[1] https://qht.co/item?id=47660925


I think Boris will come and say there is no issue with claude code.

It is easier to destroy than it is to protect or fix, as a general rule of the universe. I would not feel so confident about the speed of the testing loop keeping things in check.

It is not scaremongering.

Equating the ability to make weapons with something to be scared about is scaremongering.

That can simultaneously be true and still be the best of bad options (excluding destroying the model altogether). These models may prove quite dangerous. That they did this instead of selling their services to every company at a huge premium says a lot about Anthropic's culture.

Anthropic has behaved the least like this of the AI companies.

They made a claim that 100% of code would be AI generated in a year, over a year ago.

They were right, it's hit 100% at a number of large tech companies. (They missed their initial prediction of 90% 6 months ago, because the models then available publicly weren't capable enough.)

Please tell me those companies so I can find alternatives. I'm using AI every day and there's no way I would trust it do that.

The transition is pretty complete at e.g. Google and Meta, IIUC. Definitely whoever builds the AI tools you're using every day isn't writing code by hand.

I'm literally looking at Claude in the other window telling me that the bug we're working on is a "Clear-cut case", telling me to remove a "raise if this is called on this object" guard from a method, because "the data is frozen at that point" and is effectively proposing a solution that both completely misses the point (we should be calling a different method that's safe) AND potentially mutates the frozen data.

We're 41k tokens in, we have an .md file that describes the situation and Claude has just edited the .md file with a couple of lines describing the purpose of the guards.

I don't understand, are other people working with a different Opus 4.6 than I am?


No, that matches my experience pretty well. Yesterday Claude implemented some functionality I asked for in entirely the wrong component, and then did it again after I clarified. If I'd been coding on my own, the clock time to a complete solution would probably have been lower - but then I would have had to be coding, instead of reviewing other people's PRs.

A careful observer would note from when I'm posting this, of course, that this is perhaps not the only thing I get up to while Claude is busy. But I really do review PRs in a much more timely manner now. (There's people who insist that there's no need to review Claude-generated code, and to be frank I think they're the same people who used to insist that their 2000 line PRs should be reviewed and merged within a day.)


I really just don't believe it. I have not met anyone in tech who writes zero code now. The idea that no one at Google writes any code is such a huge claim that it requires extraordinary evidence, and none ever gets presented.

I'm surprised to hear that. One of us is in a bubble, and I'm genuinely not sure who. I have not met anyone in tech (including multiple people at Google) who does still write code. I've been recreationally interested in AI for a long time, which is a potential source of skew I suppose, but I do not and most people in my circles do not work on anything directly related to AI.

Statistically, knowing multiple people at Google is, IMO, a pretty good sign you're in a bubble. Unless you know a few thousand other software developers.

An entirely fair point that I really ought to keep in mind more often. Thanks for keeping me honest.

Can confirm that basically no one at Google or Meta hand-writes code outside of extremely niche projects.

Anecdotally, my colleagues and I haven't written a substantial line of code since January, and this isn't a Mag7 company; I would be very surprised if the Mag7 were writing anything by hand unless it's a custom DSL.

So why aren’t they laying people off and pumping the extra money into research efforts associated with LLMs? Lmao.

They should all cut down their labour input right now if what you claim is true.


At many of the best tech companies, the conventional wisdom has always been that there's a huge backlog of stuff to be done. They don't want to deliver 100% of their roadmap with 50% of their employees, they want to deliver 200% of their roadmap with 100% of their employees. (And the speedup is not as high as these numbers imply for many kinds of performance, security, or correctness-critical software.)

Some companies like Block, Oracle, and Atlassian have indeed been laying people off.


Lmao man this is absolute nonsense.

Google has done nothing but destroy value with many of its ‘bets’. Your roadmap stuff is irrelevant - if you don’t have value creating projects in the pipeline and/or labour is augmented you should be laying off - period. Sundar’s job is to maximise the stock price.

So once again - nonsense. Now stop spreading crap that clearly fills people with fear. I can tell you have no understanding of corporate finance or of how the management of tech firms actually thinks these things through.


I'm spreading what people involved in management of tech firms have told me. Perhaps they were lying, but to me it seems consistent with what I observe in the news and in my personal capacity.

I'm also not quite sure your alternate theory is self-consistent. If Google has been frequently destroying value, and companies invariably lay people off when their projects aren't producing value, doesn't that mean they should have already been laying people off?


Have you considered that some companies want to grow instead of laying people off? No one at Anthropic writes code; they manage 20 Claude Code SWEs.

That was a prediction. It was not a claim about their current capabilities. If that is the one you reach for, then I feel my point has been made.
