Hacker News | past | comments | ask | show | jobs | submit | comboy's comments

OpenX is becoming a bit like that Hindu symbol associated with well-being...

Claude getting clawed.

This is exactly what I thought!

> a state sponsored threat actor

your CPU, your OS, the CPUs and firmware on your motherboard chips, ethernet, wifi, HDDs (btw, did you know your SIM card has a JVM?), your browser, all the networking equipment in between, BGP, all the root certs... and I'm just scratching the surface

the ballpark is on another planet


Fascinating how HN is torn about vibe coding still. Everybody pretty much agrees that it works for some use cases, yet there is a flamewar (I mean, cultured, HN-type one) every time. People seem to be more comfortable in a binary mindset.

It’s just how discussion on the internet works, for basically anything at all worth discussing. It’s exhausting, but I can hardly blame HN specifically.

If you enjoy the flamewar, check out /r/SelfHosted, which has been losing its mind over the last few months. The heavy, heavy majority of that community is somehow incredibly anti-AI, despite the fact that the previous "spammy" posts (before AI-assisted projects) were all "what is wrong with my docker compose file"?

I had to unsub from that subreddit when I saw a cool new application and the top comments were just dogging it for the signs of Claude Code (claude.md).

This is a subreddit about selfhosting things others built for free. Honestly, often for piracy purposes. It's insane how entitled people have become.


Absolutely. Really gross to see. Heavy majority of the complaints boil down to “I can’t blindly trust everything posted here now?” - as if they could before?? So entitled.

Also annoys me that all of the suggestions on how to handle filtering AI demonstrate a clear lack of understanding around how agentic coding works. Like if you can’t be bothered to understand why “ban any project that uses AI” is not possible, the entire subreddit is probably above your pay grade…


The problem is that every day someone "creates" a "new" ffmpeg GUI or similar. There are already a million "ffmpeg GUIs", many of which existed before the advent of AI.

We don't need a thousand copies of a tool which is practically useless, especially when I could have just prompted an LLM for an ffmpeg command to convert to a randomfile.emk3ukz file or whatever. The spam was getting unreal.


VIM vs Emacs vs IDE vs..., Tabs vs Spaces, Procedural vs OOP vs Functional.

We love a good holy war for sure.

The nuance is lost, and the conversations we should be having never happen (requirements, hiring/skills, developer experience).


> Everybody pretty much agrees that it works for some use cases

That isn't true, which is the exact reason why people have a binary mindset. More than once on Hacker News I've had people accuse me of being an AI booster just because I said I had success with agents and they did not.


For my part at least, I get the most riled up against the binary thinkers!

This. A lot of people on HN act as if you can only write code manually (almost, generators and snippets are allowed, because we are used to them) or vibe code the whole project through a WhatsApp conversation. As if there was nothing in between and the same approach should work for all kinds of projects.

Personally I use coding agents for the boring parts (I really don't enjoy putting the same piece of string into 20 different classes just to register a new component) and they work quite well. I'm going to use them for the foreseeable future, because they make coding much more enjoyable for me. On the other hand, I don't have an OpenClaw box burning billions of tokens weekly for me, because I usually don't have ideas that could be clearly specified.


> That is extremely stupid. What does that ban get you?

Confidence in firing coders, I presume..


They are hiring "architects", or do we call them analysts? The impression is we're going back to analysts drawing those old school UML-like diagrams etc. Also, a lot of the devs are on the brink of just quitting, because it's "not programming" anymore. So, not only will you still need devs, or people massaging those specs, you'll also need enough "product" people to keep that engine fed! If your management isn't lazy, I can see the need for a growing people count continuing to rise within such companies. That doesn't mean the work will be ...satisfying for devs.

Perhaps we should start making LLM-only open source projects (clearly marked as such). Created by LLMs, open for LLM contributions, with some clearly defined protocols. I'd be interested to see where it would go. I imagine it could start as a project with a simple instruction file to include in your project, trying to find abstractions which could be useful to others as a library, and looking for specific kinds of libraries. Some people want to help others even if they are effectively sharing money+time rather than their skill.

Although I'm afraid a big part of these LLM contributions may be people trying to build their portfolio. Being a known project contributor sounds better than having some LLM-generated code under your name.


OpenClaw https://github.com/openclaw/openclaw is effectively that - 1,237 contributors, 19,999 commits and the first commit was only back in November.

Simon, as co-creator of Django, what's your take on this story?

I think this line says everything:

> If you do not understand the ticket, if you do not understand the solution, or if you do not understand the feedback on your PR, then your use of LLM is hurting Django as a whole.


I love it. Sounds like good advice for submitting a PR to any project!

Why does it matter if I understand the ticket and solution? The LLM writes the code, not me. If you want to check the LLM's understanding, I'll be happy to copy and paste your gatekeeping questions to it.

Hey I thought you were a proponent of "no one needs to look at the code" ? dark factory, etc etc.


Just because I write about the dark factory stuff doesn't mean I'm a "proponent" of it. I think it's interesting and there's a lot we can learn from what they are trying, but I'm not yet convinced it's the right way to produce software.

The linked article makes a very good argument for why pasting the output of your LLM into a Django PR isn't valuable.

The simplest version: if that's all you are doing, why should the maintainers spend time considering your contribution as opposed to prompting the models themselves?


> if that's all you are doing, why should the maintainers spend time considering your contribution as opposed to prompting the models themselves?

Plenty of reasons:

- Maybe the maintainers don't have enough credits to run the LLM themselves
- Maybe the maintainers don't value fixing the issue, which is why it sits in the issue tracker
- Maybe the LLM user has a different model or harness that produces different outcomes
- Maybe the LLM user runs the model over and over and gets lucky

Why reject a working solution?


Again, "if that's all you are doing".

You can contribute code that an LLM helped with if you do the extra work to review, verify and explain that code.

Don't put all of that burden on the maintainers who have to review it.


LLMs are as capable of "review, verify and explain" as they are of writing code.

Please do, that would be amazing.

You'd have to manage the contributions, or get your AI bots to manage them or something, but it would be great to have honeypots like this to attract all the low effort LLM slop.


I like the idea that we could quarantine away LLM contributions like how Twitter quarantines the worst of social media away from Mastodon etc.

Moltbook meets GitHub? Sounds like a billion dollar valuation (sarcasm tag deliberately omitted).

Actually, I'd want to see that. All the AI companies keep saying it will take our jobs, human developers won't be necessary.

Well let them put their money where their mouth is. Let's see what happens, see what the agents create or fail to create. See if we end up with a new OS, kernel all the way up to desktop environment.


Me too, the problem is that it's hard to come up with tools that are needed but not made yet, and we don't want to end up with https://malus.sh/index.html

Enshittification (by Cory Doctorow) is a shitty book, but it does explain how that dynamic works.

That's your worldview. Crocker's rules are that you don't have to take the receiver's feelings into account; you just communicate efficiently.

No, abiding by Crocker's rules isn't that you don't take into account other people's feelings. It's that other people don't have to take into account your feelings.

Applying them to only one side of the conversation doesn't seem practical.

It perfectly is.

From https://www.lesswrong.com/w/crockers-rules:

> Note that Crocker's Rules does not mean one is authorized to insult people; it means that other people don't have to worry about whether they are insulting you. Crocker's Rules are a discipline, not a privilege. Furthermore, taking advantage of Crocker's Rules does not imply reciprocity. How could it? Crocker's Rules are something you do for yourself, to maximize information received - not something you grit your teeth over and do as a favor.


Alright, my original comment was wrong (as was the parent). I still stand by my opinion that it is not practical though.

Not sure if it's common knowledge, but I learned not that long ago that you can do "/compact your instructions here". If you just say what you are working on or what to keep explicitly, it's much less painful.
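To illustrate (the /compact command itself is Claude Code's; the instruction wording below is just a made-up example, not anything from the docs):

```
/compact I'm refactoring the payment module; keep the list of files we
touched and the failing test output, drop everything else.
```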

In general, LLMs for some reason are really bad at designing prompts for themselves. I tested this heavily on some data where there was a clear optimization function and the ability to evaluate the results, and I easily beat Opus every time with my chaotic, typo-filled prompts vs its methodical ones, whether it was writing instructions for itself or for other LLMs.


You can also put guidance for when to compact and with what instructions into Claude.md. The model itself can run /compact, and while I try to remember to use it manually, I find it useful to have “If I ask for a totally different task and the current context won’t be useful, run /compact with a short summary of the new focus”
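As a sketch of what that guidance might look like in CLAUDE.md (the exact phrasing and section name are my own assumptions, not anything official):

```markdown
## Context management

- If I ask for a totally different task and the current context will not be
  useful, run /compact with a short summary of the new focus.
- Before compacting, preserve the current task description and the paths of
  any files we have been working with.
```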

I often wonder if I'm missing something, but shouldn't we be able to edit the context manually???

In that way we could erase prompts and responses that didn't yield anything useful or derailed the model.

Why can't we do that?


Your SIM card is an entire computer.

It runs Java!
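For the curious: SIM cards typically run Java Card, a tiny subset of Java (no garbage collector, no threads, 16-bit arithmetic). A minimal applet sketch, which compiles only against the Java Card SDK, not plain Java, and whose behavior here is purely illustrative:

```java
import javacard.framework.APDU;
import javacard.framework.Applet;
import javacard.framework.ISO7816;
import javacard.framework.ISOException;

// Minimal Java Card applet: accepts SELECT (replying with SW 0x9000)
// and rejects every other instruction.
public class HelloSim extends Applet {
    public static void install(byte[] buf, short off, byte len) {
        new HelloSim().register();
    }

    public void process(APDU apdu) {
        if (selectingApplet()) {
            return; // SELECT succeeds
        }
        ISOException.throwIt(ISO7816.SW_INS_NOT_SUPPORTED);
    }
}
```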
