Hacker News | girvo's comments

Yeah these people have no idea what they are talking about, you’re correct.

GLM-5 is surprisingly good to be fair. Punches well above its weight IMO

All the time now… it’s wild how little usage you get with Opus on the Pro sub now haha

Exactly. 3.6 plus, in the exact same coding agent harness, is notably worse in all of my testing compared to 3.5 plus.

The former gets stuck in ridiculous thought loops on the exact same tasks I’m testing. Fascinating really, I expected more for some reason.


> I also expect that there's a lot of feeling busy while not actually moving much faster.

Hey don’t say that too loudly, you’ll spook people.

With less snark, this is absolutely true for a lot of the use I’m seeing. It’s notably faster if you’re doing greenfield from scratch work though.


> I still want to _code_ not just vibe my way through tickets.

You and I want this. My EMs and HoEs and execs do not. I weep for the future of our industry.


> I'm not sure why more people aren't jumping on it

Simple: most of the people you’re talking to aren’t setting these things up themselves. They’re running off-the-shelf software and setups and calling it a day. Most of them aren’t working with custom harnesses, or even tweaking temperature or prompt templates.
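To make that concrete: a minimal sketch of the kind of knob an off-the-shelf setup never exposes, building a chat-completion payload with explicit sampling settings for an OpenAI-compatible local server (llama.cpp, vLLM, etc.). The model name, system prompt, and defaults here are all illustrative assumptions, not anyone's actual harness.

```python
def build_request(user_msg: str,
                  system_prompt: str = "You are a concise coding assistant.",
                  temperature: float = 0.2,
                  top_p: float = 0.9) -> dict:
    """Assemble a chat-completion payload with explicit sampling settings.

    Off-the-shelf tools usually hard-code temperature and the prompt
    template; a custom harness makes them explicit and tunable.
    """
    return {
        "model": "local-model",      # placeholder; depends on your server
        "temperature": temperature,  # lower = more deterministic output
        "top_p": top_p,              # nucleus-sampling cutoff
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_msg},
        ],
    }

payload = build_request("Explain Python decorators in one paragraph.",
                        temperature=0.0)
```

POST this dict as JSON to your server's `/v1/chat/completions` endpoint; the point is only that temperature and the system/template text are parameters you control, not fixed defaults.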


The value prop for the Nvidia one is simple: playing with CUDA with enough RAM at okay-enough speeds, then running your actual workload on a server running the same (well, not really, lol, Blackwell does not always mean Blackwell…) architecture.

They’re fine-tuning and teaching boxes, not inference boxes. IMO anyway; that’s what mine is for.


They really are. Benchmaxxing is real… but the Qwen 3.5 series of models is still very impressive. I’m looking forward to trying out Gemma.

> then code quality just doesn’t really matter so much in the age of AI

Except at scale it really does, because garbage in, garbage out. The crappier the code you feed the current models, and the more confusing and leaky its abstractions, the more bugs the AI will generate.

