I love driving a manual transmission. But I also understood why it was so hard for me to find a new Jeep Wrangler with a manual transmission a few years ago.
The automatic transmission gives us more dexterity for... what exactly? Fiddling with the dash, reaching for something in the back seat, texting? The best-case human has much more control, but the average case seems worse off.
I'd say most automatics give you less direct control over the engine. I always feel like I'm having to tease a gear shift out of the car when I'm driving an automatic. Until very recently, the typical car couldn't see the traffic light changing or the hills ahead so it couldn't possibly change gears as effectively as a competent driver.
I think of myself as very practical - I drive a manual, I fix my own cars, I do my own house projects, I cook my own meals.
Which is part of the reason these anti-AI screeds fall on deaf ears for me. My generation has willingly abandoned all of these legitimately useful hard skills. But there's also nothing preventing you from picking and choosing what you care about.
I'm not actually against manual coding. I just think people need to be honest about why it's valuable.
I don't work on my own car because I believe that everyone should fix their own cars. But I think enough people should be knowledgeable and have these skills in society - if for no other reason than to keep mechanics and automakers and dealerships honest. I am not personally upset if you work on your own used car or take it to your dealership.
I am against the idea that everyone should somehow be against AI coding.
"I'm not going to use this technology that obviously enhances my productivity because <insert emotional subjective reasoning that no customer would ever care about here>."
I think a lot of people have forgotten why we actually get paid to write code. The person who wants an automated billing system doesn't care if you hand-typed it or not, or if the CSS that would have taken 2 hours to write took 8 seconds via an AI plus 60 seconds of you tweaking a border you didn't like. They just want their billing system. And if you are the person who takes 20x longer to build it, you're going to quickly get outcompeted. Sorry.
The customer doesn’t give a fuck how long a billing system took to make; they only care that it works correctly.
A billing system only truly gets built once, then possibly maintained in perpetuity. This makes the advantage of building it 20x faster pointless. If AI builds it in a day, will it matter 5 years from now whether that billing system was instead built by hand in 20 days? No.
The speed advantage of AI only comes into play when you have a lot of code to crank out continuously.
Do you have a need to constantly build bespoke billing systems at a rate of 1 per day? Probably not. So who cares. Take your little AI grift charging $1000/month somewhere else. It’s not needed.
Every billing system in use is constantly maintained with new features, bug fixes and the like. The system of 20 years ago would apply the wrong tax laws today. The people asking for the new feature today care about how easy those are to add.
I think adding new features is exactly the sort of place where AI is terrible, at least after you do it for a while. I think it's going to have a tendency to regenerate the whole function(s), but it's not deterministic. Plus, as others have said, the code isn't clean. So you're going to get accretions of messy code, the actual implementation of which will change around each time it gets generated. Anything not clearly specified is apt to get changed, which will probably cause regressions. I had AI write some graphs in D3.js recently, and as I asked for different things, the colors would change, how (if) the font sizes were specified would change, all kinds of minor things. I didn't care, because I modified the output by hand, and it was short. But this is not the sort of behavior I want my code output to have.
I think after a while the accretions are going to get slow, and probably unmaintainable even for AI. And by that time, the code will be completely unreadable. It will probably make the code I've had to clean up, written by people who should not be developers, look fairly straightforward in comparison.
Skill issue. If you just one-shot everything, sure, you'll get a messy codebase. But if you just manage it like a talented junior dev, review the code, provide feedback, and iterate, you get very clean code. Minus the arguing you get from some OCD moron human who is attached to their weird line length formatting quirk.
The customer cares how much it costs. And how much it costs is proportionate to how much time it takes to build. You’re conveniently ignoring market and price dynamics
They care how much you charge them to build it, and they care how fast you deliver it to them. The idea that they don't makes me question whether or not you have ever been self-employed in your entire life. This kind of thinking makes me think you have always been a drone getting paid by a boss.
The person they are responding to dictated an authoritative framing that isn’t true.
I know people have emotional responses to this, but if you think people aren’t effectively using agents to ship code in lots of domains, including existing legacy code bases, you are incorrect.
Do we know exactly how to do that well? Of course not; we still fruitlessly argue about how humans should write software. But there is a growing body of techniques for agent-first development, and a lot of those techniques are naturally converging because they work.
The views I see often shared here are typical of those in the trenches of the tech industry: conservative.
I get it; I do. It's rapidly challenging the paradigm that we've set up over the years in a way that's incredibly jarring, but this is going to be our new reality or you're going to be left behind in MOST industries; highly regulated industries are a different beast.
So, instead of dismissing this out of hand, figure out the best ways to integrate agents into your and your teams'/companies' workstreams. It will accelerate the work and change your role from what it is today to something different; something that takes time and experience to work with.
> I get it; I do. It's rapidly challenging the paradigm that we've set up over the years in a way that's incredibly jarring,
But that's not the argument. The argument is that these tools produce lower-quality output, and checking that output often takes more time than doing the work oneself. It's not that "we're conservative and afraid of change"; heck, you're talking to a crowd that used to celebrate a new JS framework every week!
There is a push to accept lower quality and to treat it as a new normal, and people who appreciate high-quality architecture and code express their concern.
"Find any inconsistencies that should be addressed in this codebase according to DRY and related best practices"
This doesn't hurt to try and will give valuable and detailed feedback much more quickly than even an experienced developer seeing the project for the first time.
These kinds of instructions are the main added value of LLMs, and I use them every day. Even though 30-60% of the output is wrong or irrelevant, the rest is helpful enough. After the human reviews it, the overall quality of the codebase increases, not decreases. This is on the opposite end of the spectrum from agentic coding, though.
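The function names and the discount rule below are invented for illustration, not taken from any real codebase: a minimal sketch of the kind of inconsistency a "find DRY violations" review prompt tends to surface, where duplicated logic has quietly drifted apart and become a latent bug rather than just a style issue.

```python
# Two handlers re-implement the same 10% member-discount rule.
# The duplication has drifted: one copy rounds to cents, the other doesn't,
# so invoices and quotes can disagree for the same inputs.

def invoice_total(subtotal: float, is_member: bool) -> float:
    if is_member:
        subtotal = subtotal * 0.9
    return round(subtotal, 2)  # rounds to cents

def quote_total(subtotal: float, is_member: bool) -> float:
    if is_member:
        subtotal = subtotal * 0.9
    return subtotal  # same rule, duplicated -- but this copy forgot to round

# The DRY fix a reviewer (human or LLM) would suggest: one shared rule.
def apply_member_discount(subtotal: float, is_member: bool) -> float:
    return round(subtotal * 0.9, 2) if is_member else round(subtotal, 2)

# invoice_total(19.99, True) and quote_total(19.99, True) now differ,
# which is exactly the kind of finding worth a human review pass.
```

A human still has to judge which copy encodes the intended behavior; the prompt only makes the divergence visible quickly.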
I've been using LLMs to augment development since early December 2023, and I've expanded the scope and complexity of the changes as the models improved. Before beads existed, I used a folder of markdown files for externalized memory.
Just because you were late to the party doesn't mean all of us were.
> Just because you were late to the party doesn't mean all of us were.
It wasn't a party I liked back in 2023. I'm just repeating the same stuff I see said over and over again here, but there has been a step change with Opus 4.5.
You can still see it in action now, because the other models are still where Opus was a while ago. I recently needed to make a small change to a script I was using. It is a tiny (50-line) script written with the help of AIs ages ago, but it was subtly wrong in so many ways. It's now clear that neither the AIs (I used several and cross-checked) nor I had a clue what we were dealing with. The current "seems to work" version was created after much blood was spilt over misunderstandings, exposing bugs that had to be fixed.
I asked Claude 4.6 to fix yet another misunderstanding, and the result was a patch changing the minimum number of lines to get the job done. Just reviewing such a surgical modification was far easier than doing it myself.
I gave exactly the same prompt to Gemini. The result was a wholesale rearrangement of the code. Maybe it was good, but the effort to verify that was far larger than just doing it myself. It was a very 2023 experience.
The usual 2023 experience for me was asking an AI to write some greenfield code and getting a result that looked like someone had changed the variable names in something they found on the web after a brief search for code that might do a similar job. If you got lucky, it might have found something that was indeed very similar, but in my case that was rare. Asking it to modify code unlike anything it had seen before was like asking someone to poke your eyes with a stick.
As I said, some of the organisers of this style of party seem to have gotten their act together, so now it is well worth joining their parties. But this is a newish development.
If you hired a person six months ago and in that time they'd produced a ton of useful code for your product, wouldn't you say with authoritative framing that their hiring was a good decision?
I would, but I haven’t seen that. What I’ve seen is a lot of people setting up cool agent workflows which feel very productive, but aren’t producing coherent work.
This may be a result of me using the tools poorly, or more likely of evaluating merits which matter less than I think. But I don’t think we can tell yet; people only just invented these agent workflows.
Note that the situation was not that different before LLMs. I’ve seen PMs with all the tickets set up, engineers making PRs with reviews, etc., and no progress being made on the product. The process can be emulated without substantive work.
If there is one thing I have seen, it is that there is a subset of intellectual people who will still be averse to learning new tools, will cling to ideological beliefs (I feel this, though; watching programming as you know it die, in a way, kinda makes you not want to follow it), and would prefer to just be lazy and not properly dogfood and learn their new tooling.
I'm seeing amazing results with agents too, when they're provided a well-formed knowledge base and directed through each piece of work like it's a sprint. Review and iron out the scope requirements and API surface/contract; have agents create multi-phase implementation plans and technical specifications in a shared dev directory, keep high-quality changelogs, and document future considerations and any bugs/issues found that can be deferred. Every phase gets a human code review along with Gemini, which is great at catching drift from spec and bugs in less obvious places.
While I'm sure an enterprise codebase could still be an issue and would require even more direction (and I won't let Opus touch Java; it codes like an enterprise Java greybeard who loves to create an interface/factory for everything), I think that's still just a tooling issue.
I'm not in the super-pro-AI camp, but I've followed its development and used it throughout, and for the first time I am actually amazed and bothered, and convinced that if people don't embrace these tools, they will be left behind. No, they don't 10-100x a junior dev, but if someone has proper domain knowledge to direct the agent and does dual research with it to iron things out, with the human actually understanding the problem space, 2-5x seems quite reasonable currently when driven by a capable developer. But this just moves the work to review and documentation maintenance/crafting, which has its own fatigue and is less rewarding for a programmer's mind that loves to solve challenges and gets dopamine from it.
But given how many people are averse... I don't think anyone who embraces it is going to have job security issues and be replaced, but there are many capable engineers who might, due to their own reservations. I'm amazed by how many intelligent and capable people treat LLMs/agents like a political straw man; there is no reasoning with them. They say vibe coding sucks (it does, for anything more than a small throwaway that won't be maintained), yet their example of agents/LLMs not working is that they can't just take a prompt, produce the best code ever, and automatically manifest the knowledge needed to work on their codebase. You still need to put in effort and learn to actually perform the engineering with the tools, but if they don't take a paragraph with no AGENTS.md and turn it into a feature or bug fix, the tools are no good to them. Yeah, they will get distracted and fuck up, just like 9/10 developers would if you threw them into the same situation and told them to get to work with no knowledge of the codebase or domain and have their PR in by noon.
Garry is a good person and smearing people over their church is a disgusting thing to do.
I am now going to sit here and listen to this talk because I guarantee it's not saying what you think it's saying. And I don't want to listen to it. It's not a topic that interests me. But I guarantee you are completely distorting what was stated in that topic for maximum effect, entirely motivated by left-wing politics.
I imagine you are not done listening to them yet, as the total is over 8 hours. But my research is showing that the OPs are largely correct. Thiel gave several talks to his church where he did in fact say these things.
In your post you state you are ‘not sure’, but also that the poster is ‘wrong’.
> My thesis is that in the 17th, 18th century, the antichrist would have been a Dr Strangelove, a scientist who did all this sort of evil crazy science. In the 21st century, the antichrist is a luddite who wants to stop all science. It’s someone like Greta or Eliezer.
Sure, he eventually goes on to say stuff like..
> One of the ways these things always get reported is, I denounce Greta as an antichrist. And I want to be very clear: Greta is, I mean she’s maybe sort of a type or a shadow of an antichrist of a sort that would be tempting. But I don’t want to flatter her too much. So with Greta, you shouldn’t take her as the antichrist for sure. With AOC, you can choose whether or not you want to believe this disclaimer that I just gave
But I don’t think this is the win that you might think it is. The dude is a loon.
You are wrong. Thiel's talks are as insane as we're saying. Also, it's not "disgusting" to tar people for belonging to a known toxic community of lunatics. It's completely rational. Cut the fake outrage. Idiotic religious beliefs don't have the same sacred value to most of us as they do to you.
Going by your comment history any criticism of Thiel and the administration is just left wing politics, but hard to hear you over the sound of drowning yourself with kool-aid.
Weird that you seem to support this administration that Thiel is very much associated with but find it offensive when there's a very clear association between Thiel and Garry. He's just going to this specific church to pray or whatever? Paying no mind to the anti-christ talk happening next door. I do hope this is the last breaths of religion in the western world, it needs to die.
I've been on HN for well over 10 years. I literally volunteered for Obama's 2008 and 2012 campaigns, and my comments in that time period clearly show my politics. I taught free web scraping workshops at the Center for American Progress to journalists back then. None of my policy preferences have changed. What's changed is the frothing at the mouth radicalism and moralizing of the team I used to support.
I'm not religious, and hate religious radicals, but ideologues act identically, just with secular idols. I didn't see that until I watched what the leftist ideologies did to the quality of life in two places I used to live in:
SF and Boulder.
I'm a 2012 Democrat, which makes me a fascist to a 2026 Democrat.
You have made a false claim. What is your evidence that I am religious, let alone a religious radical?
I mean, if we're going to make accusations based on perceived political tribal allegiances, I can say to you with equal certainty that you're a neo-Marxist.
Of course I don't know that. And you don't know anything about me.
Reminder that HN is SV centered and therefore everyone and everything is oriented around tribal group think.
Meta was funded by Thiel, yet most of the people in this thread use their products.
The CCP has technology that dwarfs Palantir, but a ton of people in this thread use TikTok, because they don't recognize fascism unless it's perpetrated by somebody that looks like the Nazis they see in movies.
I grew up around brainwashed religious zealots. I hated it. Everything was this moralistic condemnation and guilt-by-association game, played by people who had absolutely no sense of perspective and zero interactions outside of their groupthink circles. Constantly condemning people they don't know, have never met, and don't understand.
I've been on HN for 13 years now. It looks more and more like that every day.
This comment will be down voted without any substantive critique other than "I guess you're a fascist too."
Meanwhile, Discord will not have the slightest tiny drop in user numbers, because nobody outside of this moralistic circle jerk cares.
> Reminder that HN is SV centered and therefore everyone and everything is oriented around tribal group think.
Don't such absolute statements (everyone, everything) remind you of religion as well?
> Meta was funded by Thiel, yet most of the people in this thread use their products.
I imagine it might be as true as:
- most people in this thread also using Discord, despite criticizing it and
- most people using Meta criticize its products.
That is, you can use something and criticize it, and it probably happens both with Discord and e.g. Facebook.
> The CCP[…]
I'm happy to see in the political threads there's very often in the very least a significant presence of critique against China and maybe even overwhelming the defenders of the regime.
> I grew up around brainwashed religious zealots. […] moralistic condemnation […] [HN] looks more and more like that every day.
I think it's good religious zealots don't have the monopoly on moralistic condemnation. Just because A is bad, and B has feature x just like A, doesn't mean the feature x is bad.
> Meanwhile, Discord will not have the slightest tiny drop in user numbers, because nobody outside of this moralistic circle jerk cares.
Discord is not going to delete users, and few people care to request their account to be deleted. If Discord asked me to provide ID, I'd probably at least try to resist by not using it and maybe eventually succumb by providing a fake ID - but as far as I know, Discord will just set my account to a teenager mode, so instead of speaking about a drop in user numbers, we should speak about a drop of activity in adult interactions (or interactions/activity in general) on Discord.
Spot on. I wish this site was never associated with the term hacker because under the thin veneer of people doing cool things with tech, there is today nothing more authoritarian, narrow minded, overconfident and establishment than SV tech culture.
> Constantly condemning people they don't know and have never met and don't understand.
> therefore everyone and everything is oriented around tribal group think
You can be more convincing if you don't group everyone into one bucket and throw insults at it.
A reader can pull your claims out - meta bad, thiel bad, ccp bad, sheeple bad - but there isn't anything substantive there (WHY are these bad; it's all ad hominem so far) and we have to sift through a bunch of insults in order to do it ( 1. Tribal group thinkers. 2. Can't recognize fascism. 3. Looking like religious zealots blindly condemning people we don't know. 4. Going to downvote without thinking or participating.)
Your comment looks a LOT like insult #3 up there, with some whining thrown in on top.
If you want a substantive conversation or debate about the different facets of data privacy then lay the groundwork with some good faith place to start. If you instead just post mini screeds pre-insulting everyone then lamenting that nobody engages then nothing is going to change for you.
I just wish people would remember how awful and unprofessional and lazy most "journalists" are in 2026.
It's a slop job now.
Ars Technica, a supposedly reputable institution, has no editorial review. No checks. Just a lazy slop cannon journalist prompting an LLM to research and write articles for her.
Ask yourself if you think it's much different at other publications.
I work with the journalists at a local (state-wide) public media organization. It's night and day different from what is described at ars. These are people who are paid a third (or less) of what a sales engineer at meta makes. We have editorial review and ban LLMs for any editorial work except maybe alt-text if I can convince them to use it. They're over-worked, underpaid, and doing what very few people here (including me) have the dedication to do. But hey, if people didn't hate journalists they wouldn't be doing their job.
The most active HNers are just extremely negative on AI. I understand the impulse (you spend years honing your craft, and then something comes along and automates major portions of it) but it's driven by emotion and ego-defense and those engaged in it simply don't recognize what's motivating them. Their ego-defense is actually self-fulfilling, because they don't even try to properly learn how to leverage LLMs for coding so they give it a huge task they want it to fail on, don't properly break it into tasks, and then say "i told you it sucks" when it fails to one shot it.
Even this response shows why the most active ones are outwardly negative on AI.
I use AI a ton, but there are just way too many grifters right now, and their favorite refrain is to dismiss any amount of negativity with "oh you're just mad/scared/jealous/etc. it replaces you".
But people who actually build things don't talk like that, grifters do. You ask them what they've built before and after the current LLM takeoff and it's crickets or slop. Like the Inglourious Basterds fingers meme.
Someone complaining that coding agents aren't there yet can absolutely still be someone who'd look forward to the day they could just will things into existence. For the grifters, it's not actually about what AI might build for them: it's "line will go up and I've attached myself to the line like a barnacle, so I must proselytize everyone into joining me in pushing the line ever higher."
These people have no understanding of what's happening, but they invent one completely divorced from any reality other than the one they and their ilk have projected into thin air via clout.
It looks like mental illness and hoarding Mac Minis and it's distasteful to people who know better, especially since their nonsense is so overwhelmingly loud and noisy and starts to drown out any actual signal.
Yeah, we wouldn't want someone who understands the most revolutionary technology in 100 years to be the technical advisor to the mayor of the largest city in the United States or anything. That would be silly.
Why do you assume she doesn't understand it? From her Wikipedia article:
"Gelobter enrolled in Brown University in 1987, eventually graduating in 2011 with a Bachelor of Science in Computer Science with a concentration in artificial intelligence and machine learning."