If you have to really understand software development to be a good user of AI, we're screwed. All the best users of AI we'll ever have already exist, I think.
That's a good point.
I'm a novice, self-taught developer who somehow pushed through and made a decent PM tool for the construction industry. It works, if your users aren't malicious or too demanding.
Now I'm working on a second project, all with AI. I haven't written a single line. It works better than what a non-programmer would make, because I knew what to ask for. But I'll admit I'm not learning anything.
Can't say the same. I've been super hands-on with a C project. Really getting into the details of the event bus and how to make things performant. The AI is still writing 99% of the code, but I'm being super strict about what I consider acceptable.
You might wanna learn how to set up constraints for your agents. Every memory map is accounted for with docs pointing to the exact structs, allocations, and usage patterns. It's stuff I was already doing. Now I can do it in a fraction of the time.
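To give a concrete flavor of the kind of constraint doc I mean, here's a hypothetical entry. The struct names, file names, and budget are all invented for illustration, not from my actual project:

    /* One entry in a constraints doc the agent must keep in sync with code.
     *
     * ALLOCATION: event_bus.c:ev_queue_init()
     * OWNER:      struct ev_queue (field: ring)
     * SIZE:       cap * sizeof(struct ev_msg), fixed at startup
     * LIFETIME:   process lifetime; freed only in ev_queue_destroy()
     * BUDGET:     <= 1 MiB
     */
    #include <stddef.h>

    struct ev_msg {
        unsigned type;        /* event kind */
        void    *payload;     /* borrowed pointer; the queue never owns it */
    };

    struct ev_queue {
        struct ev_msg *ring;  /* the single heap allocation accounted above */
        size_t cap, head, tail;
    };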
Doubts? I'm full of them. I'm building a new task manager for myself that has things I've always wanted. It's basically a profiler so it must be performant.
The core, with all plugins disabled, is currently topped out at 73.8 MB after several days of running. I've given it several audits with the AI agents, using actual memory maps and doing the math on each variable.
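If anyone wants a starting point for that kind of audit on Linux, here's a minimal sketch that reads the resident figure straight from the kernel. The /proc format is standard; the helper name is mine:

    #include <stdio.h>

    /* Total resident memory across all mappings, in kB, or -1 on error. */
    static long resident_kb(void)
    {
        FILE *f = fopen("/proc/self/smaps_rollup", "r");
        if (!f)
            return -1;

        char line[256];
        long rss_kb = -1;
        while (fgets(line, sizeof line, f)) {
            /* smaps_rollup prints e.g. "Rss:   73800 kB" */
            if (sscanf(line, "Rss: %ld kB", &rss_kb) == 1)
                break;
        }
        fclose(f);
        return rss_kb;
    }

    int main(void)
    {
        printf("resident: %ld kB\n", resident_kb());
        return 0;
    }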
I haven't had time to do Milkdrop yet, but it's on my todo list. The issue isn't doing the work; the issue is not having enough credits in my accounts to throw some compute at it. I'll get to it eventually. But it's actually way easier now to try new ways of packing the data into binary and profiling it for issues.
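As a concrete example of what I mean by packing the data into binary: fixed-width records you can fwrite and mmap directly. The layout and field names here are invented for illustration, not my actual format:

    #include <stdint.h>
    #include <stdio.h>

    #pragma pack(push, 1)
    typedef struct {
        uint32_t id;          /* task id */
        uint32_t start_s;     /* start time, seconds since epoch */
        uint16_t duration_s;  /* duration in seconds, caps at ~18 h */
        uint8_t  tag;         /* index into a separate string table */
    } TaskRec;                /* 11 bytes vs a pointer-heavy heap node */
    #pragma pack(pop)

    int main(void)
    {
        TaskRec r = { 1, 1700000000u, 1500, 3 };
        FILE *f = fopen("tasks.bin", "wb");
        if (!f)
            return 1;
        fwrite(&r, sizeof r, 1, f);  /* one record = 11 bytes on disk */
        fclose(f);
        printf("record size: %zu bytes\n", sizeof r);
        return 0;
    }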
The issues I've had are edge cases, like a 6-hour YouTube stream. At one point the BPM detector was buffering the entire track in the PipeWire sink. It took one throwaway prompt to the AI to solve that one.
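The fix follows the usual bounded-window pattern: analyze over a fixed ring buffer of recent samples instead of accumulating the whole stream. A sketch of that pattern (illustrative only, not my actual code; all names invented):

    #include <stddef.h>

    #define BPM_WINDOW (48000 * 30)  /* ~30 s of mono audio at 48 kHz */

    typedef struct {
        float samples[BPM_WINDOW];
        size_t head;                 /* next write position, wraps */
    } BpmWindow;

    /* Push n new samples, silently overwriting the oldest ones, so memory
     * stays constant no matter how long the stream runs. */
    static void bpm_push(BpmWindow *w, const float *in, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            w->samples[w->head] = in[i];
            w->head = (w->head + 1) % BPM_WINDOW;
        }
    }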
At work, there is little reason to ask a human a question; you just talk to AI and get answers fast. The only time to talk to another human is if you are barking down orders. This will be analogous to not hiring a human for anything an AI could do, unless you need someone to assume liability.
My problem is that while I know “code” isn’t going away, everyone seems to believe it is, and that’s influencing how we work.
I have not really found anything that shakes these people down to their core. Any argument or example is handwaved away by claims that better use of agents or advanced models will solve these “temporary” setbacks. How do you crack them? Especially upper management.
> I have not really found anything that shakes these people down to their core. Any argument or example is handwaved away by claims that better use of agents or advanced models will solve these “temporary” setbacks. How do you crack them? Especially upper management.
You let them play out. Shift-left was similar to this and ultimately ended in part disaster, part non-accomplishment, and part success. Some percentage of the industry walked away from shift-left greatly more capable than the rest, a larger chunk left the industry entirely, and some people never changed. The same thing will likely happen here. We'll learn a lot of lessons, the Overton window will shift, the world will be different, and it will move on. We'll have new problems and topics to deal with as AI and how to use it shifts away from being a primary topic.
Edit: I've googled it and I can't find anything relevant. I've been working in software for 20+ years and have read a myriad of things, and this is the first time I've heard of it...
"Shift-left" was a general term that occurred in the systems engineering / devops space – I'm not surprised to see it used in a security context now. More or less, about a decade ago most systems engineers were recruited into the industry without any application software engineering skills and that became a drag on organizations trying to scale. It was about moving testing, devops, security, etc into the software engineering role and attempting to consolidate systems engineering into SWE roles. It was a part of the larger "devops movement".
Shift-left was a disaster? A large number of my day-to-day problems at work could be described as failing to shift left, even in the face of overwhelmingly obvious benefits.
Well you're trying to convince them to reject their actual experience. Better tooling and better models have indeed solved a lot of the limitations models faced a couple years ago.
I also believe coding isn't going to disappear, but AI skeptics have been mostly doing a combination of moving the goalposts and straight up denial over the last few years.
I've been trying out AI over the past month (mostly because of management trying to force it down my throat), and have not found it to be terribly conducive to actually helping me on most tasks. It still evidences a lot of the failure modes I was talking about 3 years ago. And yet the entire time, it's the AI boosters who keep trying to say that any skepticism is invalid because it's totally different than how it was three months ago.
I haven't seen a lot of goalpost moving on either side; the closest I've seen is from the most hyperbolic of AI supporters, who are keeping the timeline to supposed AGI or AI superintelligence or whatnot a fairly consistent X months from now (which isn't really goalpost-moving).
Well, to be fair, judging by the shift in the general vibes of the average HN comment over the past 3 years, better use of agents and advanced models DID solve the previous temporary setbacks. The techno-optimists were right, and the nay-sayers wrong.
Over the course of about 2 years, the general consensus has shifted from "it's a fun curiosity" to "it's just better stackoverflow" to "some people say it's good" to "well it can do some of my job, but not most of it". I think for a lot of people, it has already crossed into "it can do most of my job, but not all of it" territory.
So unless we have finally reached the mythical plateau, if you just go by the trend, in about a year most people will be in the "it can do most of my job but not all" territory, and a year or two after that most people will be facing a tool that can do anything they can do. And perhaps if you factor in optimisation strategies like the Karpathy loop, a tool that can do everything but better.
LLM agents are glorified autocomplete with a thesaurus bolted on, so the victory laps look pretty premature.
Try one on a mildly ugly multi-step task in a repo with stale deps, weird config, and a DB/API boundary, and you'll watch it bluff past missing context, mutate the wrong file, and paper over the gap with confident nonsense instead of doing the boring work a decent engineer would do. PR people can call that 'better Stack Overflow' if they want.
Your definition of a glorified autocomplete is … oof. So in short, "try asking it to do something you'd hate, on bad code you'd fail at yourself, and it might fail".
And I’m pretty sure I could try Claude on a repo as you describe and it wouldn’t in fact fail. You’re letting your opinions of what LLMs were like a few months ago influence what you think of them now.
Comments like yours really annoy me because they are ridiculously confident about AI being “glorified autocomplete”, but also clearly not informed about the capabilities. I don’t get how some people can be on HN and not actually … try these things, be curious about them, try them on hard problems.
I’m a good engineer. I’ve coded for 24 years at this point. Yesterday in 45 minutes I built a feature that would have taken me three months without AI. The speed gains are obscene and because of this, we can build things we would never have even started before. Software is accelerating.
As a former PM, I will say that if you want to stop something from happening at your company, the best route is to come off very positive about it initially. This is critical because it gives you credibility. After my first few years of PMing, I developed a reflex that any time I heard a deeply stupid proposal, I would enthusiastically ask if I could take the lead on scoping it out.
I would do the initial research/planning/etc. mostly honestly and fairly. I'd find the positives, build a real roadmap and lead meetings where I'd work to get people onboard.
Then I'd find the fatal flaw. "Even though I'm very excited about this, as you know, dear leadership, I have to be realistic that in order to do this, we'd need many more resources than the initial plan because of these devastating unexpected things I have discovered! Drat!"
I would then propose options. Usually three, which are: Continue with the full scope but expand the resources (knowing full well that the additional resources required cannot be spared), drastically cut scope and proceed, or shelve it until some specific thing changes. You want to give the specific thing because that makes them feel like there's a good, concrete reason to wait and you're not just punting for vague, hand-wavy reasons.
Then the thing that we were waiting on happens, and I forget to mention it. Leadership's excited about something else by that point anyway, so we never revisit the dumb project again.
Some specific thoughts for you:
1. Treat their arguments seriously. If they're handwaving your arguments away, don't respond by handwaving their arguments away, even if you think they're dumb. Even if they don't fully grasp what they're talking about, you can at least concede that agents and models will improve and that will help with some issues in the future.
2. Having conceded that, they're now more likely to listen to you when you tell them that while it's definitely important to think about a future where agents are better, you've got to deal with the codebase right now.
3. Put the problems in terms they'll understand. They see the agent that wrote this feature really quickly, which is good. You need to pull up the tickets that the senior developers on the team had to spend time on to fix the code that the agent wrote. Give the tradeoff - what new features were those developers not working on because they were spending time here?
4. This all works better if you can position yourself as the AI expert. I'd try to pitch a project of creating internal evals for the stuff that matters in your org to try with new models when they come out. If you've volunteered to take something like that on and can give them the honest take that GPT-5.5 is good at X but terrible at Y, they're probably going to listen to that much more than if they feel like you're reflexively against AI.
Hah, thanks but unfortunately I quit and started a business a couple of years ago, in no small part because I didn't want to spend my time maneuvering to kill stupid ideas.
Very well said. So many engineers balk at "coming off as positive" as a form of lying or as a pointless social ritual, but it's the only thing that gets you a seat at the table. Engineers who say "no" or "that's stupid" are never seen as leaders by management, even if they're right. The approach you laid out here is how you have _real_ impact as an engineering leader, because you keep getting a seat at the table to steer what actually happens.
Show them this[1], and if it doesn't sober them up with its absurdity, at least they'll be occupied with something other than treating LinkedIn fluffers as prophets and trying to gaslight you into tanking production.
To an extent, these people have found their religion, and rational discussion does not come into play. As with previous tech Holy Wars over operating systems, editors, and programming languages, their self-image is tied to the technology.
Where the tech argument doesn't apply to upper management, business practices, the need to "not be left behind" and leap at anything that promises reducing headcount without reducing revenue, money talks. As long as it's possible to slop something together, charge for it, and profit, slop will win.
But how do you make the case for thoughtful less bloated software to people who just value writing less code themselves, even if the output produces more lines of code? Seems to me like people don’t care about LOC, they care about how much effort they have to spend writing the lines.
Even if you are raised from the dead, it means you just go back to work at some point, where you prompt an AI agent all day, collect a paycheck, pay bills, and occasionally do some dopamine-stimulating activities, until you die again?
This tech will only be used on people who are considered too important to die: demagogues and dictators, mass influencers.
I mean... access to adult content at that age is really, really bad. It really messed up my brain. Gore videos, chatting with adults, etc. But I learned many good things, too. It's a double-edged sword.
Seeing people squish at a young age - and I am not being flippant here - helped reduce my teen "I'm immortal! I'm unstoppable!" phase.
I saw very quickly that what separates a live person from a very deceased flat person was a moment of silliness/forgetfulness/stupidity. "I didn't SUSPECT that was even possible to happen to a person!" - "We're... fragile?!" - "Ah, bike helmets... I think they're a REALLY GOOD idea..."
PSAs just aren't listened to by teenagers. But something that's real - that happened, with the security camera timestamp in the corner... kids learn safety.
> helped reduce my teen "I'm immortal! I'm unstoppable!" phase.
I mean, is that good?
Isn’t another way of looking at that to say that it poisoned an innocent time and left you aware and afraid of death when you might otherwise have been enjoying the end of your childhood without that burden?
In general parents might want their kids to be a little more mindful, but not grow up too soon.
I don't see how this "child protection" enforcement would help in the case of small, obscure websites with porn and gore. No way their admins are gonna comply. I doubt ISPs would go so far as to DNS-whitelist only compliant websites.
The admins of sites like that DGAF about anything or anyone. They enjoy the chaos and shock.
If you expect admins of edgelord websites to respect the laws of different countries or even care about kids, I suggest checking out 4Chan’s response to various attempts to regulate them.
I never said this would help... in fact, I’m against this kind of measure, at least the way it’s being done. But I wouldn’t be surprised if Brazilian ISPs are forced to block this sort of thing (just look at what happened with Twitter (X) the year before last).
For me, it didn't mess up my brain at all, it showed me a much broader range of what humanity really is, which is exactly what I wanted to understand at that time. I understood the depravity humans will exact upon others, or those they see as lesser (such as the treatment of animals, or prisoners, "the enemy" whoever/whatever that may be). I also saw unfiltered sharing of valuable knowledge, science, tech stuff, software, games, music, culture...
The uncensored internet taught me more than I could ever have been taught in school, and I'll be forever grateful for that. It didn't take me long to understand that I could generally hate no ethnicity or people or country, and the people who do are manipulated by their government or other powerful figures in their life (or disproportionately swayed by experiences in their life). Humans are pretty much all the same, we all have far far more in common than we do differences. I have a stronger perspective of this than my immediate ancestors (demonstrated over and over throughout my life) and I do credit my exposure to the open internet for a huge amount of that.
There is one huge and problematic difference now, though: the uncensored internet of the 90's is nothing like the disinformation-saturated internet of today.