When you let an LLM author code, it takes ownership of that code (in the engineering sense).
When you're done spending millions on tokens, years of development, prompt fine tuning, model fine tuning, and made the AI vendor the fattest wad of cash ever seen, you know what the vendor will do?
You have no migration path. Your Codex prompts don't work the same in Claude. All the prompts you developed and saved in commits, all the (probably proprietary) memory the AI vendor saved on their servers to lock you in even more, all of it is worthless without the vendor.
You are reinventing "ah heck, we need to pay the consultant another 300 bucks an hour to take a look at this, because nobody else owns this code", but supercharged.
You're locking yourself in, to a single vendor, to such a degree that they can just hold your code hostage.
Now sure, OpenAI would NEVER do this, because they're all just doing good for humanity. Sure. What if they go out of business? Or discontinue the model that works for you, and the new ones just don't quite respond the same to your company's well established workflows?
I was locked into Apple chips, AMD chips, and Intel chips long ago. Everyone is already locked into one of these companies.
The reality is that the technology is so complex that only for-profit, centralized powers can really create these things. Linux and open source were a fluke, and even then open-source developers need closed-source jobs to pay for their time spent on open source.
We are locked in, and this is the future. You can accept it or deny it; one is reality, the other is delusion. The world is transforming into vibe coding whether you like it or not. Accept reality.
If you love programming, if you care for the craft, if programming is a form of artistry for you, if programming is your identity and status symbol, then know that under current trends… all of that is going into the trash. Better rebuild a new identity quickly.
A common delusional excuse scaffold people build around themselves to protect their identity is to say "the hard part of software wasn't really programming", which is kind of stupid because AI covers the hard part too. In fact, it covers it better than actual coding. Either way, this excuse is more viable than "AI is useless slop".
> When you're done spending millions on tokens, years of development, prompt fine tuning, model fine tuning, and made the AI vendor the fattest wad of cash ever seen, you know what the vendor will do?
They'll hire the person who knows AI, not the human clinging to claims of artisanal, character-by-character code.
It's entirely possible to engineer well-designed and intentional systems with AI tools and not stochastically "vibe" your way into tech debt.
AI engineers will get hiring preference. That is until we're all replaced by full agentic engineering. And that's coming.
You have to be incredibly incompetent and naive to look at the absolute garbage theatre that AI outputs today and go "yeah, this will write all future code".
Usually the response, for the last few years, has been "no no, you don't get it, it'll get so much better", and then they make the context window slightly larger and make it run Python code to do math.
What will really happen is that you and people like you will let Claude or some other commercial product write code, which it then owns. The second Claude becomes more expensive, you will pay, because all your tooling, your "prompts saved in commits", etc. will not work the same with whatever other AI offering.
You've just reinvented vendor lock in, or "highly paid consultant code", on a whole new level.
I'm currently involved in the hiring process in our company, selecting engineers for my team. If someone applies who has the programming language we ask for in their CV, they get a first interview. If they can read code, and write VERY basic code, they will get through at least the first 2 rounds without any issues.
If people put down the AI and actually learned how to write a `for` loop, they would be more hireable than 50% of candidates.
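For concreteness, here is a sketch of the "VERY basic code" bar described above (an illustrative example, not an actual question from our interviews): iterate over a list with a `for` loop and accumulate a result.

```python
def sum_evens(numbers):
    """Sum the even numbers in a list."""
    total = 0
    for n in numbers:  # the for loop in question
        if n % 2 == 0:
            total += n
    return total

print(sum_evens([1, 2, 3, 4]))  # 6
```

Anyone who can read this, explain it, and write something equivalent clears the first rounds.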
> "Guess it's death [...] for introverts"
There is a meritocracy somewhere in our capitalist system. Not everyone participates, but it exists.
> One thing they all had in common was taking a very targeted approach with their search and leveraging their networks
Right, so they applied to a couple of jobs and it worked for them?
I'm sorry, do you understand how uncommon and rare that is? Sure, if their domain was REALLY niche and the jobs weren't publicly advertised, then I could see how that would work. But the experience is VASTLY different outside such niche cases.
They applied to a couple of jobs where they were certain the fit would be good, and didn't mindlessly spam their resume to some bot. They got in touch with the right people, and worked it out from there. Because they had done their homework, the path was easier for them.
I had this with Rust. I always saw the huge hype, especially some years ago, and it was hugely off-putting. Ridiculous projects, like rewriting famously full-branch-coverage-tested projects such as SQLite in Rust, or rewriting the GNU coreutils, plus the constant spam of "blazing fast" and "written in Rust (crab emoji)", were very, very hostile to a C++ developer.
When I eventually got around to using Rust, I was hooked, and now I don't use C++ anymore if I can choose Rust instead. The hype was not completely unjustified, but it was also misplaced, and to this day I disagree with most of those hype projects.
It was no issue to silently pick up Rust, write some code that solves problems, and enjoy it as a very very good language. I don't feel a need to personally contact C or C++ project maintainers and curse at them for not using Rust.
I do the same with AI. I'm not going around screaming at people who dare to write code by hand, going "Claude will replace you", or "I could vibe code this for 10 bucks". I silently write my code, I use AI where I find it brings value, and that's it.
Recognize these tools for what they are: just tools. They have use cases, tradeoffs, and a massive community of incompetent idiots who like them ONLY because they don't know better, not because they understand the actual value. And then there are the normal, everyday engineers, who use tools because, and ONLY because, they solve a problem.
My advice: Don't be an idiot. It's not the solution to all problems. It can be good without being the solution to every problem. It can be useful without replacing skill. It can add value without replacing you. You don't have to pick a side.
Why doesn't Microsoft just take their incredible, human-replacing, AGI level AI's, and just port all their code to a Linux kernel instead of the NT kernel?
The NT kernel is actually pretty amazing. You can even run a pretty solid Windows version if you want to sail the high seas. LTSC and Massgrave will get you most of the way there.
I disagree with that. I can easily tell when my non-native English speaking coworkers use AI to help with their communications. Nine times out of ten, their communication has been improved through the use of AI.
If only there were a difference between natural languages aiming at lossy fluency (feels better) and programming languages aiming at deterministic precision.
10 years from now: "The next big thing: HENG - Human Engineers! These make mistakes, but when they do, they can just learn from it and move on and never make it again! It's like magic! Almost as smart as GPT-63.3-Fast-Xtra-Ultra-Google23-v2-Mem-Quantum"
I would love to live in a world where my coworkers learn from their mistakes
Is this Human 2.0? I only have 1.0a beta in the office.
I get the joke, but it really does highlight how flimsy the argument is for humans. IME humans frequently make simple errors they don't learn from, and very rarely get things right the first time. Damn. Sounds like LLMs. And those are only getting better. Humans aren't.
I've always wanted a better way to test programmers' debugging in an interview setting. Like, sometimes just working problems gets at it, but usually just the "can you re-read your own code and spot a mistake" sort of debugging.
Which is not nothing, and I'm not sure how LLMs do on that style; I'd expect them to be able to fake it well enough on common mistakes in common idioms, which might get you pretty far, and fall flat on novel code.
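As a sketch of that "re-read your own code and spot the mistake" style of exercise (a hypothetical example, not one from any real interview), consider a function where the bug hides in a loop bound:

```python
def moving_average(values, window):
    """Average of each window-sized slice of values."""
    result = []
    # The classic planted bug is range(len(values) - window), which
    # silently drops the final window; the fix is the + 1 below.
    for i in range(len(values) - window + 1):
        result.append(sum(values[i:i + window]) / window)
    return result

print(moving_average([1, 2, 3, 4], 2))  # [1.5, 2.5, 3.5]
```

The candidate is shown the buggy variant and asked why the output is one element short.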
The kind of debugging that makes me feel cool is when I see or am told about a novel failure in a large program, and my mental model of the system is good enough that this immediately "unlocks" a new understanding of a corner case I hadn't previously considered. "Ah, yes, if this is happening it means that precondition must be false, and we need to change a line of code in a particular file just so." And when it happens and I get it right, there's no better feeling.
Of course, half the time it turns out I'm wrong, and I resort to some combination of printf debugging (to improve my understanding of the code) and "making random changes", where I take swing-and-a-miss after swing-and-a-miss changing things I think could be the problem and testing to see if it works.
And that last thing? I kind of feel like it's all LLMs do when you tell them the code is broken and ask them to fix it. They'll rewrite it, tell you it's fixed, and... maybe it is? But it never understood the problem it was fixing.
This isn't about criminal organizations. One person somewhere can decide to target you, monitor you for 30 years with all the government's resources, and never need to tell you or anyone about it. I don't like that personally.