Fiduciary duty but for AI, interesting. I think there's some potential there, though of course you'll end up confronting the classic sci-fi trope of "what if the system judges what's best for the user in a way that is unexpected / harmful"? But, solve that with strong guardrails and/or scoping and you might have something.
I'm starting to get to the point where I'll only listen to AI energy use critiques if the commentator tells me up front they abstain from all forms of social media, especially video-based social media, first.
Note that I did not criticise AI energy use. I criticised tech as a whole. Tech is part of the problem (the problem here being "we are killing our only planet").
Are they? Or do you just mean that it's few and far between that we hear about them? If it's the former, I think there's a much bigger universe of this kind of stuff than most people realize. Otoh, if you're just commenting on the lack of coverage, then yeah, I agree; I wish more attention was paid to small software like this. Maybe we need a catchy term - "organic software"? "Locally grown software"?
I've talked a lot with friends who aren't in tech about what they would want from software. For them, much of the benefit of small software like this would actually be in handling compliance and reporting for non-profits: sifting through large amounts of very unstructured data.
The actual community building isn't nearly as automatable unless you have very specific problems. Even in the example above, having an automated message is useful, but staffing the team to handle things when they are NOT in a good spot would probably be the real scaling cost.
We're in a transition phase, but this will shake out in the near future. In the non-professional space, poorly built vibecoded apps simply won't last, for any number of reasons. When it comes to professional devs, this is a problem that is solved by a combination of tooling, process, and management:
(1) Tooling to enable better evaluation of generated code and its adherence to conventions and norms
(2) Process to impose requirements on the creation/exposure of PRDs/prompts/traces
(3) Management to guide devs in the use of the above and to implement concrete rewards and consequences
Some organizations will be exposed as being deficient in some or all of these areas, and they will struggle. Better organizations will adapt.
The unfortunate reality is that (1) and (2) are what many, many engineers would like to do, but management is going in EXACTLY the opposite direction: go faster! Go faster! Why are you spending time on these things?
I think this is an interesting point; my one area of disagreement is with the claim that there is no "anti-LLM sentiment" in the programming community. Sure, plenty of folks expressing skepticism or disagreement are doing so from a genuine place, but just from reading this site and a few email newsletters I get, I can say that a non-trivial percentage of the programming world is adamantly opposed to LLMs/AI. When I see comments from people in that subset, it's quite clear that they aren't approaching it from a place of skepticism, where they could be convinced given appropriate evidence or experiences.
But there's a difference. Being opposed to AI-generated art/music/writing is valid because humans still contribute something extraordinarily meaningful when they do it themselves. There's no market for AI-generated music, and AI-generated art and writing tends to get called out right away when it's detected. People want the human expression in human-generated art, and the AI stuff is a weak placeholder at best.
For software the situation is different. Being opposed to LLM-generated software is just batshit crazy at this point. The value that LLMs provide to the process makes learning to use them, objectively, an absolute must; otherwise you are simply wasting time and money. Eric S. Raymond put it something like "If you call yourself a software engineer, you have no excuse not to be using these tools. Get your thumb out of your ass and learn."
Ok, I’ll bite. What’s there to learn that you can tie directly to an increase of productivity?
I can say "learn how to use vim's makeprg feature so that you can jump directly to errors reported by the build tool" and it's very clear where the ROI is. But all the AI hypers are selling is hope, prayers, and rituals.
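For context, the makeprg workflow that comment refers to looks roughly like this (assuming a standard Makefile in a `build` directory; adjust the command for your own build system):

```vim
" Tell :make what command to run (spaces must be backslash-escaped)
set makeprg=make\ -C\ build

" Run the build; compiler errors are parsed into the quickfix list
:make

" Jump directly to the next / previous reported error location
:cnext
:cprev
```

The payoff is exactly the one described: one keystroke takes you from a build failure to the offending line, with no manual copying of file names and line numbers.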
The skill is learning to supply the LLM with enough context to do anything a developer does: turn specs into code, check its work including generating and running tests, debug and analyze the code for faults or errors, and run these in a loop to converge on a solution. If you're about to do something by hand in an IDE, STOP. Think about what the LLM will need to know to perform that task for you.
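The loop described above can be sketched in a few lines. This is a minimal illustration, not any particular tool's API: `call_llm` is a hypothetical stand-in for a real model call (stubbed here so the sketch is self-contained), and the "test suite" is a single hard-coded check.

```python
# Sketch of the spec -> generate -> test -> fix loop described above.
# call_llm is a hypothetical stub; a real setup would call a model API
# and run an actual test suite instead of one inline check.

def call_llm(prompt: str) -> str:
    # Stub: pretend the model returned code satisfying the spec.
    return "def add(a, b):\n    return a + b\n"

def run_tests(code: str) -> list[str]:
    # Execute the generated code and collect failure messages.
    ns: dict = {}
    exec(code, ns)
    failures = []
    if ns["add"](2, 3) != 5:
        failures.append("add(2, 3) != 5")
    return failures

def converge(spec: str, max_rounds: int = 3) -> str:
    # Generate, test, and feed failures back until tests pass.
    prompt = spec
    for _ in range(max_rounds):
        code = call_llm(prompt)
        failures = run_tests(code)
        if not failures:
            return code
        prompt = spec + "\nFix these test failures:\n" + "\n".join(failures)
    raise RuntimeError("did not converge")

result = converge("Write add(a, b) that returns the sum of a and b.")
```

The point of the sketch is the shape of the workflow, not the stub: the human supplies the spec and the checks, and the loop does the typing.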
It may take some human intervention, but the productivity results are pretty consistent: tasks that used to take weeks now take hours or days. This puts in reach the ability to try things you wouldn't countenance otherwise due to the effort and tedium involved. You'd have to be a damn fool not to take advantage of the added velocity. This is why what we do is called "engineering", not a handicraft.
I’m not an AI hyper, I just don’t code manually anymore. Tickets take about as much time to close as before, but the code shipped now has higher test coverage, higher performance, better concurrency error handling, fewer follow-up refactor PRs, fewer escapes to staging/prod, and better documentation; some of it is now also modeled in a model checker.
What if the instructions they give you would be to submit to them while they assaulted you, sexually or physically? Are you supposed to comply and then challenge them in court later?
That is a thing that happens. Rarely, I suppose, and #notallpolice and all that, but the idea that we should live in a country where everyone just has to "comply" with the instructions or be murdered is ridiculous.
Have you considered that it's a bit dismissive to assume that developers who find use out of AI tools necessarily approve of worse code than you do, or have lower standards?
It's fine to be a skeptic. Or to have tried out these tools and found that they do not work well for your particular use case at this moment in time. But you shouldn't assume that people who do get value out of them are not as good at the job as you are, or are dumber than you are, or slower than you are. That's just not a good practice and is also rude.
I never said anything about being worse or dumber, and definitely not slower. And keep in mind "worse" is subjective - if something doesn't require edge-case handling or correctness, and bugs can be tolerated, then something with those properties isn't worse, is it?
I'm just saying that since there is such a wide range of experiences with the same tools, it's likely that developers vary in their evaluations of the output.
Okay, I certainly agree with you that different use cases can dictate different outcomes when using AI tooling. I would just encourage everyone who thinks similar to you to be cautious about assuming that someone who experiences a different result with these tools is less skilled or dealing with a less difficult use case - like one that has no edge cases or has greater tolerance for bugs. It's possible that this is the case, but it is just as possible that they have found a way to work with these tools that produces excellent output.
I have had a lot of success lately working with Opus 4.5 using both the Beads task tracking system and the array of skills under the umbrella of Bad Dave's Robot Army. I don't have a link handy, but you should be able to find it on GitHub. I use the specialized skills for different review tasks (like Architecture Review, Performance Review, Security Review, etc.) on every completed task in addition to my own manual review, and I find that helps to keep things from getting out of hand.