> LLMs are not actually doing a great job of translating ideas into tangibly useful software
Here is the source code for a greenfield, zero-dependency, 100% pure-PHP raw Git repository viewer built for self-hosted or shared environments. It's 99.9% vibe-coded and has had ~10k hits and ~7k viewers of late, with zero errors reported in the logs over the last 24 hours:
Frankly, I created a dozen such projects in the last few weeks. Recently I deleted them all; I feel like there's no point. I cancelled my Claude subscription, too.
I've gone back to learning from books, and I use LLMs occasionally to "review my code in depth and show me its weak points".
It's using a mature data model from an existing framework (git), and it's essentially a simpler clone of other similar projects.
That's brownfield to me. Greenfield would mean developing a completely new system. This is a utility for an existing system, and one whose design is clearly a copy of existing utilities; both of those make it brownfield.
How many apps must people put on their phones, or payment cards must they carry, to pay to charge their vehicles with the convenience of a petrol station?
"Use the Electrify Canada mobile app to schedule your home charging and find a public charging station. Sign up for an account to enjoy exclusive, members-only public charging features and pricing."
I have owned an EV for a few years and a PHEV before that. I charged primarily at public chargers for a full year before switching to home charging, and I've done multiple 500+ mile road trips. I've charged at EA, Tesla, EVgo, IONNA, MB-HPC, Pilot, Rivian, Red E, and Nouria [ChargePoint-activated], and those are just the DC fast chargers I remember off the top of my head.
Tesla and ChargePoint are the only ones that require an app. For those, my car's app can activate them if I don't want to download them.
Of course, I'm referring to the United States; I have not done a lot of charging in Canada.
The lineage can be traced back to Basile Bouchon's paper tape invention in 1725. The article doesn't mention the role of punched cards in the Holocaust, though, which my blog post goes into:
There's still some work to do on the rendering side of model objects. Developing the syntax highlighting rules for 40 languages and file formats in about 10 minutes was amazing to see.
Edit: great example. What is your long-term maintenance strategy? Do you keep the original prompts around so you can refine them later, or do you dig into the source?
In the R Markdown, you write an R function to parse all snippets, then refer to them by name. If a snippet can't be found, building the documentation fails and noisily breaks the CI/CD pipeline.
What's nice is that you can then use this to parse C++ definitions into Markdown tables to render nicely formatted content.
The general idea is that you can have "living" documentation reference source code and break on mismatch. Whether you use knitr/pandoc or python or KeenWrite/R Markdown[1] is an implementation detail.
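As the comment says, the tooling is an implementation detail. Here is a minimal Python sketch of the same idea, using a hypothetical `// snippet:NAME … // end-snippet` comment convention (not the original's knitr/R Markdown mechanism): named snippets are extracted from source, and referencing an unknown name raises, so a stale reference breaks the documentation build.

```python
import re

# Hypothetical marker format for illustration; the original poster uses
# knitr/R Markdown, but any marker convention works for this pattern.
SNIPPET_RE = re.compile(r"//\s*snippet:(\w+)\n(.*?)//\s*end-snippet", re.DOTALL)

def parse_snippets(source: str) -> dict:
    """Return a mapping of snippet name -> snippet body."""
    return {name: body.strip() for name, body in SNIPPET_RE.findall(source)}

def render(name: str, snippets: dict) -> str:
    """Render a named snippet as a fenced block; fail loudly if missing."""
    if name not in snippets:
        # A missing snippet aborts the docs build, which in turn
        # noisily breaks the CI/CD pipeline.
        raise KeyError(f"unknown snippet: {name}")
    return f"```cpp\n{snippets[name]}\n```"

cpp_source = """
// snippet:add
int add(int a, int b) { return a + b; }
// end-snippet
"""

snippets = parse_snippets(cpp_source)
print(render("add", snippets))
```

Renaming or deleting the snippet in the C++ file immediately breaks every document that references it by name, which is exactly the "living documentation breaks on mismatch" property being described.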
In the Elixir ecosystem (where documentation is considered a "first-class citizen" in the language), you can run code examples as part of your test suite in a similar fashion ("doctest"): https://elixir-recipes.github.io/testing/doctests/
For the syntax highlighting rules, it initially vibe-coded 40 languages and formats in about 10 minutes. What surprised me was when it switched the design from a class to a far more elegant single line of code: