Hacker News | zain__t's comments

That day is now and the reason is that the documentation doesn't have to be written anymore. The conversation that led to the decision already exists — in your PR comments, Slack threads, and tickets. The reasoning is already there. It just needs to be extracted and structured automatically, not written from scratch. That's the shift that makes this viable in 2026 when it wasn't in 2020. LLMs can read the noise and surface the signal. Zero extra time from the developer.


Those comment templates are actually really well structured: you've invented a mini decision record format without calling it that. The problem you're hitting is discoverability: the why is there, but only if you happen to read that exact line. What if a new dev could ask 'why does this auth flow work this way?' and your comment was part of the synthesized answer, along with the PR, the Slack thread, and the ticket that created it?


Most people where I work use Confluence for the overarching architecture decisions, but that also has a massive discovery issue.

If there is a Confluence doc that relates to my code, I will usually cross reference it. The Confluence link goes at the top of the file, and a link to the repo goes into Confluence. Even with this, the discovery problem remains, as one of those things needs to be found.

Using chat is a non-starter, as our chats are purged after 6 or 12 months. PRs also seem like a very challenging place to keep the information without a lot of systems in place and strict adherence.

Tickets can work, until the ticketing system changes. I’ve been through 3 ITSM platform changes and 3 changes in agile software. Old information is lost in these transitions as it’s usually only in-flight stuff that migrates. Confluence will meet the same fate soon I’m sure.

At the end of the day, the code is the only thing I can trust to be there. Once the code is gone, the information matters less. I also try to be pretty diligent about readme files, though they can get pretty wordy. Adding some kind of architecture doc into the repo might be another option, similar to what claude.md has become for a lot of people. I actually might do this for a project I'm starting now, as it's pretty confusing… though I'm hoping I can come up with a way to make it less confusing.


The Q&A doc you're maintaining is fascinating: you've essentially hand-built the thing I'm trying to automate. The 'Why Kafka?' entry is exactly the kind of decision that disappears when you leave. The search problem you raised is the core of what I'm solving: not dumping commits into a .md, but extracting structured decisions from the conversation that surrounded the commit (the Slack debate, the PR review, the ticket context), then making it queryable by the code it relates to. You said you're not sure your process scales. What happens to that Q&A doc if you leave tomorrow?
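For concreteness, the "structured decision queryable by code path" idea could be sketched like this. This is a hypothetical schema, not any actual product's data model; the field names and the prefix-matching query are my own assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    # Hypothetical record extracted from a PR, Slack thread, or ticket
    question: str                                      # e.g. "Why Kafka?"
    decision: str                                      # what was chosen
    rationale: str                                     # synthesized reasoning
    sources: list[str] = field(default_factory=list)   # PR/Slack/ticket links
    code_paths: list[str] = field(default_factory=list)  # code it explains

def decisions_for(path: str, records: list[DecisionRecord]) -> list[DecisionRecord]:
    """Return every record attached to this file or a directory above it."""
    return [r for r in records
            if any(path.startswith(prefix) for prefix in r.code_paths)]
```

So asking "why does services/events/consumer.py work this way?" becomes a lookup over records whose `code_paths` cover that file, rather than hoping someone reads the right comment.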


If I leave, the Q&A doc probably never gets updated again.

We're in the process of trying to get as much stuff as possible into source control (we use google docs a lot, so we'll set up one way replication for our ADRs and stuff from there to git). That way, as LLM models get better, whatever doc gets materialized from those bits and pieces will also automatically get better.
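A one-way Google Docs → git mirror like the one described could be as simple as the sketch below. The export step (pulling markdown out of Google Docs, e.g. via the Drive API) is assumed to happen upstream and isn't shown; the `docs/adr/` layout and commit message are also assumptions:

```python
import subprocess
from pathlib import Path

def mirror_adrs(docs: dict[str, str], repo_dir: str) -> list[Path]:
    """Write exported ADR markdown into the repo's docs/adr/ directory.

    `docs` maps a slug (e.g. "0001-use-kafka") to the exported markdown
    body, already fetched from Google Docs by some upstream step.
    """
    adr_dir = Path(repo_dir) / "docs" / "adr"
    adr_dir.mkdir(parents=True, exist_ok=True)
    written = []
    for slug, body in docs.items():
        path = adr_dir / f"{slug}.md"
        path.write_text(body, encoding="utf-8")
        written.append(path)
    return written

def commit_adrs(repo_dir: str, paths: list[Path]) -> None:
    """Stage and commit the mirrored ADRs (one-way: git never writes back)."""
    subprocess.run(["git", "-C", repo_dir, "add", *map(str, paths)], check=True)
    subprocess.run(
        ["git", "-C", repo_dir, "commit", "-m", "Sync ADRs from Google Docs"],
        check=True,
    )
```

Run on a schedule, this keeps the git copy fresh without asking anyone to maintain two sources of truth by hand.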


This is the clearest articulation of the problem I've seen. You've basically described exactly what I'm building. The passive ingestion angle, treating reasoning as a byproduct of work already being done rather than a separate documentation task, is the core insight that makes this viable where ADRs failed. I'm in early development. Would you be open to a 15-minute conversation? Your framing here is sharper than anything I've heard from the 20 engineers I've already talked to.


This is incredibly valuable context, thank you. The career security point especially is something I hadn't fully articulated, but it explains why ADRs always die: nobody wants to document themselves out of a job. The approach I'm exploring tries to remove the human writing step entirely: passively capturing decisions from PRs, Slack threads, and tickets, and auto-drafting the rationale. The human just approves or dismisses in one click. The incentive problem flips: instead of asking someone to document themselves, you're just asking them to approve something already written. Much lower friction. Curious, from your 25 years on this: do you think the passive capture angle addresses the incentive problem, or does the resistance run deeper than just the writing effort?


>The human just approves or dismisses in one click.

A busy engineer trying to hit a deadline is just going to do the easiest thing, aren't they?

Also there is all sorts of tacit knowledge that goes into a decision and I just don't think you are going to capture this automatically.

(I worked on it 25 years ago, rather than for 25 years.)


Completely agree: the manual capture is exactly where it breaks down every time. Curious, what's your current setup? GitHub + Slack, or something different?


The git hook idea for enforcing doc updates is really interesting. Has that actually worked long term for your team, or does it eventually get bypassed?


Check back in 2 years' time; for now it has survived fine. Someone will be tuning it to write the documentation soon, instead of just blocking!

Jokes aside, I think LLMs will enable us to handle information in a much better and smoother way. We should use them!
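For anyone curious what a doc-enforcing hook can look like: here's a minimal pre-commit sketch. The path conventions (`src/`, `docs/`, `README.md`) are assumptions to adjust for your repo, and as the thread notes, `--no-verify` always lets people bypass it:

```python
#!/usr/bin/env python3
"""Minimal .git/hooks/pre-commit sketch: refuse commits that change
source code without touching any documentation."""
import subprocess
import sys

def staged_files() -> list[str]:
    # check=False: outside a git checkout this just yields no files
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only"],
        capture_output=True, text=True, check=False,
    ).stdout
    return [line for line in out.splitlines() if line]

def needs_doc_update(files: list[str]) -> bool:
    touches_code = any(f.startswith("src/") for f in files)
    touches_docs = any(f.startswith("docs/") or f == "README.md" for f in files)
    return touches_code and not touches_docs

if __name__ == "__main__":
    try:
        files = staged_files()
    except OSError:
        sys.exit(0)  # git not available; nothing to enforce
    if needs_doc_update(files):
        print("Commit touches src/ but no docs. Update docs/ or README.md "
              "(or bypass with --no-verify if you must).")
        sys.exit(1)
```

The joke version, of course, replaces the `sys.exit(1)` with a call that drafts the missing doc instead of blocking.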


I am already working to automate the process


AI could even analyze the diff and predictively pre-populate the decision info, though that might be counterproductive in practice.
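The pre-population step might look something like the prompt builder below. The framing is my own guess at how you'd ask a model to draft a decision record from a staged diff; the model call itself (any chat-completion API) is deliberately left out:

```python
def draft_decision_prompt(diff: str, context: str = "") -> str:
    """Build a prompt asking a model to pre-populate a draft decision
    record from a diff, for a human to approve or dismiss later."""
    return (
        "You are drafting a decision record for a human to approve or dismiss.\n"
        "From the diff below, propose: (1) the question being decided, "
        "(2) the decision taken, (3) a one-paragraph rationale.\n"
        "Mark anything you are unsure about as a guess.\n\n"
        f"Context (PR description, ticket, etc.):\n{context}\n\n"
        f"Diff:\n{diff}\n"
    )
```

The "counterproductive" risk is real: a fluent auto-drafted rationale can get rubber-stamped even when the model's guess at the "why" is wrong, which is why the prompt asks the model to flag its own guesses.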


