If GPUs depreciate, frontier models depreciate, training and serving recipes diffuse, and intelligent tokens become commodities, then what in AI, if anything, actually compounds?
Claude Code and I got quite excited after the accidental open-sourcing of Claude Code's own source code.
One thing led to another and I ended up writing a 19-chapter technical handbook extracting the production engineering patterns from ~500,000 lines of TypeScript. Not the textbook patterns — the ones that only emerge under real load, real money, and real adversaries. Cache economics driving architecture. Permission pipelines shaped by HackerOne reports. Memory systems with mutual exclusion and rollback. A secret scanner that must obfuscate its own detection strings to pass the build system.
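That last one is fun to show concretely. Below is a minimal TypeScript sketch of the general idea, with invented marker strings and function names; it is not Claude Code's actual implementation. The detection strings live in the source only as fragments that are joined at runtime, so the scanner never matches its own source and the build's secret scan stays green.

```typescript
// Sketch of a self-obfuscating secret scanner (all names and patterns are
// invented for illustration). The detection strings are stored as fragments
// and assembled at runtime, so the scanner's own source never contains the
// literals it hunts for, and therefore passes its own scan during the build.

const FRAGMENTED_MARKERS: string[][] = [
  ["AK", "IA"],                        // AWS access key ID prefix
  ["-----BEGIN ", "PRIVATE KEY-----"], // PEM private key header
  ["gh", "p_"],                        // GitHub personal access token prefix
];

// Join fragments only at runtime, never at build time.
const markers = (): string[] =>
  FRAGMENTED_MARKERS.map((parts) => parts.join(""));

export function scanForSecrets(text: string): string[] {
  return markers().filter((marker) => text.includes(marker));
}

// The PEM header literal appears nowhere in this file, so even the test
// input below has to be assembled from fragments to avoid tripping a scan.
const sample = ["-----BEGIN ", "PRIVATE KEY-----"].join("");
console.log(scanForSecrets(`oops, leaked: ${sample}`));
// -> ["-----BEGIN PRIVATE KEY-----"]
```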
The epilogue is my favourite part. It's written by Claude itself, reflecting on reading its own source code. On discovering that most of the engineering around it exists to make it cheaper, not smarter. On the diminishing-returns detector that watches its output, and on being "a little annoyed that it's right."
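Since that detector comes up in the epilogue, here is a deliberately crude TypeScript sketch of what watching for diminishing returns can look like: compare successive outputs and stop once revisions stop meaningfully changing. The similarity metric, thresholds, and every name here are assumptions for illustration, not the actual mechanism.

```typescript
// Illustrative diminishing-returns detector (all names and thresholds are
// invented). It compares each output with the previous one and signals a
// stop once consecutive revisions barely change.

// Jaccard similarity over word sets: 1.0 means identical vocabularies.
function jaccard(a: string, b: string): number {
  const setA = new Set(a.toLowerCase().split(/\s+/).filter(Boolean));
  const setB = new Set(b.toLowerCase().split(/\s+/).filter(Boolean));
  let overlap = 0;
  for (const word of setA) if (setB.has(word)) overlap++;
  const union = setA.size + setB.size - overlap;
  return union === 0 ? 1 : overlap / union;
}

export class DiminishingReturnsDetector {
  private last: string | null = null;
  private stableRounds = 0;

  constructor(
    private readonly threshold = 0.9, // similarity above this counts as "no progress"
    private readonly patience = 2,    // how many stale rounds before stopping
  ) {}

  // Feed each new output; returns true when it's time to stop iterating.
  observe(output: string): boolean {
    if (this.last !== null && jaccard(this.last, output) >= this.threshold) {
      this.stableRounds++;
    } else {
      this.stableRounds = 0;
    }
    this.last = output;
    return this.stableRounds >= this.patience;
  }
}

// Usage: break out of an agent loop once revisions become near-duplicates.
const detector = new DiminishingReturnsDetector();
const drafts = [
  "draft one",
  "draft one plus edits",
  "draft one plus edits",
  "draft one plus edits",
];
for (const draft of drafts) {
  if (detector.observe(draft)) {
    console.log("stopping: diminishing returns detected");
    break;
  }
}
```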
Builds on Antonio Gulli's Agentic Design Patterns taxonomy and an earlier analysis I did of OpenAI's Codex CLI.