This is not lying, that is just what run rate revenue means! It makes sense to use as a metric when a company’s user base is growing as fast as Anthropic’s is.
It makes sense to be extremely misleading about actual accounting figures? In what world is it okay to say you have $19b in ARR when you have only ever generated $5b for the entire duration of your company's existence?
Did Enron start a business school I'm unaware of, or something?
I think this is because when you shrink it down, the model ends up space constrained and each “neuron” ends up having to do multiple duties. It can still be tuned to perform well at specific tasks, but it no longer generalizes as well. It’s somewhat unintuitive, but larger models are often internally simpler than smaller ones for this same reason.
It’s not hypocritical at all. You can be a fan of a technology and still acknowledge its downsides. Every technology has places it is useful and places it is harmful.
Fun fact: because WFC is graph-based, you can do stuff like creating a graph where it uses time as a dimension, so you can create animations that “wrap” in time.
In this rabbit example I made 8 years ago, the WFC solver ensures that the animation must loop, which means you will always end up with an equal number of births and deaths.
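To make the "time as a dimension" trick concrete, here is a toy WFC-style collapse where each graph node is one animation frame and the last frame wraps around to constrain the first. The states and transition rules are hypothetical stand-ins (not the original rabbit tileset), but the wrap-around constraint is exactly why births and deaths must balance: every "born" has to be matched by a "dying" before the cycle closes.

```python
import random

# Hypothetical per-frame states for one creature slot in a looping animation.
STATES = {"empty", "born", "alive", "dying"}
ALLOWED = {                      # (state in frame i, state in frame i+1)
    ("empty", "empty"), ("empty", "born"),
    ("born", "alive"),
    ("alive", "alive"), ("alive", "dying"),
    ("dying", "empty"),
}

def propagate(domains):
    """AC-3 style pruning over the cyclic frame graph."""
    n = len(domains)
    changed = True
    while changed:
        changed = False
        for i in range(n):
            j = (i + 1) % n  # modular index: this is the "wrap in time"
            # drop states in frame i with no legal successor in frame j
            keep = {a for a in domains[i]
                    if any((a, b) in ALLOWED for b in domains[j])}
            if keep != domains[i]:
                domains[i], changed = keep, True
            # drop states in frame j with no legal predecessor in frame i
            keep = {b for b in domains[j]
                    if any((a, b) in ALLOWED for a in domains[i])}
            if keep != domains[j]:
                domains[j], changed = keep, True

def collapse(n, seed=0):
    rng = random.Random(seed)
    while True:  # restart on contradiction: the simplest WFC fallback
        domains = [set(STATES) for _ in range(n)]
        while all(domains) and any(len(d) > 1 for d in domains):
            # collapse the lowest-entropy (most constrained) undecided frame
            i = min((k for k in range(n) if len(domains[k]) > 1),
                    key=lambda k: len(domains[k]))
            domains[i] = {rng.choice(sorted(domains[i]))}
            propagate(domains)
        if all(len(d) == 1 for d in domains):
            return [next(iter(d)) for d in domains]

frames = collapse(8, seed=1)
# The cyclic constraint graph forces balance: births == deaths.
assert frames.count("born") == frames.count("dying")
```

Because the adjacency constraint is applied between frame `n-1` and frame `0` just like any other pair, the solver literally cannot produce a non-looping animation.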
Yup. This is exactly what is going to happen. It’s strange that so many people here can’t seem to extrapolate from the current state of things. It’s inevitable.
My guess is humans will still be necessary for financial, life-or-death, and mission-critical software. But what % of developer jobs work on those systems? 5%?
It hit the front page here a few weeks ago, but I don’t think most people took it seriously and got hung up on the $1000/day in tokens part.
I am convinced that approach is the future of nearly all software development. It’s basically the idea that, if you’re willing to spend enough tokens, current models can already complete any software task. With the right framework in place, you don’t need to think about the code at all, only the results.
I really don’t like that the industry is heading this way, but the more I consider that approach, the more I’m convinced it is inevitable.
My question is always: what are you building? You need to tell the AI what to build. What if it does it in a way that isn't what you want, or makes the button blue instead of red, or gets any number of other decisions wrong?
AI can write the code, but it can't tell you what code you want it to write. In other words, how long are your specs? Either the LLM decides "whatever" or you need massive amounts of documentation to coordinate.
We still need to decide what to build, and some of the how. That is not automate-able, yet everyone seems to gloss over that bit.
Yes, the LLM writes the specs. In fact the LLM writes everything, and the humans only flag anything they want changed; other than that it’s completely automated.
Imagine you were working with a very talented software shop. You might tell them your preferences sometimes and some things you want changed, but otherwise they mostly just build the right things the right way. And unlike a real software shop, the LLM system can implement changes incredibly fast.
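The control flow being described is simple enough to sketch. Everything here is hypothetical: `llm` and `human_flags` are stubs standing in for a real model API call and a real review step, so only the shape of the loop is meaningful.

```python
# Hypothetical sketch of the "LLM writes everything, humans only flag
# changes" loop described above. Both functions are stubs.

def llm(prompt: str) -> str:
    # stub: a real system would call a code-generating model here
    return f"artifact for: {prompt}"

def human_flags(artifact: str) -> list[str]:
    # stub: humans review the output and return change requests (possibly none)
    return []

def build(goal: str, max_rounds: int = 5) -> str:
    artifact = llm(goal)                   # the LLM writes the spec AND the code
    for _ in range(max_rounds):
        flags = human_flags(artifact)      # humans only flag what they dislike
        if not flags:
            return artifact                # nothing flagged: done
        artifact = llm(goal + "; fix: " + "; ".join(flags))
    return artifact

result = build("todo app")
```

The point of the sketch is that the human never writes code or specs, only the `flags` list, and the loop's cost scales with tokens spent rather than engineering hours.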
I have this conversation (or a variation thereof) with some friends: I suspect that the vast majority of “mainstream commercial software development” is flat out just cooked. We won’t be writing code, nor debugging it. It’ll just be people throwing more LLM compute at everything.
Open source, hobbyist, and personal projects will probably remain the last bastions of “human in the loop” and human-written code, and I suspect these circles will contract into smaller, tighter ones.