Hacker News | new | past | comments | ask | show | jobs | submit | MattRix's comments | login

This is not lying, that is just what run rate revenue means! It makes sense to use as a metric when a company’s user base is growing as fast as Anthropic’s is.

It makes sense to be extremely misleading about actual accounting figures? In what world is it okay to say you have $19b in ARR when you have only ever generated $5b for the entire duration of your company's existence?

Did Enron start a business school I'm unaware of or something?


> In what world is it okay to say you have $19b in ARR when you have only ever generated $5b for the entire duration of your company's existence?

In the same world that it makes sense to say that your current speed is 57mph when you've only driven 15 miles since starting the trip.
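To put rough numbers on the analogy: a run rate just annualizes the most recent period, so for a fast-growing company it can dwarf lifetime revenue without anyone lying. A minimal sketch (the monthly figures are made up for illustration, not Anthropic's actual numbers):

```python
# "ARR" as used loosely here extrapolates the most recent month;
# lifetime revenue sums everything earned so far.
# All figures below are invented to illustrate the mechanics.
monthly_revenue = [100, 160, 250, 400, 640, 1000]  # $M, growing fast

run_rate = monthly_revenue[-1] * 12  # annualize the latest month
lifetime = sum(monthly_revenue)      # total ever generated

print(run_rate, lifetime)  # 12000 2550  ($12B run rate vs $2.55B lifetime)
```

Same mechanics as the speedometer: 57 mph describes the current instant, not the 15 miles behind you.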


hah that’s a great way to explain it

sir if you say a number is $19B and everyone who is invested knows what it means, is there a problem?

Brandon Sanderson recently did a talk about something similar: https://youtu.be/mb3uK-_QkOo

I think this is because when you shrink it down, the model ends up space constrained and each “neuron” ends up having to do multiple duties. It can still be tuned to perform well at specific tasks, but no longer generalizes as well. It’s somewhat unintuitive, but larger models are often simpler than smaller ones for this same reason.

It’s not hypocritical at all. You can be a fan of a technology and still acknowledge its downsides. Every technology has places it is useful and places it is harmful.


But it's trivially evident that the harmful use cases are dominating. Handwaving that away for profit is shitty.


what’s to keep people from selling or giving away those id tags? seems like a nefarious entity could buy them in bulk


It's already sorta happening with SIM-cards/phone numbers that are sometimes used for similar purposes.


Same thing that keeps me from letting my agent do the online talking for me. That is to say… nothing.


law enforcement.


Fun fact: because WFC is graph-based, you can do stuff like creating a graph where it uses time as a dimension, so you can create animations that “wrap” in time.

In this rabbit example I made 8 years ago, the WFC solver ensures that the animation must loop, which means you will always end up with an equal number of births and deaths.

https://xcancel.com/MattRix/status/979020989181890560
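The looping trick falls out of treating time as just another constrained axis with cyclic adjacency. Here's a toy illustration of that idea (not MattRix's actual solver; the tiles and adjacency rules are hypothetical): frames form a ring, the last frame must legally precede the first, and under these rules that wrap constraint forces births and deaths to balance.

```python
import random

# Toy constraint solver over a cyclic "time" axis: each frame gets one
# tile, and adjacent frames -- including last -> first, which is what
# makes the animation loop -- must satisfy the adjacency rule.
TILES = ["empty", "birth", "rabbit", "death"]
# Hypothetical rule: which tile may follow which in time.
ALLOWED = {
    "empty":  {"empty", "birth"},
    "birth":  {"rabbit"},
    "rabbit": {"rabbit", "death"},
    "death":  {"empty"},
}

def solve(n_frames, rng=random):
    """Backtracking search for a valid cyclic sequence of frames."""
    frames = [None] * n_frames
    def backtrack(i):
        if i == n_frames:
            # Wrap constraint: last frame must legally precede the first.
            return frames[0] in ALLOWED[frames[-1]]
        options = list(TILES)
        rng.shuffle(options)  # randomize, like WFC's random collapse
        for tile in options:
            if i > 0 and tile not in ALLOWED[frames[i - 1]]:
                continue
            frames[i] = tile
            if backtrack(i + 1):
                return True
        frames[i] = None
        return False
    return frames if backtrack(0) else None

loop = solve(8)
print(loop)
# Because every rabbit "run" on the ring is entered via exactly one
# birth and exited via exactly one death, births always equal deaths.
```

A real WFC solver would propagate constraints and collapse lowest-entropy cells rather than brute-force backtrack, but the cyclic wrap constraint works the same way.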


Nope. As someone who has tried to get tickets, most of the matches are sold out, and even the least desirable matches are quite expensive.


Yup. This is exactly what is going to happen. It’s strange that so many people here can’t seem to extrapolate from the current state of things. It’s inevitable.


My guess is humans will still be necessary for financial, life-or-death, and mission-critical software. But what % of developer jobs work on those systems? 5%?


I’d encourage you to read this post: https://factory.strongdm.ai

It hit the front page here a few weeks ago, but I don’t think most people took it seriously; they got hung up on the $1000/day in tokens part.

I am convinced that approach is the future of nearly all software development. It’s basically about how if you’re willing to spend enough tokens, these current models can already complete any software task. With the right framework in place, you don’t need to think about the code at all, only the results.

I really don’t like that the industry is heading this way, but the more I consider that approach, the more I’m convinced it is inevitable.


My question is always: what are you building? You need to tell the AI what to build. What if it does it in a way that isn't what you want, or makes the button blue instead of red, or any number of other decisions?

AI can write the code, but not tell you what code you want it to write. In other words, how long are your specs? Either the LLM decides "whatever" or you have massive amounts of documentation to coordinate.

We still need to decide what to build, and some of the how. That is not automate-able, yet everyone seems to gloss over that bit.


Yes, the LLM writes the specs. In fact the LLM writes everything, and the humans only flag anything they want changed; other than that, it’s completely automated.

Imagine you were working with a very talented software shop. You might tell them your preferences sometimes and some things you want changed, but otherwise they mostly just build the right things the right way. And unlike a real software shop, the LLM system can implement changes incredibly fast.


I have this conversation (or a variation thereof) with some friends: I suspect that the vast majority of “mainstream commercial software development” is flat out just cooked. We won’t be writing code, nor debugging it. It’ll just be people throwing more LLM compute at everything.

Open source, hobbyist and personal projects will probably remain the last bastions of “human in the loop” and human-written code, and I suspect these communities will contract into smaller, tighter circles.


The humans working there do. To state otherwise is to absolve those humans of any responsibility.


Did I state otherwise though?


Did I say you stated otherwise?


