
I don't know why anyone would believe anything this guy is saying, though, especially now that we know he's going to receive a 7% stake in the now-for-profit company.

There are two main interpretations of what he's saying:

1) He sincerely believes that AGI is around the corner.

2) He sees that his research team is hitting a plateau of what is possible and is prepping for a very successful exit before the rest of the world notices the plateau.

Given his track record of honesty and the financial incentives involved, I know which interpretation I lean towards.



This is a false dichotomy. Clearly getting money and control are the main objectives here, and we're all operating over a distribution of possible outcomes.


I don't think so. If Altman is prepping for an exit (which I think he is), I'm having a very hard time imagining a world in which he also sincerely believes his company is about to achieve AGI. An exit only makes sense if OpenAI is currently at approximately its peak valuation, not if it is truly likely to be the first to AGI (which, if achieved, would give it a nearly infinite value).


What's the effective difference, to him personally, between exiting now and OpenAI actually achieving what you call "nearly infinite value"?

Either way he is set for life, truly one of the wealthiest humans to have ever existed... literally.


...or he's just Palpatine, who wants a shitload of money regardless of future speculations, end of story.

