I don't know why anyone would believe anything this guy is saying, though, especially now that we know he's going to receive a 7% stake in the now-for-profit company.
There are two main interpretations of what he's saying:
1) He sincerely believes that AGI is around the corner.
2) He sees that his research team is hitting a plateau of what is possible and is prepping for a very successful exit before the rest of the world notices the plateau.
Given his track record of honesty and the financial incentives involved, I know which interpretation I lean towards.
This is a false dichotomy. Clearly getting money and control are the main objectives here, and we're all operating over a distribution of possible outcomes.
I don't think so. If Altman is prepping for an exit (which I think he is), I'm having a very hard time imagining a world in which he also sincerely believes his company is about to achieve AGI. An exit only makes sense if OpenAI is currently at approximately its peak valuation, not if it is truly likely to be the first to AGI (which, if achieved, would give it a nearly infinite value).