My theory… they cracked AGI and have been running it in their backend for a while. That's what is training GPT-5. It would also explain the weird data that's been coming out, which appears to show the base GPT-4 model remembering context across different threads, as well as some odd statements Altman has made about the AI learning from conversations. The board found out, realized he was lying to the board, the government, and the public, and fired him.
JUST A THEORY
Can I read more about this anywhere?