Everyone from Greg to Sam to Ilya keeps hanging on "AGI for the benefit of humanity". According to OpenAI's own charter and structure documents, AGI is explicitly carved out of all commercial and IP licensing agreements, including the ones with Microsoft. Nuclear.
Well, apparently the board decides what counts as AGI. But three of those board members (everyone now left except Ilya) don't even work at OpenAI and are only privy to what the rest share.
Yesterday, this is what Altman said: "On a personal note, like four times now in the history of OpenAI, the most recent time was just in the last couple of weeks, I've gotten to be in the room when we pushed the veil of ignorance back."
He goes on to say, "By next year, the model capabilities will take such a leap forward that no one would have expected."
Now keep in mind: while we can all speculate on just how much better the next iteration will be, the idea that they could be sitting on something noticeably better is not far-fetched at all. OpenAI sat on GPT-4 for 8 months before announcing it to the public.
He says they can push language models much farther, but that more breakthroughs are required. Here's where it gets weird: he immediately starts talking about superintelligence and "discovering new physics" as the bar. He says, "If it can't discover new physics, I don't think it's a superintelligence." Nobody asked you about this, Sam...
To Ilya, they built AGI internally, and Altman wanted to release and monetize it early, underselling it to the rest of the board as being far away from AGI. Ilya considers it AGI, or something extremely close to it, and deserving of far more caution, so he set out to convince the board that Altman was underselling the capabilities of their newer models and risking a premature release of AGI. If true, the board could have seen this as Altman lying about something foundational to their mission (to safely and responsibly release AGI into the world) and fired him for it.
Characterizing Dev Day as going "too far" makes more sense in this scenario. Ultimately, the only reason SOTA LLM agents aren't particularly dangerous is their lack of competence. If you suddenly bumped that competence up while laying the groundwork for deploying agents everywhere...
For a sense of scale, the most upvoted Hacker News stories of all time (points in parentheses):
1. (6015) Stephen Hawking dying
2. (5771) Apple's letter related to the San Bernardino case
3. (4629) Sam Altman getting fired from OpenAI (this thread)
4. (4338) Apple's page about Steve Jobs dying
5. (4310) Bram Moolenaar dying
https://hn.algolia.com/