
The simplest solution is often the best. In this case, either Sam did something so dramatically bad that it created a high level of legal peril, an existential risk for OpenAI and Microsoft, or something in his personal life came to light that was beyond the pale. I love the AGI theories, but in all likelihood it's something boring: he made a terrible choice somewhere in his life and it's caught up to him, with major consequences.


Or the simplest solution is that the board is just as incompetent.



