Hacker News

Consistent here meaning, I guess, that all voting power will go to Sam Altman personally, right?


Well, he is the one that did most of the actual research and work, riiiiight?


    I'm ignorant on this topic, so please excuse me.  Why did AI happen now?  What was the secret sauce at OpenAI that made this explode into being all of a sudden?

  My general impression was that the concept of 'how it works' has existed for a long time; it was only recently that video cards had enough VRAM to hold the matrices in memory for the necessary calculations.

  If anybody knows, not just the person I replied to.


A short history:

1986: David Rumelhart, Geoffrey Hinton, and Ronald Williams publish the backpropagation algorithm as applied to neural networks, allowing more efficient training.

2011: Jeff Dean starts Google Brain.

2012: Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton publish AlexNet, which demonstrates that training deep networks on GPUs is dramatically faster, surpassing the non-neural-network entrants by a wide margin in the ImageNet image-classification competition.

2013: Geoffrey Hinton sells his team to the highest bidder. Google Brain wins the bid.

2015: Ilya Sutskever founds OpenAI.

2017: Google Brain publishes the first Transformer, showing impressive performance on language translation.

2018: OpenAI publishes GPT, showing that next-token prediction can solve many language benchmarks at once using Transformers, hinting at foundation models. They later scale it and show increasing performance.

The reality is that these ideas could have been combined earlier than they were (and plausibly ideas that will only be found in the future could be found today), but research takes time, and researchers tend to focus on one approach and assume that another has already been explored and doesn’t scale to SOTA (as many assumed for neural networks). First-mover advantage in finding a workable solution is strong, and it benefited OpenAI.


This is not accurate. OpenAI and other companies could do it not entirely because of transformers but because of the hardware that can compute faster.

We've had upgrades to hardware, mostly led by NVIDIA, that made it possible.

New LLMs don't even rely that much on that aforementioned older architecture, right now it's mostly about compute and the quality of data.

I remember seeing graphs showing that the whole "learning" phenomenon we see with neural nets is mostly about compute and data quality, with the model and optimizations just being the cherry on the cake.


> New LLMs don't even rely that much on that aforementioned older architecture

Don’t they all state that they are based on the transformer architecture?

> not entirely because of transformers but because of the hardware

Kaplan et al. 2020[0] (figure 7, §3.2.1) show that LSTMs, the leading language-modeling architecture prior to transformers, scaled worse because they plateaued quickly as context length grew.

[0]: https://arxiv.org/abs/2001.08361
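The fits in that paper are simple power laws. A minimal sketch of the parameter-count fit, using the constants Kaplan et al. report (the function name and print loop are illustrative, not from the paper):

```python
# Kaplan et al. (2020) fit test loss as a power law in non-embedding
# parameter count N:  L(N) = (N_c / N) ** alpha_N.
# ALPHA_N and N_C are the paper's reported fitted constants; this is
# an illustrative sketch, not a reproduction of their experiments.

ALPHA_N = 0.076   # fitted exponent for parameter count
N_C = 8.8e13      # fitted "critical" parameter count

def loss_from_params(n_params: float) -> float:
    """Predicted cross-entropy loss (nats/token) at n_params parameters."""
    return (N_C / n_params) ** ALPHA_N

for n in (1e6, 1e9, 1e12):
    print(f"N={n:.0e}: predicted loss {loss_from_params(n):.2f}")
```

The point of the figure cited above is that transformers keep following a curve like this as scale grows, while LSTM curves bend away from it sooner.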


Also, this sort of thing couldn't be done in the 80s or 90s, because it was much harder to compile that much data.


I know this is just a short history, but I think it is inaccurate to say "2015: Ilya Sutskever founds OpenAI." I get that we all want to know what he saw, etc., and he's clearly one of the smartest people in the world, but he didn't found OpenAI by himself. Nor was it his idea, was it?


Ilya may not have been the only founder; Sam was coordinating it, and Elon provided vital capital (and also access to Ilya).

But of the co-founders, especially if we believe Elon's and Hinton's descriptions of him, he may have been the one who mattered most for their scientific achievements.


Short histories omit a lot of information, but it would be impractical to make it book-sized. There were numerous founders, and as another commenter mentioned, Elon Musk recruited Ilya, which soured Musk's relationship with Larry Page.

Honestly, those are not the missing parts that matter most, IMO. The evolution of the concept of attention across many academic papers, which fed into the Transformer, is the big missing element in this timeline.


> but it would be impractical to make it book-sized

Not really:

History: https://arxiv.org/abs/2212.11279 (75 pp.)

Survey: https://arxiv.org/abs/1404.7828 (88 pp.)

Conveniently skim-readable over the course of four weekends in one month.


I thought it was Elon Musk who personally recruited Ilya to join OpenAI, which Musk funded early on alongside others?


What a time to be alive!


Mostly branding and willingness.

w.r.t. Branding.

AI has been happening "forever". While "machine learning" or "genetic algorithms" were more the rage pre-LLMs, that doesn't mean people weren't using them. It's just that Google Search didn't brand itself as "powered by ML". AI is everywhere now because everything already used AI; products are now branded as "Spellcheck With AI" instead of just "Spellcheck".

w.r.t. Willingness

Chatbots aren't new. You might remember Tay (2016) [1], Microsoft's Twitter chatbot. It should also seem strange that right after OpenAI released ChatGPT, Google released Bard (now Gemini). The transformer architecture behind LLMs dates to 2017; nobody was willing to be the first public chatbot again until OpenAI did it, but they were all working on them internally. ChatGPT launched in Nov 2022 [2]; Blake Lemoine's firing was June 2022 [3].

[1]: https://en.wikipedia.org/wiki/Tay_(chatbot)

[2]: https://en.wikipedia.org/wiki/ChatGPT

[3]: https://www.npr.org/2022/06/16/1105552435/google-ai-sentient


There's a deleted scene from Terminator 2 (1991) where we get a description of the neural network behind Skynet.

https://www.youtube.com/watch?v=1UZeHJyiMG8

https://en.wikipedia.org/wiki/Skynet_(Terminator)


Thanks for the information. I know Google had TPUs custom-made a long time ago, and that the concept has existed for a LONG TIME. I assumed that a technical hurdle (i.e. VRAM) was finally behind us, allowing the theoretical speedup (1 token/sec on a CPU vs. 100 tokens/sec on a GPU) to become practical.

Thanks for the links too!


ZIRP ended.


No, the hundreds of people who worked on NNs prior to his arrival were the people who did the MOST actual research and work. Sam was in the right place at the right time.


Introducing Sam Altman, inventor of artificial intelligence! o_o


Is it in the history books?


History books, what are those? This is what the AI told me, and the AI is an impartial judge that can't possibly lie.


Yeeees, right next to the page where he's shown to be a fantastic brother to his sister.


yeah, split with Microsoft.



