
In other words, it can't think or reason on its own. It depends on prior human thought and work in order to approximate "intelligence".


No. It could think on its own, like MuZero does. It becomes intelligent by learning from prior human work, like every intelligent being.


Well, that's the great question.

If there's enough of the "essence of reason" represented in human text, then it's possible in theory that some sufficiently large LLM could eventually grok it and fully generalize over rational thought (i.e., AGI).

Personally I'm extremely skeptical. There's quite a lot of machinery in the brain other than language, and conscious rationality runs in a constant loop, not a single forward pass through the weights.

I can't disprove the possibility. But if I were a betting man, I'd go all in on LLM improvements following a sigmoid curve rather than an exponential one.


Does that mean if I learn by reading books written by other people, I'm not intelligent?


If you learn only to remember the content or order of words, then no. If you learn to improve your internal model of the world, then you are intelligent.


So you agree that LLMs are intelligent?

https://thegradient.pub/othello/


Thanks for the article. Very informative.



