
Language models can only parrot back their training input. There's no generality to them at all; the best they can do is a crude, approximate interpolation of their training examples that may or may not be "correct" in any given instance. There are things the typical AI/ML approach might be genuinely useful for (e.g., generating "probable" conjectures by leveraging the "logical uncertainty" of very weak and thus tractable logics), but mainstream language-model learning is a complete non-starter for this kind of task.
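The "interpolation of training examples" claim can be caricatured with a toy sketch (a deliberately crude analogy of my own, not an actual language model): a "model" that memorizes its training pairs verbatim and answers queries by blending the nearest memorized examples.

```python
# Toy illustration: a "model" that only interpolates between memorized
# training examples via nearest-neighbor lookup. Queries between examples
# get a linear blend; queries outside the training range just clamp to
# the nearest memorized answer -- no generalization at all.

def train(examples):
    # examples: list of (input_value, output_value) pairs, memorized verbatim
    return sorted(examples)

def predict(model, x):
    # Clamp to the nearest memorized example outside the training range.
    if x <= model[0][0]:
        return model[0][1]
    if x >= model[-1][0]:
        return model[-1][1]
    # Otherwise linearly interpolate between the two bracketing examples.
    for (x0, y0), (x1, y1) in zip(model, model[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)

model = train([(0, 0), (2, 4), (4, 16)])   # samples of y = x**2
print(predict(model, 3))    # 10.0 -- an interpolation, not the true value 9
print(predict(model, 10))   # 16  -- clamped; nothing beyond the data
```

Whether real language models are "just" doing something like this is exactly what's in dispute, but it makes concrete what "approximate interpolation that may or may not be correct" would mean.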


I suggest looking at the examples in the PaLM blog post and the paper. PaLM is extremely impressive.



