Hacker News

I’m tired of seeing every thread about interesting, viable-looking use cases bombarded by the same contrarian armchair philosophers delivering an unsolicited lecture on how LLMs don’t provide value because they aren’t conscious, or whatever.

Keep this stuff coming!



I had the impression that people didn't really care about the consciousness of LLMs. After all, they're just glorified Markov chains.

The main criticism seems to focus on the fact that they "hallucinate" a bit too often.



