Yeah, okay but... look, I concede that someone who shouldn't be doing anything except watching passive entertainment could absolutely take insane advice from an LLM (or a sociopathic human) and seriously hurt themselves.

But raw-dogging capacitors in CRTs is such an overt straw-man argument in this conversation. People who are cleaning bathrooms for the first time can hopefully be trusted not to drink the bleach, right?

If someone licks a running table saw because an LLM said it would be fine, we're talking about entirely different problems.



Don't worry, LLMs are perfectly ok for getting information. Just ask drugs.com about penisomab https://bsky.app/profile/harrisonk.bsky.social/post/3mfs6adw...


Again: not doing anything at all with health or chemistry. They aren't what I am interested in, even peripherally.

What you seem to be missing is that LLMs are better at/for some things than others. Legal review, 3D geometry, therapy, and apparently chemistry are off the list.

It doesn't make sense to project that onto domains where it excels.


> What you seem to be missing is that LLMs are better at/for some things than others.

I guarantee it is using the same system to write code and teach you about electronics that it is using to teach people about chemistry, and if you can't see how that means the resulting information is suspicious at best, then I don't even know what to say anymore.


The worst thing social media has done to our species is convince everyone that they are supposed to have something to say about everything.

It's genuinely alright to stfu sometimes when you don't have anything productive to add to the conversation.


Indeed, which is exactly what you should have done, instead of writing a comment that is simply an insult.


My concern is that about half of folks are below median intelligence. And new generations will be exposed to AI from a young age, possibly lacking the rigor and experience of those who came before. I'm afraid they will trust the AI too much and find themselves quickly in over their heads with no way back.

Perhaps I'm just pearl clutching. I guess time will tell.



