Yeah, okay but... look, I concede that someone who shouldn't be doing anything except watching passive entertainment could absolutely take insane advice from an LLM (or a sociopathic human) and seriously hurt themselves.
But raw dogging capacitors in CRTs is such an overt straw man in this conversation. People who are cleaning bathrooms for the first time can hopefully be trusted not to drink the bleach, right?
If someone licks a running table saw because an LLM said it would be fine, we're talking about entirely different problems.
Again: I'm not doing anything at all with health or chemistry. Those aren't what I'm interested in, even peripherally.
What you seem to be missing is that LLMs are better at/for some things than others. Legal review, 3D geometry, therapy and apparently chemistry are off the list.
It doesn't make sense to project that onto domains where it excels.
> What you seem to be missing is that LLMs are better at/for some things than others.
I guarantee it is using the same system to write code and teach you about electronics that it is using to teach people about chemistry, and if you can't see how that means the resulting information is suspicious at best, then I don't even know what to say anymore.
My concern is that about half of folks are below average intelligence. And new generations will be exposed to AI from a young age, possibly without the rigor and experience of those who came before. I'm afraid they will trust the AI too much and quickly find themselves in over their heads with no way back.
Perhaps I'm just pearl clutching. I guess time will tell.