To perhaps stir the "what do words really mean" argument: "lying" would generally imply some sort of conscious intent to bend or break the truth. A language model is not consciously making decisions about what to say; it is statistically choosing words that probabilistically sound "good" together.
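For concreteness, here is a minimal sketch of what that choosing looks like at each step, with toy probabilities I made up (a real model scores tens of thousands of tokens): the model assigns a probability to every candidate next token and samples from that distribution.

```python
import random

# Hypothetical next-token distribution after the prompt "The sky is".
# These numbers are invented for illustration only.
next_token_probs = {
    "blue": 0.62,
    "clear": 0.21,
    "falling": 0.09,
    "green": 0.08,
}

# Sample one token in proportion to its probability: no intent,
# no belief, just a weighted draw from the distribution.
tokens, weights = zip(*next_token_probs.items())
choice = random.choices(tokens, weights=weights, k=1)[0]
print(choice)  # usually "blue", occasionally something less likely
```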
>A language model is not consciously making decisions about what to say
Well, that is being doubted -- and by some of the biggest names in the field.
Namely, what is doubted isn't that it's "statistically choosing words which probabilistically sound good together", but that doing so isn't already making a consciousness (even if a basic one) emerge.
>it is statistically choosing words which probabilistically sound "good" together.
That when we speak (or lie) we do something much more nuanced, and not just a higher-level equivalent of the same thing plus the emergent illusion of consciousness, is also an idea that gets thrown around.
>Well, that is being doubted -- and by some of the biggest names in the field.
An appeal to authority is still a fallacy. We don't even have a way of proving whether a person is experiencing consciousness; why would anyone expect we could agree on whether a machine is?
In movies and written fiction, "intelligent" robots, anthropomorphized animals, elves, dwarves, etc. can all commit murder when given the attributes of humans.
We don't have real things with all human attributes, but we're getting closer, and as we do, "needs to be a human" will get thinner as an explanation of what does or doesn't count as murder, deception, and so forth.
This is an interesting discussion. The ideas of philosophy meet the practical meaning of words here.
You can reasonably say a database doesn't lie. It's just a tool, everyone agrees it's a tool and if you get the wrong answer, most people would agree it's your fault for making the wrong query or using the wrong data.
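To make that concrete, here is a hedged sketch with a made-up orders table: the database faithfully answers exactly the question it was asked, so a mistaken query yields a wrong answer with nothing resembling a lie involved.

```python
import sqlite3

# Hypothetical in-memory table of orders; the database just stores rows.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER, year INTEGER, total REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(1, 2022, 100.0), (2, 2023, 250.0), (3, 2023, 50.0)])

# Intended question: total revenue for 2023.
right = con.execute("SELECT SUM(total) FROM orders WHERE year = 2023").fetchone()[0]

# Wrong query (wrong year): the answer is useless for the user's purpose,
# but the database has faithfully answered the question it was actually asked.
wrong = con.execute("SELECT SUM(total) FROM orders WHERE year = 2022").fetchone()[0]

print(right, wrong)  # 300.0 100.0
```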
But the difference between ChatGPT and a database is that ChatGPT will support its assertions. It will say things that support its position: not just fake references but an entire line of argument.
Of course, all of this is simply duplicating/simulating what humans do in discussions. You can call it a "simulated lie" if you don't like the idea of it really lying. But I claim that in normal usage people will take this as "real" lying, and ultimately that functional meaning is what "higher", more philosophical definitions will have to accept.
Somewhat unrelated, but I think philosophy will be instrumental in the development of actual AI.
To make artificial intelligence, you need to know what intelligence is, and that is a philosophical question.
Lying implies an intention. ChatGPT doesn't have that.
What ChatGPT definitely does do is generate falsehoods. It's a bullshitting machine. Sometimes the bullshit produces true responses. But ChatGPT has no epistemological basis for knowing truths; it is just trained to say stuff.
And if you want to be pedantic, ChatGPT isn't even generating falsehoods. A falsehood requires propositional content and therefore intentionality, but ChatGPT doesn't have that. It merely generates strings that, when interpreted by a human being as English text, signify falsehoods.
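Either way, to put "trained to say stuff" in concrete terms, here is a toy sketch of the standard next-token training objective (illustrative numbers of my own, not ChatGPT's actual internals): the loss rewards assigning high probability to whatever token actually came next in the training text; truth never enters into it.

```python
import math

# Toy next-token training signal. The "target" is whatever word followed
# in the training corpus; nothing here checks whether the text is true.
predicted_probs = {"blue": 0.62, "clear": 0.21, "falling": 0.09, "green": 0.08}
target = "falling"  # the corpus happened to say "the sky is falling"

loss = -math.log(predicted_probs[target])  # cross-entropy on that one token
print(round(loss, 3))  # 2.408 -- training pushes this probability up, true or not
```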
Getting into the weeds, but I don't agree with this construal of what propositional content is or can be. (There is no single definition of "proposition" that has wide acceptance and specifies your condition here.) There is no comparable way to assess truth outside of formalized mathematics, but the encoding of mathematical statements (think Gödel numbers) comes to mind. I don't think the machine's ability to understand propositions is necessary for the propositions to be propositional; the system of ChatGPT is designed to return propositional content (albeit not ex nihilo, but according to the principles of its design), and this could be considered analogous to the encoding of arithmetical symbolic notation into a formally-described system. The difference is just that we happen to have a formal description of how some arithmetic systems operate, which we don't (and I would say can't) have for English. Mild throwback to my university days studying all of this!
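To illustrate the Gödel-number analogy with a toy encoding of my own (purely illustrative, not a claim about how ChatGPT works): a statement can be mapped to a single number by purely mechanical rules, with the encoder having no grasp of what the statement asserts.

```python
# Toy Gödel-style encoding: map each symbol to a number, then encode the
# whole statement as a product of prime powers. The encoder manipulates
# the statement by mechanical rules, with no grasp of what it asserts.
SYMBOLS = {"0": 1, "S": 2, "=": 3, "+": 4, "(": 5, ")": 6}
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

def godel_number(statement: str) -> int:
    n = 1
    for prime, symbol in zip(PRIMES, statement):
        n *= prime ** SYMBOLS[symbol]
    return n

# "S0+S0=SS0" is "1 + 1 = 2" written in successor notation.
print(godel_number("S0+S0=SS0"))
```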
Does a piece of software with a bug that causes it to produce incorrect output lie, or is it simply a programming error? Did the programmer who wrote the buggy code lie? I don't think so.
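For instance (a made-up toy bug, nothing from any real codebase): the function below produces a falsehood-shaped output, yet there is no intent anywhere, just a programmer's slip.

```python
def average(xs):
    # Off-by-one bug: divides by len(xs) + 1 instead of len(xs).
    # The output is wrong, but no one anywhere intended to deceive.
    return sum(xs) / (len(xs) + 1)

print(average([2, 4, 6]))  # correct answer is 4.0; this prints 3.0
```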
The difference is everything. It doesn't understand intent; it doesn't have a motivation. This is no different from what fiction authors, songwriters, poets, and painters do.
The fact that people assume what it produces must always be real because it is sometimes real is not its fault. That lies with the people who uncritically accept what they are told.
An exceedingly complicated autocomplete program, which an "AI" like ChatGPT is, does not have motives, does not know the concept of "lying" (or any concept at all), and simply does things as ordered by its user.
I don't think there's a difference.