
"It doesn't lie, it just generates lies and prints them to the screen!"

I don't think there's a difference.



To perhaps stir the "what do words really mean" argument, "lying" would generally imply some sort of conscious intent to bend or break the truth. A language model is not consciously making decisions about what to say, it is statistically choosing words which probabilistically sound "good" together.
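To make "statistically choosing words" concrete, a toy next-token sampler might look something like this. The vocabulary, probabilities, and function name are all invented for illustration; real models compute these distributions with a neural network over tens of thousands of tokens:

```python
import random

def sample_next_token(probs, temperature=1.0):
    """Pick a next token from a {token: probability} dict.

    Lower temperature sharpens the distribution toward the most
    likely token; higher temperature flattens it. The distribution
    itself is assumed given (a real model would compute it).
    """
    weights = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for tok, w in weights.items():
        cumulative += w
        if r <= cumulative:
            return tok
    return tok  # fallback for floating-point edge cases

# Hypothetical distribution for the token after "...sound":
next_token_probs = {"good": 0.5, "together": 0.3, "banana": 0.2}
token = sample_next_token(next_token_probs, temperature=0.7)
```

Nothing in this loop models truth or intent; it only models which token is likely to come next, which is the point the comment above is making.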


>A language model is not consciously making decisions about what to say

Well, that is being doubted -- and by some of the biggest names in the field.

Not the claim that it is "statistically choosing words which probabilistically sound good together" -- that part isn't disputed -- but the claim that doing so isn't already enough to make a (however basic) consciousness emerge.

>it is statistically choosing words which probabilistically sound "good" together.

The idea that when we speak (or lie) we are merely doing a higher-level equivalent of the same thing, plus having the emergent illusion of consciousness -- rather than something much more nuanced -- is also thrown around.


"Well, that is being doubted -- and by some of the biggest names in the field."

An appeal to authority is still a fallacy. We don't even have a way of proving whether a person is experiencing consciousness, so why would anyone expect we could agree on whether a machine is?


>An appeal to authority is still a fallacy

Which is neither here nor there. I wasn't making a formal argument; I was stating a fact. Take it or leave it.


Lying needs intent. ChatGPT does not think; therefore it doesn't lie in that sense.


That's like saying robots don't murder -- they just kill.


Which is actually a very good analogy. A lot of things can kill you, but only a human can be a murderer.


In movies and written fiction, "intelligent" robots, anthropomorphized animals, elves, dwarves, etc. can all commit murder when given the attributes of humans.

We don't yet have real things with all human attributes, but we're getting closer, and as we do, "needs to be a human" will wear thinner as an explanation of what can or can't commit an act of murder, deception, and so forth.


And pit bulls, but I digress. The debate gets lost in translation once it turns into a "what do words mean" debate.


This is an interesting discussion. The ideas of philosophy meet the practical meaning of words here.

You can reasonably say a database doesn't lie. It's just a tool, everyone agrees it's a tool and if you get the wrong answer, most people would agree it's your fault for making the wrong query or using the wrong data.

But the difference between ChatGPT and a database is that ChatGPT will support its assertions. It will say things that back up its position -- not just fake references but an entire line of argument.

Of course, all of this is simply duplicating/simulating what humans do in discussions. You can call it a "simulated lie" if you don't like the idea of it really lying. But I claim that in normal usage, people will take this as "real" lying, and ultimately that functional meaning is what "higher", more philosophical usages will have to accept.


Somewhat unrelated, but I think philosophy will be instrumental in the development of actual AI. To make artificial intelligence, you need to know what intelligence is, and that is a philosophical question.


Merriam-Webster gives two definitions for the verb "lie". The first requires intent, the second does not:

> to create a false or misleading impression

> Statistics sometimes lie.

> The mirror never lies.


It's a text generator. You ask it to generate something and it does. It produces only stories. Sometimes those stories happen to be based on actual facts.

This lawyer told it to produce a defence story, and it did just that.


Lying implies an intention. ChatGPT doesn't have that.

What ChatGPT definitely does do is generate falsehoods. It's a bullshitting machine. Sometimes the bullshit produces true responses. But ChatGPT has no epistemological basis for knowing truths; it just is trained to say stuff.


And if you want to be pedantic, ChatGPT isn't even generating falsehoods. A falsehood requires propositional content and therefore intentionality, but ChatGPT doesn't have that. It merely generates strings that, when interpreted by a human being as English text, signify falsehoods.


Getting into the weeds, but I don't agree with this construal of what propositional content is or can be. (There is no single, widely accepted definition of "proposition" that specifies your condition here.)

There is no comparable way to assess truth outside of formalized mathematics, but the encoding of mathematical statements (think Gödel numbers) comes to mind. I don't think the machine's ability to understand propositions is necessary to make those propositions propositional; the ChatGPT system is designed to return propositional content (albeit not ex nihilo, but according to the principles of its design), and this could be considered analogous to the encoding of arithmetical symbolic notation into a formally described system. The difference is just that we happen to have a formal description of how some arithmetic systems operate, which we don't (and I would say can't) have for English. Mild throwback to my university days studying all of this!


Saying ChatGPT lies is like saying The Onion lies.


The Onion (via its staff) intends to produce falsehoods. ChatGPT does not, and neither do its creators.


Does a piece of software with a bug in it which causes it to produce incorrect output lie or is it simply a programming error? Did the programmer who wrote the buggy code lie? I don't think so.


The difference is everything. It doesn't understand intent; it doesn't have a motivation. This is no different from what fiction authors, songwriters, poets, and painters do.

The fact that people assume what it produces must always be real because it is sometimes real is not its fault. That lies with the people who uncritically accept what they are told.


> That lies with the people who uncritically accept what they are told.

That's partly true. Just as much fault lies with the people who market it as "intelligence" to those who uncritically accept what they are told.


This is displayed directly under the input prompt:

ChatGPT may produce inaccurate information about people, places, or facts.


That's a good start. I think it needs to be embedded in the output.


An "AI" like ChatGPT is an exceedingly complicated autocomplete program: it has no motives, no concept of "lying" (or any concept at all), and it simply does what its user asks.


Language generated without regard to its truth value is different from language generated with care for its truth value. Ask Harry Frankfurt.


There is a difference. Is fiction a lie?



