
Nonsense. It isn't marketing speak to cover for anything. It's a pretty good description of what is happening.

The reason models hallucinate is that we train them to produce linguistically plausible output, which usually overlaps well with factually correct output (because it wouldn't be plausible to say e.g. "Barack Obama is white"). But when there isn't much data showing that something totally made up is implausible, there's no penalty for the model producing it.

It's nothing to do with not being able to understand your request, and it's rarely because the training data is wrong.
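To make the "no penalty for plausible fabrication" point concrete, here's a toy sketch of greedy next-token decoding. All the numbers and the "Elbonia" prompt are made up for illustration; the point is only that a likelihood-trained model picks the most *plausible* continuation, and nothing in the decoding step checks whether it's *true*:

```python
import math

# Hypothetical next-token logits (illustrative values, not from any real model)
# after a prompt like "The capital of Elbonia is". With no real data about
# the fictional country, the model falls back on city-like tokens that are
# linguistically plausible in this position.
logits = {"Paris": 2.1, "Elbon City": 1.9, "a": 0.3, "blue": -3.0}

def softmax(d):
    """Convert logits to a probability distribution (numerically stable)."""
    m = max(d.values())
    exps = {k: math.exp(v - m) for k, v in d.items()}
    z = sum(exps.values())
    return {k: e / z for k, e in exps.items()}

probs = softmax(logits)
best = max(probs, key=probs.get)

# Greedy decoding confidently emits a fabricated "fact". The training
# objective penalized implausibility ("blue" scores terribly), but there
# was never a term in the loss that penalized being wrong about Elbonia.
print(best, round(probs[best], 3))
```

This is obviously a cartoon of what a real LLM does, but it shows the shape of the problem: the output distribution encodes plausibility, and factuality only comes along for the ride when the training data happens to enforce it.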



"Hallucinate" is definitely marketing.

It translates to "creates text which contains incorrect or invalid information."

The latter just doesn't sound as good in headlines/articles/tutorials (e.g. marketing material).


We already have words for when a computer program produces unexpected/incorrect output: “defect” and “bug”


The weird thing is, it's not a bug in the software, it's a limitation.

The software is working as designed; statistics are just imperfect.


So if I replied to your comment with "you are incorrect" I would be putting you in a worse light than saying "you are hallucinating"? The second is making it sound better? Doesn't feel that way to me.


My problem with "hallucination" isn't that it makes errors sound better or worse, it's that it makes it sound like there's a consciousness involved when there isn't.


It's definitely not marketing. It has been in use for a lot longer than LLMs existed.


Links?

Also those two statements are not mutually exclusive.

Errors in statistical models having been called hallucinations in the past does not mean that term isn't marketing speak for what I said earlier.


Here's an example from 2019.

https://www.youtube.com/watch?v=wRDfzjxzj3M

> Also those two statements are not mutually exclusive.

> Errors in statistical models being called hallucinations in the past does not mean that term is not marketing speak for what I said earlier.

The implicit claim was that they call this hallucination because it sounds better. In other words that some marketing people thought "what's a nicer word for 'mistakes'?" That is categorically untrue.

I don't think there's any point arguing about whether or not the marketers like the use of the word "hallucinate", because neither of us has any evidence either way. Though I would also say the null hypothesis is that they're just using the standard word for it. So the onus is on you to provide some evidence that marketers came in and said "guys, make sure you say 'hallucinate'". Which I'm 99% sure has never happened.


It's a term of art from the days of image recognition AI that would confidently report seeing a giraffe while looking at a picture of an ambulance.

It doesn't feel right to me either, to use it in the context of generative AI, and I'd support renaming this behaviour in GenAI (text and images both) — though myself I'd call this behaviour "mis-remembering".

Edit: apparently some have suggested "delusion". That also works for me.



