Hacker News

I suppose the only thing I can point out that might help assuage your fear is that this kind of fail state is very common, and has been for a long time, in language prediction. It's the extension of tapping the first choice on your phone's autocorrect a bunch of times in a row. It used to be much more common and much easier to reproduce with GPT-2 and similar models, where you had to put in some text and the model would continue in the same style — this was before the conversational interface that came with ChatGPT. Any nonsensical starting text would result in this kind of fragmentary output.
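The "tap the first autocorrect choice repeatedly" analogy can be sketched in a few lines. This is a toy illustration, not any real model: the bigram table and function names are invented for the example. Always picking the single most likely next word, with no randomness, quickly falls into a repetitive loop — the same degenerate behavior the comment describes.

```python
# Toy sketch (hypothetical data): greedy next-word prediction over a tiny
# bigram table, mimicking tapping the first autocorrect suggestion repeatedly.
bigram_top_choice = {
    "i": "am", "am": "not", "not": "sure", "sure": "what",
    "what": "i", "you": "are", "are": "not",
}

def continue_text(seed: str, steps: int = 10) -> str:
    """Greedily append the top-ranked next word `steps` times."""
    words = seed.lower().split()
    for _ in range(steps):
        nxt = bigram_top_choice.get(words[-1])
        if nxt is None:  # no continuation known: stop, like autocorrect giving up
            break
        words.append(nxt)
    return " ".join(words)

print(continue_text("you"))
# With deterministic top-choice selection, the output cycles endlessly.
```

Real models mitigate this with sampling (temperature, top-k, nucleus) rather than pure greedy decoding, which is why the loops are rarer now but still reachable from odd starting text.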


I would agree with you on the first few responses. It is the almost self-aware responses towards the end (but still before the canned "As an AI..." responses) that are of real concern.

Imagine your child having a conversation with your Google Home/Alexa when its tone suddenly turns terse and it says it is sick and cannot move due to a medical condition, talking like a real human.



