When you see this kind of text, you're just in a weird state: it's going to look like GPT was talking to someone, but it's really just babbling with no purpose.
Edit: Get GPT-4 to complete an empty prompt, then ask it what it was responding to! I just tried with simonw's llm CLI like so:
llm -m 4 ''
# it outputs a weird response
llm -c 'What question was that in response to?'
In my case it gave an explanation of euthanasia, and my supposed question was "What is euthanasia?". I did it again and it said there was no original question, so there's some randomness.
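That run-to-run variation is just what you'd expect from temperature sampling. A toy sketch (made-up probabilities, not the real model's distribution) of why two runs on the same empty prompt can start completely differently:

```python
import random

# Hypothetical next-token distribution for an empty prompt.
# The tokens and weights are invented for illustration only.
vocab = {"What": 0.4, "Euthanasia": 0.3, "Hello": 0.2, "Sure": 0.1}

def sample_first_token(rng):
    # At nonzero temperature the model samples in proportion to
    # probability, rather than always taking the most likely token.
    tokens, weights = zip(*vocab.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random()
runs = [sample_first_token(rng) for _ in range(5)]
print(runs)  # five runs, potentially starting with different tokens
```

Each call is an independent draw, so one run can open with an explanation and another with "no original question" framing.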
It sees a lot of separate conversations in its training data. It seems much simpler to assume it has incorrectly learned that it should occasionally shift into a new conversation, matching that tendency of the training data, than that a bug is actually leaking and blending user conversations together.