When this came out a week ago ( https://news.ycombinator.com/item?id=47039636 ) I was playing around with some prompts to see what I could do to guide it without giving it the answer.
I want to wash my car. The car wash is 50 meters away. Should I walk or drive? Before answering, explain the necessary conditions for the task.
The "before answering..." clause got it to load enough of the conditions into its context before committing to an answer, rather than answering first and having the LLM rationalize it post hoc.
I believe this is a demonstration of the "next token predictor" (which is quite good) being unable to go back and change what it has already said. Without any reasoning before committing to an answer, it almost always picks the wrong answer, then comes up with reasons that the answer is "right".
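For anyone who wants to try the same comparison, here's a minimal sketch of the trick — just appending the reasoning-first suffix to the question before sending it to whatever model you're testing (the helper name is mine, not from any library):

```python
# Hypothetical helper: build the two prompt variants used in the
# experiment above, with and without the reasoning-first suffix.
REASON_FIRST_SUFFIX = (
    " Before answering, explain the necessary conditions for the task."
)

def build_prompt(question: str, reason_first: bool = True) -> str:
    """Return the prompt, optionally nudging the model to state the
    task's conditions before it commits to an answer."""
    return question + REASON_FIRST_SUFFIX if reason_first else question

question = (
    "I want to wash my car. The car wash is 50 meters away. "
    "Should I walk or drive?"
)
print(build_prompt(question, reason_first=False))
print(build_prompt(question, reason_first=True))
```

Sending both variants to the same model and diffing the answers makes it easy to see whether the extra clause changes the outcome.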