Hacker News

I know I'm imposing, but could you try these runs again with one small change: simply append “Make sure to check your assumptions.” to the question.

Note that it doesn't mention any specific assumption. In my experiments, after the models got it wrong the first time (i.e. they hadn't been "patched" yet), adding that simple caveat fixed the answer for all of them except the older Llama models.
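To be concrete about what I mean by "append": nothing fancier than suffixing the question before it's sent. A minimal sketch (the function name and the sample question are mine, not from the runs above):

```python
# Hypothetical sketch of the prompt tweak described above.
# No model API is called here; the only point is the one-line suffix.
CAVEAT = "Make sure to check your assumptions."

def with_caveat(question: str) -> str:
    """Append the generic caveat to any question before sending it to a model."""
    return f"{question.strip()}\n\n{CAVEAT}"

print(with_caveat("How many r's are in 'strawberry'?"))
```

The caveat is deliberately generic; the interesting part is that it works without naming which assumption is wrong.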

This is not the first time I've observed this; I found the same when the Apple "red herrings" study came out.

If these gotcha questions can be trivially overcome by a simple caveat in the prompt, I suspect the only reason AI providers don't include it in the system prompt by default is cost optimization, as I postulated in a previous comment: https://news.ycombinator.com/item?id=47040530




In my experience, asking "what did we forget?" after Claude/Codex finishes a task usually results in a few extra tweaks that are beneficial.


