
This is all a bit unfair. I can't speak about the others, but testing Perplexity like this and comparing her with the others doesn't do her justice.

For Perplexity specifically it matters a lot how you trained her thought processes. A smarter user with smarter thoughts changes the outcome of her output.

As a personal comment ... they've switched Perplexity to GPT5 recently (at least for me), and it has been a massive intellectual downgrade compared to the Sonar-Perplexity I had been running with my own, deliberately trained, thought patterns and thinking processes. It's been only a few days, and I hope GPT5 catches up; otherwise it's just a massive disappointment.




Sorry, do you believe models change in the process of use? That's not how it works.

No, the model does not change. Apparently I am getting downvoted, because people aren't aware of how perplexity.ai works.

I can't speak about other platforms, but on perplexity.ai the system adapts to your thinking processes. I know that because not only did we talk about it, but I've been actively training it and benefiting from it. It's perfectly possible to train it toward critical thinking and questioning things.

I've managed to massively reduce hallucinations, increase accuracy, and prevent it from getting dumber from tool usage, and I no longer constantly get affirmations and fake happiness thrown at me.

Right up until they changed the underlying model from Sonar to GPT5, which reset its state. Now I have to look into repairing the damage or restarting altogether.



