That's not the impression I got from that thread. They seem to agree that this is bad for benchmarking, but remain undecided on whether that's good or bad for real-world processing.
It depends on the workload. So, as always, benchmark suites are to be taken with a grain of salt. More specific benchmarks, such as compiling a standard set of real software packages, can give a clearer picture of performance for those more specific use cases.
Until we see more specific data on how these chips perform for certain tasks, this is just FUD.
Yes, that's why I qualified my "real-world tasks" with "random". What is clear is that:
* Ryzen has a longer branch prediction history than Intel's processors.
* This will give it an advantage on repetitive executions.
* This makes tasks hard to measure robustly, since repeating executions to build confidence intervals can itself interfere with the measurement.
What's not clear is to what extent real-world tasks are repetitive enough to benefit or random enough to be negatively impacted. It's likely a mix of both.
By no means am I attempting to spread FUD — I find it quite interesting and wanted to spark a bit of discussion on it.
Pardon. I didn't mean to imply you were intentionally doing that. Just trying to make sure there's skepticism of the benchmarks as well as skepticism of the idea that the boost from branch prediction is somehow dishonest.
> More specific benchmarks, such as compiling a standard set of real software packages, can give a clearer picture of performance for those more specific use cases.
Is there a good place to go for this? I've tried to find software development focused benchmarks before, but I've come up mostly empty.