
Is there a different benchmark that demonstrates a scenario where synchronous syscalls are better suited?


Presumably the difference would be smaller, or for some frameworks even negative, if each request did some actual and not entirely predictable amount of CPU work (e.g. executing an HTML templating step with varying output sizes, perhaps with compression), did much more work in general, and used more memory (so the memory overhead is proportionally less relevant). The same would hold if the benchmark implementations were not permitted to tune exactly for the workload and system, so that generalized scheduler defaults are used on both the kernel and userspace side. In other words, a more real-world scenario, with all the usual complexities, inefficiencies, and development-time constraints.

But yeah, it'd be super interesting to actually see that demonstrated - that'd be quite a lot of work, however.


I doubt you'll find anything as comprehensive and well-presented as the TechEmpower benchmarks, because their particular scenario is one that a lot of frameworks care about competing on (partly because it's difficult enough to be interesting). But I'd expect any benchmark for batch-style processing of large volumes of data would show that.



If your requests are huge. For example, imagine you need to read many huge files into memory.

Whether you read one file after the other sequentially, or try to read all of them concurrently, won't make a difference, because your disk/RAM bandwidth is going to be the bottleneck anyway.

Trying to do this concurrently requires more work that won't pay off, so it might actually be slower.
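A minimal Python sketch of that point (file names and sizes are made up for illustration): both strategies return the same bytes, and once disk/RAM bandwidth is saturated, the concurrent version's extra coordination can't make the reads any faster.

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def read_file(path):
    """Read one file fully into memory."""
    with open(path, "rb") as f:
        return f.read()

def read_sequential(paths):
    # One file after the other; throughput is limited by disk/RAM bandwidth.
    return [read_file(p) for p in paths]

def read_concurrent(paths, workers=4):
    # The same reads issued from a thread pool. If bandwidth is already
    # saturated, the extra coordination buys nothing and can cost a little.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(read_file, paths))

if __name__ == "__main__":
    tmpdir = tempfile.mkdtemp()
    paths = []
    for i in range(3):
        p = os.path.join(tmpdir, f"file{i}.bin")
        with open(p, "wb") as f:
            f.write(bytes([i]) * 1024)
        paths.append(p)
    # Both strategies return identical data; only timing can differ.
    assert read_sequential(paths) == read_concurrent(paths)
```

Whether the threaded version ends up slower in practice depends on how much per-task overhead (scheduling, memory allocation) the concurrency machinery adds relative to the raw transfer time.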



