
> and things like video decoders

That's precisely the example I gave. That said, many modern decoders use GPUs, which are much simpler than CPUs (simpler even than 90s-era CPUs), and the GPU performance model is very easy to reason about.

> No$GBA would lose a lot if it were rewritten into a high level language.

That's a nice sentiment, but I don't think it is supported by the facts. You could probably write a JIT in Python that would perform much, much better (but that would be overkill, given that you're emulating a very slow, very small machine), and a trivial implementation in Java would probably perform just as well.
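To make the "JIT in Python" idea concrete, here is a minimal sketch of the technique: translate each basic block of guest instructions into Python source once, `compile()` it to Python bytecode, and cache the compiled function keyed by guest PC so later executions skip interpretation entirely. The toy three-register instruction set (`li`, `add`, `sub`) and the helper names are invented for illustration; a real emulator's guest ISA, flags, and memory model are far more involved.

```python
def translate_block(block):
    """Translate a list of (op, a, b) guest instructions into one Python function."""
    lines = ["def run(regs):"]
    for op, a, b in block:
        if op == "li":        # load immediate: regs[a] = b
            lines.append(f"    regs[{a}] = {b}")
        elif op == "add":     # regs[a] += regs[b], wrapped to 32 bits
            lines.append(f"    regs[{a}] = (regs[{a}] + regs[{b}]) & 0xFFFFFFFF")
        elif op == "sub":     # regs[a] -= regs[b], wrapped to 32 bits
            lines.append(f"    regs[{a}] = (regs[{a}] - regs[{b}]) & 0xFFFFFFFF")
        else:
            raise ValueError(f"unknown op {op!r}")
    lines.append("    return regs")
    code = compile("\n".join(lines), "<jit-block>", "exec")  # translate once...
    ns = {}
    exec(code, ns)
    return ns["run"]                                          # ...call many times

block_cache = {}

def execute(pc, block, regs):
    # The per-PC cache is what makes this a JIT rather than a plain
    # interpreter: the translation cost is paid once per basic block.
    fn = block_cache.get(pc)
    if fn is None:
        fn = block_cache[pc] = translate_block(block)
    return fn(regs)

regs = [0] * 4
execute(0x100, [("li", 0, 5), ("li", 1, 7), ("add", 0, 1)], regs)
# regs[0] is now 12, and re-running PC 0x100 reuses the cached function
```

Since `compile()` produces ordinary Python bytecode, the hot path is whatever CPython (or PyPy, which would JIT the generated code further) makes of it, which is the point: the dispatch overhead of an interpreter loop disappears after the first pass over a block.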

The ability to achieve significantly better performance for general-purpose tasks (let's call that "branchy code") with low-level languages today is more myth than reality. What is true is that some high-level languages consciously give up some performance to make development easier, but that's a design choice. That's not to say that optimizing JIT and AOT compilers get everything right -- they don't -- but they get it right often enough that they're very hard to beat.


