Hacker News | uam's comments

Would like to hear what someone with compiler writing experience has to say about this. Is it really that much of a problem?


I'm not sure compiler writers care that much; they don't have to support every x86 instruction and can restrict themselves to whatever subset produces fast code.

Where it really kills you is anything that has to analyze x86 binary code (e.g., binary static or dynamic analysis tools). Unless you cover every nook and cranny of x86, chances are you'll eventually encounter some program that uses one of these instructions and your analysis will break. And of course emulators like QEMU face the same problem.

This problem becomes especially severe in the context of malware – any instruction you forget to model becomes a way for a malicious program to evade your analysis.
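The failure mode described above can be sketched with a toy decoder (this is not a real x86 disassembler, just an illustration; the opcode table and function names are invented for the example):

```python
# Toy sketch: a linear "disassembler" that only models a handful of
# one-byte opcodes. Any byte outside its table makes the rest of the
# code stream unanalyzable -- exactly the gap malware can hide behind.
OPCODES = {
    0x90: ("nop", 1),
    0xC3: ("ret", 1),
    0x50: ("push rax", 1),
    0x58: ("pop rax", 1),
}

def disassemble(code):
    """Decode until an unmodeled byte; return (instructions, leftover bytes)."""
    instrs, i = [], 0
    while i < len(code):
        entry = OPCODES.get(code[i])
        if entry is None:  # unmodeled instruction: analysis stops here
            return instrs, code[i:]
        mnemonic, length = entry
        instrs.append(mnemonic)
        i += length
    return instrs, b""

# 0x0F 0x31 is RDTSC -- a real instruction this toy table doesn't know,
# so the ret that follows it is never reached by the analysis.
instrs, leftover = disassemble(bytes([0x50, 0x90, 0x0F, 0x31, 0xC3]))
print(instrs, leftover)
```

A real tool faces the same structure, just with a table of thousands of entries instead of four, and every missing entry is a potential blind spot.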


I can't really answer that but your question reminded me of https://www.strchr.com/x86_machine_code_statistics

Turns out that in practice a tiny number of instructions makes up the vast majority of the opcodes in a typical application (although the article doesn't make clear how long the tail is; there could be hundreds of instructions in that "others" slice).
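That head-heavy distribution is easy to sketch (the counts below are invented for illustration, not taken from the article, but they're shaped like what it reports):

```python
from collections import Counter

# Made-up mnemonic stream standing in for the output of disassembling a
# real binary; the counts are illustrative, not measured.
mnemonics = (
    ["mov"] * 35 + ["call"] * 10 + ["lea"] * 8 + ["add"] * 6 + ["cmp"] * 6 +
    ["jmp"] * 5 + ["push"] * 5 + ["ret"] * 4 + ["xor"] * 3 + ["test"] * 3 +
    ["fcmovnbe"] * 1  # the long tail: rare/legacy instructions
)

counts = Counter(mnemonics)
top5 = sum(n for _, n in counts.most_common(5))
print(counts.most_common(3))
print(f"top 5 mnemonics cover {top5 / len(mnemonics):.0%} of the stream")
```

The point is that a handful of mnemonics dominates the count, while the tail of rarely used instructions is where coverage gaps hide.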

My understanding is that there are many legacy instructions in x86 that are basically never used nowadays. They're still a problem for silicon designers, who have to implement them for compatibility, but compiler developers can choose not to use them and instead select a more "risc-y" subset of instructions to do the bulk of the work.



Does this mean that Ubuntu is moving away from Mir, to Wayland?


Ubuntu will use what upstream GNOME uses, which means Wayland for the compositor.

