
Chip makers never “broke” anything. Resistance to side-channel attacks was never part of the protection model. Remember, these protection models were designed in an era when an attacker running code in your address space meant things had already gone completely sideways. To the extent anyone is at fault, it’s the folks who designed browsers with in-process JS engines without realizing that they were assuming the hardware provided protections that the hardware never claimed to provide.


CPU designers have had to deal with in-process isolation for a very long time. The earliest citation I can find is the original packet filter work from 1987 [1], before the 486 and just two years after the 386's memory protection debuted. If you were to go back in time and ask Intel's chip designers whether they intended to support packet filters well, I'm sure they would say yes. Software fault isolation in 1993 (seminal paper at [2]) built on those techniques.

Spectre is simply a subtle oversight in the way different pieces of the system interact.

[1]: https://www.hpl.hp.com/techreports/Compaq-DEC/WRL-87-2.pdf

[2]: https://homes.cs.washington.edu/~tom/pubs/sfi.pdf


Note that BPF doesn't attempt to provide much of a security barrier.

> This access control mechanism does not in itself protect against malicious or erroneous processes attempting to divert packets; it only works when processes play by the rules. In the research environment for which the packet filter was developed, this has not been a problem, especially since there are many other ways to eavesdrop on an Ethernet.

And even on modern systems, you need to be root to install a packet filter, which is typically also sufficient privilege to simply open /dev/mem and read kernel memory directly. (Or you did, until people started using BPF for everything, but that came many years after the 386 and 486.)

I don't think BPF is a good example of running untrusted code, at least not the early versions, since it wasn't untrusted.


Right, I would contend that the threat model addressed by BPF is preventing trusted but buggy code from taking down the kernel, not protecting against malicious code.


Regardless of whether it's 1987 or 1993 that you want to date the beginning of SFI to, it's certainly the case that SFI is explicitly designed to protect against malicious code. CPU designers have had a long time to deal with that, and you didn't see them telling people not to do it back then.


seccomp-bpf might be a better example than BPF as a packet filter, since unprivileged processes are allowed to provide the BPF code.


> Spectre is simply a subtle oversight in the way different pieces of the system interact.

Yes, in the sense that I assume chip makers didn't foresee it and don't like its ramifications, but it's also an oversight that's essential to how the chips currently give their users the performance they want.


Great links, SFI is basically the technique behind NaCl(Native Client) and the Go Playground no? I didn't realize this technique was this old.


Why do comments like this get downvoted so quickly? The process boundary was supposed to be the level of protection. Preventing your own process from accessing itself was never part of the memory model.


Chip manufacturers have known about JavaScript and its implementations for decades.

ARM, for instance, went so far as to add JS-specific instructions (FJCVTZS, Floating-point Javascript Convert to Signed fixed-point, rounding toward Zero), and before that they had instructions to help a JIT (ThumbEE).

Not sure why we're buying the chip companies' shtick of "oh, poor us, we never knew people would use our chips like that, we just specifically optimized for it and provided support instructions"


> Not sure why we're buying the chip companies' shtick of "oh, poor us, we never knew people would use our chips like that, we just specifically optimized for it and provided support instructions"

Hardware moves slowly and can't be fixed easily. Browsers should just be doing this (which they are with site isolation) instead of hardware trying to hack in patches to cover up bad software architecture.

And you can't fix a bug you didn't know about anyway, so why would you have expected ARM to fix something nobody knew was real in the first place?


Running untrusted code in the same process isn't "bad software architecture". And we've been doing that for as long as microprocessors have had OoOE. Java got big right when the first consumer OoOE chips came out in the 90s.


The JVM took responsibility for sandbox isolation. It took until Spectre/Meltdown to widely demonstrate that this was a poor decision, because it turned out in-process sandbox isolation is a promise that cannot be kept. And the point of the JVM was to run the same code on all sorts of hardware, so it doesn't get to blame the Intel or SPARC or Motorola CPU it happens to be running on.


> And we've been doing that for as long as microprocessors have had OoOE

Prior to web browsers when was such a thing ever widespread? And if you eliminate web browsers from the picture, how many usages are even left?

> Java got big right when the first consumer OoOE chips came out in the 90s.

Java doesn't do this, so how is that relevant?


> Prior to web browsers when was such a thing ever widespread? And if you eliminate web browsers from the picture, how many usages are even left?

BPF has existed since the mid 90s for one example.

> Java doesn't do this, so how is that relevant?

Java and the client web were next to inseparable concepts at the time. Java ran their VM as a shared library in the browser process for applets.


So chip makers should have redesigned their hardware to match the erroneous assumptions JS vendors were making about how memory protection works?


If they were erroneous, why weren't they corrected for two decades by those very same chip makers? It was various security researchers that first warned about something like this, not an Intel engineer saying "y'all are holding it wrong".


They should have provided mechanisms for multiple memory domains in the same process, so that people can use their chips securely to do the work they expect to do with them, yes.


Got it. Chip makers have an obligation to accommodate (proactively, no less) stupid things developers do, like running untrusted code in the same process. Makes total sense.



You're trying to casually brush away the entire field of software fault isolation as "stupid things developers do". In fact, SFI has been a respectable area of systems research since at least 1987.


I'm not trying to casually brush away the whole field. MMU-based protection has been overly limiting ever since MMUs were developed. That doesn't mean that every design for providing isolation beyond what the MMU can guarantee is well-thought-out. Certainly, I see no basis for blaming chip makers for the fact that VM developers came up with an attempt at same-process isolation that doesn't work.

It's like all the complaining people do about GCC doing unexpected things when faced with code that relies on undefined behavior. That's not GCC's fault.


If hardware architects had intended not to support software fault isolation, then they would have said so back when the field was developed. It's not like people with experience in hardware design weren't in the peer review circles for those papers. Steve Lucco, one of the authors of the 1993 paper, went on to work at Transmeta.

This isn't like GCC, where the C standards bodies officially got together and said "don't do this".


> Certainly, I see no basis for blaming chip makers for the fact that VM developers came up with an attempt at same-process isolation that doesn't work.

The issue isn't that there's a bug in their VM implementation, it's that with current hardware general VMs and same process isolation are mutually exclusive.


They knew since the 90s that people were doing this and expecting it to work, or did you miss that whole Java thing? Java became big as Intel was adding OoOE to their cores.

Hardware and software are co-designed, and yes, the onus is on the chip manufacturers to release chips that let you continue to use them securely.


Why don't software developers have the responsibility to write software that uses the existing hardware protection mechanism?


Because chip manufacturers changed their hardware after the fact and kept their changes proprietary.


Not stupid things developers do. Chip makers should accommodate the behaviour of most users of their chips, which does include running JavaScript.


Meltdown let processes access kernel memory which was supposed to be hardware protected. That is a violation of the chip makers' obligation, not the software's.


... yeah.

And that is a different discussion. The article, and the discussion here, is regarding side-channel attacks within a single process. I'm pretty sure everyone agrees that the hardware (or some conspiracy of the hardware and kernel) must provide process isolation.


How do Spectre attacks on kernels and on other processes fit into your model? Mitigating them has required hacks such as Linux's array_index_nospec(), plus newly added microcode features such as IBRS, STIBP, and SSBD, all of which cause significant slowdowns – to such an extent that SSBD at least is off by default in most OSes – yet even enabling all mitigations does not completely prevent Spectre attacks. And there's nothing about the design of modern kernels that's changed in the last several decades to make them more inherently susceptible to Spectre attacks, so the issue was already there when the protection model was designed.

(There is something in Meltdown's case: not flushing the TLB on every context switch is relatively new. But it's directly encouraged by CPU manufacturers via the ASID feature. And Meltdown is a narrower bug anyway, something that's easy to fix in a new hardware design, not like Spectre which is more fundamental to the concept of speculative execution.)



