Hacker News

Another micro-architecture attack. Since the advent of Spectre and Meltdown, I really wonder about the practicality of exploiting these vulnerabilities. As an end-user, if I have malware running on my machine trying to trigger these exploits, then in many ways I have already lost: the malware has access to all my personal data anyway.

Personally I wonder whether the cost of mitigation is worth it. According to the article (and their simplified HT methodology) certain workloads experience a 25% performance hit.

The only cases I currently consider as exploitable are VMs in the cloud (perhaps a reason to own dedicated servers after all this time) and running JS in the browser (perhaps a reason to disable JS).

There will always be side-channel attacks. Our devices are physical devices and will leak side-effects, like electromagnetic radiation ( https://en.wikipedia.org/wiki/Tempest_(codename) ). This recent spate of MDS flaws doesn't necessarily fit my threat model.



I feel like there is an impedance mismatch between what CPU designers think memory protection guarantees and what developers think memory protection offers. For production code, I never got much higher level than “C with classes” and if you asked me 15 years ago if leaking bits of information through the branch predictor or memory buffer was a failure to conform to the x86 memory protection model I would’ve said no. Memory protection is mainly there to keep one app from crashing another. If you’ve got untrusted machine code running on your system, you’ve already been compromised. I feel like CPU designers are still in that mindset. They still measure their success by how well they do on SPEC.

Maybe instead of slowing everything down for the sake of JavaScript, we need a special privilege mode for untrusted code that disables speculative execution, sharing a core with another hyper-thread, etc.


> If you’ve got untrusted machine code running on your system, you’ve already been compromised.

You are exactly correct. However, the advent of "The Cloud" changed that.

"The Cloud" by definition runs untrusted machine code right next to yours. IBM and other players have been screaming about Intel being insecure for decades--and it fell on deaf ears.

Well, suck it up buttercup, all your data are belong to us.

While I hear lots of whingeing, I don't see IBM's volumes rising as people move off of Intel chips onto something with real security. When it comes down to actually paying for security, people still blow it off.

And so it goes.


Why should people move to IBM? Remember, their POWER processors were also vulnerable to both Meltdown and L1TF - IBM systems are probably the only non-Intel servers that were. (Note that I really do mean non-Intel here, since AMD wasn't affected.) Their z/OS mainframes were probably vulnerable too, but they don't release public security information about them. The only reason no researchers had discovered this is that IBM hardware is too expensive to test.


Red Hat released info about z/OS mainframes; they're also vulnerable to Meltdown as well as Spectre. ARM also has one new design that's vulnerable to Meltdown, and everyone has Spectre-vulnerable designs.


I kind of blame Google for creating the whole trend of hosting all this mission-critical stuff on x86 PCs: https://www.pcworld.com/article/112891/article.html. (Altavista ran on DEC iron; Google pioneered running these services on disposable commodity hardware.) That being said, POWER got hit with some of this stuff too.


Speaking of Google, how vulnerable are they, and how many CPUs will they need to replace?

Did someone already demonstrate that speculative-execution bugs are observable in Google's cloud stack?


[flagged]


I think you're being a little harsh.

These things end up having unintended consequences. It isn't about 'fuck Google', it's about identifying the root cause of a problem. x86 PCs come from a long line of non-multi-user, non-multi-tasking computers, whereas DEC mainframes are perhaps the more natural choice for what Google wanted to do.


So, let me get this straight.

You're saying Google should have foreseen Spectre et al 20 years ago and therefore should have used DEC mainframes as its infrastructure?

And further, it's all Google's fault that the cloud uses x86 infrastructure?

Wat?


No, I said it's not about 'fuck Google', it's about identifying the root cause of a problem.

I didn't foresee this, and I don't recall anyone else predicting it, so no, I don't think Google should have foreseen it either. But nevertheless it has happened, so we should endeavour to understand why, so it doesn't happen again. It isn't about blame, it isn't about pointing fingers.

Now that we've identified an issue, the next time an industry moves from big iron to commodity x86 PCs we can ask the question: is this going to be a problem?


I think he or she is drawing an analogy between Intel and Google both "cutting corners" to save costs, which worked well for them in the short term but had unforeseen consequences for everyone else over the longer term. This could be an instance of the famous "tragedy of the commons".


If DEC had won, would we have the same issue?

"In some cases there’s non-privileged access to suitable timers. For example, in the essentially obsolete Digital/Compaq Alpha processors the architecture includes a processor cycle counter that is readable with a non-privileged instruction. That makes attacks based on timing relatively easy to implement if you can execute appropriate code."

I still call bullshit on his entire hypothesis.

https://hackaday.com/2018/01/10/spectre-and-meltdown-attacke...


Hopefully you'll agree that there is a world of difference between an electromagnetic side-channel and something that can be achieved by simply running some JS.

In particular, disabling JS would be pretty disabling for an average modern web user, so an easy, practical attack through this vector is especially relevant.


it would rule if we moved away from javascript. it’s turned the web, the world’s best publishing platform, into the world’s crappiest app platform. good riddance.


the internet wouldn’t be as popular as it is today if it were not for it running apps. until some better language comes out supported by all browsers we won’t be moving away from JS. this “remove JS” horse is long dead, can we stop beating it now?


Yet every time I ask my web developer friends to ensure that functionality works online without JavaScript for basic things I am met with hostility.

If we were able to run without JavaScript and have basic use of websites again then these attack vectors wouldn’t be so scary.

Maybe it’s a personal failing of mine; but I don’t understand why people don’t consider JavaScript untrusted code when other vectors of running software on my machine come from a gpg-signed chain of custody.


Well, you would have to architect an app from the ground up to be functional with/without javascript and test any new functionality with it off. You’re talking 2x the work to appease maybe .0005% of the web browsing public. I wouldn’t be hostile if you asked me to do that... but I wouldn’t entertain the idea seriously either.


If you build using server side rendered HTML then progressive enhancement with JavaScript is not actually that difficult. It takes a different mindset that most webdevs don't have. Getting the UX nice for the fallback is hard.


Yes _if_, but many websites have moved on to client side rendering because if done right it delivers a better user experience for the 99% of users that have JS turned on, because there is no latency between page transitions.

Sure, passive content such as nytimes.com can work without JS (although some of their interactive articles would not), but anything more complicated will often be done with client side rendering these days.


> no latency between page transitions

Not true: latency scales with the client's CPU capacity. SPAs with client-side rendering now exist to mask the bloat in the site plus its dependencies.

If you have a SPA arch but you did a page load per click, you would die. But all sites creep up to 500ms page load times regardless of their starting stack and timings.


It's still almost 2x work for every feature, because you need to implement it, test it for both modes and keep maintaining it for both modes. Usually people do that for Google Crawler. But recently it learned to understand JavaScript, so even that argument is moot nowadays. Your best hope is to wait until browsers decide that JavaScript is harmful and won't enable it by default without EV certificate (like they did with Java applets back in the days). I don't see that happening, but who knows.


It wouldn't be 2x the work if this were standard design, since common problems/hurdles and their solutions would then have already been mapped out. It pays off to use the least powerful language needed to accomplish a goal.


You're right, it's not 2x the work; it is much more.

Speaking from about a decade of experience with progressive enhancement and all the other things: it is 'much' more. There is an expectation of equivalent functionality/experience in these cases, and you just can't spread your resources that thin to get half a product out that works without JavaScript. You're literally developing a completely independent application that has to sit on top of another application. Everything comes out 'worse'.

These days we invest in ARIA and proper accessibility integration, and if you run without JavaScript that is going to get you next to nothing in a 'rich' web application.


A possible alternative is to instead disable JIT for JS. The only reason the speculative execution attacks so far worked via JS is that the JIT gives us tight, optimised native code with the specific instructions close to each other.

Interpreting the code instead should completely kill any chance of exploits of this class. It will also completely kill the performance, though. Even Tor dropped that idea: https://trac.torproject.org/projects/tor/ticket/21011


When I was running Gentoo Hardened in the old days, there used to be a "jit" USE flag for Firefox, which could be disabled before building it. I was running PaX/grsecurity and I was suspicious of JIT in general as it requires runtime code generation, and breaks the excellent memory protection from PaX, so I kept it disabled. The performance was terrible, but tolerable for most websites. The worst part was in-browser JavaScript cryptography, without JIT there's zero optimization, a web login takes minutes.

But later on, Firefox streamlined its architecture and made it more difficult to disable JIT. After one update, disabling JIT caused a broken build that crashed instantly. I've spent a lot of time reading bug trackers and looking for patches to unbreak it. The internal refactoring continued; after Firefox Quantum, JIT seems to be an integral part of Firefox, and the Gentoo maintainers eventually dropped the option to disable JIT as well.

I wonder if an ACL-based approach could be used, as an extension to NoScript: the most trusted websites get JavaScript with JIT, less trusted websites get JavaScript without JIT, and the least trusted get no JavaScript at all. Tor Browser's security slider used to disable JIT at the "Medium" position. But rethinking it, this approach has little security benefit; it's easy to launch a watering hole attack (https://en.wikipedia.org/wiki/Watering_hole_attack) and inject malicious JavaScript through 3rd-party elements.

I wonder if an extended seccomp()- and prctl()-based approach could be a solution. SMT can be enabled, but no process runs on an SMT thread by default. Non-confidential applications such as scientific computing or video games can tell the kernel to put their processes on SMT threads.


> I wonder if an extended seccomp()- and prctl()-based approach could be a solution. SMT can be enabled, but no process runs on an SMT thread by default. Non-confidential applications such as scientific computing or video games can tell the kernel to put their processes on SMT threads.

A valid option, though in general I'd rather allow it for everything except browsers.


Because all code is untrusted, just because you know who it probably came from doesn't mean there aren't bugs, backdoors or exploits (code review can catch only so much). That goes triple for the average user who doesn't even understand the first thing about security.

So instead of trying to achieve the impossible (perfect safe code that still has unlimited access) the direction is stricter sandboxes. Then at least you only need one near perfect piece of safe code (the sandbox) instead of tens of thousands.


What you're saying sounds nice, but it seems to come from a world before Spectre, Meltdown, and all the new discoveries since. These have basically shown that it is impossible to build this sandbox on the modern processors that everybody uses from desktops to cloud data centers.

Instead, the only way of having a performant, secure system today is to disable hardware mitigations and ensure you only run trusted software, the opposite of your proposal. The sandbox still helps for other issues (e.g. buffer overflows).


> If we were able to run without JavaScript and have basic use of websites again...

I think that ship has sailed..

We should work to build secure sandboxes instead.


Sandboxes don't stop side channel attacks


Yes, it seems we have a shared-memory problem. Shared cloud servers and client-side JavaScript both seem to be under duress.


You are met with hostility because what you are saying is absurd. I don't think any reasonable person wants to bring the web back to what it was in 1999. Lots of things about the web have gotten worse, but same could be said for automobiles. The collective usefulness JS brings to the browser is far more valuable than the problems it creates.

Also, to keep this all in perspective, we are talking about one company that made egregious engineering decisions to maintain their market leadership which has put their customers at risk. To say we should do away with JS in the browser because Intel made some really poor decisions to undermine their competition is just crazy. As far as I'm concerned, this is typical Intel and they are getting what they deserve yet again. I just feel bad for their customers that have to deal with this.


> Also, to keep this all in perspective, we are talking about seven companies that made egregious engineering decisions to maintain their market leadership which has put their customers at risk.

Fixed it for you, the current list I have is AMD, ARM, Freescale/NXP, IBM both POWER and mainframe/Z, Intel, MIPS, Oracle SPARC, and maybe old Fujitsu/HAL SPARC designs for Spectre, with at least four of those CPU lines also subject to Meltdown.


You know you are misrepresenting the issue here. The current batch of vulnerabilities does not affect AMD, except for the fact that the OS patches affect all CPUs, vulnerable or not. Needing to disable hyperthreading on Intel CPUs is the catastrophic situation I'm referring to... up to 40% loss of performance in thread-intensive tasks.


The current set of vulnerabilities targets Intel-specific microarchitectural features, just like the first version of Foreshadow/L1TF targeted the SGX enclave, which by definition isn't in other companies' CPUs.

Given that AMD is fully vulnerable to Spectre, there's absolutely no reason to believe it isn't similarly vulnerable to microarchitectural detail leakage if people were to look. And going back to what I was replying to:

> we are talking about one company that made egregious engineering decisions to maintain their market leadership which has put their customers at risk

We demonstrably aren't, seeing as how ARM and IBM (both POWER and mainframe/Z CPUs) are also vulnerable to Meltdown. That and the significant prevalence of Spectre say this is not "typical Intel" but "typical industry": a blind spot confirmed in 7 different companies, and 8 teams to the extent the IBM lines are done by different people.

The "Intel is uniquely evil" trope simply doesn't hold water.


Fair enough, I don't know enough about the performance hit mainframes and ARM and IBM CPUs are taking to say if it's similar to what Intel is experiencing or not.

That said, in the consumer space, being (this) vulnerable to Javascript attacks is catastrophic. My original point is that we should not be crippling something very useful (javascript in the browser) because of flawed architectures that mostly affect one company in a way that decimates performance.


> My original point is that we should not be crippling something very useful (javascript in the browser) because of flawed architectures that mostly affect one company in a way that decimates performance.

Lots of us have a different opinion on the usefulness vs. risk of running random and often deliberately hostile JavaScript in your browser; see the popularity of NoScript, and how many of us use uMatrix with JavaScript turned off by default. Most of the time I follow a link where I don't see anything, I just delete the tab; most of those pages aren't likely worth it.

"Mostly affect one company" is something completely unproven, since AMD is not getting subjected to the same degree of scrutiny. AMD has a minuscule (and, for some reason, declining in 19Q1) market share in servers, while its desktop and laptop share is modest but showing healthy growth: https://www.extremetech.com/computing/291032-amd-gains-marke...

Meanwhile, ARM is announcing architecture-specific flaws beyond basic Spectre: Meltdown (CVE-2017-5754) and Rogue System Register Read (CVE-2018-3640, in the Spectre-NG batch but by definition system-specific): https://developer.arm.com/support/arm-security-updates/specu...


> popularity of NoScript

This may be popular among neckbeards, but regular people couldn't care less. Regular people care about being tracked and that's about it.

> "Mostly affect one company" is something completely unproven

You speak like a person of authority on this subject, but AMD has come out and said they are specifically immune to these threats: https://www.amd.com/en/corporate/product-security

AMD has "hardware protection checks in our architecture", which disputes your assertion that AMD just isn't a targeted platform. The reality is that any computing platform can be vulnerable to undiscovered vulnerabilities, so making that point is kind of pointless.

Also, disabling JS on the browser pretty much completely eliminates e-commerce on the web. Again, I can't fathom the masses seeing any benefit in this.


It is considered untrusted code, and that's why the browser's VM is so locked down that one can't even access the FS. Visiting a website is akin to installing an application: you either trust it and visit the site, or you don't and don't visit.


Maybe it wouldn't be so scary for you, but if everybody else's computer is compromised, did you really win much?


Is there PoC website where I can get hacked via JavaScript exploiting those side-channel vulnerabilities? For me it sounds too theoretical.



According to that article, browsers already implemented mitigations. It's not clear whether BIOS and OS mitigations are necessary for that.


The fact that disabling arbitrary machine code execution would be “disabling” for the modern web is proof of how totally screwed up the whole thing has become.


And the fact that this observation doesn't have tech people marching in the streets.


No kidding, JS off by default and not running code you don't trust will always be a good idea. I also agree that I'm not really concerned about exploits that require running code on my machine; if the latter happens, I have far more serious things to worry about.

The exploits that do worry me, are ones that can be done remotely without any action from the user. Heartbleed is a good recent example. Fortunately those tend to be rare.

Security is always relative to a threat model, and not everyone wants perfect (if that can even be possible) security either, contrary to what a lot of the "security vultures" tend to believe.


Supply-chain attacks become a lot easier to pivot on. Put some Spectre code in an npm or RPM package, then wait to see what pops up. So much stuff is sudo-curl-pipe-bash these days that the Spectre threat is real.

All the more reasons to run your own servers and compile your own binaries. Like we used to do.


Why do you need Spectre for that? Just install a backdoor. It's not like anyone runs npm as a restricted user.


I thought it was standard practice for framework package managers to run as non privileged users and install binaries in local dirs.


Never saw that. You type npm install and npm runs under your current user (probably not root, but who cares about root when the valuable data belongs to you, not root) and runs any package install scripts it just downloaded from the npm registry. There's no separate user to run npm, at least in default installs.


You mean "compile your own binaries" using code which you downloaded without auditing, just like npm? That's actually what we used to do. Blaming npm reflects a misunderstanding of the problem, and it blames the very update system that lets you fix a problem orders of magnitude faster; the half-life of the random tarballs people used was much longer.


Who said without auditing? There are a plethora of signing and hashing mechanisms one can use to verify a package's authenticity.

Compiling once from a tarball and reusing that can definitely reduce the number of times you would need to trust something from a third party.


You are aware that NPM already does that, right? It’s even safer because the network requires immutability so there’s no way to trojan a package without shipping a new version for everyone to see.

The real problem is why I mentioned auditing: the attacks we’ve seen over the years have been updates from maintainers who followed the normal process. Auditing is the most reliable way to catch things like that because the problem isn’t the distribution mechanism but the question of being able to decide whether you can trust the author.


> Put some Spectre code in an npm or RPM package, then wait to see what pops up.

This is fucking scary. If it's a package used by WordPress, you could end up with 30% of the web open to an attack.



