
Surely, this is an (admittedly broad and difficult) issue with CAs and the trust element of SSL authentication. By the theory of SSL, an 'attacker' is not supposed to have control over a 'trusted' CA, so SSL is 'supposed to' protect against this.


> By the theory of SSL, an 'attacker' is not supposed to have control over a 'trusted' CA, so SSL is 'supposed to' protect against this.

Every major government has a CA cert in your browser. SSL was obviously designed to be subverted in exactly this way. You won't even get a warning. Google pins their own certs in their own browser, but Moxie's Convergence.io or something like TACK would need to be implemented by Google and Mozilla for you to have a fighting chance.
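
For illustration, here's a minimal trust-on-first-use pin check in Python. This is a sketch, not how Chrome's pinning or TACK actually work (those operate at the browser/TLS layer and pin keys; hashing the leaf cert here is just an assumption for the example):

    import hashlib
    import socket
    import ssl

    def leaf_fingerprint(host, port=443):
        # Fetch the server's leaf certificate in DER form and hash it.
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        return hashlib.sha256(der).hexdigest()

    # Record the fingerprint on first contact, then refuse to talk if it
    # ever changes: a MITM cert signed by some "trusted" government CA
    # would pass normal CA validation but fail this check.
    pin = leaf_fingerprint("github.com")
    assert leaf_fingerprint("github.com") == pin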


Exactly. But the important point is that while the current situation with CAs may be flawed, the user ultimately has control over which certificates to trust, so better trust models (like convergence.io) can be adopted to eliminate censorship. After that, we can indeed use encryption to fight censorship.


Is convergence.io still a viable project? Last time I checked the github repo it looked to be abandoned.


I only thought of the Chinese govt issuing a certificate after writing half the post. Initially I was just going to comment on the easy possibility of MITM attacks. These would fail whenever anyone forced https into their address bar, but I think less than 5% of the people do that. That would mean just refusing connections for the <5% and still sniffing everyone else.


Strict transport security (http://en.m.wikipedia.org/wiki/HTTP_Strict_Transport_Securit...) is an attempt to prevent exactly this. However, if you assume the user is always on a MITM'd connection, a preloaded list in the browser becomes necessary.
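
For reference, the header looks something like this (the max-age value is just an example); the "preload" token is what signals eligibility for the browsers' built-in lists:

    Strict-Transport-Security: max-age=31536000; includeSubDomains; preload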


Yes, mounting this attack after a site is already publicly known is hard because of HSTS (if GitHub uses it), but cached policies will slowly expire and browsers will get reset, so the amount of plain-http traffic will climb back up a lot.


What about determining the salt from the domain for the given login? (perhaps slightly more refined than that to mitigate cases like *.google.com)


I think that's right. It would also need some ability to add mappings like "when I log in from gmail.com, hash as if it were google.com".
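
Something like this, perhaps (a rough sketch: the alias table, iteration count, and truncation are all arbitrary choices, and you'd want to normalize to the registrable domain to handle cases like *.google.com):

    import hashlib

    # Hypothetical alias table: sites that share one login share one salt.
    ALIASES = {"gmail.com": "google.com", "youtube.com": "google.com"}

    def site_password(master, domain):
        domain = ALIASES.get(domain, domain)
        # The domain acts as the salt, so every site gets a distinct
        # password and a leak at one site doesn't expose the others.
        dk = hashlib.pbkdf2_hmac("sha256", master.encode(), domain.encode(), 100000)
        return dk.hex()[:20]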


Would you care to expand on these bad precedents that you are referring to?


Well, presumably if carriers provide reduced- or no-cost access to Wikipedia, that's a direct violation of net neutrality, i.e., the principle that data moving through the pipes costs the same no matter what it is or who it comes from.

Presumably the OP is concerned that this could degenerate from "breaking net neutrality is fine as long as it favors sites we like" to "breaking net neutrality is fine".

Note that I don't necessarily share the belief of the OP, but it's certainly a point worth some real consideration.


I do think there is some risk (though I'm not sure how much delta it adds to the risk either way) that it plays into normalizing the proposals for tiering, a specific kind of non-neutral net, where ISPs would give you access to different levels of "internet content" based on your subscription level, like cable TV packages for the internet. Example: a few sites are available in the Free tier, a basic whitelisted set of sites (news, webmail, popular blogs, etc.) are in the Lite tier, everything but high-bandwidth video and torrents comes with the Standard tier, and the full internet is unlocked only by the Premium tier. The free tier would be made up of "content partners" who are a mixture of nonprofits like Wikipedia, and for-profits that pay for their inclusion in the free tier (CNN, maybe).


Which sounds like how cable TV works (at least in the US).

In the old days when city governments granted these monopolies to cable TV companies, part of the deal was community access channels (Wayne's World!). Even if these benefited hardly anyone, they provided some moral cover and justification.

If I were an old media guy, I'd see Wikipedia Zero as the germ of an idea -- maybe suggesting a way to squeeze the genie back into the bottle and wrap the internet in a cable TV model. And, hey, in the US most people's ISP is also their cable company. So ... I don't see it as a definitely bad thing, but I see the seeds of potential bad as well as good.

(In general, a lot of bad stuff flies under the flag of a good cause fighting some other bad thing -- "because terrorism", "because children", "because hyper-inflation", etc. Of course a lot of _good_ stuff also flies under good flags. I'm just saying it's not obvious either way.)


No, of course not. I'll try to be brief, though.

The first thing to ask is what differentiates Wikipedia content from the rest of the 'net, such that it would be OK to break net neutrality principles to implement this project.

So even if this breakage is worth bending the rules for, the carriers must support the idea. And once they do, on what moral ground could they rightfully deny other proposals from content providers that offer them money for the same treatment?

Well, this problem is not really new. Net neutrality is already broken, even though almost everybody agrees on its importance. There must be some kind of consistency if we want it to survive.


I can see some possible issues, too. If you want to take a hard-line approach, this breaches net neutrality just as much as slowing or charging for access to other sites. Sure, Wikipedia is already the de facto source of information for a lot of people, but this would cement that monopoly in those areas. This all sounds a bit silly when talking about Wikipedia, but it has implications for other services.

What if Google or Bing made a similar deal? Or any large corporation, for that matter? You set up an internet where the majors can buy their way into a monopoly position, not by convincing ISPs to degrade other services, but by convincing them to charge users less to use your service. This makes it harder, or even impossible, for new companies to enter those markets.

All that being said, I think this (Wikipedia Zero) will be a net-positive thing, and I hope they succeed. I do see that this type of thing could have worrying implications for net neutrality, but ultimately they're not the first site to broker "free access" deals with mobile carriers.


> What if Google or Bing made a similar deal? Or any large corporation, for that matter?

Facebook already does this (https://www.facebook.com/notes/facebook/fast-and-free-facebo...), and I'm sure many others do as well.


Oh, I'm well aware. In Australia, I remember seeing a number of the telcos advertising "mobile data with free access to facebook and twitter". One of the catchup TV services also has "unmetered access" deals with a number of ISPs.

I don't think that the world is going to end here, or collapse into some dystopian nightmare, but the question was asked: "what are these bad precedents?" and I attempted to answer.


If you got access to the sites you care about for free or heavily subsidized, would you pay for access to the full internet? This might give a lot of informational control to governments and internet providers. If, say, Google and Facebook are accessible, would those companies fight for some libertarian website that was blocked?


It's also important to note that not only do they run a Tor exit enclave, they also run a .onion hidden service, which, unlike the enclave, is not deprecated in the next version, and which means DDG can be accessed without ever leaving the Tor network.


Weapons are violence. There's no room for violence in the future.


> There's no room for violence in the future

I'm sure humanity will find some.


Since forcibly disarming someone is an application of violence, I fail to see your end-game.


Even Star Trek had phasers. Weapons will exist as long as there is a need for a person to defend themselves against a foe.


How do you plan to arrange that?

Be specific.


Is a hammer a weapon or a tool?

Is a military mortar a tool when ski patrols use it to save lives by safely triggering avalanches?


Note that the author seems to misunderstand a few things about BitTorrent Sync:

1. Btsync is, unfortunately, not open-source, contrary to what he claims. [1]

2. 24-hour secrets don't give someone read access for 24 hours. They must be redeemed within 24 hours of creation, but if they are redeemed in time they become permanent read-only secrets.

[1] - Answered in these threads: http://forum.bittorrent.com/topic/8816-will-syncapp-be-open-... http://forum.bittorrent.com/topic/17782-bittorrent-sync-faq-...


Ah, my bad -- I've corrected the open-source claim and I'll reword the part about the 24-hour secrets. Thanks.


I'm curious why this is being downvoted. Is the information incorrect?


This is a very good point. It looks like vendors are just copying each other's practices, like requiring 6 confirmations, without looking into why they need to be in place.

A lot of people cite the 6-confirmation requirement as a serious inconvenience of Bitcoin, when in reality it is often not necessary at all.
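
The whitepaper actually quantifies this. Here's a Python transcription of Satoshi's own calculation (section 11), giving the probability that an attacker with a fraction q of the hashpower catches up from z blocks behind:

    from math import exp

    def attacker_success(q, z):
        # Probability an attacker with hashpower share q overtakes a
        # chain that is z confirmations ahead (Bitcoin whitepaper, sec. 11).
        p = 1.0 - q
        lam = z * (q / p)
        total = 1.0
        poisson = exp(-lam)
        for k in range(z + 1):
            if k > 0:
                poisson *= lam / k
            total -= poisson * (1 - (q / p) ** (z - k))
        return total

    print(attacker_success(0.10, 6))  # ~0.0002 against a 10% attacker

For a coffee-sized purchase, one confirmation (or even zero, with the merchant just watching the network for double-spends) is arguably plenty.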


The overall message is stop worrying so much about the locations of other people's genitals.


Yes, your comment is witty, but I think it's critically flawed. Firstly, medicine and education should be public services, in the sense that their management should have the imperative of increasing living conditions for the populace rather than generating capital. It can also be added that, arguably, the purpose of banking is to raise living standards rather than to be a business for those who control it.

Secondly, you're implying that the correlation between regulation and 'success' in those sectors implies causation between the two. Do you really think that the problem with health care in the US is that it's not regulated ENOUGH? Similarly, correct me if I'm wrong, but are you implying that education should be privatised? And lastly, do you think that the success of internet technology is a result of the fact that it has been less regulated than other industries (I'm not saying that it SHOULD be regulated, I'm just questioning whether or not that is a crucial factor)?

If that was intended only as a joke I understand, however I think it's important to think seriously about these things if we are to ever try to fix them.

EDIT: typo


Using your logic, shouldn't access to knowledge and information (the internet) be a public service too? Shouldn't the imperative of increasing living conditions for the populace rather than generating capital apply to the internet as well?

The reasoning behind healthcare, education, banking, and telecom being heavily regulated is deeply flawed. The real reason those sectors are heavily regulated is so they can lock in profits with little effort.


The internet is more than access to knowledge and information, and I agree that the sectors responsible for knowledge and information should have the imperative of increasing living conditions, not generating capital. This is why Wikipedia should not be a business; this is why universities should not be businesses, and this is why they should receive tax dollars. Note, however, that whether or not they should be regulated is a totally different matter. I was questioning the notion that the problem with banking, medicine, and education in the US is excessive regulation.

On the other hand, companies like Amazon and Facebook have little to do with broadening access to knowledge and information (barring the latter in a trivial sense of the word), and are oriented towards generating capital, so I don't think the rhetorical questions that you open with are applicable here.

I'd like to hear you elaborate on why the reasoning for regulation is flawed in the cases of healthcare, education, banking, telecom.


"The real reason those sectors are heavily regulated is so they can lock in profits with little effort."

Exactly, how do so many people miss this?


>"in the sense that their management should have the imperative of increasing living conditions for the populace rather than generating capital"

Couldn't this be said of the food industry, the clothing industry, etc.? How are you not arguing for communism?


I suppose you could have an even less centralised system to resemble Tor hidden services. This transcript of Assange and Schmidt's meeting discusses this at a certain point, just grep for 'hash'[1]. That way the domain name itself is proof of its authenticity.

Couldn't you also have something akin to bitcoin's blockchain, where the identity of a domain is agreed upon by the majority of the creators of the blockchain? I don't know what the equivalent of mining would be though. Assange also suggests how domains could be made hard to make, so that they can be 'mined', creating scarcity so that "some arsehole" doesn't "register every short name themselves"[sic].

[1] http://wikileaks.org/Transcript-Meeting-Assange-Schmidt


>I suppose you could have an even less centralised system to resemble Tor hidden services. This transcript of Assange and Schmidt's meeting discusses this at a certain point, just grep for 'hash'[1]. That way the domain name itself is proof of its authenticity.

Using a hash as the name can be useful, especially where the name is only being read by a machine, e.g. you post a link and somebody can just click it, or it's part of your app which you're only using instead of an IP address in case the IP address changes. Or you can put it in a QR code or use NFC on mobile devices etc. The trouble is that it causes the name to be full of encoded data and humans can't remember it. You would still like some way of using memorable names for instances where someone is going to have to type the thing.
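
For concreteness, this is roughly how Tor's onion addresses are derived, which is what makes them self-authenticating (a sketch of the v2 scheme only; SHA-1 and the 80-bit truncation are weak by modern standards):

    import base64
    import hashlib

    def onion_style_name(public_key_der):
        # Tor v2 style: base32 of the first 80 bits of SHA-1 over the
        # DER-encoded public key. Only the key holder can claim the name.
        digest = hashlib.sha1(public_key_der).digest()[:10]
        return base64.b32encode(digest).decode().lower() + ".onion"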

>Couldn't you also have something akin to bitcoin's blockchain, where the identity of a domain is agreed upon by the majority of the creators of the blockchain? I don't know what the equivalent of mining would be though. Assange also suggests how domains could be made hard to make, so that they can be 'mined', creating scarcity so that "some arsehole" doesn't "register every short name themselves"[sic].

I was thinking about something like that; the trouble seems to be how you calibrate the amount of work to be done. If you make it massive (like $200,000 worth of CPU time on Amazon) then you're excluding a lot of the people you wouldn't want to exclude, or causing them to waste a lot of money. But anything significantly less formidable just isn't going to solve the problem -- at $200 you can still imagine a slew of jackasses registering all the short names. Especially when they're evildoers who are using a botnet and don't actually have to pay anything for the computing resources. And that also doesn't solve the problem of scammers getting ".bank" or ".irs" or something, where the point isn't that they're getting too many names, it's that they're getting unreasonably misleading names.
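
A toy hashcash-style version, just to make the calibration knob concrete (the "bits" parameter is exactly the thing that's hard to set: each extra bit doubles an honest registrant's cost, while a botnet's marginal cost stays near zero):

    import hashlib

    def mine_name(name, bits):
        # Find a nonce such that SHA-256(name:nonce) has `bits` leading
        # zero bits -- proof that ~2**bits hashes were spent on this name.
        target = 1 << (256 - bits)
        nonce = 0
        while True:
            h = hashlib.sha256(("%s:%d" % (name, nonce)).encode()).digest()
            if int.from_bytes(h, "big") < target:
                return nonce
            nonce += 1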

