Agree, but a significant point the article misses is data vulnerability: with E2EE the company's database is useless to an external attacker.
For some companies (e.g. Facebook, Google, TikTok) I would be mostly worried about the company itself being untrustworthy. For others I would be mostly worried about the company being vulnerable.
In the context of "Age verification should be banned" though, we're already talking about legislative intervention. If there's no particular problem with schemes that are like that then we don't necessarily need a blanket ban on age verification.
Perhaps what we're really saying is "Ban age verification that collects lots of personal information".
Or perhaps we could distil it down further to "Ban unnecessary collection and storage of PII". In which case, Congrats! You've arrived back at the GDPR :)
Which I think is a good thing, and should be strengthened further.
(Also the other response to "because most implementations are not going to be like that" is "why not?". People are already building such ecosystems.)
> If there's no particular problem with schemes that are like that then we don't necessarily need a blanket ban on age verification.
There is a problem with schemes like that.
The way computer security works is, attacks always get better, they never get worse. A scheme that nobody has found any privacy holes in when it's enacted will have one found a week after.
The way governments work is, the compromise bill passes if the people who care about privacy support it because then it has the votes of the people who care about privacy and the people who want to ID everyone. But then when the vulnerability is found, the people who care about privacy can't get it fixed because they can't pass a new bill without also having the votes of the people who want to ID everyone, and those people already have what they want. More specifically, many of them then have what they really want, which is to invade everyone's privacy, as they were hoping to do once the vulnerability was found.
Which means you need it to be perfect the first time or it's already ossified and can't be fixed. But the chances of that happening in practice are zero, which means it needs to not happen at all.
/goes on to discuss how government legislation of specific schemes is the issue, not the schemes themselves.
Then we don't legislate specific schemes? The GDPR doesn't do that, for instance; it spells out responsibilities and penalties but doesn't say "Thou shalt use this specific algorithm".
Remember, this discussion started with a call to ban all age checks, which itself is a government action and restriction on the agency of private business.
There are ways that private entities can implement age checks both securely and without leaking much other information, so it seems very heavy-handed to ban them. Private entities are building such systems between themselves already, without government mandates on the specifics.
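To make the "securely and without leaking much other information" claim concrete, here is a minimal sketch of a selective-disclosure age attestation: an issuer that already knows the holder's identity signs only the predicate "over 18" plus a per-transaction nonce, so the relying site verifies the claim without ever seeing a name or birthdate. All names are illustrative, and HMAC stands in for a real asymmetric signature purely to keep the sketch stdlib-only; a production scheme would use public-key signatures or zero-knowledge proofs.

```python
import hashlib
import hmac
import json
import secrets

# Held by the issuer; with HMAC as a stand-in, the verifier shares it too.
ISSUER_KEY = secrets.token_bytes(32)

def issue_attestation(over_18: bool, nonce: bytes) -> dict:
    # The signed claim contains ONLY the age predicate and a nonce.
    claim = json.dumps({"over_18": over_18, "nonce": nonce.hex()}).encode()
    sig = hmac.new(ISSUER_KEY, claim, hashlib.sha256).hexdigest()
    return {"claim": claim.decode(), "sig": sig}

def verify_attestation(token: dict, expected_nonce: bytes) -> bool:
    claim = token["claim"].encode()
    good_sig = hmac.new(ISSUER_KEY, claim, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(good_sig, token["sig"]):
        return False
    payload = json.loads(claim)
    # The nonce check prevents replaying someone else's attestation.
    return payload["nonce"] == expected_nonce.hex() and payload["over_18"]

nonce = secrets.token_bytes(16)          # chosen by the relying site
token = issue_attestation(True, nonce)
assert verify_attestation(token, nonce)  # site learns only "over 18"
```

The design point is the shape of the interface: nothing in the token identifies the holder, so even a breached relying party has nothing to leak.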
Except that you have to in this case because IDs are issued by the government and then it's the government having to provide some privacy-protecting means of using them, which is the thing they're incapable of in practice.
> There are ways that private entities can implement age checks both securely and without leaking much other information
I have yet to see a single one implemented in real life. People point to attempts and then you look at the implementation and it's full of dubious choices and unforced errors, before you even start looking for bugs.
Moreover, private entities have a perverse incentive to do the opposite of implementing it securely: they find it profitable to track people, or unprofitable to spend the resources needed to prevent themselves from being infiltrated by foreign governments, when their business is the sort that is useful to those governments, as these businesses are.
> it's the government having to provide some privacy-protecting means of using them
Nope, not necessarily.
> I have yet to see a single one implemented in real life.
There are likely to be a lot more coming as the newer standards in this area were finalised last year. Online identity is a continually evolving space.
> Moreover, private entities have the perverse incentive to do the opposite of implementing it securely, because they find it profitable to track people
Some do in some circumstances, but far from all. Others (often financial institutions) have wised up to PII being a liability rather than an opportunity, and some are working on frameworks and capabilities in this space that don't involve any more storage or transfer of anyone's ID than already happens in banks.
Necessarily, in fact, for any system that uses a government ID, because that requires there to be some interface between the government ID and a private bureaucracy that the holder of the ID would be pressured into interacting with. If that interface allows the private party to e.g. learn who you are, instead of just your age, it's only the government that could replace it with one that didn't.
> There are likely to be a lot more coming as the newer standards in this area were finalised last year. Online identity is a continually evolving space.
Evolution is supposed to cause bad ideas to die. The problem with laws, such as the ones surrounding government identity documents, is that they regularly require bad ideas to live. Which is why the use of government ID should be minimized.
> Some do in some circumstances, but far from all.
They all have that incentive, because it leads to money, and money is an incentive.
It's possible to turn someone down who is offering you money, but we're dealing with large scale systems here, and then the incentives determine the averages.
> Others (often financial institutions) have wised up to PII being a liability rather than an opportunity and some are working on frameworks and capabilites in this space that don't involve any more storage or transfer of anyone's ID than already happens in banks.
We really need to get it to stop happening in banks. The fact that every single thing you buy using a digital payment method is tied to your government ID is a preposterously dangerous status quo to leave unchallenged.
Also, a password could be the passkey: the passkey protocol is basically a way to send the server an authenticated public key. The client could deterministically convert passwords to key pairs and authenticate with those.
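The deterministic conversion step could be sketched like this: a slow KDF (scrypt here) turns the password plus a per-site salt into 32 bytes, and an Ed25519 private key is exactly a 32-byte seed, so this output could seed the keypair that a passkey-style protocol registers with the server. Stdlib-only sketch; the actual Ed25519 signing step would need a crypto library and is omitted, and the cost parameters are illustrative.

```python
import hashlib

def password_to_seed(password: str, site_id: str) -> bytes:
    # Per-site salt means each site sees a different, unlinkable keypair.
    return hashlib.scrypt(
        password.encode(),
        salt=site_id.encode(),
        n=2**14, r=8, p=1,          # illustrative cost parameters
        maxmem=64 * 1024 * 1024,    # allow scrypt's ~16 MiB working set
        dklen=32,                   # Ed25519 seeds are 32 bytes
    )

seed_a = password_to_seed("correct horse battery staple", "example.com")
seed_b = password_to_seed("correct horse battery staple", "example.com")
seed_c = password_to_seed("correct horse battery staple", "other.org")
assert seed_a == seed_b  # deterministic: same password + site, same key
assert seed_a != seed_c  # different site, different keypair
```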
I am pretty sure the issue is that they either admit to being stuck as a vassal beholden to Google, or they pretend to be enterprising and forward-looking with many promising projects.
Without a lot of discipline it is very easy to end up with CSS full of unclear, hard-to-predict effects. E.g. consider <A type=1><B><A type=2></A></B></A>, where A and B are complex templates. Any selector using the descendant (" ") combinator on A risks matching the inner A even if it was intended only for the outer one. Similarly, a :has() selector might catch a descendant of the wrong element.
@scope fixes a lot of this, but it is a complex problem. With Tailwind you mostly only have to worry about inheritance.
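The nested-template case above can be handled with @scope's lower boundary ("donut scoping"); the class names here are illustrative stand-ins for the A/B templates:

```css
/* Mirrors <A type=1><B><A type=2>>: .outer wraps .inner, both instances
   of the same complex template. The `to` clause excludes the inner
   template's subtree, so descendant selectors can't leak into it. */
@scope (.outer) to (.inner) {
  p { color: teal; }  /* styles only paragraphs belonging to the outer template */
}
```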
70 years of bombing random countries and causing chaos and military dictatorships show that it might not apply inward.