> According to the DPA ruling, although the school secured parents' consent to monitor the students, the regulator did not feel that it was a legally adequate reason to collect such sensitive personal data.
It’s this kind of second-guessing that makes GDPR-type regulation so concerning.
The problem isn't that the information was insecure, or leaked, or mishandled, or that consent wasn't obtained. Even when all of that is done properly, the regulator can still say, "We don't think you had a good enough reason."
Actually, I think it's a perfectly appropriate use of facial recognition in this case.
First, the school district already has a picture of every child at the school. Second, they already track each child (manually) at all times throughout the day.
So they are not collecting any more information than they already had, nor any more than is legally required (attendance tracking is likely to be legally mandated).
The end result is the same exact data exposure / privacy risk that they started with, and more classroom hours to actually teach instead of taking attendance.
The classroom setting is a particularly good example of the narrow kind of use case where facial recognition is acceptable. Contrast a retail establishment, where a casual customer who browses the store but does not buy anything would otherwise remain anonymous.
Quite the contrary, you do not want anyone in the classroom who is anonymous.
Speaking purely on the technical side of this problem: if you had to design a system to take attendance without some sort of human in the loop (e.g. a teacher), what other workable solutions are there? ID scanned by itself? A student could scan multiple IDs for their friends. ID + PIN? Due to the bursty nature of entering classrooms, that would cause a jam up at the PIN reader. Short of an ID shackle (that's a joke), nothing really comes to my mind that is foolproof.
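To make the ID-by-itself weakness concrete, here's a toy sketch (hypothetical names, Python) of about the only countermeasure a plain card reader could attempt: flagging rapid back-to-back scans as possible buddy-punching. Note that, given the bursty nature of classroom entry, legitimate students walking in together would trip the same heuristic, which is exactly why ID scanning alone doesn't work.

```python
from datetime import datetime, timedelta

def flag_suspicious_scans(events, window=timedelta(seconds=2)):
    """Flag card scans at a single reader that occur within `window`
    of the previous scan -- a crude buddy-punching heuristic.
    `events` is a list of (timestamp, student_id) tuples, pre-sorted by time."""
    flagged = []
    for (t_prev, _), (t_cur, sid) in zip(events, events[1:]):
        if t_cur - t_prev < window:
            flagged.append(sid)
    return flagged

events = [
    (datetime(2019, 9, 2, 8, 59, 58), "s1"),
    (datetime(2019, 9, 2, 8, 59, 59), "s2"),  # 1s after s1 -> flagged
    (datetime(2019, 9, 2, 9, 0, 30), "s3"),   # 31s gap -> fine
]
print(flag_suspicious_scans(events))  # ['s2']
```

The false-positive rate on a crowded doorway makes this unusable in practice, which is the point.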
I've worked as a teacher before. I'm a bit confused by the claimed 17,000 hours to collect attendance information (and hence the justification for automating it). Where is that number coming from?
As part of my job, I had to be able to recognise and put a name to all of my students. This is legitimately difficult and I spent a lot of time doing it. However, it was not for doing attendance! If you are in a school system below university and your teacher doesn't recognise you or know your name... I mean... I can't even comprehend what a bad situation that is. How could you ever teach a student if you know so little about them that you can't even recognise them? When you are marking their paper and see their name, if you don't even know who they are, how can you give appropriate guidance? That kind of situation would be shocking. And to be fair... because it is really, really hard I know a lot of teachers who have no clue... but seriously, this is not what we aim for!
Once you know the students, attendance is easy. Why is that chair empty? Who is supposed to sit there? Look at your seating chart (seriously, it's the best way to learn their names...). Is that person anywhere else in the room? No? They are absent. Mark it on your list. That takes all of 2 seconds. If you are really doing your job, you can ask why the person is missing. Maybe you can have their friend take them some homework. A surveillance system can't do that.
I mean... Why would you ever want an automated system in this circumstance? It just makes no sense.
I think the reality of the situation is that they really wanted a surveillance system. They didn't want to record attendance, they wanted to know where the students were every second of the day. They wanted to time how long they are in the toilet so that they can catch them smoking or dealing drugs. They wanted to see them leaving the school so that they can lie in wait and nab them as they cut class: students 2 and 5 are supposed to be in band class but they are headed in the opposite direction -- go all you zigs! For great justice!
I just can't imagine any other reason you'd want this. Perhaps I'm wrong. I certainly hope so!
The 17,000 number does seem incomprehensibly high unless it's across the whole district. I'm not a teacher, but I'm guessing the attendance gets compiled together and stored? Maybe this starts from hardcopy and includes all the work until it's in the final digitized format?
>They didn't want to record attendance, they wanted to know where the students were every second of the day. They wanted to time how long they are in the toilet so that they can catch them smoking or dealing drugs. They wanted to see them leaving the school so that they can lie in wait and nab them as they cut class: students 2 and 5 are supposed to be in band class but they are headed in the opposite direction -- go all you zigs! For great justice!
GDPR explicitly prevents this. Data is collected for a purpose and that purpose only. Collecting data for attendance then using it for catching drug dealers is forbidden.
If I had to design a system to track students without teacher's involvement, my first step would be to find the stakeholders and remind them that they are asking for a system that is worse, more expensive, and more dystopian than the simple system they already have. It's kind of a false question - like asking, "suppose that a boot was going to stamp on the face of humanity forever, what kind of leather would you make it out of?"
I'm not a proponent of facial recognition at all and would not endorse it, especially in this use case. But if you believe the 17,000 hours/year number, that's ~$500,000/year (assuming $30/hour, a random guess) in potential savings. Are you saying there can't possibly be a system designed to take attendance, without further infringing on students' privacy, that's cheaper than that?
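For what it's worth, the arithmetic behind that figure (with my guessed hourly rate, not a sourced one) is just:

```python
hours_per_year = 17_000   # the claimed attendance-taking time
hourly_cost = 30          # rough guess at a loaded staff cost, $/hour
annual_savings = hours_per_year * hourly_cost
print(annual_savings)     # 510000, i.e. ~$500,000/year
```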
What is probably happening is that this is some sneaky way of introducing facial recognition in schools and something benign like attendance was the excuse. That, however, is a separate conversation from my original post of how would you automate attendance without using facial recognition.
When trying to design technical solutions, focusing purely on the technical side of the problem, without considering the other aspects, is what led to pervasive tracking and legislation against it in the first place.
The GDPR doesn't specify what you can't do with personal data, it specifies what you can do. Personal data is private by default; you can only use my data if it is explicitly lawful for you to do so. If you're not absolutely sure that what you're doing is in line with the GDPR, you shouldn't do it. That's by design, not by accident.
The GDPR's fundamental principles are set out in article 5. You should collect the least amount of data possible, you should process it only for specific, explicit and legitimate purposes and you should delete it as soon as possible once you've finished.
Can anyone legitimately argue that a facial recognition system is the least intrusive way of collecting attendance data? Would any reasonable person believe that constant video surveillance with facial recognition technology is no more intrusive than taking the register at the start of class? I think not.
The actions of the school authorities were in flagrant breach of the GDPR and they fully deserve to be fined. There's no second-guessing in this case, no grey area, just an organisation that used advanced surveillance technology to monitor children without giving a moment of thought to the privacy implications. This fine is precisely in line with the intentions of the European Parliament when they signed the GDPR into law.
That makes sense. Opting out for even just one individual would be effectively impossible; otherwise, how would the system know who's opted out without remembering them in some way?
The question was how you track whether people have opted in or not: by not having biometrics for people who have opted out. The solution is simple: if you do not recognise the person, they have not opted in.
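A minimal sketch of that logic (hypothetical names, with toy one-dimensional "embeddings" standing in for real face vectors): only opted-in students are enrolled, and anyone the matcher can't identify is simply never recorded.

```python
def match(embedding, enrolled, threshold=0.9):
    """Toy matcher: return the enrolled id whose reference embedding is
    closest to `embedding`, or None if nothing clears the threshold.
    Embeddings are plain floats here purely for illustration."""
    best_id, best_score = None, threshold
    for sid, ref in enrolled.items():
        score = 1.0 - abs(embedding - ref)  # stand-in for cosine similarity
        if score > best_score:
            best_id, best_score = sid, score
    return best_id

enrolled = {"alice": 0.10, "bob": 0.50}   # opted-in students only
present = set()
for face in [0.11, 0.52, 0.95]:           # 0.95: not enrolled -> ignored
    sid = match(face, enrolled)
    if sid is not None:
        present.add(sid)
print(sorted(present))  # ['alice', 'bob']
```

The unknown face produces no record at all, which is what makes the opt-out workable in principle (whether the camera footage itself counts as collected data is a separate question).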
According to the article, the experiment was constructed as opt-in, and several students opted in.
The thing is, aren't all the efficiency gains wasted if even just a couple of students opt out? You've still got to take a register, even if a shorter one. It can't be worth it for a few extra minutes.
I don't think the argument was that it was not in line with GDPR. The argument is that GDPR is wrong itself in not allowing people to consent to anything which goes beyond "minimal" data collection for the purpose needed, because whether it is "minimal" is, in some sense, a value judgment as to how much convenience and innovation matter. It seems to me that the principle should be personal "ownership" of personal data - and "ownership" of data should imply the owners have the right to alienate that data as the owner sees fit, with the right safeguards to ensure this isn't coerced or a result of fraud.
On the other hand, this type of scenario doesn't feel much like the consent given was really voluntary: (1) because no one wants to be the one person who stopped the school from doing something everyone else was on board with, and (2) because it involved adults consenting on behalf of children for something the children may not be happy about in future, when they are adults and it's too late.
The reason that the GDPR is so expansive is that 'voluntary' consent means nothing when your only alternatives to not 'voluntarily' consenting are moving to a different city or going completely without a given service. See how, for instance, literally every major internet provider in the US has contract clauses that amount to 'we can do whatever we want with your personal information forever, neener neener'.
>This fine is precisely in line with the intentions of the European Parliament when they signed the GDPR into law.
That is certainly a statement that no one could argue with. I think the argument is whether those intentions are actually good, logical, or workable.
For example, the fine structure is explicitly designed so that they could take all of the assets of a small business and wipe it off the face of the map, but only take 5% of annual revenue of a large business. Does that reflect an intention to consolidate market power in the hands of a very few companies? That would be a bad intention, in my opinion.
>but only take 5% of annual revenue of a large business
It's 4% global revenue and it's really a lot. Fines are issued based on the amount of data and amount of people affected.
Small companies rarely need anything but contact information. Storing the data securely is no trivial task for any size of organization, though.
Small companies would have significantly less data and customers to be worthy of a hefty fine. Again, if you can't keep the data secure, don't go into such a business.
I was referring to the maximum fine. It is 4% of global revenues, or 20M EUR, whichever is greater. So the maximum fine would wipe out any business that doesn’t happen to have 20M EUR lying around - that might be 1000% of what they have - but the damage to a large business is capped at 4% of their revenue regardless of how egregious the behavior. That kind of fine structure has only one possible purpose: it was specifically designed to be able to decimate small businesses while being a minor speed bump for any large business. The ultimate effect is the consolidation of power into the hands of a handful of large companies.
Do you have any evidence that all parties provided informed consent? I.e., did the consent form spell out what could happen in the (somewhat likely) event the data is breached, and how the data may be used? What other parties have access to the information, and how that information is stored? How the information is transmitted? And did they understand it all, with the attached implications?
Considering the only words the person who oversees both high schools had to say on record were that the technology is "fairly safe", I somewhat doubt that the parents had a fully formed view of the potential downsides of the program if/when it is breached, misused, or even malfunctions.
I think when we are dealing with advanced tech such as facial recognition it is analogous to the informed consent issues doctors face. How can someone possibly obtain informed consent from non-experts over something that requires years of schooling to understand?
It was just about using facial recognition in a school, not about having brain implants in children...
Schools already have pictures and detailed personal details of all pupils.
This new technology can have real benefits. For example it could be used to locate a pupil that does not show up in class when they should, or to trigger an alert if a pupil leaves school when not expected to or when someone unknown is detected.
To me, the law should protect without preventing useful applications that people agree to, and I think this specific case shows that, as it stands, the GDPR can be over-restrictive.
Recital 38:
"Children merit specific protection with regard to their personal data, as they may be less aware of the risks, consequences and safeguards concerned and their rights in relation to the processing of personal data."
Recital 43:
"In order to ensure that consent is freely given, consent should not provide a valid legal ground for the processing of personal data in a specific case where there is a clear imbalance between the data subject and the controller, in particular where the controller is a public authority and it is therefore unlikely that consent was freely given in all the circumstances of that specific situation."
A school cannot rely on consent as a lawful basis for processing the personal data of pupils.
They were fined, so it is established that they broke the law.
I am commenting on the spirit of the law itself. You're quoting it back to me, but that's not the point. The point is to discuss what we think of those legal restrictions and whether they are too restrictive. In fact, what you quoted reinforces my opinion that the GDPR is over-restrictive.
Does Google's (et al.) EULA cater for this? I'm fairly sure many children's phones track them and fall foul of it.
Oyster cards in London: your journeys are tracked, and consent is not asked for, let alone any special privilege given to children.
Then there's the whole aspect of under-age children committing crimes, and the evidence. A smart lawyer could abuse all of this to quash any evidence that placed them at the scene of a crime, since they never gave consent, and if they did, they didn't know what they were doing.
Basically - if a school can't rely upon consent from children - nobody can.
>Does Google's (et al.) EULA cater for this? I'm fairly sure many children's phones track them and fall foul of it.
Google are subject to multiple GDPR investigations as we speak. They have already received a number of large fines.
>Oyster cards in London: your journeys are tracked, and consent is not asked for, let alone any special privilege given to children.
TFL have a legitimate need to know where you tapped in and out in order to calculate fares. As long as they aren't using that data in an identifiable form for other purposes and they delete it as soon as practicable, they're compliant. A facial recognition system collects far more data than is minimally necessary to track school attendance, which is contrary to the principles set out in Art. 5.
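To illustrate what "minimally necessary" can look like in the fare case, here's a hypothetical sketch (made-up zones and fares, not TFL's actual system): keep the tap-in/tap-out pair just long enough to compute the charge, then persist only the billing aggregate.

```python
FARES = {("zone1", "zone1"): 2.40, ("zone1", "zone2"): 2.90}  # made-up fares

def settle_journey(card_id, tap_in, tap_out, ledger, journeys):
    """Compute the fare from a tap-in/tap-out pair, add it to the card's
    billing ledger, then drop the identifiable journey record rather
    than retaining it."""
    fare = FARES[(tap_in, tap_out)]
    ledger[card_id] = ledger.get(card_id, 0.0) + fare
    journeys.pop(card_id, None)  # journey detail is not kept once settled
    return fare

ledger, journeys = {}, {"cardA": ("zone1", "zone2")}
fare = settle_journey("cardA", *journeys["cardA"], ledger, journeys)
print(fare, ledger, journeys)  # 2.9 {'cardA': 2.9} {}
```

The point of the sketch is the deletion step: once the purpose (fare calculation) is fulfilled, the identifiable data goes away, which is what Art. 5 asks for.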
>Then there's the whole aspect of under-age children committing crimes, and the evidence. A smart lawyer could abuse all of this to quash any evidence that placed them at the scene of a crime, since they never gave consent, and if they did, they didn't know what they were doing.
That evidence is necessary for the purposes of mounting a prosecution so processing it (in accordance with the rest of the GDPR) is lawful under Art. 6. Consent is only one lawful basis for processing personal data; it is not always necessary, nor is it always sufficient. Consent does not give anyone carte blanche to do as they please under GDPR, particularly where that consent might not be fully informed or freely given.
>Basically - if a school can't rely upon consent from children - nobody can.
I think that is a fairly well-established legal principle. In many places, a contract cannot be enforced against a minor, no matter whether it was freely entered into. Statutory rape laws say it doesn't matter whether the minor consented. And so on.
"Statutory rape laws say it doesn't matter if the minor consented" that is probably an extreme example that some might find unpalatable, but it does support the point.
In the more general case, I was under the impression that informed consent was sufficient to authorize a data controller to collect/process private information and so the ruling didn't make sense to me. I'm using "informed consent" here as a short hand for all the applicable GDPR requirements on consent (reasonable language, etc).
It isn't clear to me from this language in Recital 43 though how a data controller with an "imbalance" relative to the data subject could easily get clarity on any particular use case. It also seems strange that in this case there was deemed an imbalance between the schools and the parents (I'm assuming here that parents are indeed authorized to give consent in their role as parent/guardian). If parents are in an imbalanced situation regarding school attendance, then pretty much all government relationships are imbalanced.
If the school/parent relationship is considered imbalanced and the imbalance language isn't specific to a government data controller, then it would appear that every data controller (government entity or not) is in danger of having their relationship deemed "imbalanced" and the data collection subject to analysis by the data authority at any time.
It seems like this ruling destroys the clarity of "consent" and replaces it with "(consent AND balanced relationship) OR (imbalanced relationship AND legally adequate reason AND prior approval from regulator AND consent)"
>It seems like this ruling destroys the clarity of "consent" and replaces it with "(consent AND balanced relationship) OR (imbalanced relationship AND legally adequate reason AND prior approval from regulator AND consent)"
Consent is not always necessary, nor is it always sufficient. If you rely on consent as a lawful basis for data processing, the burden of proof lies with you to demonstrate that such consent was informed and freely given. The authors of the GDPR were fully aware of the fact that coerced consent was rampant, with stuff like shrinkwrap agreements, incomprehensible terms and conditions and "by entering these premises, you consent to give us your first-born son"; as a result, consent is very tightly regulated under GDPR. As a rule of thumb, ask whether a data subject could a) refuse consent without any repercussions and b) avoid being surprised by any aspect of your processing; if you aren't certain that both a and b are true, you probably can't rely on consent.
The school already had a means of collecting attendance data that didn't involve constant video surveillance and had a far lower risk of misuse and security breaches. They didn't need consent to take the register, because it was justified under Art. 6 (1c, d and e). They relied on consent as a lawful basis for the facial recognition scheme, even in a situation where it would be difficult for the data subjects to refuse consent and where the data subjects would be unlikely to understand the full extent of the data processing and the risks that they would be exposed to as a result. Using consent in that way is very much contrary to the spirit of the regulations.
>If parents are in an imbalanced situation regarding school attendance, then pretty much all government relationships are imbalanced.
Why were they in an imbalanced situation? In this particular case, I believe it was a trial. So parents could have withheld consent with no negative consequences as far as I can tell.
I realize that this particular situation is about facial recognition, but I was trying to point out that this ruling changes the game for everything, basically creating a situation where the only way for a data controller to minimize legal risk is to get prior authorization from the data authority. That is problematic.
A school performing face recognition is a joke when it comes to privacy.
Coming from a billion-dollar business: securely storing document copies (required by law for certain types of contracts) is a HUGE burden and liability (aside from the permission management, etc.), alongside the necessary audits, both internal and external.
Having footage and training data on kids would be a resounding 'no' from me (both as a parent and as an engineer). Schools would never, ever have enough resources to do it properly.
In this particular instance they need one bit of info, "present", and that should be all.
The fine is well justified, considering someone won a contract to install the system without any oversight from the school.
Facial recognition can raise privacy concerns but it does not necessarily erode privacy.
Schools have a duty to provide a safe environment, including keeping track of who's on school premises, where pupils are, and whether pupils are let to leave premises when they are not expected to. Obviously the younger the pupils the stronger those duties.
For example, facial recognition could tell the system that you are in the library but doing so would not erode your privacy in the library at all.
Well, until the technology is trusted "enough", there should be audit logs, which pretty much destroy privacy.
Again, with the current technology and literacy level, I'd not trust a school to be able to afford to employ proper security policies. It's already hard for businesses that actually spend substantial effort and resources.
Everybody was informed and OK with it... except for the students they tracked; nobody asked them.
Sure, they are still minors, and it's up to the parents to decide what goes and what not. But by law the parents in Sweden (and a lot of other places around the world) have an obligation to act in the best interest of their children to their best ability.
I'd argue exposing their children to mass surveillance and face recognition in school was not in the best interest of the child. I'd go further and argue that the parents violated the GDPR themselves, by mishandling, or allowing a third party (with parental consent) to mishandle, the data of their kids. This was at least grossly negligent. If I were making the decisions about fines, I'd have fined the parents a minor fine too, just enough to make them (and other parents reading about it) think about such things the next time somebody comes around with an indecent proposal to collect their kids' data.
Also, we place, in my opinion rightfully so, certain restrictions on individual choice and decisions. For instance, you cannot kill your neighbor for your pleasure even if they gave you informed consent. These restrictions are of course not set in stone, as e.g. the assisted suicide discussions show, or gay marriage.
There are certain areas that seem benign at first, like health care providers getting you to wear "health" monitoring devices for a reduction in premiums... until the reductions become so severe that you cannot really opt out anymore, if any health care provider will even take you after you opt out. Or "voluntary" drug tests to get a job, or when already employed. Everybody knows these de jure voluntary tests are de facto mandatory.
I'd argue that mass surveillance and even biometric processing in schools falls in this category too; looks somewhat benign in the beginning, until it isn't.
This is turning into a religious war, with facial recognition being branded absolutely evil.
As I already wrote, I think that being able to locate pupils at all times is a legitimate concern for schools, and parents.
Certainly schools already work hard to achieve this especially with younger children.
I think facial recognition can bring real benefits without many drawbacks if used properly.
An effective blanket ban is not productive or based on reason.
> As I already wrote, I think that being able to locate pupils at all times is a legitimate concern for schools, and parents.
No, no. It's a want. Of course parents want to know where their children are 100% of the time, of course schools want to know where the students are 100% of the time, but children are people too.
Children have the right to lie and keep secrets from their parents. They have the right to privacy, they have the right not to be under automatic surveillance all the time, because they're not fucking prisoners or slaves.
Schools do have a duty of care and safety. They already make a lot of effort to know where their pupils are at all times, especially (obviously) for younger children.
This is completely different from preventing children from having secrets and from affording them privacy.
I wish we could have a mature discussion on this and avoid hysterical terms and comparisons.
>As I already wrote, I think that being able to locate pupils at all times is a legitimate concern for schools, and parents.
Depends on the age, really. It goes from all the time for really young children, to "locate in a reasonable amount of time, if needed". How long that is depends on the comfort zone of the parents of course, but has to account for allowing children to develop, learn by themselves and grow, or else it's wrongful imprisonment to be honest.
By the way, the "being able to locate all the time" wasn't even the goal of the school. It was to take attendance.
Choosing between European internet laws and having a tech sector, I'd pick having a tech sector any day. It's all totally unnecessary strangling of innovation. The entire idea that you own data pertaining to you is absurd.
People like to act like the GDPR is this complicated and draconian law, but it is not. At its simplest, it says: don't collect data without a really good reason, and when you do collect it, handle it safely and properly, including deleting it as soon as it is no longer necessary. I would have thought the HN community would generally be in favour of that.
No, you can't. Since the GDPR was rolled out, venture capital investment in the EU has dropped by a third.[1] According to that paper, the companies most hurt are early-stage startups.
That presumes all early stage startups (as made in the US) are created equal, and any regression in their success rates is bad.
Given that 9 in 10 startups fail even without the GDPR, it's not surprising that early-stage companies form the lion's share of failures, and it surely can't be good for any data that was slurped up during whatever those experiments at market fit were doing.
And given that the ultimate goal of GDPR is to protect privacy, it doesn't make sense to exempt startups, especially when the early stage stakes are high and a failure to squeeze out every drop of value legally possible out of your data (while your competitors do) could mean the death of your venture.
As such, comparing to American standards or even current European standards doesn't quite work when there's a clear shift of the moral bar for GDPR compliance.
You misunderstand the point I'm making. After GDPR, investments in early stage startups dropped by almost 50% in the EU. If you cut investment in half, you're going to cause some startups to fail that otherwise would have succeeded. Startups are a "hits" enterprise. Most fail. Some are mildly successful. A few are responsible for almost all of the upside. Cutting investment by 50% means that instead of getting a Dropbox and a Spotify, you'll only get a Dropbox. That missing upside is bad for the economy.
I'm saying that you can have GDPR or you can have a thriving startup ecosystem. The data shows that you can't have both.
Personally, I think the GDPR is a colossal waste of time that only benefits incumbents. I've had to help implement GDPR compliance at a company and it did absolutely nothing to protect the privacy of customers. However it did cost several hundred thousand dollars.
I fully understand your point that the GDPR hasn't been good for investment or startups, and that the likelihood of a startup reaching sustainability in the European field has significantly diminished. What I'm rejecting is that that's necessarily an undesirable state of affairs. Does the world really need the likes of Blur or DiscountMugs to succeed, when they have proven woefully incapable of protecting the most basic forms of user data?
The tech economy seeing such upheaval right now could be construed as a signal demonstrating how dependent it was on fundamentally unhealthy and untenable data practices that were previously endemic to the industry.
I'm sorry that you have had to spend a tonne of money to attain GDPR compliance. I imagine most "incumbents" have had to spend a good deal as well; I can only hope that the next generation of companies have learnt from your company's mistakes and to structure their data processes from day 1 to avoid accruing such sensitive data in the first place.
At any rate, a tech sector is possible. A thriving one that can sustain as much employment? Maybe not quite; there'd have to be some adjustments, but the people would be better off. A tech sector with the same market cap? Unlikely, but we need to get over ourselves and question if preserving the technocracy's wealth is more important here.
> What I'm rejecting is that that's necessarily an undesirable state of affairs. Does the world really need the likes of Blur or DiscountMugs to succeed, when they have proven woefully incapable of protecting the most basic forms of user data?
Again, I'm saying that the cost is borne by all startups, not just the ones you don't like. For startups, the main cost of GDPR isn't fines, it's less investment money and more compliance costs. That means good startups and bad startups alike must pay the price. They have equal chances of being killed in the cradle by these costs.
> I'm sorry that you have had to spend a tonne of money to attain GDPR compliance. I imagine most "incumbents" have had to spend a good deal as well; I can only hope that the next generation of companies have learnt from your company's mistakes and to structure their data processes from day 1 to avoid accruing such sensitive data in the first place.
The costs weren't high because of anything we were doing that was out of the ordinary. GDPR affects you if you even keep source IP addresses in your server logs. It mandates processes as well as restrictions. You have to train employees. You have to pay lawyers to ensure your processes are compliant. Even if everything you're doing is totally unobjectionable, the costs are significant. Current evidence indicates that many new companies are solving this problem by incorporating outside of the EU and avoiding doing business with EU customers until they're large enough to afford the costs of compliance.
> A tech sector with the same market cap? Unlikely, but we need to get over ourselves and question if preserving the technocracy's wealth is more important here.
You seem to have a zero sum view of wealth. Tech companies create wealth. They make things people want. Preventing companies from existing doesn't help others (except for incumbents with inferior products).
> Cutting investment by 50% means that instead of getting a Dropbox and a Spotify, you'll only get a Dropbox. That missing upside is bad for the economy.
No no, if investment money disappears from tech, that means you get Dropbox and something else that isn't a tech company. The investment money doesn't just disappear into thin air, it gets invested into something else, a different kind of company, a different sector, some place that isn't dependent on privacy-violating ad-tech bullshit and selling user behaviours to make money.
If you read the paper I linked to, it says that the money is likely going to the US and other places outside of the EU. Capital can cross borders effortlessly.
From that paper: "Of course, there are caveats to these findings. First of all, GDPR has only been in effect in the EU for a short time, and the effects we’ve observed may be temporary, with investors potentially taking a wait-and-see approach."
Given the year-over-year trends shown in [0], I feel one should not overemphasize the timeframe that article looked at. The decline seems relatively contained, and future development is unclear at best. And honestly, who cares? Even if we suddenly got 20% of startup failures due to GDPR concerns alone, if those concerns are reasonable I'm totally fine with that.
It's not due to larger industry trends. The paper shows that startups in the US got more funding in the same time period, including early stage startups.
I remember warning about this when GDPR was being considered. People said it wasn't a concern. Later in 2018 I linked to a draft whitepaper that showed a decline in funding for EU startups. People replied that it was a statistical fluke and that we needed more time before we could draw conclusions. Now it's been a full year and the US-EU investment disparity is higher than ever.
At this point I don't know what would change people's minds. It's like talking to climate change denialists. I get the same response: "There's not enough data. The data doesn't support that conclusion. Even if it did, the trade-offs are worth it."
Okay, let's say there is enough data and you are absolutely correct. How does that invalidate the "Even if it did, the trade-offs are worth it." response?
The US-EU funding disparity was always there; in my book, that leads to fewer people copying ethically dubious startup patterns from their US counterparts. And if GDPR is the sole reason for the decline... doesn't the benefit to the population of quite a few countries outweigh the impact on a few startup folks and their investors?
The EU doesn't prevent the "bad" startups from existing. They just prevent them from starting in the EU, which means the EU misses out on a lot of the potential upside.
Companies like Microsoft, Google, Uber, Facebook, and PayPal started in the US, but they eventually expanded to the rest of the world. Since those companies weren't founded in the EU, EU workers missed out on being early employees. That means those company cultures are far more American than they otherwise would be. The EU missed out on having those companies headquartered in Europe where they'd create more jobs, more wealth, and more profits that could be taxed. Having them headquartered in the EU would also make those companies easier to influence.
It's hard to quantify exactly how much these things matter, but I think the long term cost is far higher than the consumer benefits of GDPR.
> The EU doesn't prevent the "bad" startups from existing. They just prevent them from starting in the EU, which means the EU misses out on a lot of the potential upside.
To me that sentence just isn't logical. Yes, the EU would prevent some of them from starting in the European market with their exact original business model if they were starting out now (and not, like half of them, in the last millennium). None of them started out here back in their day, so that sounds more like a question of market penetration than of benefit to the startup culture anyhow. With the GDPR they will hopefully also prevent that market penetration by bad actors when the time comes for the next generation of startups.
If GDPR is the sole reason and, subsequently, the US chooses to foster such businesses to win in a market, more power to them. Your examples may be huge successes, but many of them are at this point considered either unethical (Facebook, for the most part; Google has had its share of negative attention too) or plainly illegal (Uber, for example, still hasn't managed to really get started in Germany; let's throw in AirBNB for good measure - I think they were founded in about the same year - a huge financial hit that leads users to break laws and contracts left and right).
The calculation of long-term cost and benefit is honestly the point where this discussion shifts from economic concerns to political ones, at least for me, since I'm not versed enough in macroeconomics to begin to judge that and would always place society over monetary concerns. I'm sure neither of us can win over the other, so I'd suggest we leave it at that.
You are comparing server logs containing your IP address to human trafficking. That's a fun bit of rhetoric, but it doesn't help anyone in this conversation discover what's true.
You are reducing his argument, which was that most of what the GDPR forbids genuinely erodes privacy, to something about IP addresses; that's not a fair simplification. It doesn't help anyone in this conversation decide whether the privacy gains provided by GDPR are worth it.
> although the school secured parents' consent to monitor the students, the regulator did not feel that it was a legally adequate reason to collect such sensitive personal data.
But this is the exact kind of situation GDPR was designed to prevent: one in which the school obviously used its position to coerce consent, and coerced consent isn't consent at all. The law means companies and institutions can't abuse their power to get consent for unnecessarily broad data collection. It is absolutely crystal clear about this, and spinning it as "second-guessing is concerning" is absurd. Any less onerous requirements would easily be loopholed and technicalitied to death, because that's what companies and institutions do whenever they see the opportunity. If you're upset with GDPR, blame the companies who aren't respecting our rights.