
Unintended Consequences Of EU's New Internet Privacy Rules: Facebook Won't Use AI To Catch Suicidal Users

31-1-2018 | Blacklisted News
 

Image Source: Descrier (flickr CC BY 2.0)


We've written a few times about the GDPR -- the EU's General Data Protection Regulation -- which was approved two years ago and is set to go into force on May 25th of this year. There are many things in there that are good to see -- in large part improving transparency around what some companies do with all your data, and giving end users more control over that data. Indeed, we're curious to see how the inevitable lawsuits play out and whether they will lead companies to be more considerate in how they handle data.



However, we've also noted, repeatedly, our concerns about the wider impact of the GDPR, which appears to go way too far in some areas, in which decisions were made that may have made sense in a vacuum, but where they could have massive unintended consequences. We've already discussed how the GDPR's codification of the "Right to be Forgotten" is likely to lead to mass censorship in the EU (and possibly around the globe). That fear remains.


But it's also becoming clear that some potentially useful innovation may not be able to work under the GDPR. A recent NY Times article detailing how various big tech companies are preparing for the GDPR has a throwaway paragraph in the middle that highlights an example of this potential overreach. Specifically, Facebook is using AI to try to detect when someone may be planning to harm themselves... but it won't launch that feature in the EU out of a fear that it would breach the GDPR as it pertains to "medical" information. Really.



Last November, for instance, the company unveiled a program that uses artificial intelligence to monitor Facebook users for signs of self-harm. But it did not open the program to users in Europe, where the company would have had to ask people for permission to access sensitive health data, including about their mental state.



Now... you can argue that this is actually a good thing. Maybe we don't want a company like Facebook delving into our mental states. You can probably make a strong case for that. But... there's also something to the idea of preventing someone from harming or killing themselves. And that's something that feels like it was not considered much by the drafters of the GDPR. How do you balance these kinds of questions, where there are certain innovations that most people probably want, and which could be incredibly helpful (indeed, potentially saving lives), but which don't fit with how the GDPR is designed to "protect" data privacy? Is data protection in this context more important than the life of someone who is suicidal? These are not easy calls, but it's not at all clear that the drafters of the GDPR even took these tradeoff questions into consideration -- and that should worry those of us who are excited about potential innovations to improve our lives, and who worry about what may never see the light of day because of these rules.


That's not to say that companies should be free to do whatever they want. There are, obviously, LOTS of reasons to be concerned and worried about just how much data some large companies are collecting on everyone. But it frequently feels like people are acting as if any data collection is bad, and thus needs to be blocked or stopped, without taking the time to recognize just what kind of innovations we may lose.

