Flip Side To 'Stopping' Terrorist Content Online: Facebook Is Deleting Evidence Of War Crimes

20-5-2019 | Blacklisted News | 1154 words

Just last week, we talked about the new Christchurch Call, and how a bunch of governments and social media companies have made some vague agreements to try to limit and take down "extremist" content. As we pointed out at the time, however, there appeared to be little to no exploration by those involved of how such a program might backfire and hide content that is otherwise important.


We've been making this point for many, many years, but every time people freak out about "terrorist content" on social media sites and demand that it be deleted, what really ends up happening is that evidence of war crimes gets deleted as well. This is not an "accident" or a case of such systems being misapplied; it is the simple fact that terrorist propaganda often is important evidence of war crimes. It's things like this that make the idea of the EU's upcoming Terrorist Content Regulation so destructive. You can't demand that terrorist propaganda be taken down without also removing important historical evidence.


It appears that more and more people are finally starting to come to grips with this. The Atlantic recently had an article bemoaning the fact that tech companies are deleting evidence of war crimes, highlighting how such videos have actually been really useful in tracking down terrorists, so long as people can watch them before they get deleted.



In July 2017, a video capturing the execution of 18 people appeared on Facebook. The clip opened with a half-dozen armed men presiding over several rows of detainees. Dressed in bright-orange jumpsuits and black hoods, the captives knelt in the gravel, hands tied behind their back. They never saw what was coming. The gunmen raised their weapons and fired, and the first row of victims crumpled to the earth. The executioners repeated this act four times, following the orders of a confident young man dressed in a black cap and camouflage trousers. If you slowed the video down frame by frame, you could see that his black T-shirt bore the logo of the Al-Saiqa Brigade, an elite unit of the Libyan National Army. That was clue No. 1: This happened in Libya.


Facebook took down the bloody video, whose source has yet to be conclusively determined, shortly after it surfaced. But it existed online long enough for copies to spread to other social-networking sites. Independently, human-rights activists, prosecutors, and other internet users in multiple countries scoured the clip for clues and soon established that the killings had occurred on the outskirts of Benghazi. The ringleader, these investigators concluded, was Mahmoud Mustafa Busayf al-Werfalli, an Al-Saiqa commander. Within a month, the International Criminal Court had charged Werfalli with the murder of 33 people in seven separate incidents—from June 2016 to the July 2017 killings that landed on Facebook. In the ICC arrest warrant, prosecutors relied heavily on digital evidence collected from social-media sites.



The article notes, accurately, that this whole situation is a mess. Governments (and some others in the media and elsewhere) are out there screaming about "terrorist content" online, but pushing companies to take it all down has the secondary impact of both deleting that evidence from existence and making it that much more difficult to find those terrorists. And when people raise this concern, they're mostly being ignored:



These concerns are being drowned out by a counterargument, this one from governments, that tech companies should clamp down harder. Authoritarian countries routinely impose social-media blackouts during national crises, as Sri Lanka did after the Easter-morning terror bombings and as Venezuela did during the May 1 uprising. But politicians in healthy democracies are pressing social networks for round-the-clock controls in an effort to protect impressionable minds from violent content that could radicalize them. If these platforms fail to comply, they could face hefty fines and even jail time for their executives.



As the article notes, the companies' rush to appease governments demanding that such content be taken down has already made the job of open source researchers much more difficult, and has actually helped to hide more terrorists:



Khatib, at the Syrian Archive, said the rise of machine-learning algorithms has made his job far more difficult in recent months. But the push for more filters continues. (As a Brussels-based digital-rights lobbyist in a separate conversation deadpanned, “Filters are the new black, essentially.”) The EU’s online-terrorism bill, Khatib noted, sends the message that sweeping unsavory content under the rug is okay; the social-media platforms will see to it that nobody sees it. He fears the unintended consequences of such a law—that in cracking down on content that’s deemed off-limits in the West, it could have ripple effects that make life even harder for those residing in repressive societies, or worse, in war zones. Any further crackdown on what people can share online, he said, “would definitely be a gift for all authoritarian regimes. It would be a gift for Assad.”



Of course, this is no surprise. We see this in lots of contexts. For example, the focus on going after platforms for sex trafficking with FOSTA hampered the ability of police to find actual traffickers and victims by hiding that material from view. Indeed, just this week, a guy was sentenced for sex trafficking a teenager, and the way he was found was via Backpage.


This is really the larger point we've been trying to make for the better part of two decades. Focusing on putting liability and control on the intermediary may seem like the "easiest" solution to the fact that there is "bad" content online, but it creates all sorts of downstream effects that we might not like at all. It's reasonable to say that we don't want terrorists to be able to easily recruit new individuals to their cause, but if that makes it harder to stop actual terrorism, shouldn't we be analyzing the trade-offs there? To date, that almost never happens. Instead, we get the usual moral panic response: this content is bad, therefore we need to stop this content, and the only way to do that is to make the platforms liable for it. That assumes -- often incorrectly -- a few different things, including the idea that magically disappearing the content makes the activity behind it go away. Instead, as this article notes, it often does the opposite and makes it more difficult for officials and law enforcement to track down those actually responsible.


It really is a question of whether we want to address the underlying problem (those actually doing bad stuff) or sweep it under the rug by deleting it and pretending it doesn't happen. All of the efforts to put liability on intermediaries really turn into an effort to sweep the bad stuff under the rug, to look the other way and pretend that if we can't find it on a major platform, it's not really happening.

