Rushing to Judgment: Examining Government Mandated Content Moderation

On March 15, 2019, Brenton Tarrant logged on to 8chan and posted a message on a far-right thread to spread the word that he would be livestreaming an attack on “invaders.” Around 20 minutes later, Tarrant entered a mosque in Christchurch, New Zealand, with a semiautomatic weapon and a GoPro camera. Tarrant livestreamed on Facebook as he embarked on a killing spree in which he murdered 51 people. Facebook removed the livestream 17 minutes later, after it had been viewed by more than 4,000 people. In the next 24 hours, Facebook removed 1.5 million copies of the video, 1.2 million of which were blocked at upload. Tarrant’s preparation and announcement made it clear that the attack’s horrific shock value was tailor-made for social media.

In May 2019, heads of government from New Zealand, France, Germany, the United Kingdom and several other nations released the Christchurch Call to Action to Eliminate Terrorist and Violent Extremist Content Online. Large online platforms such as Facebook, Twitter and YouTube, which supported the call, committed to taking specific measures for the “immediate and permanent removal” of violent extremist content. The call was in line with binding legislation aimed at hate speech and terrorism introduced around the world in the past few years, such as Germany’s Network Enforcement Act (NetzDG), France’s Avia law and, most recently, the EU’s proposal on preventing the dissemination of terrorist content online.

Twitter and Facebook responded to the Jan. 6 attack on the U.S. Capitol by purging President Trump’s social media accounts and QAnon conspiracy content based on their own terms of service (Facebook’s suspension of Trump will be reviewed by its Oversight Board). But the use of social media to spread dangerous disinformation and incite American citizens to attack the very seat of their own democracy has also led to calls for further legislation to ensure the swift removal of harmful content in democracies around the world.

Given the very real harms facilitated by online extremism, the urge to clamp down on social media through laws—rather than relying on the voluntary, inconsistent and opaque terms of service and content moderation policies of private platforms—is understandable. However, when democracies respond to threats and emergencies, there is a real risk of overreach that jeopardizes basic freedoms—not least freedom of expression. For instance, Germany’s NetzDG has been “cloned” by a cabal of authoritarian states including Turkey, Russia and Venezuela. These states cynically abuse Germany’s good-faith effort at countering hate speech and use it to legitimize crackdowns on political dissent. Russian dissident Alexey Navalny criticized Twitter’s suspension of Trump as having the potential to “be exploited by the enemies of freedom of speech around the world.” But government-mandated notice and takedown regimes with very short deadlines may also result in detrimental outcomes for free speech within democracies.

Determining the lawfulness of content is a complex exercise that rests on careful, context-specific analysis. Under Article 19 of the U.N.’s International Covenant on Civil and Political Rights (ICCPR), restrictions of freedom of expression must comply with strict requirements of legality, proportionality, necessity and legitimacy. These requirements make the individual assessment of content difficult to reconcile with legally sanctioned obligations to process complaints in a matter of hours or days.

In June 2020, France’s Constitutional Council addressed similar concerns when it declared unconstitutional several provisions of the Avia law that required the removal of unlawful content (including terrorism and hate speech) within one to 24 hours. Among other things, the council held that the platforms’ obligation to remove unlawful content “is not subject to prior judicial intervention, nor is it subject to any other condition. It is therefore up to the operator to examine all the content reported to it, however much content there may be, in order to avoid the risk of incurring penalties under criminal law.”

The council also stressed that it was:

up to the operator to examine the reported content in the light of all these offences, even though the constituent elements of some of them may present a legal technicality or, in the case of press offences, in particular, may require assessment in the light of the context in which the content at issue was formulated or disseminated.

In relation to the 24-hour takedown limit, the council concluded that “given the difficulties involved in establishing that the reported content is manifestly unlawful in nature … and the risk of numerous notifications that may turn out to be unfounded, such a time limit is extremely short.” In sum, the council found that the Avia law restricted the exercise of freedom of expression in a manner that was not necessary, appropriate and proportionate.

[…]

