What Do the Facebook Oversight Board’s First Decisions Actually Say?

The FOB has spoken. The Facebook Oversight Board (FOB)—a nascent court-like review board—has unveiled its first-ever set of decisions. The box score doesn’t look pretty for Facebook: Four of the five verdicts overturn the platform’s moderation decisions.

But the four-one split isn’t as interesting as the substance of the decisions themselves. These contain the first building blocks of the FOB’s jurisprudence, and some also spell out non-binding policy recommendations for Facebook. The five verdicts read like short legal opinions, though they blissfully dispense with the footnotes and bluebooking. They pull from different sources of “law.” They try to carve out the bounds of the relevant set of rules. They use the word “particularized.” And they give rise to a host of important normative questions, some of which Evelyn Douek has already tackled in Lawfare. But there’s also the more basic question: What do the opinions actually say?

The Oversight Board turned in the bulk of its first batch of work on time. It announced its docket on Dec. 3, 2020, and pinged out its final decisions 56 days later, 34 days before its 90-day limit for rendering opinions. I wrote in detail about the first docket earlier this month, but it’s worth recalling some of the specifics. The board initially picked six different cases. Five were user appeals, and one was a referral from Facebook. Things shuffled up a bit after one of the six cases was mooted because a user voluntarily took down the post to which the content in question was attached. The FOB replaced the mooted case with a new referral from Facebook, and the panel hasn’t yet wrapped up its deliberations on that replacement case. Of the five cases the FOB ruled on this week, four concern content posted on Facebook, while one comes from Instagram. The board punted on tackling any major U.S. political controversies with its first six picks. Instead, it took a tour of global scandals. There’s China’s treatment of Uighur Muslims, the Syrian refugee crisis, historical Armenian churches, nipples, Joseph Goebbels, and, of course, hydroxychloroquine.

Case 1: Anti-Muslim Hate Speech From Myanmar

The board’s first opinion deals with a Facebook image post of a dead child “lying fully clothed on a beach at water’s edge.” The Oversight Board’s announcement of the case didn’t say much else to describe the post, noting only that “The accompanying text (in Burmese) asks why there is no retaliation against China for its treatment of Uyghur Muslims, in contrast to the recent killings in France relating to cartoons.” Per the docket announcement, the post ran afoul of Facebook’s hate speech rules and so it got taken down. The Oversight Board reversed Facebook’s decision, noting that “[w]hile the post might be considered offensive, it did not reach the level of hate speech.”

The opinion sheds some light on what was actually going on in the post. The Oversight Board’s announcement noted that the post was in Burmese, and the decision clarifies that the user is from Myanmar. And it turns out the post wasn’t a status update but a post in a Facebook group—and not just any Facebook group, but a group “which describes itself as a forum for intellectual discussion.” The opinion also clarifies that the post included “two widely shared pictures of a Syrian toddler of Kurdish ethnicity who drowned attempting to reach Europe in September 2015.” Most importantly, the FOB’s opinion gives a clearer picture of what the post actually said: “the accompanying text begins by stating that there is something wrong with Muslims (or Muslim men) psychologically or with their mindset” and “concludes that recent events in France reduce the user’s sympathies for the depicted child, and seems to imply the child may have grown up to be an extremist.” It’s a post with two images of a deceased Kurdish child, and the user suggests that Muslims tend to become terrorists, citing as evidence two terror attacks in France.

The original docket announcement mentioned that Facebook took the post down for violating the platform’s “hate speech” rules, but the opinion itself clarifies what Facebook had in mind. It wasn’t the jarring images that took the post over the line, it turns out, but instead the phrase that there is “something wrong with Muslims psychologically.” The opinion explains that Facebook’s Community Standards prohibit “generalized statements of inferiority about the mental deficiencies of a group on the basis of their religion,” so Facebook nixed the post and removed it from the “intellectual discussion” forum. 

The opinion also spells out the nature of the user’s appeal. The appellant used the most tried and true of online harassment defenses: I was just kidding! Can’t you take a joke? The FOB reports that “[t]he user explained that their post was sarcastic and meant to compare extremist religious responses in different countries.” The user also took aim at Facebook’s translation abilities, claiming that “Facebook is not able to distinguish between sarcasm and serious discussion in the Burmese language and context.”

The FOB’s analysis here, as in all five cases, leans on three different sources of “law”: Facebook’s Community Standards, Facebook’s “Values,” and international human rights standards. By each pathway, the Oversight Board arrives at the same conclusion.

On the first prong, the FOB concludes that the takedown represented an overreach by Facebook’s Community Standards police. The user’s critique of Facebook’s Burmese language facility wasn’t far off, the board found: Facebook translated the text as “[i]t’s indeed something’s wrong with Muslims psychologically,” but the Board’s own translators read it as “[t]hose male Muslims have something wrong in their mindset” and “suggested that the terms used were not derogatory or violent.” The translation discrepancy flows in part from the FOB’s decision to look to broader linguistic context. There’s a lot of anti-Muslim hate speech in Myanmar, but “statements referring to Muslims as mentally unwell or psychologically unstable are not a strong part of this rhetoric.” This means that the post isn’t covered by the part of the Hate Speech Community Standard that bans “attacks about ‘[m]ental health.’” The Oversight Board instead arrives at an alternative interpretation of the post, one that makes it compliant with the Hate Speech rules: “the Board believes that the text is better understood as a commentary on the apparent inconsistency between Muslims’ reactions to events in France and in China.” As such, it is opinion and can stay up, whereas generalized attacks on the “mental” deficiencies of a group cannot.

The FOB offers a more cursory analysis of the post’s relationship to Facebook’s values. The “Values” spell out a balancing test of sorts: “‘Voice’ is Facebook’s paramount value, but the platform may limit ‘Voice’ in service of several other values, including ‘Safety.’” Safety, according to the Values update, means that “[e]xpression that threatens people has the potential to intimidate, exclude or silence others and isn’t allowed on Facebook.” It’s the same basic idea as in many domestic free speech regimes: The government, in this case Facebook, can curtail free speech in the interest of preventing certain harms. The FOB concludes that Facebook gave undue weight to “Safety” here: “this content did not pose a risk to ‘Safety’ that would justify displacing ‘Voice.’”

The opinion further concludes that no human rights obligation mandates taking the post down. The post does not amount to “advocacy of religious hatred constituting incitement to discrimination, hostility or violence” under the International Covenant on Civil and Political Rights (ICCPR); and “the Board does not consider its removal necessary to protect the rights of others.” Instead, the FOB gestures at some human rights considerations that would weigh in favor of keeping the post up. Article 19 of the ICCPR spells out a “right to seek and receive information, including controversial and deeply offensive information;” the UN Special Rapporteur on Freedom of Expression has recently affirmed that “international human rights law ‘protects the rights to offend and mock;’” and there’s a value in keeping up “commentary on the situation of Uyghur Muslims” because such information “may be suppressed or under-reported in countries with close ties to China.”

So the post goes back up. 

Case 2: Historical Armenian Churches and an Anti-Azerbaijani Slur

The second case concerns a post that Facebook took down because it contained a slur directed at Azerbaijanis. Here, the FOB okays Facebook’s original judgment and lets the removal stand. The November 2020 post included images of “historical photos described as showing churches in Baku, Azerbaijan” and text in Russian that claimed “Armenians had built Baku and that this heritage, including the churches has been destroyed.” One problem: The user deployed the epithet “тазики” (“taziks”) to describe Azerbaijanis, who the user claimed are nomads and have no history comparable to that of Armenians.

The opinion explains that Facebook removed the post because it violated the platform’s hate speech rules. Specifically, Facebook doesn’t allow slurs “to describe a group of people based on a protected characteristic,” which in this case is national origin. The term literally means “wash bowl,” but the FOB explains that “it can also be understood as wordplay on the Russian word ‘азики’ (‘aziks’), a derogatory term for Azerbaijanis.” The latter meaning features on “Facebook’

[…]

