How Platforms Can Prevent Misinformation Like #dcblackout

On June 1, citizens in the nation’s capital awoke to terrifying news after a night of protests. According to many Twitter users, late in the night, government security services had cut off communications and protestors had disappeared in the ensuing blackout. In the wake of federal law enforcement and National Guard troops deploying across the city to respond to protests over the death of George Floyd—scenes that included police using chemical agents against peaceful protestors in front of the White House and a helicopter hovering low over Washington, D.C., streets in a military maneuver—the alarming reports added to a sense of anxiety and dread.

There was just one problem: No blackout had taken place. The story wasn’t real.

The conspiracy theory was thoroughly debunked over the course of the day. It had spread on Twitter with the hashtag “#dcblackout”—paired with “#dcprotests,” which local protestors had been using to document the ongoing demonstrations against police brutality. This brought extensive attention to the #dcblackout claims, resulting in hundreds of thousands of tweets exposing millions of people to the false information. As early as 9 a.m. on June 1, it was not difficult to discern that many of the original tweets and retweets were from suspicious and bot-like accounts.

The accounts struck me as suspicious. But when I shared my skepticism on Twitter, many users insisted that the blackout was real. People had bought the ruse, and I watched as retweets of the suspicious posts jumped into the tens of thousands. NPR quickly reported on the spread of false information, but #dcblackout still raged out of control, leading to around 500,000 tweets in its first nine hours.

The #dcblackout incident offers a warning for the months to come. The 2020 election looms over a highly partisan political environment with weakened institutions and fewer journalists—all concerning factors for a truthful public discourse and a healthy democracy. Years of investment have enabled social media companies to take down the networks of accounts that organized disinformation campaigns used in 2016, but propagandists can still hijack developing political conversations with ease.

Yet human psychology, which itself enables the spread of disinformation, can also be employed to defeat it. An emerging body of research suggests that asking people to be skeptical and exposing the strategies of manipulators can make them far more resilient to disinformation. To beat opportunistic disinformation, social media companies should harness the critical thinking of their users.

The #dcblackout Hashtag

The tweets that started #dcblackout did not come from a long-standing network of accounts. Early tweets came from new accounts with few followers and incomplete profiles. Many of the suspicious accounts pushed a screenshot of a #blackout tweet from a then-deleted account, sarahxo85267698. While the deletion of sarahxo85267698 may have been intended to disguise the dubious nature of that account, the supporting cast of bot-like accounts asserted that sarahxo was being suppressed by Twitter. They used their own cover-up as part of their censorship narrative.
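
Those surface signals (a brand-new account, few followers, an incomplete profile) are simple enough to check programmatically. The Python sketch below is a rough illustration under invented assumptions: the field names, thresholds, and weights are hypothetical rather than real Twitter API attributes, and production bot-detection systems rely on far richer behavioral features.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Account:
    # Hypothetical record; these field names are illustrative,
    # not real Twitter API attributes.
    created_at: datetime
    followers: int
    has_bio: bool
    has_profile_image: bool

def suspicion_score(account: Account, now: datetime) -> float:
    """Toy heuristic: brand-new accounts with few followers and
    incomplete profiles score closer to 1.0 (more suspicious).
    Thresholds and weights here are arbitrary assumptions."""
    score = 0.0
    if (now - account.created_at).days < 7:   # created within the past week
        score += 0.4
    if account.followers < 10:                # almost no audience of its own
        score += 0.3
    if not account.has_bio:                   # incomplete profile
        score += 0.15
    if not account.has_profile_image:
        score += 0.15
    return score

# An account created the morning of June 1 with no followers,
# bio, or avatar scores 1.0 -- worth a skeptical second look.
now = datetime(2020, 6, 1, 9, 0, tzinfo=timezone.utc)
fresh = Account(
    created_at=datetime(2020, 6, 1, 4, 0, tzinfo=timezone.utc),
    followers=0, has_bio=False, has_profile_image=False,
)
print(suspicion_score(fresh, now))  # 1.0
```

Even a score this crude captures why the #dcblackout accounts stood out to skeptical observers.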

The confusion continued. Skeptical voices, including my own, started pushing back against the bogus claims. As they did, a series of accounts—including hacked profiles of real people—began tweeting a poorly written statement arguing the #dcblackout was a hoax, using the emerging counter-hashtag #dcsafe. There were so many of these identical tweets that onlookers quickly noticed the accounts, leading to more confusion. Many people fell for this second wave, thinking that if bots wanted them to think #dcblackout was a hoax, then there must be some truth to it. The result was chaos.

All this adds up to something that looks a lot like an intentional disinformation campaign—though it’s hard to say for sure. It is not entirely clear if the original #dcblackout posts were genuine, but the choice to spread that theory seems to have been calculated. By hijacking the #dcprotests hashtag, the propagandists placed the #dcblackout tweets where D.C. residents would find them while checking for new developments early in the morning on June 1. That behavior was easy to predict, especially after four days of ongoing protests: protests and arrests had continued late into the night on the previous days, leading many to check for updates first thing in the morning.

This created an opportunity for abusing what a recent Data & Society report calls a “data void.” Data voids are gaps in authoritative content, where innocent online searches result in users stumbling across problematic and manipulative content because there is nothing to outrank it in search results. One example, documented in 2018, was the phrase “black on white crimes.” Since this wording was used almost exclusively by white supremacist organizations, the results from this search were skewed toward white supremacist disinformation.
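
To make the mechanics concrete, consider a deliberately tiny toy model in Python. This is an assumption-laden sketch, not how any real search engine ranks results: the four-document corpus is invented, and the matching logic is a naive substring check. The point it illustrates is the article's: when only one kind of source uses a phrase, only that source can fill the query's results.

```python
# Toy illustration of a data void. The "corpus" stands in for
# indexed content; both documents and sources are invented.
corpus = [
    {"source": "newspaper", "text": "crime statistics by city and year"},
    {"source": "newspaper", "text": "report on violent crime trends"},
    {"source": "extremist", "text": "black on white crimes the media hides"},
    {"source": "extremist", "text": "the truth about black on white crimes"},
]

def search(query: str) -> list[dict]:
    """Naive ranking: return every document containing the exact
    phrase. Real engines are far more sophisticated, but the gap
    is the same: with no authoritative pages using the phrase,
    nothing can outrank the manipulative ones."""
    return [doc for doc in corpus if query in doc["text"]]

for doc in search("black on white crimes"):
    print(doc["source"], "->", doc["text"])
# Both hits come from extremist sources; the void is "filled"
# entirely by the community that coined the phrase.
```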

Breaking news can also create a data void. In this case, a search of “#dcprotests” on Twitter returned popular and recent tweets spreading alarm, with no guarantee of content quality or veracity. Soon the data gap was filled. Reporters noted that their phones never lost internet service, and analysts showed cellular connectivity levels were stable. However, these corrections did not reach nearly as many people as the original rumor—a result that is consistent with research on rumors on Twitter.

In the end, Twitter suspended hundreds of “spammy accounts” responsible for “coordinated activity” in spreading the hashtag. As to who was behind the accounts, that remains unclear. Any further probe would require access to Twitter’s data or an investigation by the platform itself.

A New Strategy for Influence Campaigns?

#dcblackout was a different kind of influence operation. Past campaigns have slowly and deliberately accumulated thousands of unsuspecting followers over years while subtly interspersing disinformation between memes and popular content. But #dcblackout created a burst of disinformation, taking advantage of sustained attention on a specific hashtag (#dcprotests) related to an impassioned political issue.

This is not a fluke. Rather, it reflects a changing environment for disinformation campaigns. These campaigns have to put content where they know people will see it. And they cannot rely on their own followers, because they don’t have any. The bot-like accounts used to disseminate #dcblackout had almost no followers. Many had been created that very morning.
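
One observable artifact of this burst tactic is a cluster of account-creation times immediately before the campaign. The sketch below is a hypothetical illustration: the timestamps are invented and the 25 percent threshold is an arbitrary assumption, whereas real burst detection would use proper statistical baselines over much larger samples.

```python
from collections import Counter
from datetime import datetime

# Invented creation timestamps for accounts tweeting a hashtag.
creations = [
    datetime(2020, 5, 20, 14), datetime(2020, 5, 28, 9),
    datetime(2020, 6, 1, 4),   datetime(2020, 6, 1, 4),
    datetime(2020, 6, 1, 5),   datetime(2020, 6, 1, 5),
    datetime(2020, 6, 1, 5),   datetime(2020, 6, 1, 5),
]

# Bucket creations by hour and flag any hour holding an outsized
# share of the total -- a crude stand-in for burst detection.
by_hour = Counter(dt.replace(minute=0, second=0) for dt in creations)
for hour, n in sorted(by_hour.items()):
    flag = "  <-- burst" if n / len(creations) >= 0.25 else ""
    print(hour, n, flag)
# The two flagged hours fall on the morning of June 1, matching
# the pattern of accounts "created that very morning."
```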

Increased scrutiny by social media companies has forced this change in tactics. The largest social media platforms, especially Facebook and Twitter, are no longer oblivious to the propaganda networks on their platforms.

It was these networks that allowed the spread of disinformation to more than 100 million Americans in the run-up to the 2016 election. Starting in 2015, the Russian state-backed Internet Research Agency (IRA) operated hundreds of accounts, primarily across Instagram, Facebook, YouTube and Twitter. Through this cross-platform effort, the IRA was able to build an organic following, with more than 3 million followers each on Instagram and Facebook, by mixing popular content and harmless memes with more nefarious political messaging. IRA-linked posts earned more than 70 million engagements each on Twitter and Facebook, and 185 million engagements on Instagram. While the IRA did use some paid advertising, the reach of its unpaid content was much greater, with accounts like @blackstagram_ often getting more than 10,000 likes per post in 2017. These networks were so effective because they built followers over years and relied on the organic spread of material by that community.

But these expansive networks of accounts are now frequently taken down. For influence campaigns, this can undermine years of work spent gathering followers: a takedown deletes the suspicious accounts, severing them from the community of unwitting followers on which the disinformation campaign depends. Since takedowns are effective at undoing the patient work of building propaganda networks, it is unsurprising that disinformation campaigns need a new approach. That new approach appeared with #dcblackout, which targeted an intense political moment, giving propagandists a brief opportunity to engage wider audiences.

While takedowns are valuable, platforms usually suspend and ban accounts based on the presence of coordinated inauthentic behavior, not the content of their posts. There are a few exceptions. Facebook reduces the distribution of some of the small number of articles that it fact-checks, and many of the platforms are proactively managing misinformation about the coronavirus. However, most misinformation—that is, any false information, regardless of intent—goes unaddressed by Facebook. Instagram, for example, makes no effort to reduce the enormous amount of pseudoscientific health advice and misleading lifestyle content that pervades the platform.

Instead, platforms focus on disinformation, which is defined as misinformation that is being intentionally spread in order to mislead people. But this creates an ongoing problem for social media companies. When disinformation campaigns amplify authentic, but misleading, messages and convince ordinary people to share the material, the content enters a gray area of enforcement. The platforms may ban the bots, as Twitter did following #dcblackout, but they typically won’t restrict the content. This is why researchers and platforms need to look past the bot accounts and consider how to better engage the public about its role in spreading disinformation.

Why #dcblackout Is Dangerous

Perhaps even more so than in 2016, the domestic conditions are ripe for disinformation. Some research suggests that polarization, partisan media, weak public broadcasting, low trust in journalism, the presence of populist politics and high social media use can all undermine resilience to disinformation. This does not bode well for the United States. As of 2018, 68 percent of Americans get news from social media, though only 20 percent do so often. American politics, especially conservative media, has become far more partisan, which can make the truth seem less relevant. Trust in media is down to close to 40 percent. Newspaper journalism is taking a beating, too: Newsroom employment dropped by 51 percent between 2008 and 2019, and that staggering decrease does not include the thousands of journalists who have lost their jobs due to the coronavirus pandemic.

The individual vulnerabilities of human psychology are also hard to shore up. Disinformation efforts target impassioned political debates partly to exacerbate existing divisions, but also because those circumstances enable disinformation. Such moments are characterized by motivated reasoning, in which critical thinking is undermined by the emotional desire to believe arguments that fit preexisting conclusions. Further, a series of experiments shows that people are less skeptical of claims when they are in group settings, including on social media.

Showing corrections to people exposed to misinformation is worth doing—the academic consensus now supports the idea that corrective fact-checking is effective. Unfortunately, even when people see corrections, some of the damage persists. The illusory truth effect suggests that repeated assertions become familiar and quickly enter listeners’ understanding of the world, even if they are exposed to countervailing facts. Repetition leads to recall: Once a person has heard the claim many times, it comes to mind quickly, giving it a false sense of validity.

This means that those who saw the #dcblackout tweets may be more receptive to the idea that the government can cut off communications and dispose of troublemakers. Generating this wariness of authorities also enables future disinformation, as belief in conspiracy theories is associated with distrust of authoritative and expert figures.

The immediate effect of the #dcblackout operation is not trivial. Eye-catching, but ultimately false, assertions that th

[…]

