General Monitoring is not the Answer to the Problem of Online Harms

Even if you think that online intermediaries should be more proactive in detecting, deprioritizing, or removing certain user speech, requiring intermediaries to review all content before publication (often called “general monitoring” or “upload filtering”) raises serious human rights concerns, both for freedom of expression and for privacy.

General monitoring is problematic both when it is directly required by law and when, though not formally required, it is effectively mandatory because the legal risks of forgoing it are so great. These indirect requirements incentivize platforms to proactively monitor user behavior, filter and check user content, and remove or locally filter anything that is controversial, objectionable, or potentially illegal in order to avoid legal responsibility. This inevitably leads to over-censorship of online content, as platforms seek to avoid liability for failing to act “reasonably” or for not removing user content they “should have known” was harmful.

Whether directly mandated or strongly incentivized, general monitoring is bad for human rights and for users. 

  • Because the scale of online content is so vast, general monitoring commonly relies on automated decision-making tools, which reflect the biases of their training datasets and can lead to harmful profiling.
  • These automated upload filters are notoriously inaccurate and tend to overblock legally protected expression.
  • Upload filters also contravene the foundational human rights principles of proportionality and necessity by subjecting users to automated and often arbitrary decision-making.
  • The active observation of all files uploaded by users has a chilling effect on users’ freedom of expression.

    […]