
The Responsibility of Technology Companies in the Age of Digital Human Rights Documentation

A YouTube notice that a video is unavailable.

Over the past few weeks, the video streaming website YouTube has removed thousands of videos and numerous channels belonging to organizations and individuals documenting atrocities from the Syrian conflict. Although some channels and videos were restored following complaints, many significant videos are still missing. The purge is part of a Google effort to implement machine learning technology that automates the removal of videos that purportedly violate YouTube’s Community Guidelines. While automated removal has significantly decreased the number of videos that promote violence, an unintended consequence has been the loss of evidence for current and future accountability efforts in Syria. With today’s technology, social media companies can and should build human rights considerations into their systems and policies.

YouTube has not always been equated with human rights documentation. Traditionally, human rights groups used pen and paper to record testimonies from victims and witnesses in order to pursue accountability or promote justice and rights norms. Even today, interviews remain essential to this effort, but digital tools have expanded our ability to document atrocities, and to do so in real time. Now anyone with a smartphone can upload a video online and contribute to human rights initiatives. Social media content, however, has had its skeptics. Many prosecutors and courts have been hesitant to forgo their traditional conceptualization of chain of custody and authentication in favor of open source research and case building. However, the International Criminal Court (ICC) recently issued a warrant for the arrest of Mahmoud Al-Werfalli, a Libyan militia commander accused of committing dozens of murders in the Benghazi area, on the basis of seven social media videos, including one from Facebook. While the ICC is certainly not the first court to rely heavily on social media, this decision marks a momentous turning point for international justice.

The proliferation of social media has no doubt led to a watershed moment. Syria, in particular, has become a testing ground for social media documentation because of the unprecedented volume of videos recorded and uploaded by activists and citizen journalists to platforms such as Facebook and YouTube to publicize atrocities that in the past would have gone unreported. YouTube often retains the only available version of a video, making its removal that much more consequential.

There are good reasons why companies rely on machine learning technologies to review content on social media platforms. Machine learning is cost-effective in comparison to manual review. Every minute, 300 hours of video are uploaded to YouTube, and that number is always growing; reviewing each uploaded video individually would require tremendous human effort. And the task is not an enviable one. According to Keith Hiatt, Vice President of Human Rights at the technology nonprofit Benetech, manual reviewers may suffer secondary trauma from exposure to violent and graphic content. Machines can mitigate this trauma by reducing the number of videos that human reviewers need to see.
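To put that upload rate in perspective, a rough back-of-the-envelope calculation (using only the 300-hours-per-minute figure above, plus an assumed eight-hour reviewer workday) illustrates why purely manual review does not scale:

```python
# Back-of-the-envelope estimate of the human effort needed to watch every
# uploaded video. The 300-hours-per-minute figure is cited above; the
# eight-hour reviewer workday is an illustrative assumption.

UPLOAD_HOURS_PER_MINUTE = 300
MINUTES_PER_DAY = 60 * 24
REVIEWER_HOURS_PER_DAY = 8

video_hours_per_day = UPLOAD_HOURS_PER_MINUTE * MINUTES_PER_DAY   # 432,000 hours
reviewers_needed = video_hours_per_day / REVIEWER_HOURS_PER_DAY   # ~54,000 people

print(f"{video_hours_per_day:,} hours of new video per day")
print(f"~{reviewers_needed:,.0f} full-time reviewers just to watch it all once")
```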

Machine learning technology has also been able to remove content that contains hate speech and extremist propaganda. European governments, Germany’s in particular, have been instrumental in pressuring technology companies to stem the spread of such online content. Facebook has made the biggest strides in this regard, garnering praise from the European Commission for the increased speed with which it removes content deemed illegal in much of Europe. Google’s recent efforts on YouTube also represent a major step toward identifying and removing extremist content.

While machine learning technology is cost-effective and efficient, in its current form it is not sufficiently nuanced to differentiate accurately between extremist propaganda and documentation from a human rights group. As a result, technology companies are inadvertently hindering accountability and justice efforts by limiting the amount of information that investigators and prosecutors can access. In the case of YouTube, once content is removed, an appeal can be filed only once per violation. If the initial “strike” is upheld, users must wait 60 days before appealing any future strikes against their content. Furthermore, when reinstating a channel, YouTube may not reinstate all of the channel’s content. And users who have died during the conflict have no chance to challenge the company’s censorship.

Hiatt noted two potential changes to Google’s removal process that would help mitigate these errors. The first is to pause and reverse the recent automated removal decisions in order to review the machine’s performance. According to Hiatt, a testing process that uses a sample set of known journalists and human rights organizations can help fix the algorithm, teaching the software not to delete certain types of videos by feeding it a more accurate training set. Second, Google should practice what Hiatt calls “algorithmic transparency”: by talking to human rights organizations and explaining the criteria by which content is deleted, Google would allow those groups to adjust their upload behavior and better understand the appeals process.
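Neither Hiatt nor Google has published how such a review would work in practice, but the testing process he describes might look something like the sketch below: run the removal classifier against a curated sample of videos from known journalists and human rights documenters, measure how many it would wrongly flag, and hold back any retrained model whose error rate on that sample is too high. All names, uploaders, and thresholds here are hypothetical.

```python
# Minimal sketch of the review step Hiatt describes: evaluate the automated
# removal model against a curated sample of videos from known journalists and
# human rights documenters before allowing it to delete anything.
# `Video`, `false_positive_rate`, and `safe_to_deploy` are hypothetical names.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Video:
    video_id: str
    uploader: str                 # e.g., a known human rights organization
    flagged_extremist: bool = False

def false_positive_rate(sample: List[Video],
                        model: Callable[[Video], bool]) -> float:
    """Fraction of known-legitimate documentation videos the model would remove."""
    if not sample:
        return 0.0
    wrongly_flagged = sum(1 for video in sample if model(video))
    return wrongly_flagged / len(sample)

def safe_to_deploy(sample: List[Video],
                   model: Callable[[Video], bool],
                   threshold: float = 0.01) -> bool:
    """Allow automated removal only if the model rarely flags the curated sample."""
    return false_positive_rate(sample, model) <= threshold

# Hypothetical usage: the sample would be curated with documentation groups,
# and `removal_model` stands in for whatever classifier runs in production.
trusted_sample = [Video("abc123", uploader="Human Rights Org A"),
                  Video("def456", uploader="Citizen Journalist B")]
removal_model = lambda video: video.flagged_extremist
print(safe_to_deploy(trusted_sample, removal_model))  # True: nothing wrongly flagged
```

The same curated sample could then serve a second purpose, as Hiatt suggests: videos the model wrongly flags become corrective examples in a more accurate training set.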

Technology companies must recognize the tremendous accomplishments human rights documenters can achieve with online data. Given today’s technological capabilities, the products and policies of companies like YouTube can significantly affect that work. It is therefore vital that these corporations develop best practices to ensure that safety and security measures do not compromise current and future human rights accountability efforts.

For more information or to provide feedback, please contact SJAC at [email protected].