Ex-YouTube Content Moderator Files Class Action Lawsuit for Negligence

The ex-employee says the company did not provide a safe working environment for moderators

A former content moderator who worked at YouTube is suing the Google-owned company after developing symptoms of depression and anxiety related to post-traumatic stress disorder.

The woman, who has not been named, is filing a class-action lawsuit against the popular video-sharing platform to protect others like her who have been subjected to graphic, violent and disturbing content while working as content moderators on the platform. The suit accuses YouTube of failing to provide a safe working environment for such employees.

The lawsuit reads:

Every day, YouTube users upload millions of videos to its platform. Millions of these uploads include graphic and objectionable content such as child sexual abuse, rape, torture, bestiality, beheadings, suicide, and murder. To maintain a sanitized platform, maximize its already vast profits, and cultivate its public image, YouTube relies on people like Plaintiff —known as “Content Moderators”—to view those videos and remove any that violate the corporation’s terms of use.

The suit cites high-profile events including the genocide in Myanmar and the mass shootings in Las Vegas and Christchurch. It also lists conspiracy theories, propaganda, political disinformation, and “fringe beliefs” among the objectionable content moderators may be required to view for up to four hours at a time.

A similar lawsuit was filed against Facebook in 2018 by the same firm representing this unnamed YouTube content moderator. This year, Facebook settled, paying out $52 million to those affected.

The suit also accuses YouTube of failing to follow its own advice. The company helped draft workplace safety standards intended to protect content moderators from psychological harm. According to the lawsuit, such standards include:

  • obtaining a candidate’s informed consent during the initial employment interview process
  • providing Content Moderators with robust and mandatory counseling and mental health support 
  • altering the resolution, audio, size, and color of trauma-inducing images and videos
  • training Content Moderators to recognize the physical and psychological symptoms of PTSD, anxiety, and depression

“But,” reads the lawsuit, “YouTube failed to implement the workplace safety standards it helped create.

“Instead, the multibillion-dollar corporation affirmatively requires its Content Moderators to work under conditions it knows cause and exacerbate psychological trauma.”

According to CNET, the ex-moderator spoke to a number of wellness coaches to find out how to cope with the psychological trauma she was experiencing. One told her to trust in God, while another told her to take illegal drugs. YouTube’s Human Resources department also failed to provide “any help,” and, because of a non-disclosure agreement, it is difficult for workers to speak out about their issues.

The ex-moderator, according to the lawsuit, cannot be in crowded areas (due to the fear of a mass shooting), has lost friends, suffers from panic attacks and has trouble sleeping: “…when she does sleep, she has horrific nightmares,” reads the lawsuit.

“She often lays awake at night trying to go to sleep, replaying videos that she has seen in her mind.”

On top of this, the ex-moderator paid out of pocket for her own psychological treatment. With this lawsuit, she hopes to receive compensation for both herself and others in the same situation and to protect other content moderators still working for the company. 

Yesterday, YouTube said it would be bringing back human moderators because of the limitations of its AI moderation. At the start of the lockdown, with employees unable to work in offices, the company announced it would rely more heavily on machine learning to moderate content on the platform. However, the AI struggled to make nuanced decisions, leading to nearly double the number of unnecessary video removals between April and June compared with previous quarters.

Platforms like YouTube, Facebook, and Twitter rely on content moderators to make informed and crucial decisions about content, both to protect the platform’s users and its image. As this lawsuit states, a lot of that work can be intensely damaging. Perhaps a more sophisticated AI is needed to continue fighting the uphill battle of content moderation, especially as a second wave of Covid-19 looms.