TikTok Removed Some 105 Million Rule-Violating Videos in the First Half of 2020

Among them: thousands of re-uploads of a widely circulated suicide video

Earlier this month, a graphic and disturbing video spread across the video-sharing platform TikTok. Disguised behind misleading thumbnails and uploaded more than 10,000 times, footage of 33-year-old Ronnie McNutt dying by suicide circulated across the platform, traumatizing millions of unsuspecting users.

The video appeared on TikTok after first being live-streamed on Facebook in late August. Today, TikTok wrote to a number of social media platforms, calling on them to join forces in removing and moderating disturbing content.

According to Theo Bertram, TikTok’s head of public policy in Europe, the spike in uploads suggested a coordinated attack. “This is an industry-wide challenge, which is why we have proposed to peers across the industry that we work together on creating a ‘hashbank’ for such violent, graphic content and warn each other when such content is discovered so that we can all better protect our users, no matter the app they use,” he said.
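The “hashbank” Bertram describes is, in essence, a shared set of content fingerprints that participating platforms can check uploads against. Below is a minimal, hypothetical sketch of the idea in Python: the `HashBank` class and its method names are illustrative assumptions, not anything TikTok has published, and it uses an exact SHA-256 digest as a stand-in fingerprint.

```python
import hashlib

# Hypothetical sketch of a shared "hashbank": one platform reports a
# fingerprint of a flagged video, and every peer can then check new
# uploads against the shared set before serving them.

class HashBank:
    def __init__(self):
        self._known_hashes = set()

    @staticmethod
    def fingerprint(video_bytes: bytes) -> str:
        # Stand-in fingerprint: an exact cryptographic hash. A production
        # system would use a perceptual hash that survives re-encoding.
        return hashlib.sha256(video_bytes).hexdigest()

    def report(self, video_bytes: bytes) -> str:
        """A platform flags a video; its hash is shared with all peers."""
        h = self.fingerprint(video_bytes)
        self._known_hashes.add(h)
        return h

    def is_known_violation(self, video_bytes: bytes) -> bool:
        """Any peer checks an upload against the shared bank."""
        return self.fingerprint(video_bytes) in self._known_hashes

# Usage: platform A reports the clip; platform B blocks byte-identical re-uploads.
bank = HashBank()
bank.report(b"<bytes of a flagged clip>")
assert bank.is_known_violation(b"<bytes of a flagged clip>")
```

In practice, the hard part is the fingerprint itself: an exact hash like the one above fails as soon as a clip is re-encoded, trimmed, or cropped, which is why industry hash-sharing efforts rely on perceptual hashing instead.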

TikTok wrote to nine of its competitors – Facebook, Instagram, Google, YouTube, Twitter, Twitch, Snapchat, Pinterest, and Reddit – proposing a memorandum of understanding “that will allow us to quickly notify one another of such content,” reports Business Insider.

In response to the incident, TikTok has promised to adjust its machine learning and emergency systems, along with its moderation processes, so that AI and human moderators can work together more effectively.

Social media platforms have been under immense pressure to take down both disturbing and misleading content. The problem has grown especially acute during the pandemic, with conspiracy theories, health misinformation, and political disinformation spreading rapidly. Facebook has come under particular scrutiny for allowing the dangerous US-based QAnon conspiracy theory to run rampant on its platform.

Yesterday, YouTube announced it would be moving back to human moderation after too many inaccurate AI moderation decisions. Today, however, a former content moderator sued YouTube for negligence, alleging that the company unlawfully failed to provide a safe workplace for her and other content moderators, who now suffer from PTSD.

Like YouTube, TikTok has been relying heavily on AI and machine learning to tackle content moderation. TikTok’s latest transparency report revealed that some 105 million videos were taken down for violating its rules in the first half of 2020 alone (no doubt in part a symptom of the pandemic and stay-at-home orders). That was double the number removed in the second half of 2019 – though the platform has grown rapidly since then.

While nearly 97% of those videos were removed proactively by TikTok (90.32% of them before receiving any views), the speed at which the video of McNutt’s death circulated is cause for concern, and it only seems right that other platforms follow TikTok’s lead and join forces to create a safer web.