Can the Individual User Make the Internet a Safer Place?

The dark side of the internet is becoming increasingly visible. How can we stop this content from leaking onto our feeds and into our minds?

Since I started reporting on the tech industry, I’ve noticed a micro-trend: every week, the majority of the news stories I write are interconnected, linking back to the latest issue at the heart of Silicon Valley or (overwhelmingly, these days) the White House. This week, outside of the latest product releases and the never-ending TikTok saga, the focus has been on disturbing content.

That’s not to say that news about content moderation policies and the circulation of disturbing, graphic, and violent footage is new – it’s not – but it got me thinking about whether we, the individual users, can do anything to stop this content from leaking onto our feeds and into our minds.

Look: I’ll be the first to say I spend too much time on Twitter and Instagram. In fact, last week I spent just under 19 hours on social media – just under half of the total time I spent looking at my phone at all. I don’t necessarily feel guilty about this (despite watching Netflix’s extremely confronting The Social Dilemma, which gave me the fear for all of 90 minutes). Social media is fun, yes. But more than that, it’s part of our everyday life – a necessity, almost, especially during the pandemic, when seeing friends and family is basically a myth.

But the more we use these apps, and the more influence the internet at large has over our lives, the more the cracks begin to show. And the dark side of the internet is becoming increasingly visible.

This week, the day after YouTube announced it would be shifting away from the AI content moderators it recruited mid-pandemic and back to more reliable, nuanced, and accurate human moderators, one of the company’s ex-moderators, who worked for them through an agency, filed a lawsuit against YouTube for negligence. The ex-staffer accused YouTube of failing to support its content moderators, who were developing mental health conditions, including PTSD, after viewing hours of disturbing videos of school shootings, executions, child abuse, and more. The same complaints were made against Facebook in 2018.

Then, yesterday, we saw TikTok reaching out to its competitors to propose they join forces to tackle disturbing content online. This came after a video of a man dying by suicide, which was live-streamed on Facebook, was circulated 10,000 times on TikTok in what the company believes to have been a coordinated attack. The video was uploaded with deceptive thumbnails, making it difficult for moderators to act quickly.

This year, we’ve seen our fair share of disturbing content. Sadly, much of this has been in the form of videos of police brutality against Black people in America, many of which ended in death. While it’s clear these videos inspired the latest wave of Black Lives Matter protests and shone yet another light on the issue of systemic racism, a lot of users have called for others to stop sharing this traumatizing content, which has since been branded “trauma porn.”

Writing for The New Statesman in 2017, journalist Stephanie Boland recalled seeing images of a terror attack on London. She discussed how now, “as the line between traditional news and social media blurs,” we – as both consumers and participants in the news cycle – have a responsibility to tread carefully. For unsuspecting social media users, exposure to images of murder, terror, and graphic violence can be extremely traumatizing. I still think of the images that surfaced on Twitter three years ago, after a terror attack in my home city, Manchester. I remember how I felt when I saw them – that I wished I never had, and never would again.

Reading the ex-YouTube moderator’s account of what she had to deal with, it became clearer to me that we need to stop treating social media sites as purely technological. There are humans behind the scenes who, even more than individual users, are subjected to horrendous imagery every single day. And those humans can make mistakes. When they do, it becomes our responsibility – as the humans on the other side of the screen – to do what’s right by making informed decisions about what should and shouldn’t be circulated and who this content might affect.