YouTube Says It Will Bring Back Human Evaluators Due to AI Failings

Approximately double the number of videos were taken down in April–June this year compared with previous quarters

YouTube has announced it will be bringing back human evaluators to moderate harmful content like hate speech and fake news due to the shortcomings of the AI moderators it has been using since late March.

Back in March, at the start of the lockdown period for many countries, YouTube announced it would be relying more heavily on AI moderators because its human reviewers were unable to work from the office. However, the machine learning systems have proven nowhere near as accurate as human moderators, leaving content creators facing unwarranted takedowns and video removals, the company told the Financial Times.

Speaking to the FT, Neal Mohan, YouTube’s chief product officer, said: “One of the decisions we made [at the beginning of the pandemic] when it came to machines who couldn’t be as precise as humans, we were going to err on the side of making sure that our users were protected, even though that might have resulted in [a] slightly higher number of videos coming down.”

According to the FT, more than 11 million videos were taken down in this year’s second quarter (April–June) – roughly double the usual rate. On top of that, more than half of the videos that users appealed were reinstated, significantly higher than the 25% reinstatement rate of previous quarters.

This underlines the need for human moderators when vetting harmful or hateful content online – a job that can be extremely distressing. Platforms have been under pressure to crack down on harmful content, including misinformation and hate speech, particularly since the start of the pandemic and the resurgence of the Black Lives Matter movement earlier this year.

Machines are invaluable, however, when it comes to taking down rule-breaking videos quickly. Mohan acknowledged the speed with which AI moderators are able to flag and act on such content: “Over 50 per cent of those 11m videos were removed without a single view by an actual YouTube user and over 80 per cent were removed with less than 10 views. And so that’s the power of machines.”

However, Mohan also noted the nuances that AI often fails to grasp, stating, “That’s where our trained human evaluators come in.” Human evaluators take the content flagged by AI and then “make decisions that tend to be more nuanced, especially in areas like hate speech, or medical misinformation or harassment.”