With more and more people forced to work from home due to the coronavirus pandemic, YouTube has made some changes to its moderation system. Now, if a video is flagged for potentially violating the site’s terms of service, it’s likely to be removed by AI rather than a human reviewer. So, in short, expect mistakes to be made.
Usually videos are flagged by AI and then reviewed by humans, who, according to The Verge, work from specific offices that are set up to control the risk of exposing sensitive user data. But without access to that controlled environment, YouTube is having to make a change to the process.
“As a result of the new measures we’re taking, we will temporarily start relying more on technology to help with some of the work normally done by reviewers,” YouTube’s blog post states. “This means automated systems will start removing some content without human review, so we can continue to act quickly to remove violative content and protect our ecosystem, while we have workplace protections in place.
“As we do this, users and creators may see increased video removals, including some videos that may not violate policies. We won’t issue strikes on this content except in cases where we have high confidence that it’s violative. If creators think that their content was removed in error, they can appeal the decision and our teams will take a look. However, note that our workforce precautions will also result in delayed appeal reviews.”
To counter any mistakes, YouTube says content creators can still appeal removal decisions, though the review process for these appeals will be slower than usual. The company also won’t issue strikes for most automated removals, to prevent creators from racking up unwarranted penalties or bans.