TikTok has faced several problems in the past owing to its human moderation system. Now the company is vowing to accelerate its moderation cycle by replacing human reviewers with more efficient automated systems. TikTok will start using the new automated review systems in the US and Canada first to filter out videos featuring nudity, sex, violence, graphic content, illegal activities, and violations of its minor safety policy. When the new system flags a video in one of these categories, the video will be deleted immediately, and the creator will be given a chance to appeal the decision to a human moderator.
Before the new systems were announced, the company had its human moderators review all videos on its US platform before they were deleted, a TikTok spokesperson told The Verge. The new system was put in place to lower the “volume of upsetting videos” moderators need to sift through and to let them focus on trickier clips, such as misinformation. At other companies like Facebook, human moderators have developed PTSD-like symptoms from watching flagged videos.
The only issue with such automated systems is that they are not fully reliable, and their errors can unfairly punish certain communities through automated takedowns. TikTok already has a history of discriminatory moderation and was recently criticized for pulling the intersex hashtag twice. TikTok says it is deploying automation only where it is most reliable; it has been testing the technology in other countries, including Brazil and Pakistan, and says only 5% of the videos its systems removed should actually have been allowed to stay up. TikTok says it will be rolling out the automated review system “over the next few weeks.”