Alphabet Inc’s Google recently said that it removed nearly 5 million videos from its video-sharing site YouTube in the fourth quarter of 2017 for violating its policies. The videos were taken down before viewers could see them, underscoring the company’s response to criticism from government authorities that it is not doing enough to delete extremist content.
Some leading advertisers also briefly boycotted the service after seeing their ads running alongside videos with inappropriate content.
YouTube said on Monday that automated enforcement software is helping it remove such content more quickly.
The company also removed an additional 1.6 million videos flagged by viewers, though it said it still needs to verify that those removals complied with its policies. Those videos were not identified by its automated system.
Separately, social network giant Facebook recently revealed that it had deleted or put a warning label on 1.9 million posts linked to al-Qaeda or ISIS during the first quarter of 2018, an increase of more than 50 percent over the same period a year earlier.
Officials at YouTube said the company deletes videos that contain hate speech or incite violence. It warns the uploaders of such videos by issuing “a strike,” and uploaders who receive three strikes within a three-month period are banned from the site.
The company is also working to remove content containing false information, though it has struggled to enforce a truth policy. It can remove fabricated news that harasses its subject, but it is slow to identify such content.
Last year, YouTube also began removing videos that show a child in danger. However, it does not notify law enforcement agencies or intellectual property owners about the content, because it cannot easily identify the uploaders and rightsholders.