YouTube Deletes 5 Million Videos for Content Violation

By Reuters | Updated: 25 April 2018 11:29 IST

YouTube, owned by Alphabet's Google, deleted about 5 million videos from its platform for content policy violations in last year's fourth quarter before any viewers saw them, it said in a new report that highlighted its response to pressure to better police its online community.

YouTube has been criticised by governments that say it does not do enough to remove extremist content, and by advertisers such as Procter & Gamble Co and Under Armour, which briefly boycotted the service after they unwittingly ran ads alongside videos the companies deemed inappropriate.


YouTube said in the report Monday that automating enforcement through software "is paying off" in quicker removals. The company said it did not have comparable data from prior quarters.

YouTube said it still needs an in-house team of human reviewers to verify automated findings; an additional 1.6 million videos were removed only after some users had watched the clips.


The automated system did not identify another 1.6 million videos that YouTube took down once they were reported to it by users, activist organisations and governments.


"They still have lots of work to do but they should be praised in the interim," Paul Barrett, who has followed YouTube as deputy director at the New York University Stern Center for Business and Human Rights, said.


Facebook also said on Monday it had removed or put a warning label on 1.9 million pieces of extremist content related to ISIS or al-Qaeda in the first three months of the year, or about double the amount from the previous quarter.

Corralling problematic videos, whether through humans or machines, could help YouTube, a major driver of Google's revenue, stave off regulation and a sales hit. For now, analysts say demand for YouTube ads remains robust.


The following are steps that YouTube has taken.

Extremism
YouTube officials say the company removes videos that contain hate speech or incite violence. It issues "a strike" to the uploader in each instance and bans uploaders with three strikes in a three-month period. Also banned are government-identified "terrorist organizations" and materials such groups would upload if they could. YouTube shares the digital fingerprints of removed videos with a consortium of tech companies.
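The strike policy described above amounts to a simple sliding-window rule: ban any uploader who accrues three strikes within a three-month period. A minimal sketch of that logic, where the function and constant names are illustrative and the 90-day window approximates the article's "three-month period" (YouTube's actual enforcement system is not public):

```python
from datetime import datetime, timedelta

# Assumed values based on the policy as reported: three strikes
# within a roughly three-month (90-day) window triggers a ban.
STRIKE_WINDOW = timedelta(days=90)
STRIKE_LIMIT = 3

def is_banned(strike_dates, now):
    """Return True if the uploader has accrued STRIKE_LIMIT or more
    strikes within the STRIKE_WINDOW ending at `now`."""
    recent = [d for d in strike_dates if now - d <= STRIKE_WINDOW]
    return len(recent) >= STRIKE_LIMIT

# Example: three strikes in the past 90 days means a ban.
now = datetime(2018, 4, 1)
print(is_banned([now - timedelta(days=10),
                 now - timedelta(days=40),
                 now - timedelta(days=80)], now))
```

Older strikes simply age out of the window, so an uploader with two strikes from a year ago and one recent strike would not be banned under this reading of the rule.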

Borderline videos get stamped "graphic" and stripped of features that would give them prominence. YouTube added options for advertisers to avoid sponsoring these videos last year.

YouTube automated scans have sped up takedowns of videos tied to ISIS or al-Qaeda. But it has struggled to draw a line on views espoused by white right-wing extremists, who tend to know the rules well and stop short of overt hate speech.

Misinformation
YouTube said it would be difficult to enforce a "truth" policy, leaving the company to look for other policy violations to remove videos with misleading information.

For instance, YouTube could delete a fabricated news report by finding it harasses its subject.

Since autumn, it has promoted "authoritative sources" such as CNN and NBC News in search results to push down problematic material. YouTube also plans to display Wikipedia descriptions alongside videos to counter hoaxes.

But YouTube is still criticised as slow to identify misinformation during major global breaking news events, when video bloggers quickly upload commentary. The company preserves other challenged clips that have public interest value or come from politicians.

Child endangerment
YouTube last year began removing videos and issuing strikes when the filming may have put a child in danger or when a cartoon character is used inappropriately.

YouTube does not alert law enforcement or intellectual property owners about these videos because, it says, it cannot easily identify uploaders and rights holders. Copyright owners that believe a video violates guidelines or infringes their copyright or trademark can report it to YouTube.

The company last year began stepping up moderation of comments that inappropriately reference children.

© Thomson Reuters 2018



