YouTube now allows more harmful misinformation on its platform

YouTube is following in the potentially dangerous footsteps of Meta and X (formerly Twitter) by relaxing its content moderation policies. New internal training materials viewed by The New York Times instruct moderators to leave videos up even if as much as half of their content violates YouTube’s policies, up from the previous threshold of a quarter. The platform introduced the new policy in mid-December, a month after President Trump was re-elected.

The new guidelines reflect what YouTube deems to be in the “public interest.” These areas include discussion and debate of elections, movements, race, gender, immigration and more. “Recognizing that the definition of ‘public interest’ is always evolving, we update our guidance for these exceptions to reflect the new types of discussion we see on the platform today,” Nicole Bell, a YouTube spokesperson, told The New York Times. “Our goal remains the same: to protect free expression on YouTube while mitigating egregious harm.”

When Engadget reached out to YouTube, Bell shared more context on the decision. “We regularly update our Community Guidelines to adapt to the content we see on YouTube. As examples, earlier this year, we retired our remaining COVID-19 policies and added new protections related to gambling content,” Bell said. “The New York Times article is about a separate aspect of our approach: our long-standing practice of applying exceptions to our policies for content that serves the public interest or has EDSA (educational, documentary, scientific, artistic) context. These exceptions apply to a small fraction of the videos on YouTube, but are vital for ensuring important content remains available. This practice allows us to prevent, for example, an hours-long news podcast from being removed for showing one short clip of violence. We regularly update our guidance for these exceptions to reflect the new types of discussion and content (for example emergence of long, podcast content) that we see on the platform, and the feedback of our global creator community. Our goal remains the same: to protect free expression on YouTube.”

The platform has reportedly removed 22 percent more videos for hateful and abusive content than it did last year. It’s not clear how many of those videos were reported, or how many would have been removed under the previous guidelines.

YouTube reportedly told moderators to now favor keeping content up when a video presents a tension between freedom of expression and risk of harm. For example, they were shown a video called “RFK Jr. Delivers SLEDGEHAMMER Blows to Gene-Altering JABS,” which falsely claimed that COVID-19 vaccines can change people’s genes. YouTube told the moderators that the public interest “outweighs the harm risk” and that the video should stay up. It has since been removed, though the reason is unclear.

Other videos allowed to remain online included one containing a slur aimed at a transgender person and another in which a commentator described a graphic death for former South Korean President Yoon Suk Yeol.

Update, June 9, 2025, 12:27PM ET: This article has been updated to include YouTube’s statement to Engadget.
