As of Monday, Meta is taking a stricter approach to copied content on Facebook. They’re going after accounts that share text, pictures, or videos created by others without proper permission. The move aims to protect authentic creators and curb the rise of low-quality, AI-generated posts, following a similar policy update from YouTube just days ago.
This year alone, Meta has axed around 10 million profiles impersonating big-name content creators, the company said. It also took action against 500,000 accounts caught in “spammy behavior” or fake engagement, like posting repetitive content or inflating likes. These accounts face penalties like demoted comments, slashed post visibility, and temporary bans from monetization programs. Meta’s goal? To make sure original creators get the spotlight, not copycats.
The surge of AI tools has made it easier to churn out what’s being called “AI slop”—low-effort videos or posts stitched together from existing clips, often with AI narration or minimal edits. This rise in low-quality content has raised concerns about AI-driven scams, where deceptive posts mimic legitimate creators to exploit viewers. Meta’s fighting back with tech that detects duplicate videos and cuts their reach, ensuring the original gets the views. They’re also testing a feature that links duplicated videos back to the source, a tactic already used on Instagram to swap out reposted Reels with originals, helping creators boost their visibility authentically.
Meta stressed it’s not targeting users who engage creatively, like making reaction videos or adding their own spin to trends. The crackdown is aimed at accounts that repeatedly repost others’ content or falsely claim to be the original creator. “Creators should focus on authentic storytelling,” Meta advised, warning against slapping watermarks on someone else’s content or posting short, low-value clips. High-quality captions matter too—unedited AI-generated ones won’t cut it.
To help creators stay on track, Meta’s rolling out new tools in Facebook’s Professional Dashboard. A post-level insights feature shows why certain posts aren’t getting traction, while a Support section flags risks to monetization or distribution. The new rules will roll out gradually over the coming months, giving creators time to adjust.
Meta is also facing criticism that its automated systems moderate too aggressively. Nearly 30,000 people have signed a petition asking Meta to restore wrongly disabled accounts and offer better human support, with small businesses hit especially hard. Though Meta hasn’t commented publicly on the petition, its push to cut spam and reward original content suggests a broader effort to improve the platform.
With AI making it easier to flood social media with recycled content, Meta’s moves align with industry trends. YouTube recently tightened its rules on mass-produced videos, clarifying that AI-enhanced content is fine if it’s original and not spammy. Meta, meanwhile, is investing heavily in AI—CEO Mark Zuckerberg announced plans to spend “hundreds of billions” on AI infrastructure, including a supercluster set for 2026. For now, creators are urged to double down on originality to thrive on Facebook’s evolving feed.