An in-depth look at how advanced AI video generation is being exploited for hate speech on social media platforms.
HM Journal • 4 months ago
The world of artificial intelligence is moving at a breakneck pace, isn't it? Every other week, it seems, there's a new breakthrough, a new model that pushes the boundaries of what we thought possible. Google's Veo 3, released just a couple of months ago in May, certainly fits that bill. It represents a truly startling leap in AI video quality, producing visuals that are, frankly, pixel-perfect. But here's the rub: with great power often comes great potential for misuse. And we're seeing that play out in a deeply troubling way on TikTok right now.
While many AI-generated videos are harmless fun, the sheer realism of Veo 3's output makes it a potent tool for those with less-than-harmless intentions. On TikTok, a platform that might even face a ban in some regions soon, users have noticed a disturbing surge in racist AI videos. And guess what? Many of them bear the tell-tale "Veo" watermark, confirming their origin from Google's cutting-edge AI.
A recent report from Media Matters laid bare the extent of the problem. Numerous TikTok accounts have been actively posting AI-generated content that leverages deeply offensive racist and antisemitic tropes. The vitriol is often aimed at Black people, depicting them with abhorrent stereotypes—as "the usual suspects" in crimes, as absent parents, or even as monkeys with an inexplicable affinity for watermelon. It's truly disgusting. But the hate doesn't stop there; immigrants and Jewish people are also targeted. These short videos, usually topping out at around eight seconds, are designed to provoke, to shock, and ultimately to drive engagement through anger and drama. And it's working: the original posts are reportedly flooded with comments that echo and amplify these hateful stereotypes.
Both TikTok and Google have clear, explicit policies against this kind of content. TikTok's community guidelines are unambiguous: "We do not allow any hate speech, hateful behavior, or promotion of hateful ideologies. This includes explicit or implicit content that attacks a protected group." Sounds good on paper, right? Google, too, has a comprehensive Prohibited Use Policy for its generative AI services, banning anything that promotes hate speech, harassment, bullying, or abuse.
As for Google's role, one might expect Veo 3 simply to refuse to generate such content. After all, AI models are supposed to have guardrails, and Google has often emphasized safety when rolling out new AI. But in some instances, Veo 3 is proving more "compliant" than intended, and we've seen how easily users can skirt the rules. Vague prompting, combined with the AI's current inability to grasp the subtle, insidious nature of certain racist tropes (like substituting monkeys for humans to avoid direct racial slurs), makes it easy for bad actors to get around the system. It's a cat-and-mouse game, and right now the mice are winning too often.