The latest on Grok’s gross AI deepfakes problem
X safety teams ‘repeatedly warned management’ about undressing tools.
While X has long allowed NSFW images, The Washington Post reports that the platform’s content moderation filters couldn’t handle the estimated millions of sexualized deepfakes of real women and children being generated by Grok.
“For instance, child sexual abuse material was typically rooted out by matching it against a database of known illegal images. But an AI edited image wouldn’t automatically trigger these warnings.”
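To make the gap concrete: hash-matching systems flag an image only if its fingerprint already exists in a database of known material (real deployments use robust perceptual hashes such as Microsoft's PhotoDNA, not the cryptographic hash used here). A minimal sketch, with a hypothetical `KNOWN_HASHES` database standing in for a clearinghouse feed, shows why a freshly generated image passes unflagged:

```python
import hashlib

# Hypothetical database of fingerprints for known illegal images,
# standing in for feeds from clearinghouses like NCMEC. Real systems
# use perceptual hashes that survive resizing and re-encoding;
# SHA-256 is used here purely for illustration.
KNOWN_HASHES: set[str] = set()

def fingerprint(image_bytes: bytes) -> str:
    """Reduce an image to a fixed-size fingerprint for database lookup."""
    return hashlib.sha256(image_bytes).hexdigest()

def is_known_illegal(image_bytes: bytes) -> bool:
    """Flag only images whose fingerprint is already on file.

    A newly AI-generated image has no prior fingerprint in the
    database, so it sails through this check unflagged -- the
    blind spot the Post describes.
    """
    return fingerprint(image_bytes) in KNOWN_HASHES
```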