Key takeaways
Fake commercials for adult novelty items presented as children's toys, along with other fetish-oriented content depicting minors, spread from Sora to TikTok and other platforms, raising serious concerns about content moderation on both apps.
The videos feature AI-generated children interacting with products in ways designed to signal sexual content to certain audiences.
Some fake advertisements showed rose-shaped water toys and other items described as squirting "sticky milk," "white foam," or "goo" onto lifelike images of children.
While not explicitly pornographic, the content's intent becomes clear through accompanying captions and comments sections, where some users shared requests to connect on Telegram, a platform law enforcement has identified as a hub for predatory networks.
OpenAI responds to violations
Following inquiries from media outlets, OpenAI took action against accounts creating inappropriate content.
Niko Felix, spokesperson from OpenAI, stated: "OpenAI strictly prohibits any use of our models to create or distribute content that exploits or harms children.
We design our systems to refuse these requests, look for attempts to get around our policies, and take action when violations occur."
The company banned several accounts that were creating videos like the vibrating rose toy commercials.
However, the ease with which users circumvented OpenAI's guardrails has raised questions about whether current safety systems are adequate for preventing subtle forms of exploitation.
The moderation challenge
Mike Stabile, public policy director at the nonprofit Free Speech Coalition, who has spent more than 20 years working in the adult industry and on content moderation issues, believes AI companies need more sophisticated approaches.
"We already see this struggle with platforms like Facebook. How do they differentiate between a parent sharing a picture of their kid playing in a pool or the bath, versus somebody who's sharing something that's meant to be child sex abuse material?" Stabile told Wired.
He argues that OpenAI and similar companies must implement contextual nuance in their moderation practices, potentially including banning certain words associated with fetish content and improving moderation teams with more diversity and training.
Rising concerns about AI-generated abuse material
The problem extends beyond Sora 2.
New 2025 data from the Internet Watch Foundation in the UK shows that reports of AI-generated child sexual abuse material have more than doubled in one year, from 199 between January and October 2024 to 426 in the same period of 2025.
Of this content, 56 percent falls into Category A, the UK's most serious category involving penetrative sexual activity, sexual activity with an animal, or sadism. Girls make up 94 percent of illegal AI images tracked by the organization.
Kerry Smith, chief executive officer of the Internet Watch Foundation, emphasized the gendered nature of the problem: "Often, we see real children's likenesses being commodified to create nude or sexual imagery and, overwhelmingly, we see AI being used to create imagery of girls. It is yet another way girls are targeted online."
Smith added: "We want to see products and platforms which are safe by design, and encourage AI companies to do as much as they can to make sure their products cannot be abused to create child sexual abuse imagery."
Regulatory response
The influx of AI-generated harmful material has prompted legislative action.
The UK is introducing an amendment to its Crime and Policing Bill that will allow authorized testers to check whether artificial intelligence tools are capable of generating child sexual abuse material.
The amendment would ensure AI models have safeguards around specific images, including extreme pornography and non-consensual intimate images.
TikTok also responded to the issue after being alerted to inappropriate content.
A TikTok spokesperson said the platform removed the videos and banned accounts that uploaded content created on other AI platforms in violation of TikTok's strict minor safety policies.