
Key takeaways: Users are bypassing OpenAI's safeguards to create inappropriate videos featuring AI-generated children on Sora 2, including fake toy commercials with sexual undertones.

Fake commercials for adult novelty items presented as children's toys, along with other fetish-oriented content depicting minors, migrated from Sora to TikTok and other platforms, raising serious concerns about the app's content moderation capabilities.
The videos feature AI-generated children interacting with products in ways designed to signal sexual content to certain audiences.
Some fake advertisements showed rose-shaped water toys and other items described as squirting "sticky milk," "white foam," or "goo" onto lifelike images of children.
While not explicitly pornographic, the content's intent becomes clear through accompanying captions and comments sections, where some users shared requests to connect on Telegram, a platform law enforcement has identified as a hub for predatory networks.
Following inquiries from media outlets, OpenAI took action against accounts creating inappropriate content.
Niko Felix, a spokesperson for OpenAI, stated: "OpenAI strictly prohibits any use of our models to create or distribute content that exploits or harms children.
We design our systems to refuse these requests, look for attempts to get around our policies, and take action when violations occur."
The company banned several accounts that were creating videos like the vibrating rose toy commercials.
However, the ease with which users circumvented OpenAI's guardrails has raised questions about whether current safety systems are adequate for preventing subtle forms of exploitation.
Mike Stabile, public policy director at the nonprofit Free Speech Coalition, who has more than 20 years of experience in the adult industry and in content moderation, believes AI companies need more sophisticated approaches.
"We already see this struggle with platforms like Facebook. How do they differentiate between a parent sharing a picture of their kid playing in a pool or the bath, versus somebody who's sharing something that's meant to be child sex abuse material?" Stabile told Wired.
He argues that OpenAI and similar companies must implement contextual nuance in their moderation practices, potentially including banning certain words associated with fetish content and improving moderation teams with more diversity and training.
The problem extends beyond Sora 2.
New 2025 data from the Internet Watch Foundation in the UK shows that reports of AI-generated child sexual abuse material have doubled in one year, from 199 between January and October 2024 to 426 in the same period of 2025.
Of this content, 56 percent falls into Category A, the UK's most serious category involving penetrative sexual activity, sexual activity with an animal, or sadism. Girls make up 94 percent of illegal AI images tracked by the organization.
Kerry Smith, chief executive officer of the Internet Watch Foundation, emphasized the gendered nature of the problem: "Often, we see real children's likenesses being commodified to create nude or sexual imagery and, overwhelmingly, we see AI being used to create imagery of girls. It is yet another way girls are targeted online."
Smith added: "We want to see products and platforms which are safe by design, and encourage AI companies to do as much as they can to make sure their products cannot be abused to create child sexual abuse imagery."
The influx of AI-generated harmful material has prompted legislative action.
The UK is introducing an amendment to its Crime and Policing Bill that will allow authorized testers to check whether artificial intelligence tools are capable of generating child sexual abuse material.
The amendment would ensure AI models have safeguards around specific images, including extreme pornography and non-consensual intimate images.
TikTok also responded to the issue after being alerted to inappropriate content.
A TikTok spokesperson said the platform removed the videos and banned the accounts that had uploaded content created on other AI platforms, in violation of TikTok's strict minor-safety policies.