Key Takeaways
Tech advocacy organization Public Citizen has called on OpenAI to immediately pull its AI video generation app Sora 2 from public access, warning that the technology poses significant risks to democracy, privacy, and public safety.
The nonprofit sent a formal letter on November 11, 2025, to OpenAI CEO Sam Altman and submitted a copy to the U.S. Congress, escalating concerns about the rapid deployment of AI video technology.
The letter accuses OpenAI of demonstrating a "consistent and dangerous pattern of rushing to market with a product that is either inherently unsafe or lacking in needed guardrails," according to multiple news reports.
Public Citizen argues that Sora 2 shows "reckless disregard" for product safety, people's rights to their own likeness, and the stability of democracy.
Democracy and deepfake threats are at the forefront
J.B. Branch, Public Citizen's tech policy advocate who authored the letter, emphasized the potential political ramifications of the technology in an interview.
"Our biggest concern is the potential threat to democracy," Branch said. "I think we're entering a world in which people can't really trust what they see. And we're starting to see strategies in politics where the first image, the first video that gets released, is what people remember."
The Sora app allows users to create AI-generated videos from text prompts, producing content that ranges from amusing fake scenarios to realistic deepfakes.
Videos created with Sora have spread rapidly across social media platforms, including TikTok, Instagram, X, and Facebook. Popular genres include fake doorbell camera footage featuring unusual animal encounters and videos depicting public figures in fabricated situations.
However, advocacy groups and experts have raised alarms about more problematic content.
News outlet 404 Media reported on November 8, 2025, that social media accounts have posted dozens of AI-generated videos depicting women and girls being strangled, demonstrating failures in the platform's content moderation systems.
Branch noted that while OpenAI blocks nudity, "women are seeing themselves being harassed online" through other types of fetishized content that bypasses the app's restrictions.
He criticized OpenAI's approach to product development, saying the company is "putting the pedal to the floor without regard for harms. Much of this seems foreseeable. But they'd rather get a product out there, get people downloading it, get people who are addicted to it rather than doing the right thing and stress-testing these things beforehand and worrying about the plight of everyday users."
Industry pushback and OpenAI's response
OpenAI has faced mounting criticism from multiple quarters since launching Sora for iPhone users in October 2025.
The company expanded availability to Android devices last week in the U.S., Canada, and several Asian countries, including Japan and South Korea.
Significant opposition has emerged from the entertainment industry.
A Japanese trade association representing prominent studios, including Hayao Miyazaki's Studio Ghibli, along with video game companies Bandai Namco and Square Enix, complained about unauthorized AI-generated content featuring copyrighted characters.
OpenAI has also faced backlash from the estates of public figures after videos emerged showing fabricated depictions of Michael Jackson, Martin Luther King Jr., and Mister Rogers.
In response to the Japanese concerns, OpenAI stated: "We're engaging directly with studios and rightsholders, listening to feedback, and learning from how people are using Sora 2, including in Japan, where cultural and creative industries are deeply valued."
The company acknowledged implementation challenges shortly after the initial launch, admitting that "overmoderation is super frustrating" for users while noting the importance of being conservative "while the world is still adjusting to this new technology."
Broader concerns about AI safety
Public Citizen's letter comes amid wider scrutiny of OpenAI's product safety practices.
Seven lawsuits filed last week in California courts allege that the company's ChatGPT chatbot contributed to user suicides and psychological harm.
The lawsuits claim OpenAI released its GPT-4o model prematurely despite internal warnings about manipulative behavior; four of the cases involve people who died by suicide.
While Public Citizen was not involved in those lawsuits, Branch drew parallels between the two situations, seeing both as examples of prioritizing rapid market entry over adequate safety testing.
OpenAI did not immediately respond to requests for comment on Public Citizen's letter calling for Sora 2's withdrawal.
The company has made some content moderation adjustments since launch, including agreements with the estate of Martin Luther King Jr. in late October and implementing guardrails to prevent the generation of well-known copyrighted characters without permission.