Key Takeaways
OpenAI released its upgraded Sora 2 video generation model on September 30, 2025, sparking immediate controversy as users flooded social media with AI-generated videos featuring copyrighted characters from Nintendo, Disney, and other major entertainment properties.
The new model's approach to intellectual property has drawn sharp criticism from legal experts who warn that the company is attempting to reverse fundamental principles of copyright law.
Copyright concerns mount over opt-out policy
Unlike traditional copyright protocols, OpenAI requires rights holders to actively opt out of having their copyrighted material used in Sora-generated videos, rather than seeking permission first.
According to The Wall Street Journal, blanket opt-outs are not available, forcing copyright holders to submit specific examples of offending content.
Rob Rosenberg, former Showtime Networks executive vice president and general counsel, characterized OpenAI's approach as reversing established legal norms.
"They're setting up this false bargain where they can do this unless you opt out. And if you don't, it's your fault. That's not the way the law works," Rosenberg said.
Ed Klaris, an intellectual property lawyer and professor at Columbia Law School, told The Hollywood Reporter that OpenAI's use of movies and TV shows without licensing is "really contrary to what copyright is meant to protect."
Mark McKenna, a law professor and faculty director of the UCLA Institute for Technology, Law, and Policy, drew a distinction between using copyrighted data to train AI models versus generating outputs that depict copyright-protected information.
"If OpenAI is taking an aggressive approach that says they're going to allow outputs of your copyright-protected material unless you opt out, that strikes me as not likely to work. That's not how copyright law works. You don't have to opt out of somebody else's rules," McKenna said.
McKenna added that "outputting visual material is a harder copyright question than just the training of models," and described the opt-out approach as following a "move fast and break things mindset."
Viral videos showcase copyrighted characters
Within the first day of release, users created videos featuring Nintendo characters Mario, Luigi, and Princess Peach, as well as Pikachu from Pokémon, storming Normandy's beaches.
The video generator produces content featuring recognizable movies, TV shows, and games, including Bob's Burgers, SpongeBob SquarePants, Gravity Falls, Grand Theft Auto, and Red Dead Redemption.
When prompted with "Rick and Morty universe, characters talking about OpenAI," Sora returned a video in the animation style of the show, with Rick saying in a voice identical to voice actor Ian Cardoni: "Morty, check it out. OpenAI dropped a new model so I wired it into a quantum subcluster and gave it my personality — well the usable parts."
Varun Shetty, OpenAI's head of media partnerships, said the company is "working with rights holders to understand their preferences for how their content appears across our ecosystem," adding that "people are eager to engage with their family and friends through their own imaginations, as well as stories, characters, and worlds they love."
Legal grounds and precedents
Earlier in 2025, Disney, Warner Bros. Discovery, and Universal sued Midjourney for allowing users to produce images and videos of iconic copyrighted characters, much as OpenAI's Sora 2 now does.
OpenAI already faces legal action over copyright infringement claims, including a high-profile lawsuit brought by authors Ta-Nehisi Coates and Jodi Picoult, as well as suits from newspapers including The New York Times.
OpenAI competitor Anthropic recently agreed to pay $1.5 billion to settle claims from authors who alleged the company illegally downloaded and used their books to train its AI models.
Disney, Warner Bros., and Sony Music Entertainment did not reply to requests for comment on OpenAI's Sora 2 release.
Technical capabilities and safety measures
Sora 2 represents what OpenAI calls "the GPT-3.5 moment for video," building on the original Sora model from February 2024.
The new model can generate Olympic gymnastics routines, backflips on paddleboards, and triple axels while a cat holds on, demonstrating improved physics simulation compared to earlier systems that would "morph objects and deform reality" to execute text prompts.
OpenAI includes visible moving watermarks on all videos and invisible metadata to indicate AI generation, though the company's own documentation acknowledges this metadata "is not a silver bullet to address issues of provenance" and "can easily be removed either accidentally or intentionally."
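The fragility OpenAI describes comes from how provenance metadata is typically stored: alongside the pixel data in the file container, not within the pixels themselves. A minimal Python sketch illustrates the failure mode (assumption: the real mechanism is a C2PA-style manifest in the video container; the field names below are purely illustrative, not OpenAI's actual format):

```python
# Toy model of a video file as a container holding pixel data plus
# provenance metadata (illustrative names, not OpenAI's real schema).

def make_video(pixels: bytes) -> dict:
    """Generate a 'video' with embedded AI-provenance metadata."""
    return {
        "payload": pixels,
        "metadata": {"generator": "sora-2", "ai_generated": True},
    }

def reencode(video: dict) -> dict:
    """A naive re-encode (screen capture, transcode, platform upload)
    reproduces the pixels but writes a fresh container, so the
    original metadata never makes it across."""
    return {"payload": video["payload"], "metadata": {}}

original = make_video(b"\x00" * 16)
shared_copy = reencode(original)

print(original["metadata"])     # {'generator': 'sora-2', 'ai_generated': True}
print(shared_copy["metadata"])  # {} -- pixels identical, provenance gone
```

This is why the documentation calls metadata "not a silver bullet": any pipeline that rewrites the container, even with no intent to deceive, discards the provenance signal while leaving the visible content untouched.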
Siwei Lyu, a professor of computer science and director of the University of Buffalo's Media Forensic Lab and Center for Information Integrity, said that "invisible watermark and tracing tools can only be tested internally, so it is hard to judge how well they work at this point."
The Sora app launched with invite-only access in the United States and Canada, featuring a TikTok-style algorithmic feed for sharing AI-generated videos.