OpenAI will permit verified adult users to access erotic content through ChatGPT beginning in December 2025, CEO Sam Altman announced Tuesday, signaling a dramatic departure from the company's historically restrictive content policies.
The announcement represents a significant policy reversal for the artificial intelligence company, which has previously banned sexual and explicit content across nearly all contexts.
Altman framed the change as part of OpenAI's principle to treat adult users like adults, while emphasizing that new safety tools now allow the company to relax restrictions that were initially implemented to address mental health concerns.
In a post on X on Tuesday, Altman wrote that OpenAI had made ChatGPT quite restrictive in order to be careful around mental health issues.
He acknowledged that this made the platform less useful and enjoyable for many users who had no mental health problems, but said that given the seriousness of the issue, the company wanted to get it right.
Altman stated that OpenAI has now been able to mitigate the serious mental health issues and has new tools that will let it safely relax the restrictions in most cases.
New features rolling out in two phases
The policy changes will unfold in two waves. In the coming weeks, OpenAI plans to release an updated version of ChatGPT that lets users customize the chatbot's personality, making it more human-like with options for conversational tone, emoji use, or friend-like behavior.
In his post, Altman said that if users want their ChatGPT to respond in a very human-like way, or use a ton of emojis, or act like a friend, ChatGPT should do it, but only if they want it, not because the company is usage-maxxing.
The second phase in December will introduce comprehensive age-gating systems alongside the erotic content feature.
However, OpenAI has not yet disclosed specific details about its age verification methods or what types of materials will qualify as permitted erotica.
Timing raises questions amid legal pressures
The announcement comes at a particularly sensitive time for OpenAI, which faces mounting scrutiny over user safety.
In September, the Federal Trade Commission launched an inquiry into several tech companies, including OpenAI, investigating potential risks to children and teenagers from AI chatbots.
The company is also defending itself in a wrongful death lawsuit filed by California parents Matthew and Maria Raine, whose 16-year-old son Adam died by suicide earlier this year.
The lawsuit alleges that ChatGPT provided the teenager with specific advice on methods to kill himself.
The timing drew sharp criticism from California lawmakers.
Assembly member Rebecca Bauer-Kahan stated that less than 24 hours after the tech industry successfully lobbied against legislation that would have required safety guardrails for minors to prevent kids' access to erotica and addictive chatbots, OpenAI announced they're rolling out the exact features that make their products most dangerous to kids.
She added that the announcement proves AI companies will never regulate themselves and will always choose profits over children's lives.
Expert council established to address safety concerns
Also on Tuesday, OpenAI announced the formation of an eight-member Expert Council on Well-Being and AI to advise the company on how artificial intelligence affects users' mental health, emotions, and motivation.
The council includes researchers and mental health experts such as Dr. David Bickham, a research scientist in the Digital Wellness Lab at Boston Children's Hospital; Professor Andrew Przybylski from the University of Oxford; Dr. Sara Johansen, founder of Stanford's Digital Mental Health Clinic; and Professor David Mohr, director of Northwestern University's Center for Behavioral Intervention Technologies.
OpenAI stated that many of these experts were consulted informally during the development of its recently launched parental controls.
The council will meet regularly to provide guidance on topics including how AI should behave in sensitive situations and what guardrails can support ChatGPT users.
However, OpenAI acknowledged in its announcement that the council has no formal decision-making authority.
The company stated it remains responsible for the decisions it makes but will continue learning from the council, the Global Physician Network, policymakers, and others as it builds advanced AI systems in ways that support people's well-being.
Competitive pressures and user engagement questions
The move positions OpenAI to compete more directly with other AI chatbot providers that have already incorporated romantic or erotic features.
Elon Musk's xAI introduced sexually explicit chatbot companions earlier this year, and Character.AI has built a substantial user base partly through romantic roleplay features, with users reportedly spending an average of two hours daily on the platform in 2023.
Altman had previously suggested OpenAI would avoid such features, telling media outlets that the company had not put a sex bot avatar in ChatGPT and would not pursue that direction; the new policy marks a change in approach.
Altman maintained in his Tuesday post that the changes are designed to serve user preferences rather than maximize platform engagement.
OpenAI reports 800 million weekly active users and is racing against Google and Meta to build mass-adopted AI-powered consumer products.
The company has also raised billions of dollars for infrastructure expansion, creating pressure to generate returns on those investments.
The introduction of adult content raises questions about vulnerable user protection, particularly as OpenAI has provided limited evidence to support Altman's claim that the company has successfully mitigated serious mental health issues related to ChatGPT use.