Key takeaways
OpenAI reported 75,027 incidents to the National Center for Missing & Exploited Children's (NCMEC) CyberTipline between January and June 2025, a stark jump from the 947 reports it filed during the same period in 2024.
The CyberTipline serves as a Congressionally authorized national clearinghouse for reporting child sexual abuse material and other forms of child exploitation.
Companies are legally required to report apparent child exploitation to the system, which then forwards cases to appropriate law enforcement agencies for investigation.
Factors driving the surge in reports
OpenAI spokesperson Gaby Raila attributed the increase to multiple factors in a company statement.
The company made investments toward the end of 2024 "to increase our capacity to review and action reports in order to keep pace with current and future user growth," Raila said.
She added that the timeframe corresponds to "the introduction of more product surfaces that allowed image uploads and the growing popularity of our products, which contributed to the increase in reports."
The company's user base has expanded significantly. In August, Nick Turley, vice president and head of ChatGPT, announced that the app had four times as many weekly active users as it did a year earlier.
The 75,027 reports OpenAI submitted covered 74,559 pieces of content, a roughly one-to-one ratio indicating the company filed close to one report per piece of content it identified.
OpenAI's reported content includes both uploads of and requests for child sexual abuse material across all its products, from the consumer-facing ChatGPT application, which generates text and images, to API access for developers.
The figures do not include reports related to Sora, OpenAI's video-generation tool, which launched in September, after the reporting period ended.
Broader industry crisis involving AI-generated content
The spike in OpenAI's reports reflects a broader crisis affecting the entire technology sector. NCMEC has documented explosive growth in reports involving generative artificial intelligence technology.
Between January and June 2025, the organization received 440,419 reports involving generative AI-related child sexual exploitation, compared to 6,835 during the same period in 2024.
John Shehan, senior vice president who oversees NCMEC's Exploited Children Division, warned about the accelerating threat.
"These alarming increases are a wake-up call," Shehan said in a statement. "NCMEC began tracking GAI in 2023, and the growth has been staggering. It's important that we stay on top of these emerging threats."
Generative AI technology has fundamentally changed how offenders exploit children online.
Previously, perpetrators needed to manipulate or trick children into sharing explicit images before attempting extortion.
Now, offenders can locate innocuous photos of children on social media, use generative AI platforms to create sexualized images, and immediately begin blackmailing victims.
The technology is also being used to create entirely synthetic child sexual abuse material and to simulate explicit conversations with children.
Understanding report statistics and context
Experts caution that report counts require careful interpretation.
Higher report volumes sometimes reflect changes in a platform's automated moderation systems or reporting criteria rather than an actual increase in harmful activity.
A single piece of content can generate multiple reports if identified in different accounts or instances, while a single report may cover multiple pieces of content.
Statistics also do not capture supplemental reports OpenAI submits to NCMEC when providing additional information about particularly egregious child exploitation cases, including child sexual abuse material production and ongoing sexual abuse of children.
The surge in reports comes during a year of heightened scrutiny on AI companies' child safety practices.
In 2025, 44 state attorneys general sent a joint letter to multiple AI companies, warning they would use "every facet" of their authority to protect children from exploitation by artificial intelligence products.
Both OpenAI and competitor Character.AI have faced lawsuits from families alleging that chatbot interactions contributed to the deaths of minors.
In response to mounting concerns, OpenAI has rolled out new safety features, including parental controls that let parents link their accounts with their teens' accounts and adjust settings such as disabling voice mode, blocking image creation, and opting out of model training.
Read more:
Italy Orders Meta To Suspend WhatsApp Terms Blocking Rival AI Chatbots
Google Cloud CEO Reveals Decade-Long AI Strategy Focused On Silicon And Energy Challenges
OpenAI Acknowledges AI Browsers May Never Fully Escape Prompt Injection Attacks