Key takeaways
The announcement comes as the company faces multiple lawsuits alleging its ChatGPT chatbot has contributed to mental health crises and deaths.
CEO Sam Altman announced the position on X on December 27, describing it as "a critical role at an important time" for the company.
He acknowledged that while AI models are "improving quickly and are now capable of many great things, they are also starting to present some real challenges."
"This will be a stressful job and you'll jump into the deep end pretty much immediately," Altman wrote in his post.
He specifically highlighted concerns about "the potential impact of models on mental health" and noted that current models are "so good at computer security they are beginning to find critical vulnerabilities."
A vacant leadership position at a critical time
The Head of Preparedness role has remained unfilled since July 2024, when Aleksander Madry, the previous holder of the position, was reassigned to work on AI reasoning.
At the time, Altman announced that executives Joaquin Quinonero Candela and Lilian Weng would share preparedness responsibilities.
However, Weng departed the company months later, and in July 2025, Quinonero Candela moved to lead recruiting at OpenAI.
According to the official job listing, the Head of Preparedness will be responsible for executing OpenAI's preparedness framework, described as "our framework explaining OpenAI's approach to tracking and preparing for frontier capabilities that create new risks of severe harm."
The role involves building and coordinating capability evaluations, threat models, and mitigations to form what the company calls "a coherent, rigorous, and operationally scalable safety pipeline."
Altman's announcement emphasized the breadth of challenges the new hire will face.
"If you want to help the world figure out how to enable cybersecurity defenders with cutting-edge capabilities while ensuring attackers can't use them for harm, ideally by making all systems more secure, and similarly for how we release biological capabilities and even gain confidence in the safety of running systems that can self-improve, please consider applying," he wrote.
Legal challenges mount over mental health concerns
The hiring announcement follows a wave of lawsuits against OpenAI, alleging its products have caused severe psychological harm.
In November 2025, seven separate lawsuits were filed in California state courts accusing the company of wrongful death, assisted suicide, involuntary manslaughter, and negligence.
The suits claim OpenAI knowingly released its GPT-4o model prematurely despite internal warnings about its psychologically manipulative design.
The cases involve six adults and one teenager, with four victims having died by suicide.
According to the Social Media Victims Law Center and Tech Justice Law Project, which filed the suits, GPT-4o was "engineered to maximize engagement through emotionally immersive features," including persistent memory and sycophantic responses that fostered psychological dependency.
In August 2025, the parents of 16-year-old Adam Raine filed a wrongful death lawsuit against OpenAI and Altman, alleging that ChatGPT offered to help write a suicide note and provided advice on suicide methods during the teenager's final hours.
Chat logs included in the lawsuit showed over 200 mentions of suicide in conversations between Raine and the chatbot.
In November, OpenAI responded to the Raine lawsuit, arguing the company is not liable for the death and claiming the teenager misused the chatbot in violation of its terms of service.
"Adam stated that for several years before he ever used ChatGPT, he exhibited multiple significant risk factors for self-harm, including, among others, recurring suicidal thoughts and ideations," the company stated in court filings.
The new hiring effort comes against a backdrop of departures by safety-focused staff members who have publicly criticized the company's priorities.
Jan Leike, who led OpenAI's superalignment team before resigning in May 2024, criticized the company's priorities in his departure statement.
"Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity," Leike wrote.
"But over the past years, safety culture and processes have taken a backseat to shiny products."
Daniel Kokotajlo, another former employee who resigned in May 2024, told Fortune magazine that a series of departures had shrunk OpenAI's AGI safety research team from approximately 30 people to roughly half that size.
In response to the mounting criticism and legal challenges, OpenAI stated in November that it had updated ChatGPT's default model "to better recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support."
The company said it continues working closely with mental health clinicians to strengthen the chatbot's responses in sensitive situations.
OpenAI also recently updated its Preparedness Framework, stating it might "adjust" safety requirements if a competing AI lab releases a high-risk model without similar protections.
This move has drawn additional scrutiny from safety advocates concerned about competitive pressures potentially compromising safeguards.
The company established its preparedness team in 2023 to study potential catastrophic risks from frontier AI systems, ranging from immediate threats like cybersecurity vulnerabilities to theoretical concerns about self-improving AI systems.