Key Takeaways
Microsoft has launched an ambitious effort to develop advanced artificial intelligence systems independent of its longtime partner OpenAI, establishing a new team dedicated to what the company calls "humanist superintelligence" that prioritizes human safety and control.
The initiative, announced Thursday by Microsoft AI CEO Mustafa Suleyman, marks a significant strategic shift for the tech giant following a renegotiated partnership agreement with OpenAI that now allows Microsoft to pursue its own path toward building highly advanced AI systems.
The newly formed MAI Superintelligence Team will be led by Suleyman alongside Microsoft AI Chief Scientist Karén Simonyan.
Microsoft gains freedom to pursue AI research independently
The new effort became possible after Microsoft and OpenAI signed a revised partnership agreement on October 28, 2025, that fundamentally restructured their relationship.
Under the new terms, Microsoft secured a 27% ownership stake in OpenAI valued at approximately $135 billion following OpenAI's conversion to a public benefit corporation.
Critically, the agreement removed previous restrictions that barred Microsoft from independently pursuing artificial general intelligence, the hypothetical benchmark where AI systems could match or exceed human-level performance across a broad range of tasks.
Microsoft had been contractually prevented from such research under the original 2019 partnership deal.
"We have a best-of-both environment, where we're free to pursue our own superintelligence and also work closely with them," Suleyman told Fortune in an interview this week.
The restructured partnership maintains Microsoft's access to OpenAI's technology through 2032, including any models that achieve AGI status. However, OpenAI can now work with other cloud providers and jointly develop products with third parties, while Microsoft gains the explicit freedom to develop its own foundational AI models.
Focus on 'humanist' approach contrasts with industry rivals
Microsoft's vision for superintelligence deliberately contrasts with approaches taken by competitors, including OpenAI, Meta, Google, and Anthropic.
While those companies emphasize rapid advancement toward more powerful AI systems, Suleyman outlined a philosophy centered on ensuring AI remains under human control.
"The project of superintelligence has to be about designing an AI which is subservient to humans, and one that keeps humans at the top of the food chain," Suleyman told Axios. "It's remarkable that such a statement even needs to be made."
In his blog post announcing the initiative, Suleyman wrote that the team would pursue what he termed "Humanist Superintelligence" rather than "an unbounded and unlimited entity with high degrees of autonomy."
"We are doing this to solve real concrete problems and do it in such a way that it remains grounded and controllable," Suleyman wrote. "We are not building an ill-defined and ethereal superintelligence; we are building a practical technology explicitly designed only to serve humanity."
The Microsoft AI chief has been particularly vocal about rejecting the anthropomorphization of AI systems.
Speaking to The Wall Street Journal, Suleyman criticized the notion of treating AI as though it possesses human-like feelings or consciousness.
"AI is going to become more human-like, but it won't have the property of experiencing suffering or pain itself, and therefore we shouldn't over-empathize with it," Suleyman told the Journal.
In a separate interview with CNBC, he stated: "Our physical experience of pain is something that makes us very sad and feel terrible, but the AI doesn't feel sad when it experiences 'pain.' It's really just creating the perception, the seeming narrative of experience and of itself and of consciousness, but that is not what it's actually experiencing."
Practical applications in healthcare and clean energy
Rather than pursuing superintelligence as an abstract technological goal, Microsoft's team will focus on specific practical applications. Suleyman outlined three primary areas of focus in his announcement.
The initiative will develop AI companions for education and other domains, offering what Suleyman described as expert-level support for users.
In healthcare, the team aims to achieve "expert-level performance at the full range of diagnostics, alongside highly capable planning and prediction in operational clinical settings." The effort will also pursue advances in renewable energy production and materials science.
These focus areas build on Microsoft's previous AI work, including MAI-DxO, a medical diagnostic system introduced in July 2025 that reportedly solved 85.5% of test cases, compared with roughly 20% for experienced physicians.
Speaking to reporters about Microsoft's broader AI strategy following the OpenAI partnership restructuring, Suleyman emphasized the company's focus on "software tools for the workplace, healthcare diagnostics, and to play a part in the science of developing sources of clean, renewable energy."
Balancing safety with competitive pressure
Suleyman acknowledged the tension between prioritizing safety measures and competing in a rapidly evolving AI landscape where other companies may move faster with fewer guardrails.
The challenge comes as the regulatory environment shifts away from emphasizing AI safety concerns.
"It's a difficult one to manage," Suleyman told Axios when asked about the tradeoffs between safety and performance. He noted that certain advanced techniques like recursive self-improvement, where AI systems learn to enhance their own capabilities, bring both risks and opportunities for rapid advancement.
"The reality is that performance gains will come from recursive self-improvement, and we are also pursuing RSI, and we should," Suleyman said.
However, he emphasized that Microsoft would not pursue advancement "at any cost," telling Semafor: "We cannot just accelerate at all costs. That would just be a crazy suicide mission."
The announcement positions Microsoft as taking a measured approach to superintelligence development while major competitors pursue similar ambitious goals.
Meta launched its own Meta Superintelligence Labs in July 2025, though the unit was restructured just 50 days later amid personnel departures and organizational changes.
Former OpenAI chief scientist Ilya Sutskever founded Safe Superintelligence Inc. in 2024 with a similar focus on controllable advanced AI.
Suleyman, who co-founded AI research lab DeepMind before it was acquired by Google in 2014, joined Microsoft in March 2024 after leading AI startup Inflection AI.
His appointment signaled Microsoft's intention to develop greater independence in AI capabilities beyond its OpenAI partnership.
The MAI Superintelligence Team will require significant investments in computing infrastructure and AI chips to train advanced models, though Suleyman declined to specify the size of the team's resources.