Key Takeaways
- OpenAI has entered a seven-year, $38 billion partnership with Amazon Web Services, the companies announced Monday, marking a significant shift in the AI company's cloud computing strategy and its first major agreement with the world's leading cloud infrastructure provider.
- The deal grants OpenAI immediate access to AWS compute resources, including hundreds of thousands of state-of-the-art Nvidia graphics processing units housed in Amazon EC2 UltraServers.
- The partnership enables OpenAI to scale its infrastructure through 2026 and beyond, supporting both ChatGPT operations and the development of next-generation AI models.
Strategic diversification from Microsoft
The AWS partnership represents OpenAI's most substantial move away from Microsoft, which served as the company's exclusive cloud provider until earlier this year.
Microsoft has invested approximately $13 billion in OpenAI since 2019, but the arrangement shifted in January when Microsoft's exclusive rights ended.
Last week, Microsoft's preferential status expired under newly negotiated commercial terms, allowing OpenAI greater freedom to partner with other cloud providers.
Despite the new AWS deal, OpenAI confirmed it will continue spending heavily with Microsoft, committing to purchase an additional $250 billion worth of Azure services.
The company has also struck cloud agreements with Oracle and Google, signaling a deliberate multi-cloud strategy to meet its massive computational demands.
"Scaling frontier AI requires massive, reliable compute," said Sam Altman, co-founder and CEO of OpenAI. "Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone."
Matt Garman, CEO of AWS, emphasized the significance of the partnership in a statement: "As OpenAI continues to push the boundaries of what's possible, AWS's best-in-class infrastructure will serve as a backbone for their AI ambitions. The breadth and immediate availability of optimized compute demonstrates why AWS is uniquely positioned to support OpenAI's vast AI workloads."
Infrastructure deployment and technical specifications
Under the agreement, AWS will provide OpenAI with compute capacity comprising hundreds of thousands of Nvidia GPUs, including the latest GB200 and GB300 models, with the ability to expand to tens of millions of CPUs for rapidly scaling agentic workloads.
All compute capacity is expected to be deployed by the end of 2026, with options for further expansion in 2027 and beyond.
The infrastructure features a sophisticated architectural design optimized for AI processing efficiency.
The Nvidia GPUs are clustered via Amazon EC2 UltraServers on the same network, enabling low-latency performance across interconnected systems.
The setup is designed to support various workloads, from serving inference for ChatGPT to training next-generation models.
OpenAI began utilizing AWS infrastructure immediately following the announcement.
The partnership builds on existing collaboration between the companies. Earlier this year, OpenAI's open-weight foundation models became available on Amazon Bedrock, AWS's managed service for accessing AI systems.
Thousands of customers, including Peloton, Thomson Reuters, and Comscore, already use OpenAI models on AWS for tasks ranging from coding and scientific analysis to agentic workflows.
Market reaction and broader implications
Amazon shares closed 4% higher on Monday at $256.01, marking a record closing high for the stock. The two-day gain of 14% represented Amazon's best two-day period since November 2022, adding roughly $140 billion to the company's market capitalization.
For Amazon, the partnership represents both validation of its AI infrastructure investments and a competitive edge against rivals Microsoft Azure and Google Cloud.
The deal is particularly significant given AWS's close relationship with OpenAI competitor Anthropic.
Amazon has invested billions in Anthropic and is currently constructing an $11 billion data center campus in New Carlisle, Indiana, designed exclusively for Anthropic workloads.
The AWS agreement is the latest in OpenAI's aggressive infrastructure expansion strategy.
The company has announced approximately $1.4 trillion worth of buildout agreements with partners, including Nvidia, Broadcom, Oracle, and Google, over recent months.
In September, OpenAI signed a letter of intent to deploy at least 10 gigawatts of Nvidia hardware in its data centers.
Some industry analysts have expressed concern about the scale of AI infrastructure spending.
OpenAI's cumulative commitments now total roughly $1.4 trillion, raising questions about potential overcapacity and whether sufficient electrical power and chip supply exist to fulfill such ambitious promises.
The company remains unprofitable despite its rapid growth; Microsoft's latest filings imply a roughly $12 billion quarterly loss at OpenAI.
For OpenAI, the multi-cloud approach and long-term capacity agreements signal operational maturity as the company prepares for a potential public offering.
Altman acknowledged in a recent livestream that an IPO is "the most likely path" given OpenAI's capital needs. CFO Sarah Friar has framed the recent corporate restructuring, which freed OpenAI from its nonprofit framework last week, as a necessary step toward going public.
The partnership announcement came during Amazon's third-quarter earnings period.