OpenAI and Broadcom announced a strategic multiyear collaboration on Monday to co-develop and deploy 10 gigawatts of custom artificial intelligence accelerators, marking one of the AI industry's most ambitious infrastructure partnerships to date.
Under the agreement, OpenAI will design the AI accelerators and systems while Broadcom handles development and deployment.
The partnership aims to add substantial AI data center capacity, with rack deployments scheduled to begin in the second half of 2026 and continue through the end of 2029, according to a joint statement from both companies.
The announcement sent Broadcom shares soaring, with the stock climbing approximately 9% following the news. The partnership represents a significant validation of Broadcom's position in the rapidly expanding AI infrastructure market.
Custom chips to power next-generation AI
The collaboration allows OpenAI to design hardware tailored specifically to its needs, embedding insights gained from developing frontier AI models directly into the chips themselves.
This vertical integration strategy aims to improve performance while reducing the substantial costs associated with running large-scale AI operations.
"Our collaboration with Broadcom will power breakthroughs in AI and bring the technology's full potential closer to reality," said Greg Brockman, OpenAI co-founder and President. "By building our own chip, we can embed what we've learned from creating frontier models and products directly into the hardware, unlocking new levels of capability and intelligence."
Hock Tan, President and CEO of Broadcom, emphasized the partnership's significance.
"OpenAI has been in the forefront of the AI revolution since the ChatGPT moment, and we are thrilled to co-develop and deploy 10 gigawatts of next-generation accelerators and network systems to pave the way for the future of AI," he stated.
The systems will be scaled entirely using Ethernet and other connectivity solutions from Broadcom, with deployments planned across OpenAI's facilities and partner data centers to meet surging global demand for AI capabilities.
Part of broader infrastructure push
The Broadcom deal represents just one piece of OpenAI's massive computing infrastructure expansion.
The companies have been collaborating for approximately 18 months on developing chips optimized for AI inference—the computational process of responding to user queries.
OpenAI CEO Sam Altman characterized the partnership as providing "a gigantic amount of computing infrastructure to serve the needs of the world to use advanced intelligence."
He indicated that the custom chip approach will deliver "huge efficiency gains" leading to "much better performance, faster models, cheaper models."
According to industry estimates, a 1-gigawatt data center costs approximately $50 billion to build, with roughly $35 billion allocated to chips based on current pricing.
Reports suggest OpenAI's deal with Broadcom could cost between $350 billion and $500 billion over the partnership's duration.
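The reported range follows directly from those per-gigawatt figures. As a back-of-the-envelope check (using the industry estimates cited above, which are reported figures rather than confirmed pricing):

```python
# Rough cost estimate for the 10-gigawatt Broadcom deal, based on the
# industry figures cited above (reported estimates, not confirmed pricing).
GW_COMMITTED = 10                 # gigawatts in the OpenAI-Broadcom deal
COST_PER_GW_TOTAL = 50e9          # ~$50 billion to build a 1 GW data center
CHIP_SHARE_PER_GW = 35e9          # ~$35 billion of that goes to chips

chips_only = GW_COMMITTED * CHIP_SHARE_PER_GW      # chips alone
full_buildout = GW_COMMITTED * COST_PER_GW_TOTAL   # complete facilities

print(f"Chips only:    ${chips_only / 1e9:.0f}B")   # $350B
print(f"Full buildout: ${full_buildout / 1e9:.0f}B")  # $500B
```

The chips-only figure lands at the low end of the reported range and the full buildout at the high end, which is consistent with the $350 billion to $500 billion span.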
Massive compute commitments across multiple vendors
OpenAI has announced approximately 33 gigawatts of computing commitments over recent weeks across partnerships with multiple semiconductor companies.
In September, the company committed to a 10-gigawatt order from Nvidia, followed weeks later by a 6-gigawatt agreement with AMD.
Combined with the Broadcom deal, these partnerships represent an unprecedented scale of AI infrastructure investment.
For context, OpenAI currently operates on just over 2 gigawatts of compute capacity—enough to scale ChatGPT to its current level of more than 800 million weekly active users, to develop and launch its video creation service Sora, and to conduct extensive AI research.
Altman suggested that even the massive 10-gigawatt commitment with Broadcom is only the beginning. "Even though it's vastly more than the world has today, we expect that very high-quality intelligence delivered very fast and at a very low price—the world will absorb it super fast and just find incredible new things to use it for," he explained.
Charlie Kawwas, Ph.D., President of the Semiconductor Solutions Group for Broadcom, noted that the partnership "continues to set new industry benchmarks for the design and deployment of open, scalable, and power-efficient AI clusters."