Speaking at the Fortune Brainstorm AI conference in San Francisco, Google Cloud CEO Thomas Kurian outlined how Google anticipated the two biggest challenges now facing the industry: the need for specialized computing chips and the looming energy crisis.
"We've worked on TPUs since 2014 … a long time before AI was fashionable," Kurian said, referring to Google's custom Tensor Processing Units.
The decision to invest early was driven by a fundamental belief that chip architecture could be radically redesigned to accelerate machine learning.
Energy emerges as the critical bottleneck
While much of the tech industry focused on processing speed and computational power, Google was working through a different problem: the electrical cost of running AI systems at scale. According to Kurian, energy constraints have become even more critical than silicon shortages.
"We also knew that the most problematic thing that was going to happen was going to be energy because energy and data centers were going to become a bottleneck alongside chips," Kurian told Fortune's Andrew Nusca at the conference.
The International Energy Agency estimates that some AI-focused data centers consume as much electricity as 100,000 homes, with the largest facilities under construction potentially using 20 times that amount.
Kurian emphasized that the challenge extends beyond simply sourcing more power: certain energy sources cannot handle the extreme demands of AI training workloads.
"If you're running a cluster for training … the spike that you have with that computation draws so much energy that you can't handle that from some forms of energy production," he explained.
In response, Google has implemented a three-pronged energy strategy: diversifying energy sources to ensure compatibility with AI's unique demands, maximizing efficiency by using AI to manage thermodynamic exchanges within data centers, and developing fundamental technologies to create energy in new forms.
In what Kurian described as recursive innovation, "the control systems that monitor the thermodynamics in our data centers are all governed by our AI platform."
Strategic partnership addresses infrastructure needs
Google's energy concerns have translated into concrete action. In December, the company announced a significant expansion of its partnership with NextEra Energy to develop multiple gigawatt-scale data center campuses across the United States.
The companies currently have approximately 3.5 gigawatts in operation or contracted and are developing their first three campuses with plans to identify additional locations.
"Our partnership with Google exemplifies this very singular moment when energy and technology are becoming inextricably intertwined," said NextEra Energy chairman and CEO John Ketchum in a statement announcing the collaboration.
"Together, we intend to build data center capacity and energy infrastructure at scale, advance cutting-edge technology, and reimagine how energy companies operate."
The partnership includes plans to restart the Duane Arnold nuclear power plant in Iowa under a 25-year power purchase agreement, along with new long-term contracts adding 600 megawatts of clean energy capacity to Oklahoma's electricity grid.
NextEra has set a target of building 15 gigawatts of new power generation for data center hubs by 2035, a figure Ketchum described as "fairly conservative."
Collaboration over competition in the chip market
Despite Google's substantial investment in developing its own TPU chips, Kurian pushed back against the narrative that custom silicon threatens established players like Nvidia.
He characterized the relationship as a partnership rather than a rivalry, noting that Google optimizes its Gemini models for Nvidia GPUs and recently collaborated to allow Gemini to run on Nvidia clusters while protecting Google's intellectual property.
"For those of us who have been working on AI infrastructure, there are many different kinds of chips and systems that are optimized for many different kinds of models," Kurian said.
He added that as the market grows, "we're creating opportunity for everybody."
Kurian attributed Google Cloud's position as what he called the "fastest growing" major cloud provider to its ability to offer a complete technology stack spanning energy, chips, infrastructure, models, tools, and applications.
However, he emphasized that this vertical integration doesn't create a closed system.
Citing data showing 95% of large companies use cloud technology from multiple providers, Kurian said Google's strategy allows customers to mix and match technologies, using Google's TPUs or Nvidia's GPUs, and Google's Gemini models alongside those from other providers.
Reality check for enterprise AI adoption
Despite the advanced infrastructure available, Kurian offered a cautionary note for businesses rushing to implement AI solutions.
He identified three primary reasons why enterprise AI projects fail: poor architectural design, "dirty" data, and inadequate testing for security and model compromise.
Additionally, many organizations struggle simply because "they didn't think about how to measure the return on investment on it."
The comments come as Google Cloud continues to expand its AI capabilities.
In April 2025, the company unveiled Ironwood, its seventh-generation TPU, which achieves 3,600 times better performance than its first publicly available TPU and is 29 times more energy efficient.