Prince Harry and Meghan Markle have added their voices to an unprecedented global coalition demanding a halt to the development of artificial intelligence systems that could surpass human capabilities, joining more than 800 scientists, policymakers, and public figures in what organizers describe as a mainstream awakening to AI risks.
The statement, released Wednesday by the nonprofit Future of Life Institute, calls for an immediate prohibition on developing superintelligent AI until there is widespread scientific agreement that such systems can be built safely and controlled reliably, along with strong public support for their development.
Diverse coalition spans political spectrum
The 30-word statement represents one of the most politically diverse groups ever assembled around AI concerns. Signatories include AI pioneers Geoffrey Hinton, a Nobel laureate, and Yoshua Bengio, often called one of the godfathers of modern AI.
They are joined by Apple co-founder Steve Wozniak, economist Daron Acemoglu, former National Security Adviser Susan Rice, conservative commentators Steve Bannon and Glenn Beck, entrepreneur Richard Branson, rapper will.i.am, and actor Joseph Gordon-Levitt.
"We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in," the statement reads.
In a personal note accompanying his signature, Prince Harry wrote: "The future of AI should serve humanity, not replace it. I believe the true test of progress will be not how fast we move, but how wisely we steer. There is no second chance."
Stuart Russell, an AI pioneer and computer science professor at the University of California, Berkeley, clarified that "this is not a ban or even a moratorium in the usual sense," suggesting the statement seeks to establish conditions for safe development rather than halt all AI research.
Target: tech giants racing toward superintelligence
The statement directly addresses major technology companies, including Google, OpenAI, and Meta Platforms, which are investing hundreds of billions of dollars annually in developing increasingly powerful AI systems.
Several tech executives have publicly stated goals of achieving superintelligence within the next decade.
Meta CEO Mark Zuckerberg said in July that superintelligence was "now in sight." OpenAI CEO Sam Altman stated last month that he would be surprised if superintelligence didn't arrive by 2030.
The companies targeted by the statement did not immediately respond to requests for comment.
Max Tegmark, president of the Future of Life Institute and a Massachusetts Institute of Technology professor, explained the coalition's unusual breadth: "What unites all of these people across the political spectrum is that they're actually all humans, not machines, and therefore really care very deeply about that the future should be one where the machines work for us, not some kind of new digital overlords."
Tegmark added: "I really empathize with them, frankly, because they're so stuck in this race to the bottom that they just feel an irresistible pressure to keep going and not get overtaken by the other guy. I think that's why it's so important to stigmatize the race to superintelligence, to the point where the U.S. government just steps in."
Growing public concern over AI development
The statement accompanies polling commissioned by the Future of Life Institute showing that 73 percent of U.S. adults want robust regulation on advanced AI development.
The survey of 2,000 adults conducted in September found that 64 percent want an immediate pause on advanced AI development.
Other recent polls have shown bipartisan support for AI oversight. A Gallup poll found that 88 percent of Democrats and 79 percent of Republicans and independents favored maintaining rules around AI for safety and security.
The statement's preamble notes that while AI tools may bring health and prosperity, "many leading AI companies have the stated goal of building superintelligence in the coming decade that can significantly outperform all humans on essentially all cognitive tasks. This has raised concerns, ranging from human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction."
Questions about the path forward
Anthony Aguirre, Executive Director of the Future of Life Institute, emphasized that the statement aims to force a broader conversation about technology development choices.
"We've, at some level, had this path chosen for us by the AI companies and founders and the economic system that's driving them, but no one's really asked almost anybody else, 'Is this what we want?'" he said in an interview with NBC News.
"It's been quite surprising to me that there has been less outright discussion of 'Do we want these things? Do we want human-replacing AI systems?'" Aguirre continued. "It's kind of taken as: Well, this is where it's going, so buckle up, and we'll just have to deal with the consequences. But I don't think that's how it actually is. We have many choices as to how we develop technologies, including this one."
The statement isn't aimed at any specific organization or government but seeks to engage major AI companies along with politicians in the United States, China, and other nations in determining the future direction of AI development.
The Future of Life Institute previously organized a March 2023 letter calling for a six-month pause on training AI models more powerful than GPT-4.