More than 200 prominent politicians, scientists, and Nobel Prize winners launched a sweeping global campaign Monday calling for binding international restrictions on artificial intelligence, warning that "AI's current trajectory presents unprecedented dangers" that require immediate government intervention.
The initiative, dubbed the "Global Call for AI Red Lines," was announced at the opening of the United Nations General Assembly's High-Level Week by Nobel Peace Prize laureate Maria Ressa. The coalition is demanding that world governments reach an international agreement on clear and verifiable AI restrictions by the end of 2026.
High-profile coalition unites against AI risks
The letter brings together an extraordinary coalition of influential voices, including 10 Nobel Prize winners and many leading artificial intelligence researchers.
Among the signatories are celebrated authors Stephen Fry and Yuval Noah Harari, as well as former heads of state including former Irish President Mary Robinson and former Colombian President Juan Manuel Santos, who won the Nobel Peace Prize in 2016.
Notably, Geoffrey Hinton and Yoshua Bengio, recipients of the prestigious Turing Award and two of the three so-called "godfathers of AI," also signed the open letter.
Hinton famously left his prestigious position at Google two years ago to raise awareness about the dangers of unchecked AI development.
"For thousands of years, humans have learned — sometimes the hard way — that powerful technologies can have dangerous as well as beneficial consequences," said Harari. "Humans must agree on clear red lines for AI before the technology reshapes society beyond our understanding and destroys the foundations of our humanity."
Specific prohibitions proposed
While the letter stops short of prescribing exactly where the red lines should fall, stating that government officials and scientists must negotiate those boundaries to secure international consensus, it does suggest several possible limits.
These include prohibiting lethal autonomous weapons, autonomous replication of AI systems and the use of AI in nuclear warfare.
"It is in our vital common interest to prevent AI from inflicting serious and potentially irreversible damages to humanity, and we should act accordingly," said Ahmet Üzümcü, former director general of the Organization for the Prohibition of Chemical Weapons, which was awarded the 2013 Nobel Peace Prize during Üzümcü's tenure.
The statement points to successful international agreements in other dangerous arenas as precedent, including the ban on biological weapons and the phase-out of ozone-depleting chlorofluorocarbons.
Corporate voluntary measures deemed insufficient
The letter comes amid growing criticism of voluntary AI safety measures adopted by major technology companies.
Recent research has shown that, on average, major AI companies fulfill only about half of their voluntary safety commitments, leading global leaders to accuse them of prioritizing profit and technical progress over societal welfare.
Leading American AI companies have signed various safety-focused agreements, including commitments with the White House in July 2023 and the Frontier AI Safety Commitments at the Seoul AI Summit in May 2024.
However, many observers have questioned the effectiveness and limitations of such voluntary collaboration.
Timing and global coordination
The letter's release coincides with the beginning of the U.N. General Assembly's High-Level Week, during which heads of state and government gather in New York City to debate policy priorities.
The timing is strategic, as the U.N. will launch its first diplomatic AI body Thursday in an event headlined by Spanish Prime Minister Pedro Sánchez and U.N. Secretary-General António Guterres.
The Global Call for AI Red Lines is organized by a trio of nonprofit organizations: the Center for Human-Compatible AI, based at the University of California, Berkeley; The Future Society; and the French Center for AI Safety.
Over 60 civil society organizations from around the world have also endorsed the letter, from the Demos think tank in the United Kingdom to the Beijing Institute of AI Safety and Governance.
Escalating AI concerns
The initiative represents an escalation of previous efforts to address AI risks. In March 2023, more than 1,000 technology researchers and leaders, including Elon Musk, called for a pause in the development of powerful AI systems.
Two months later, leaders of prominent AI labs, including OpenAI's Sam Altman, Anthropic's Dario Amodei and Google DeepMind's Demis Hassabis, signed a one-sentence statement advocating for treating AI's existential risk to humanity as seriously as threats posed by nuclear war and pandemics.
Notably, Altman, Amodei and Hassabis did not sign the latest letter, though prominent AI researchers like OpenAI co-founder Wojciech Zaremba and DeepMind scientist Ian Goodfellow did.