Over 200 prominent figures, including former heads of state, diplomats, Nobel laureates, and leaders in artificial intelligence (AI) and science, issued a joint call on Monday for the establishment of international “red lines” governing AI behavior, including prohibitions on AI impersonating humans or self-replicating.
This collective, along with more than 70 organizations focused on AI, has endorsed the Global Call for AI Red Lines initiative, urging governments to finalize an international political agreement on these boundaries by the conclusion of 2026. Notable signatories include Geoffrey Hinton, a prominent computer scientist, Wojciech Zaremba, co-founder of OpenAI, Jason Clinton from Anthropic, and Ian Goodfellow of Google DeepMind.
“The aim is not to respond after a catastrophe occurs, but to avert large-scale, potentially irreversible dangers beforehand,” explained Charbel-Raphaël Segerie, the executive director of the French Center for AI Safety (CeSIA), during a briefing with journalists on Monday.
He emphasized, “If countries cannot yet come to a consensus on how to manage AI, they should at least reach an agreement on what AI must never be allowed to do.”
This declaration precedes the high-level week of the 80th United Nations General Assembly in New York and is spearheaded by CeSIA, the Future Society, and the Center for Human-Compatible Artificial Intelligence at UC Berkeley.
Nobel Peace Prize recipient Maria Ressa referenced the initiative during her opening remarks at the assembly, underscoring the need to “eliminate Big Tech’s impunity through global accountability.”
There are existing regional rules concerning AI. The European Union’s AI Act, for instance, prohibits certain uses of AI deemed “unacceptable.” An agreement between the US and China likewise stipulates that nuclear weapons should remain under human, not AI, control. A global consensus, however, has yet to be reached.
In the long run, more stringent measures than mere “voluntary pledges” will be needed, according to Niki Iliadis, director of global governance for AI at The Future Society. The responsible scaling policies that AI companies have set for themselves fall short of real enforcement. She advocates for the creation of an independent global body with enforcement powers to define and monitor these red lines.
“Companies can delay developing AGI until they determine how to make it safe,” suggested Stuart Russell, a UC Berkeley professor of computer science and a well-regarded AI researcher. “Just as nuclear power developers did not build plants until they understood how to prevent catastrophic failures, the AI industry must pursue a safer technological path from the outset, and we need assurance that it is doing so.”
Russell contended that red lines do not stifle economic growth or innovation, countering claims made by some critics of AI regulation. “You can advance AI for economic purposes without resorting to AGI that we cannot control,” he stated. “The idea that one must accept potentially world-ending AGI in order to achieve things like medical breakthroughs is simply flawed.”