Hundreds of Prominent Figures Call for a Ban on AI Superintelligence Development

Key Points
- Over 700 high‑profile individuals sign a statement demanding a halt to AI superintelligence development.
- Signatories include leading AI researchers, former policymakers, and well‑known entertainers.
- The petition warns of threats to freedom, national security, and human survival without proper oversight.
- A Future of Life Institute poll shows only 5% support fast, unregulated AI progress.
- 64% of respondents say superintelligent AI should wait for proven safety, and 73% want strong regulation.
- The petition has gathered roughly 27,700 signatures in total, and the count continues to grow.
- Calls echo earlier warnings from tech leaders like Elon Musk about the dangers of unchecked AI.

Over 700 high‑profile individuals, including AI pioneers and celebrities, have signed a statement demanding a halt to the creation of artificial superintelligence until it can be proven safe. The petition warns that unchecked AI could threaten freedom, national security, and even human survival. A recent poll shows a clear public desire for stricter regulation, with a majority opposing rapid, unregulated AI progress. The movement reflects growing unease about the pace of AI advances and calls for stronger oversight before further development.

Widespread Call for a Moratorium on Superintelligent AI
More than 700 prominent public figures have signed a statement urging a prohibition on the development of artificial superintelligence until robust safety measures and public consensus are in place. The signatories span a range of backgrounds, from leading AI researchers, including several often called the "godfathers of AI", to former policymakers and well-known entertainers. Their collective message is that creating AI systems capable of outperforming humans at nearly all cognitive tasks, absent adequate oversight, poses serious risks.
The petition highlights several core concerns: potential loss of individual freedoms, heightened national security threats, and the existential danger of human extinction. These worries echo earlier warnings from technology leaders such as Elon Musk, who has previously likened the rush toward advanced AI to "summoning a demon." The signatories argue that without clear, enforceable safeguards, the rapid pace of AI development could outstrip humanity’s ability to control it.

Public Opinion Mirrors the Call for Regulation
A recent national poll conducted by the Future of Life Institute reveals that public sentiment aligns closely with the petition's stance. Only 5% of respondents support the current fast, unregulated approach to AI advancement. By contrast, a substantial majority, 64%, believe that superintelligent AI should not be pursued until its safety can be assured, and 73% demand robust regulatory frameworks to govern advanced AI technologies.
These figures underscore a growing demand for transparency and oversight in the AI sector, suggesting that both experts and the broader public are wary of unchecked progress.

Growing Momentum and Ongoing Signature Drive
The petition’s momentum continues to build, with the signature count reported at roughly 27,700. The expanding list of supporters reflects rising anxiety about the trajectory of AI research and a desire for deliberate, cautious advancement. The signatories’ call to pause superintelligence development aims to foster a more measured approach, ensuring that future AI systems can be integrated safely and responsibly into society.
In summary, the coalition of scientists, policymakers, and cultural figures is urging a temporary halt to the pursuit of AI superintelligence until comprehensive safety protocols and broad public agreement are secured. Their appeal is bolstered by public polling that reveals widespread concern over the rapid, unregulated evolution of AI, highlighting the urgent need for stronger governance and oversight.