Over 800 Public Figures Demand Ban on AI Superintelligence Development

Key Points
- Over 800 public figures, including Steve Wozniak and Prince Harry, signed a statement demanding a ban on AI superintelligence development.
- The appeal was issued by the Future of Life Institute, calling for a prohibition until there is broad scientific consensus on safety and public buy‑in.
- Signers span diverse sectors, including Geoffrey Hinton, Steve Bannon, Mike Mullen and Will.i.am.
- The group warns that AI progress is outpacing public understanding and poses grave risks to humanity.
- Leading AI executives, such as Mark Zuckerberg, Elon Musk and Sam Altman, have expressed optimism about superintelligence but did not sign the statement.
- Earlier, more than 200 researchers and officials called for a “red line” on AI risks related to unemployment, climate change and human‑rights concerns.

More than 800 public figures, including technology pioneer Steve Wozniak, Prince Harry, AI researchers, former military leaders and CEOs, have signed a statement urging a prohibition on work that could lead to artificial superintelligence. The appeal, issued by the Future of Life Institute, calls for a ban until there is broad scientific consensus that such systems can be built safely and controllably, and until there is strong public buy-in. Signers span a wide political and professional spectrum, including Geoffrey Hinton, Steve Bannon, Mike Mullen and Will.i.am. The group warns that AI progress is outpacing public understanding and that unchecked development poses grave risks to humanity.
Broad Coalition Calls for a Moratorium on Superintelligent AI
In a coordinated effort reported by the Financial Times, more than 800 public figures have signed a statement demanding a ban on the development of artificial superintelligence. The signatories are a diverse mix of individuals from technology, entertainment, military and political backgrounds, including Steve Wozniak, Prince Harry, AI researcher and Nobel laureate Geoffrey Hinton, former Trump aide Steve Bannon, former Chairman of the Joint Chiefs of Staff Mike Mullen and musician Will.i.am.
The appeal originates from the Future of Life Institute, an organization that has warned that AI advancements are moving faster than the public can comprehend. Executive director Anthony Aguirre emphasized that the public has not been asked whether the current trajectory of AI development aligns with societal desires.
Specific Demands and Rationale
The statement calls for a prohibition on the creation of superintelligent systems until a broad scientific consensus confirms that such technology can be built safely and controllably, and until there is strong public buy-in. The signers argue that artificial general intelligence (AGI), meaning machines able to reason and perform tasks at a human level, and superintelligence, meaning systems that would outperform even the best human experts, pose grave risks to humanity if not properly managed.
Industry Context and Reactions
Despite the call for a moratorium, leading AI companies continue to invest heavily in new models and the infrastructure needed to run them. The statement notes that firms such as OpenAI are pouring billions into research and data centers. High-profile tech leaders have expressed optimism about the timeline for superintelligence: Meta CEO Mark Zuckerberg described it as “in sight,” xAI founder Elon Musk said it “is happening in real time,” and OpenAI CEO Sam Altman expects it could arrive by 2030 at the latest. None of these leaders signed the statement.
Parallel Calls for Regulation
The initiative follows earlier appeals from the AI community. More than 200 researchers and public officials, including ten Nobel Prize winners, released an urgent call for a “red line” against AI risks, focusing on immediate concerns such as mass unemployment, climate impacts and human‑rights abuses. While that letter addressed existing challenges rather than superintelligence, both efforts underscore growing unease about the pace and direction of AI development.
Implications and Next Steps
The coalition’s demand highlights a widening gap between rapid AI advancements and societal readiness to address their consequences. By urging a pause until safety and governance frameworks are solidified, the signatories aim to ensure that future AI systems align with public values and do not pose uncontrolled threats. The statement adds pressure on policymakers, industry leaders and the broader public to engage in a transparent dialogue about the future of intelligent machines.