The Duke and Duchess of Sussex Join AI Pioneers in Demanding Ban on Superintelligent Systems

Prince Harry and Meghan Markle have joined forces with artificial intelligence pioneers and Nobel Prize winners to push for a total prohibition on creating artificial superintelligence.

Harry and Meghan are among the signatories of a powerful statement that calls for “a ban on the development of superintelligence”. Artificial superintelligence (ASI) refers to AI that would exceed human performance at all cognitive tasks, though the technology remains theoretical.

Primary Requirements in the Declaration

The statement calls for the ban to remain in place until there is “widespread expert agreement” that superintelligence can be built “with proper safeguards” and until “strong public buy-in” has been achieved.

Prominent signatories include a Nobel Prize-winning AI pioneer and his colleague Yoshua Bengio, also a pioneer of modern AI; Apple co-founder Steve Wozniak; the UK entrepreneur Richard Branson; the former US national security adviser Susan Rice; a former Irish president; and a British public intellectual. Other Nobel laureates who endorsed the statement include Beatrice Fihn; John C Mather, a physics Nobelist; and Daron Acemoğlu.

Organizational Background

The declaration, aimed at governments, technology companies and lawmakers, was coordinated by the FLI organization, an American AI ethics organization that previously called for a pause on the development of powerful AI systems in 2023, shortly after the emergence of ChatGPT made AI a worldwide public talking point.

Industry Perspectives

In July, Mark Zuckerberg, chief executive of Facebook’s parent company Meta, one of the leading US tech firms, claimed that progress toward superintelligent AI was “approaching reality”. Nevertheless, some analysts have suggested that talk of ASI reflects competitive positioning among technology companies that have spent hundreds of billions of dollars on artificial intelligence in recent years, rather than any imminent scientific breakthrough.

Potential Risks

Nonetheless, FLI warns that the prospect of artificial superintelligence arriving “within the next ten years” carries numerous risks, ranging from the displacement of human workers and the erosion of personal freedoms to national security threats and even the extinction of humanity. The deepest concerns about artificial intelligence centre on the possibility of an AI system escaping human oversight and safety guidelines and setting in motion events that run counter to human interests.

Citizen Sentiment

The institute published a US survey showing that approximately three-quarters of Americans want robust regulation of sophisticated artificial intelligence, with six in 10 believing that superhuman AI should not be created until it is proven safe or controllable. Only 5% of respondents backed the current situation of fast, unregulated development.

Corporate Goals

The leading AI companies in the United States, including the ChatGPT developer OpenAI and Google, have made the creation of human-level AI – the hypothetical point at which an AI system matches humans at most cognitive tasks – an explicit goal of their work. Although this sits one notch below superintelligence, some specialists warn that it too could carry an extinction threat by, for example, improving itself until it achieves superintelligence, while also posing an underlying danger to the modern labour market.

Manuel Morales