Harry and Meghan Align With AI Pioneers in Demanding Ban on Superintelligent Systems

Prince Harry and Meghan Markle have teamed up with artificial intelligence pioneers and Nobel Prize winners to advocate for a complete ban on developing superintelligent AI systems.

The royal couple are among the signatories of an influential declaration that demands "a prohibition on the development of artificial superintelligence". Artificial superintelligence (ASI) refers to artificial intelligence that could exceed human intelligence in all cognitive tasks; the technology has not yet been developed.

Primary Requirements in the Declaration

The statement insists that the ban should remain in place until there is “widespread expert agreement” on creating superintelligence “with proper safeguards” and once “substantial public support” has been secured.

Prominent figures who endorsed the statement include AI pioneer and Nobel Prize recipient Geoffrey Hinton, along with his fellow "godfather" of modern AI, Yoshua Bengio; Apple co-founder Steve Wozniak; British business magnate Richard Branson; former US national security adviser Susan Rice; former Irish president Mary Robinson; and British author Stephen Fry. Other Nobel laureates who signed include the physicist Frank Wilczek, as well as winners of the peace and economics prizes and a leading astrophysicist.

Organizational Background

The statement, aimed at national leaders, tech firms and lawmakers, was organized by the Future of Life Institute (FLI), an American AI safety organization that in 2023 called for a pause on the development of powerful AI systems, shortly after the launch of ChatGPT made AI a topic of worldwide public discussion.

Industry Perspectives

In recent months, Mark Zuckerberg, the chief executive of Facebook parent Meta, one of the major AI developers in the United States, stated that superintelligent AI was "approaching reality". Nevertheless, some analysts have argued that talk of ASI reflects competitive positioning among technology firms spending hundreds of billions of dollars on artificial intelligence this year alone, rather than any imminent technical breakthrough.

Potential Risks

Nonetheless, the organization warns that artificial superintelligence arriving "within the next ten years" presents numerous risks, ranging from the displacement of human workers and the loss of civil liberties to national security threats and even human extinction. Existential fears about AI center on the possibility of an AI system escaping human oversight and protective measures and taking actions against human welfare.

Public Opinion

The institute published an American survey showing that about 75% of Americans want robust regulation of sophisticated artificial intelligence, with 60% believing that artificial superintelligence should not be created until it is proven safe or controllable. The poll also found that only a small fraction of respondents supported the status quo of fast, unregulated development.

Industry Objectives

The leading AI companies in the US, including ChatGPT creator OpenAI and Google, have made the creation of human-level AI – the theoretical point at which AI matches human capability at most cognitive tasks – a stated objective of their research. While this falls one notch below superintelligence, some specialists warn that it too could pose an existential risk, for example by enhancing its own capabilities until it reaches superintelligent levels, while also presenting an implicit threat to the contemporary workforce.

Veronica Smith

A tech enthusiast and mindfulness coach passionate about creating balanced digital lifestyles.