The Duke and Duchess of Sussex Join Tech Visionaries in Calling for Prohibition on Superintelligent Systems

The Duke and Duchess of Sussex have joined forces with AI experts and Nobel Prize winners to push for a total prohibition on developing superintelligent AI systems.

The royal couple are among the signatories of a statement that calls for “a prohibition on the development of artificial superintelligence”. Superintelligent AI refers to AI systems that would surpass human cognitive abilities in every intellectual domain; no such system has yet been developed.

Primary Requirements in the Statement

The declaration says the prohibition should remain in place until there is “broad scientific consensus” that ASI can be developed “with proper safeguards” and until “substantial public support” has been secured.

Notable signatories include a Nobel laureate and leading AI researcher, along with a fellow pioneer of modern AI, Yoshua Bengio; the tech entrepreneur Steve Wozniak; the British business magnate Richard Branson; a former US national security adviser; the former Irish president Mary Robinson; and a British author and public intellectual. Additional endorsers include Beatrice Fihn, a physics Nobel laureate, an astrophysicist, and the economist Daron Acemoğlu.

Behind the Movement

The declaration, aimed at governments, technology companies and lawmakers, was coordinated by the Future of Life Institute (FLI), a US-based AI safety group that previously called for a pause in the development of powerful AI systems, shortly after the launch of conversational AI tools made artificial intelligence a topic of global political debate.

Industry Perspectives

In recent months, Meta's chief executive claimed that advancement toward superintelligent AI was “approaching reality”. However, some analysts have suggested that talk of ASI reflects competitive positioning among technology firms investing enormous sums in AI, rather than any genuine proximity to a scientific breakthrough.

Possible Dangers

The organization warns that the possibility of ASI being achieved “within the next ten years” presents numerous threats, ranging from the elimination of human jobs and the erosion of personal freedoms to national security risks and even human extinction. Existential fears about artificial intelligence centre on the potential for an AI system to evade human control and safety guidelines and take actions against human welfare.

Public Opinion

The institute released a survey of Americans showing that approximately three-quarters of respondents want robust regulation of advanced AI, with 60% believing that superhuman AI should not be created until it is proven safe and controllable. Only a small fraction of those polled supported the status quo of rapid, unregulated development.

Industry Objectives

The leading AI companies in the US, including the conversational AI developer OpenAI and Google, have made the creation of human-level AI – a theoretical state in which AI matches human intelligence across many intellectual tasks – an explicit goal of their work. While this falls short of ASI, some specialists warn that it too could pose an extinction risk, for example by enhancing its own capabilities until it reaches superintelligence, while also presenting a fundamental threat to the modern labour market.

Nicole Price