A statement released by the Center for AI Safety, signed by hundreds of executives and academics, highlights the need to treat artificial intelligence (AI) as a societal risk on par with pandemics and nuclear war. The signatories include notable figures such as the chief executives of Google's DeepMind, OpenAI (the developer of ChatGPT), and the AI startup Anthropic. Concern over the regulation and risks of AI has been growing, with global leaders and industry experts emphasising its potential impact on job markets and public health, as well as the weaponisation of disinformation, discrimination, and impersonation.


Geoffrey Hinton, a prominent figure in AI who recently resigned from Google citing the "existential risk" posed by the technology, also signed the statement. The UK government, too, acknowledged the risks of AI last week, a shift in perspective given that it had published an AI white paper just two months earlier. According to Michael Osborne, a professor of machine learning at the University of Oxford and co-founder of Mind Foundry, the broad range of signatories and the core concern of existential risk make this statement particularly significant.


The success of ChatGPT, launched in November 2022, has further fuelled calls to address the threats posed by AI. As the technology advances faster than many predicted, there is a growing realisation that it could exacerbate existing risks, such as engineered pandemics and military arms races, and introduce novel existential threats of its own. Osborne warns that, because our understanding of AI is so limited, it could become a "new competing organism", acting like an invasive species and jeopardising humanity's survival.


In summary, the statement released by the Center for AI Safety underscores the need to make mitigating the risks of AI a global priority. Concern over AI's impact on society, combined with its rapid advancement and potential existential risks, has prompted leading technology experts to call for regulation and safeguards.