US, UK, A Dozen More Countries Sign AI Safety Agreement: All You Need To Know
The consensus among the 18 participating countries emphasises the imperative for companies engaged in AI design and utilisation to ensure the safety of customers and the broader public.
In a landmark move, the United States, the United Kingdom, and more than a dozen other nations on Sunday unveiled what a senior US official characterised as the first detailed international agreement on keeping artificial intelligence (AI) safe from illicit use. The agreement advocates building security into AI systems from the design stage onwards. In the 20-page document, the 18 participating countries agreed that companies designing and using AI must develop and deploy it in a way that keeps customers and the broader public safe from misuse. The agreement is non-binding and carries mostly general recommendations, such as monitoring AI systems for abuse, protecting data against tampering, and vetting software suppliers.
Jen Easterly, the director of the US Cybersecurity and Infrastructure Security Agency, said it was significant that so many countries had endorsed the idea that AI systems must put safety first. The guidelines, she noted, mark a departure from a sole focus on novel features and rapid market deployment: "This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs." She added that the agreement reflects a consensus that security should be a primary consideration during the design phase.
The agreement marks the latest initiative in a series of global efforts by governments to influence the trajectory of AI development. Despite being non-binding and largely advisory, the agreement involves influential nations such as the United States, the United Kingdom, Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria, and Singapore.
Addressing concerns about the potential misuse of AI technology, the framework focuses on preventing unauthorised access by hackers and suggests measures such as subjecting models to rigorous security testing before release. However, the guidelines do not delve into contentious issues surrounding the ethical use of AI or the ethical sourcing of data for these models.
The proliferation of AI has raised numerous concerns, including fears that it could disrupt democratic processes, facilitate fraud, and lead to significant job losses. While Europe has taken the lead in formulating regulations around AI, the United States has faced challenges in enacting comprehensive legislation, with a polarised Congress making limited progress.
The Biden administration has been actively advocating for AI regulation, seeking to mitigate risks to consumers, workers, and minority groups, while also bolstering national security. In October, the White House issued a new executive order aimed at addressing AI risks and promoting responsible AI development.