US President Joe Biden on Monday issued an executive order aimed at mitigating the risks that artificial intelligence (AI) poses to consumers, workers, minority groups, and national security. The order, expected to be signed at the White House, requires developers of AI systems that pose risks to US national security, the economy, public health, or safety to share the results of safety tests with the US government before releasing their technology to the public. The requirement draws on the authority of the Defense Production Act.


Furthermore, the order directs government agencies to set standards for these safety tests and to address related chemical, biological, radiological, nuclear, and cybersecurity risks.


This move represents the latest step by the Biden administration to create a regulatory framework around AI, given the rapid advancements and increasing popularity of the technology, which has so far operated in a relatively unregulated environment. The order's reception among industry and trade groups has been mixed.


Bradley Tusk, CEO of Tusk Ventures, a venture capital firm with investments in the tech and AI sectors, welcomed the move but cautioned that tech companies may hesitate to share proprietary data with the government for fear it could be passed to their competitors. He emphasised the need for a robust enforcement mechanism.


NetChoice, a national trade association representing major tech platforms, criticised the executive order, labelling it as an "AI Red Tape Wishlist" that could potentially hinder the entry of new companies and competitors into the marketplace while expanding the federal government's influence over American innovation.


This executive order goes beyond voluntary commitments made earlier this year by prominent AI companies such as OpenAI, Alphabet, and Meta Platforms, which pledged to watermark AI-generated content to improve the technology's safety.


As part of the order, the Commerce Department will develop guidance for content authentication and watermarking to label AI-generated material, so that government communications are clearly identified, the White House said in a release.


In a separate development, the Group of Seven (G7) industrialised nations is set to establish a code of conduct for companies engaged in the development of advanced AI systems, according to a G7 document.


A senior administration official, briefing reporters ahead of the order's official release, countered criticism that Europe had been more assertive than the United States in regulating AI. The official affirmed the White House's view that legislative action by Congress is necessary for effective AI governance, particularly in the realm of data privacy.


In response to the executive order, Senator Mark Warner, the Democrat who chairs the Senate Select Committee on Intelligence, acknowledged it as a positive step but stressed the need for additional legislative measures.


US officials have voiced concerns that AI could exacerbate bias and civil rights violations. The order addresses these concerns by calling for guidelines to prevent AI algorithms from contributing to discrimination in areas such as landlord practices, federal benefits programs, and federal contracting. It also requires the development of "best practices" to mitigate potential harm to workers, including job displacement, and calls for a report on labor market impacts.