Biden Administration Seeks Public Input On AI Safety Standards And Testing: All You Need To Know
The goal is to establish "industry standards around AI safety, security, and trust" that will position the United States as a global leader in the responsible development and use of the technology.
The Biden administration has announced its first steps towards formulating standards and guidance for the safe deployment of generative artificial intelligence (AI). The Commerce Department's National Institute of Standards and Technology (NIST) said on Tuesday that it is soliciting public input until February 2 to inform the key testing procedures needed to ensure the safety of AI systems.
Commerce Secretary Gina Raimondo emphasised that this initiative stems from President Joe Biden's October executive order on AI. The goal is to establish "industry standards around AI safety, security, and trust" that will position the United States as a global leader in the responsible development and use of this rapidly evolving technology.
NIST is developing guidelines for evaluating AI, fostering the creation of standards, and providing testing environments for assessing AI systems. The agency's request seeks input from AI companies and the general public on managing the risks of generative AI and on strategies to mitigate the dangers of AI-generated misinformation.
Generative AI, which can produce text, photos, and videos in response to open-ended prompts, has stirred both excitement and alarm in recent months. The concerns centre on its potential to render certain jobs obsolete, sway elections, and surpass human capabilities with potentially catastrophic consequences.
President Biden's executive order directs agencies to set standards for AI testing and to address related risks in the chemical, biological, radiological, nuclear, and cybersecurity domains. NIST is developing guidelines for that testing, including an exploration of which "red-teaming" methodologies would be most effective for assessing and managing AI risk. The work involves setting best practices for external red-teaming, a technique long established in cybersecurity in which simulated attacks are used to identify risks; the term "red team" harks back to Cold War-era simulations.
The first US public "red-teaming" assessment of AI systems was held in August at a major cybersecurity conference, organised by AI Village, SeedAI, and Humane Intelligence. Thousands of participants tried to push AI systems into producing undesirable outputs or otherwise failing, yielding valuable insight into the risks these systems pose. The White House hailed the event as a success, saying it "demonstrated how external red-teaming can be an effective tool to identify novel AI risks."