During a hearing with US lawmakers, Sam Altman, the CEO of OpenAI, emphasised the need for regulation of artificial intelligence (AI) following the remarkable performance of the lab's pathbreaking chatbot, ChatGPT. Lawmakers expressed concern about AI's rapid advances, and Senator Richard Blumenthal opened the hearing by playing a computer-generated imitation of his own voice reading remarks written by the chatbot.
Blumenthal, a Democrat, highlighted that AI technologies are no longer mere research experiments but a tangible reality. Altman's testimony before a US Senate judiciary subcommittee aimed to educate lawmakers and encourage Congress to establish new regulations for big tech companies. Altman acknowledged the potential risks associated with AI, stating that if the technology goes wrong, the consequences could be significant.
Following the viral release of ChatGPT, which amazed and concerned users with its human-like content generation abilities, governments worldwide are under pressure to act swiftly. Altman has become a prominent figure in the field of AI, promoting his company's technology while warning about potential negative impacts on society.
Altman explained that OpenAI was founded on the belief that AI could improve various aspects of life but also recognised the accompanying risks. He emphasised the importance of regulatory intervention to mitigate the dangers posed by increasingly powerful AI models, including disinformation and job security concerns. Altman proposed a combination of licensing, testing requirements, and the establishment of a dedicated US agency to handle AI, while advocating for global cooperation in setting rules and standards.
Senator Blumenthal highlighted Europe's progress in regulating AI with the AI Act, scheduled for a vote in the European Parliament. The proposed EU measure includes potential bans on biometric surveillance, emotion recognition, and certain policing AI systems. Additionally, US lawmakers emphasised the need for transparency measures for generative AI systems like ChatGPT and DALL-E, suggesting user notifications to indicate when the output is generated by a machine.
Experts at the hearing cautioned that AI technology is still in its early stages and raises ethical considerations that have yet to be explored. Gary Marcus, a panelist and professor emeritus at New York University, stated that machines capable of self-improvement or self-awareness are not yet a reality, and that pursuing such capabilities would carry risks of its own.
Christina Montgomery, the chief privacy and trust officer at IBM, urged lawmakers to avoid overly broad strokes in AI regulation, noting that the impact of chatbots assisting with restaurant recommendations differs from systems making decisions on credit, housing, or employment.
The hearing underscored the necessity of regulatory measures that address both the risks and the potential of AI, while weighing its societal impact and the ethical considerations it raises.