Lawmakers in the European Union (EU) have agreed on changes to proposed artificial intelligence (AI) regulations, which now include a ban on the use of AI in biometric surveillance and a requirement for generative AI systems, such as ChatGPT, to disclose AI-generated content. The amendments build on the European Commission's landmark draft law, designed to safeguard citizens from the risks posed by AI, but they could set up a clash with EU member countries that oppose an outright ban on AI in biometric surveillance.


Prominent AI scientists and industry leaders, including Tesla's Elon Musk and OpenAI's Sam Altman, have voiced concerns about the rapid adoption of Microsoft-backed OpenAI's ChatGPT and similar chatbots, and the potential risks they pose to society.


Brando Benifei, co-rapporteur of the bill, stated, "While Big Tech companies are expressing concern about their own creations, Europe has taken the initiative to propose a concrete response to the emerging risks posed by AI."


Aside from the ban on biometric surveillance, EU lawmakers have introduced further changes to the proposed legislation. Companies using generative AI tools would have to disclose any copyrighted material used to train their systems, and companies working on high-risk applications would have to carry out fundamental rights impact assessments and evaluate the environmental implications.


For systems such as ChatGPT, the transparency measures would include disclosing that content was AI-generated, helping users distinguish deep-fake images from real ones, and ensuring safeguards against illegal content.


Microsoft and IBM have welcomed the latest developments by EU lawmakers, expressing their support while also anticipating further refinement of the proposed legislation.


Before the draft rules become law, lawmakers will need to negotiate the details with EU member countries.


Although many major tech companies acknowledge the risks associated with AI, Meta, the parent company of Facebook and Instagram, has played down warnings about its potential dangers. Meta's chief AI scientist, Yann LeCun, told a conference in Paris that AI is inherently beneficial because it makes people smarter.


The current draft EU law also places AI systems that could be used to influence voters and election outcomes, as well as the systems used by social media platforms with more than 45 million users, in the high-risk category. Meta and Twitter would fall under this classification.


EU industry chief Thierry Breton emphasised the importance of addressing the questions surrounding AI promptly and responsibly. He plans to travel to the United States to meet with Meta CEO Mark Zuckerberg and OpenAI's Sam Altman to discuss the draft AI Act.


The EU Commission first announced the draft rules two years ago, aiming to set a global standard for a technology that is now integral to almost every industry and business, and to help the bloc catch up with AI leaders such as the United States and China.