At a time when misinformation, especially that originating from artificial intelligence (AI) and deepfakes, is on the rise, Google India on Wednesday highlighted how it is protecting users from the risks of AI-generated media in the country. The tech giant said "there is no silver bullet to combat deep fakes and AI-generated misinformation", adding that it is looking to address these potential risks in multiple ways.
It is to be noted that the government recently set a deepfake crackdown in motion, with Union Minister of State for Electronics and Information Technology Rajeev Chandrasekhar saying that the IT ministry (MeitY) and the Centre will nominate an official and seek 100 per cent compliance from all platforms, in a bid to deal with so-called synthetic content and deepfakes online.
"This is why we've added “About this result” to generative AI in Google Search to help people evaluate the information they find in the experience. We also introduced new ways to help people double check the responses they see in Google Bard by grounding it in Google Search," Michaela Browning, VP, Government Affairs and Public Policy, Google Asia Pacific, wrote in a blog post.
According to the search engine giant, battling AI-generated misinformation and deepfakes "requires a collaborative effort", one that involves open communication, rigorous risk assessment, and proactive mitigation strategies.
The tech giant is partnering with the government on several programmes and will continue the dialogue, including through its upcoming engagement at the Global Partnership on Artificial Intelligence (GPAI) Summit, as it continues to bring AI, and more recently generative AI, into more of its products and experiences.
"For example, on YouTube, we use a combination of people and machine learning technologies to enforce our Community Guidelines, with reviewers across Google operating around the world. In our systems, AI classifiers help detect potentially violative content at scale, and reviewers work to confirm whether content has actually crossed policy lines. AI is helping to continuously increase both the speed and accuracy of our content moderation systems," Browning added.
The company is testing for a wide range of safety and security risks, including the rise of new forms of AI-generated, photo-realistic, synthetic audio or video content known as "synthetic media". While this technology has useful applications -- for instance, opening new possibilities for people affected by speech or reading impairments, or new creative ground for artists and movie studios around the world -- it raises concerns when used in disinformation campaigns and for other malicious purposes, such as deepfakes. The potential to spread false narratives and manipulated content carries real risks.
"Equally, context is important with images, and we’re committed to finding ways to make sure every image generated through our products has metadata labeling and embedded watermarking with SynthID, currently being released to a limited number of Vertex AI customers using Imagen, one of our latest text-to-image models that uses input text to create photorealistic images. We’re also making progress on tools to detect synthetic audio -- in our AudioLM work, we trained a classifier that can detect synthetic audio in our own AudioLM model with nearly 99 per cent accuracy," Browning noted.