YouTube Director And Global Head Of Responsibility Explains How Platform Will Deal With Users Who Don't Disclose Synthetic Content Amid Rise In Deepfakes
The tech giant said it will deploy a combination of machine learning (ML) and human raters as well as tools to ensure users are informed if they're viewing synthetic or AI-generated content.
At a time when deepfakes are on the rise in India, ringing alarm bells and prompting the government to assign a special officer to look into deepfake videos on digital platforms, Google has explained how it will handle users on YouTube who do not disclose synthetic or artificial intelligence (AI)-generated content on the platform. The company, at a press event in New Delhi, told ABP Live that it will deploy a combination of machine learning (ML) and human raters, as well as tools, to ensure users are informed if they're viewing synthetic or AI-generated content.
"...So the first thing is that we're also rolling out a lot of tools, as we expect a lot of AI content, to allow creators and partners to generate AI-based content. So for anything that uses our tools to be uploaded to the platform, it will be very easy for us to identify that content. That is in terms of how we will identify and sort of enforce some of the policies that were mentioned," Timothy Katz, Director, Global Head of Responsibility at Google-owned YouTube, told ABP Live, adding, "There will be a combination of machine learning and human raters."
Asked how YouTube would differentiate between synthetic content from creators and that from news platforms, the top executive noted that it is going to be challenging, since it is an evolving space.
Notably, earlier this week, the tech giant announced that YouTube content creators must now disclose any modified or synthetic content they share on the platform, in a bid to address deepfakes. The company stated that it will empower users to request, through the privacy request process, the removal of AI-generated or other synthetic content on YouTube that replicates a recognisable individual, including their face or voice.
"But we think that because we'll have such a high volume of content which is uploaded to YouTube, we will have to fortify our machine learning (ML) systems to be really attuned to be able to detect content as and when possible. But it is going to be a process, and we want to make sure that this is done consistently across the board," Katz added.
Asked if the video streaming giant is considering increasing the headcount of its content moderation and AI and ML teams, Katz replied that it is a primary focus area for the company.
"...The short answer is that it is a very large focus area for us. Responsibility is the foundational number one priority of the company -- like anything we do, from a growth perspective, has to be done responsibly. As we see a proliferation and a challenge there, accordingly we will make sure we have sufficient resources dedicated to that."
The government recently warned social media firms including Meta-owned Facebook and Google's YouTube to repeatedly remind users that local laws prohibit them from posting deepfakes and content that spreads obscenity or misinformation. The IT Ministry stated in a press release that all platforms have committed to align their content guidelines with its regulations.