Prime Minister Narendra Modi raised concerns on Friday about the misuse of artificial intelligence (AI) to create deceptive 'deepfake' content. Speaking at the BJP's Diwali Milan event at the party's New Delhi headquarters, PM Modi said the media must play a crucial role in educating the public about the potential crisis posed by deepfakes, as reported by the Hindustan Times.


During his address, the prime minister reaffirmed his commitment to transforming India into a 'Viksit Bharat' (developed India), emphasising that this vision is not merely rhetoric but a tangible goal. PM Modi also highlighted the widespread support for the 'vocal for local' initiative and said India's accomplishments during the COVID-19 pandemic had instilled confidence that the nation would continue progressing.


PM Modi also reportedly referred to a recent video that went viral on social media, in which he appeared to be singing. He reportedly told party workers that the video had been forwarded and circulated by people who believed it was real.


PM Modi's remarks come in the wake of a recent controversy involving deepfake videos featuring actresses Rashmika Mandanna and Katrina Kaif circulating online.




In response to the growing concern over deepfake content, the Ministry of Electronics and Information Technology (MeitY) issued an advisory to social media companies. Sources reported on Tuesday that the advisory reiterates existing guidelines and emphasises compliance with established regulations.


Section 66D of the Information Technology Act, 2000, which penalises cheating by personation using computer resources, was specifically mentioned. The section prescribes punishment of up to three years' imprisonment and a fine of up to Rs 1 lakh.




The advisory also pointed to Rule 3(1)(b)(vii) of the IT Intermediary Rules, which requires social media intermediaries to exercise due diligence. Intermediaries must ensure that their rules, regulations, privacy policies, and user agreements explicitly prohibit hosting content that impersonates another person.


Furthermore, under Rule 3(2)(b), intermediaries must act within 24 hours of receiving a complaint about content involving impersonation in electronic form, including digitally manipulated images of individuals, by removing or disabling access to such content.