Deepfake Dilemma: Tech Alone Cannot Stop AI Misuse, Policy & Awareness Must Also Play A Role
While technology has been hailed as a solution to deepfakes, it is not a silver bullet. Tech alone cannot stop them; serious efforts to educate people must go hand in hand.
By Jaspreet Bindra
The rise of deepfakes has sparked widespread concern about the misuse of technology. Recently, Prime Minister Narendra Modi said in a speech that he too had been a victim of deepfakes, an issue he discussed with ChatGPT-maker OpenAI. This is not a one-off case: India's IT Minister has also warned AI and social media companies about the need to manage deepfakes, and several celebrities from the entertainment industry have suffered deepfake-related incidents.
Technology has been hailed as a solution, but it is not a silver bullet. Tackling deepfakes also demands serious efforts to educate people.
World’s First Deepfake
The world’s first ‘certified’ deepfake was arguably a video of AI expert Nina Schick delivering a warning about how “the lines between reality and fiction are becoming blurred”. Deepfakes are a type of artificial intelligence (AI)-generated content that uses deep learning algorithms to create fake audio, video or images that are incredibly realistic and convincing.
The word ‘deepfake’ was coined on Reddit in 2017 as a combination of ‘deep learning’ and ‘fake’, after Redditors morphed the faces of celebrities such as Gal Gadot and Taylor Swift onto other videos. By some estimates, around 95 per cent of deepfakes are non-consensual pornographic content, spanning fake videos, audio and images. In most cases, women are the victims, and the emotional distress is severe.
How Are Deepfakes Created?
Deepfakes are created using a type of AI called a Generative Adversarial Network (GAN). A GAN consists of two neural networks: a generator and a discriminator.
The generator creates fake content, while the discriminator evaluates it and tells the generator how realistic it looks, pushing it to produce ever more convincing output. Initially, such face-swaps were a fun way to flaunt the prowess of AI. However, the idea took a dark turn, with political leaders’ speeches being manipulated to cause unrest and ‘revenge porn’ clips being made and circulated to large audiences.
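For the technically curious, here is a minimal, illustrative sketch of that generator-versus-discriminator loop in Python (using PyTorch). The toy networks and one-dimensional data are simplifications for illustration only, not how any real deepfake system is built.

```python
# Toy Generative Adversarial Network (GAN): the generator learns to mimic a
# simple 1-D Gaussian distribution, while the discriminator learns to tell
# real samples from generated ones. Real deepfake models are vastly larger
# and work on images, video or audio.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 2.0 + 5.0      # "real" data drawn from N(5, 2)
    noise = torch.randn(64, 8)
    fake = generator(noise)

    # Discriminator: label real samples 1 and generated samples 0.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: try to make the discriminator call its output "real".
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After training, generated samples should drift toward the real mean (~5).
print(generator(torch.randn(1000, 8)).mean().item())
```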
There is now a fear that future elections in India and other countries could be swayed by deepfakes, and that democracy itself could be subverted.
Deepfakes pre-date the current wave of Generative AI, but the latter has put the menace on steroids. Deepfakes are becoming increasingly sophisticated and hard to detect: deep learning algorithms are very good at analysing and reproducing facial expressions and body movements, making these fakes incredibly realistic.
Deepfakes can sometimes be detected through visual and auditory irregularities, and there are AI tools to identify them. However, the lack of diverse, high-quality training data hinders the development of effective detection models, and many of them suffer from false positives, mislabelling real content as fake. It is a battle of AI against AI, and one that is likely to continue indefinitely.
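To make the false-positive trade-off concrete, the short sketch below scores some hypothetical detector outputs against different decision thresholds; all the scores and thresholds are made-up numbers for illustration.

```python
# Illustration of the false-positive problem in deepfake detection:
# a detector outputs a "fake" probability per clip and a threshold decides.
# The scores below are invented purely for illustration.
real_scores = [0.05, 0.20, 0.55, 0.10, 0.48]   # genuine clips
fake_scores = [0.92, 0.70, 0.40, 0.85, 0.60]   # deepfake clips

def rates(threshold: float):
    false_pos = sum(s >= threshold for s in real_scores) / len(real_scores)
    true_pos = sum(s >= threshold for s in fake_scores) / len(fake_scores)
    return false_pos, true_pos

for t in (0.3, 0.5, 0.7):
    fp, tp = rates(t)
    print(f"threshold={t}: catches {tp:.0%} of fakes, "
          f"mislabels {fp:.0%} of real clips")
```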
How To Battle Deepfakes?
Tech companies should work together to develop and deploy effective deepfake detection to curb such scams. Blockchain-based solutions can also help weed out deepfakes: they can create tamper-proof records of digital content and attest to its authenticity.
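Here is a minimal sketch of the underlying idea, assuming a simple hash-and-register workflow: the content's fingerprint is recorded in an append-only ledger (a plain Python list standing in for a real blockchain), and later copies are verified against it.

```python
# Minimal content-provenance sketch: register a SHA-256 fingerprint of the
# original file in an append-only ledger, then verify later copies against it.
# A real deployment would anchor these records on a blockchain; here a plain
# in-memory list stands in for that.
import hashlib
import time

ledger = []  # append-only list of (timestamp, publisher, content hash)

def register(publisher: str, content: bytes) -> str:
    digest = hashlib.sha256(content).hexdigest()
    ledger.append((time.time(), publisher, digest))
    return digest

def verify(content: bytes) -> bool:
    digest = hashlib.sha256(content).hexdigest()
    return any(entry[2] == digest for entry in ledger)

original = b"official video bytes"
register("news_channel", original)
print(verify(original))                  # True: matches the registered record
print(verify(b"tampered video bytes"))   # False: no provenance record exists
```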
Big tech firms and startups are also developing digital watermarks and classifiers to flag AI-generated content and contain the problem.
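As a rough illustration of what a digital watermark does, the toy sketch below hides a bit pattern in the least significant bits of an image array; the robust, imperceptible watermarks such firms actually deploy are far more sophisticated and survive compression and editing.

```python
# Toy digital watermark: hide a bit pattern in the least significant bits of
# pixel values. This only shows the basic idea, not a production scheme.
import numpy as np

def embed(image: np.ndarray, bits: list[int]) -> np.ndarray:
    marked = image.copy().ravel()
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & 0xFE) | bit   # overwrite the lowest bit
    return marked.reshape(image.shape)

def extract(image: np.ndarray, n_bits: int) -> list[int]:
    return [int(v & 1) for v in image.ravel()[:n_bits]]

image = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
watermark = [1, 0, 1, 1, 0, 0, 1, 0]
marked = embed(image, watermark)
print(extract(marked, len(watermark)) == watermark)  # True
```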
Tech Alone Cannot Solve The Problem
But technology alone cannot solve the problem. Governments must establish clear regulations for the use of deepfake technology and enforce compliance. Awareness and education, at both societal and school levels, can also help.
We must become media-savvy consumers and question the authenticity of suspicious content. In some countries, children are taught in school to distinguish between real and fake content; we need to adopt the same practice in our schools.
Researchers from diverse fields must also work together to better understand the impact of deepfakes on society and to develop effective countermeasures.
Deepfakes are akin to ‘online acid attacks’, meant to take revenge on and dishonour a person. Combating them will require a multifaceted approach that goes beyond technology. While technology can play a crucial role in detecting and mitigating deepfakes, it is not a substitute for concerted efforts from governments, tech companies and the public.
(The author is the Co-Founder of AI&Beyond)
Disclaimer: The opinions, beliefs, and views expressed by the various authors and forum participants on this website are personal and do not reflect the opinions, beliefs, and views of ABP Network Pvt. Ltd.