According to cyber intelligence firm CloudSEK, cybercriminals are exploiting ChatGPT's popularity to distribute malware through Facebook advertisements run from hijacked Facebook accounts. CloudSEK's investigation uncovered 13 Facebook pages with over 500,000 followers, some of which were hijacked as early as February 2023, that are being used to disseminate the malware via Facebook ads.


CloudSEK's cyber intelligence analyst Bablu Kumar warned, "Cybercriminals are capitalising on the popularity of ChatGPT, exploiting Facebook's vast user base by compromising legitimate Facebook accounts to distribute malware via Facebook ads, putting users' security at risk. Our investigation has uncovered 13 compromised pages with over 500k followers, some of which have been hijacked since February 2023. We urge users to be vigilant and aware of such malicious activities on the platform."




CloudSEK has also discovered at least 25 malicious websites impersonating OpenAI.com that trick people into downloading and installing harmful software, posing a severe risk to their security and privacy.


The malware can not only steal sensitive data such as personally identifiable information (PII), system information, and credit card details from the user's device, but can also replicate itself across systems through removable media. With the ability to escalate privileges and persist on the system, it poses a significant threat, says Kumar.


Meanwhile, a recent report by BestColleges has highlighted that more than half of college students believe that using AI tools to complete assignments or exams is cheating or plagiarism.


The report, which surveyed 1,000 undergraduate and graduate students, found that 43 percent of students have used AI tools; of those who have tried them, 90 percent did so for personal projects, out of curiosity, or for fun. However, while 57 percent of students stated they did not plan to use AI tools to complete coursework or exams, 32 percent admitted to having used them for that purpose.


(With inputs from PTI)