Viral AI Bot ChatGPT Fools Scientists By Writing Fake Research Paper Abstracts
An artificial-intelligence (AI) chatbot called ChatGPT has written convincing fake research-paper abstracts that scientists often failed to spot, new research has revealed.
A research team led by Catherine Gao at Northwestern University in Chicago used ChatGPT to generate artificial research-paper abstracts to test whether scientists could spot them.
According to a report in the prestigious journal Nature, the researchers asked the chatbot to write 50 medical-research abstracts based on a selection published in JAMA, The New England Journal of Medicine, The BMJ, The Lancet and Nature Medicine. They then compared these with the original abstracts by running them through a plagiarism detector and an AI-output detector, and they asked a group of medical researchers to spot the fabricated abstracts.
The ChatGPT-generated abstracts sailed through the plagiarism checker: the median originality score was 100 per cent, indicating that no plagiarism was detected. The AI-output detector spotted 66 per cent of the generated abstracts. But the human reviewers didn't do much better: they correctly identified only 68 per cent of the generated abstracts and 86 per cent of the genuine abstracts.
Meanwhile, every technology has two sides, and the artificial intelligence (AI)-driven ChatGPT (a third-generation Generative Pre-trained Transformer) is no exception. While it has become all the rage on social media for answering questions like a human, hackers have jumped on the bandwagon, misusing its capabilities to write malicious code and hack devices.
Currently free for the public to use as part of a feedback exercise (a paid subscription is coming soon) from its developer, the Microsoft-backed OpenAI, ChatGPT has opened a Pandora's box, as its uses, both good and bad, are seemingly limitless.
Cyber-security company Check Point Research (CPR) says it is witnessing attempts by Russian cybercriminals to bypass OpenAI's restrictions in order to use ChatGPT for malicious purposes.
In underground hacking forums, hackers are discussing how to circumvent controls on IP addresses, payment cards and phone numbers, all of which are needed to gain access to ChatGPT from Russia.
CPR has shared screenshots of these discussions and warns of hackers' fast-growing interest in using ChatGPT to scale malicious activity.
"Right now, we are seeing Russian hackers already discussing and checking how to get past the geofencing to use ChatGPT for their malicious purposes. We believe these hackers are most likely trying to implement and test ChatGPT into their day-to-day criminal operations," warned Sergey Shykevich, Threat Intelligence Group Manager at Check Point.