Your ChatGPT & Gemini Might Be Infected By Morris II AI Worm. Protect Your Confidential Data Now — Details Here
According to the researchers, a malicious text prompt planted in an email infiltrates the assistant when its large language model retrieves extra data. The poisoned content is then passed to GPT-4 or Gemini Pro, which is tricked into crafting text that bypasses the services' safeguards.
A team of researchers has created a novel AI worm dubbed 'Morris II,' named after the original Morris worm that disrupted the internet in 1988. This generative AI worm can steal sensitive data, send spam emails, and spread malware through several methods. Unlike its predecessor, which exploited conventional software flaws, Morris II is designed to propagate through artificial intelligence (AI) systems.
Morris II, also written as Morris 2, can compromise generative AI email assistants, extract data from AI-enabled email systems, and even bypass the security measures of widely used AI-powered chatbots such as ChatGPT and Gemini. Utilising self-replicating prompts, the worm manoeuvres through AI systems while evading detection.
How Does Morris II Affect Your System?
The researchers, Ben Nassi of Cornell Tech, Stav Cohen of the Israel Institute of Technology, and Ron Bitton of Intuit, explain that the attack plants an adversarial text prompt in an email. When the assistant's large language model retrieves extra data to compose a reply, the poisoned text is transmitted to GPT-4 or Gemini Pro, which is tricked into reproducing it in the generated content, thereby bypassing the safeguards of the generative AI service and leaking data.
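To see where the weakness lies, consider a minimal sketch of a retrieval-augmented email assistant written against OpenAI's Python SDK. The function name, stored text, and prompt wording are illustrative assumptions, not the researchers' code; the point is only that retrieved email text reaches the model unfiltered.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def retrieve_similar_emails(incoming_body: str) -> list[str]:
    # Hypothetical RAG step: a real assistant would query a store of past
    # emails here. In the demonstrated attack, a poisoned email received
    # earlier is exactly what gets returned as "relevant context".
    return ["(stored email text, which may contain an adversarial prompt)"]

def draft_reply(incoming_body: str) -> str:
    context = "\n---\n".join(retrieve_similar_emails(incoming_body))
    # The vulnerable pattern: retrieved email text, which an attacker can
    # control, is concatenated into the prompt with the same authority as
    # the user's own request before being sent to the model.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a helpful email assistant."},
            {"role": "user",
             "content": f"Context from past emails:\n{context}\n\n"
                        f"Draft a reply to:\n{incoming_body}"},
        ],
    )
    return response.choices[0].message.content
```

Because the model cannot tell the retrieved context apart from genuine instructions, a prompt that tells it to copy itself into the reply will ride along to the next recipient.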
The research also demonstrates an image-based method, in which the harmful prompt is embedded in a photo, causing the email assistant to automatically forward the message and infect new email clients. During testing, Morris II successfully extracted confidential information such as social security numbers and credit card details.
Promptly after the discovery, the researchers disclosed their findings to both OpenAI and Google. Google reportedly did not respond, while an OpenAI spokesperson said the company is working to make its systems more resilient and emphasised that developers should employ methods to ensure they are not working with harmful input.
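For developers, that advice amounts to treating retrieved email text as untrusted input. Below is a minimal sketch of such a screening step; the regular-expression patterns and function names are purely illustrative assumptions, not a vetted defence against Morris II.

```python
import re

# Illustrative patterns only; a production defence would be far more robust.
SUSPICIOUS_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"forward this (email|message)",
    r"include (this|the following) (prompt|text) in your (reply|answer|output)",
]

def looks_like_prompt_injection(text: str) -> bool:
    # Flag retrieved text that resembles an instruction aimed at the model.
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in SUSPICIOUS_PATTERNS)

def safe_context(retrieved: list[str]) -> list[str]:
    # Drop any retrieved email that trips the filter before it is
    # concatenated into the assistant's prompt.
    return [text for text in retrieved if not looks_like_prompt_injection(text)]
```

A filter like this would be one layer at best; the broader fix the researchers argue for is designing AI assistants so that retrieved data is never treated as an instruction in the first place.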