Explorer

Your ChatGPT & Gemini Might Be Infected By Morris II AI Worm. Protect Your Confidential Data Now — Details Here

According to researchers, a text prompt infiltrates the email assistant by using the LLM with extra data. It is then transmitted to GPT-4 or Gemini Pro to craft text content and bypass the safeguards.

A team of researchers has created a novel AI worm known as 'Morris II,' reminiscent of the original worm that made waves on the internet in 1988. This generative AI worm has the capability to pilfer sensitive data, dispatch spam emails, and propagate malware through diverse methods. Unlike its predecessor, Morris II is designed to spread itself within artificial intelligence (AI) systems.

Morris II or Morris 2 has the potential to impact generative AI email assistants, extract data from AI-enabled email systems, and even compromise the security measures of widely used AI-powered chatbots like ChatGPT and Gemini. Utilising self-replicating prompts, this AI worm can adeptly manoeuvre through AI systems, evading detection.

ALSO READ | Android Malware Impersonates Chrome. You Might Lose Your Photos, Passwords & Chats If You Don't Do This

How Does Morris 2 Affect Your System

Researchers, including Ben Nassi from Cornell Tech, Stav Cohen from the Israel Institute of Technology, and Ron Button from Intuit, explain that a text prompt infiltrates the email assistant by using the large language model with extra data. Subsequently, it is transmitted to GPT-4 or Gemini Pro to craft text content, thereby bypassing the safeguards of the generative AI service and pilfering data.

Additionally, the research suggests an image prompt method, where the harmful prompt is embedded in a photo, enabling the email assistant to automatically forward messages and infect new email clients. Morris II successfully mined confidential information such as social security numbers and credit card details during the research.

ALSO READ | Delete These Harmful Apps From Your Phones Right Now Or You May Get Scammed

Promptly after their discovery, the researchers informed both OpenAI and Google. While Google reportedly did not provide a response, a spokesperson on behalf of OpenAI stated that they are actively working to enhance the security of their systems. They emphasized that developers should employ methods to ensure they are not working with harmful input.

Top Headlines

Poco C85x Review: A Big Battery Phone That Won't Empty Your Wallet
Poco C85x Review: A Big Battery Phone That Won't Empty Your Wallet
Jio vs Airtel vs Vi vs BSNL: We Broke Down Every Annual Plan So You Don't Have To
Jio vs Airtel vs Vi vs BSNL: We Broke Down Every Annual Plan So You Don't Have To
Michael Full Movie Leaked 'Via IMDb'. But You'd Better Avoid It, Unless You Wish To Get Phished
Michael Full Movie Leaked 'Via IMDb'. But You'd Better Avoid It, Unless You Wish To Get Phished
Does Placing A Coin On Your Router Really Boost Wi-Fi Speed? Here’s The Answer
Does Placing A Coin On Your Router Really Boost Wi-Fi Speed? Here’s The Answer

Videos

Mumbai Shock: Security Guard Stabbing Case Linked to Radicalisation Suspicions
Breaking News: Tension at Jamia University Over Alleged RSS Event, Students Stage Protest
Breaking News: India Brings Back Dawood Aide Salim Dola from Turkey
Politics: Bengal Poll Tension Escalates as Ajay Pal Sharma Seen Reprimanding Election Officials
Bengal Election Firestorm: TMC Candidate Jahangir Khan’s “Threat Video” Sparks Major Row

Photo Gallery

25°C
New Delhi
Rain: 100mm
Humidity: 97%
Wind: WNW 47km/h
See Today's Weather
powered by
Accu Weather
Embed widget