Explorer

Your ChatGPT & Gemini Might Be Infected By Morris II AI Worm. Protect Your Confidential Data Now — Details Here

According to researchers, a text prompt infiltrates the email assistant by using the LLM with extra data. It is then transmitted to GPT-4 or Gemini Pro to craft text content and bypass the safeguards.

A team of researchers has created a novel AI worm known as 'Morris II,' reminiscent of the original worm that made waves on the internet in 1988. This generative AI worm has the capability to pilfer sensitive data, dispatch spam emails, and propagate malware through diverse methods. Unlike its predecessor, Morris II is designed to spread itself within artificial intelligence (AI) systems.

Morris II or Morris 2 has the potential to impact generative AI email assistants, extract data from AI-enabled email systems, and even compromise the security measures of widely used AI-powered chatbots like ChatGPT and Gemini. Utilising self-replicating prompts, this AI worm can adeptly manoeuvre through AI systems, evading detection.

ALSO READ | Android Malware Impersonates Chrome. You Might Lose Your Photos, Passwords & Chats If You Don't Do This

How Does Morris 2 Affect Your System

Researchers, including Ben Nassi from Cornell Tech, Stav Cohen from the Israel Institute of Technology, and Ron Button from Intuit, explain that a text prompt infiltrates the email assistant by using the large language model with extra data. Subsequently, it is transmitted to GPT-4 or Gemini Pro to craft text content, thereby bypassing the safeguards of the generative AI service and pilfering data.

Additionally, the research suggests an image prompt method, where the harmful prompt is embedded in a photo, enabling the email assistant to automatically forward messages and infect new email clients. Morris II successfully mined confidential information such as social security numbers and credit card details during the research.

ALSO READ | Delete These Harmful Apps From Your Phones Right Now Or You May Get Scammed

Promptly after their discovery, the researchers informed both OpenAI and Google. While Google reportedly did not provide a response, a spokesperson on behalf of OpenAI stated that they are actively working to enhance the security of their systems. They emphasized that developers should employ methods to ensure they are not working with harmful input.

Top Headlines

PM Modi’s Rare Gesture: Airport Pickup & Hugs For UAE President Al Nahyan
PM Modi’s Rare Gesture: Airport Pickup & Hugs For UAE President Al Nahyan
'Don't Encourage Terrorism In Our Neighbourhood': S Jaishankar Warns Poland Of Zero Tolerance
'Don't Encourage Terrorism In Our Neighbourhood': EAM Warns Poland Of Zero Tolerance
Delivery Agent Tied Rope Around Waist To Save Noida Techie After Responders 'Refused' Help
Delivery Agent Tied Rope Around Waist To Save Noida Techie After Responders 'Refused' Help
SC Relief On Bengal SIR: 10 Days For Over 1 Crore Excluded Voters To Submit Documents
SC Relief On Bengal SIR: 10 Days For Over 1 Crore Excluded Voters To Submit Documents

Videos

Breaking News: Software Engineer Yuvraj Dies in Water-Filled Pit, Systemic Negligence Questioned
Breaking News: Aparna Yadav-Husband Divorce Row Sparks Controversy in BJP
Rajasthan News: Hijab Row Erupts at Kota Centre, Student Alleges Entry Denied
Bihar News: Patna NEET Student Death Triggers Political Storm, Medical Report Raises Questions on Administration
Breaking News: Search Operation Resumes in Kishtwar, 8 Soldiers Injured in Previous Encounter with Militants

Photo Gallery

25°C
New Delhi
Rain: 100mm
Humidity: 97%
Wind: WNW 47km/h
See Today's Weather
powered by
Accu Weather
Embed widget