How self replicating AI worms AI worms can spread through gen AI powered email systems?

How self replicating AI worms AI worms can spread through gen AI powered email systems?

 Digital ecosystem is evolving at a very fast pace . And one of the top notch trending technologies is artificial Intelligence . Artificial Intelligence has gripped the whole globe with its magical capabilities. One of its most  remarkable  type , Generative AI has been spoon feeding us since its creation. Got stuck in generating a code , Chat gpt is there to puzzle it out for you . Need to generate an image in no time , Gen AI models like Gemini and mid journey will do it for you.

Along with getting unlimited benefits from generative AI models we cant get our eyes closed towards the risks associated with them. A malware has been recently invented that  not only could  steal users’ confidential data but also send spam emails to millions of users by exploiting Generative AI email assistants.

Morris II , a new AI worm that steals confidential data by “Adversarial self replication”  to confuse AI models. Named after the first ever internet worm lunched on internet in 1988 , Morris II could easily breach  security walls created by generative AI systems.  

As reported by Wired , a group of researchers from Corneal University , Technion Israel  institute of technology and intuit created new type of Malware , Moris II that could spread from one system to another potentially stealing data .

“It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn’t been seen before,” says Ben Nassi, a Cornell Tech researcher behind the research.

Adversarial Self replication

AI worms functions by an exploitation technique called Adversarial Self Replication that confuses Gen AI email  systems. This menacing act can be carried out by targeting the email systems with messages by  forwarding them again and again. The AI model behind email assistant gets confused and provides confidential data and sensitive info like phone number, credit card details, social security numbers  in no time without realizing the threat it holds.

How it works?

Text-Based Intrusion: Deceptive prompts cleverly hidden within emails exploit weaknesses in the assistant’s security protocols.

Image-Based Gambit: Covert prompts embedded within seemingly harmless images enhance the worm’s capacity to propagate, showcasing the sophistication of its strategies.

How to be safe?

Currently, it’s essential to emphasize that the AI worm remains a novel concept and hasn’t been encountered in real-world scenarios. Nonetheless, researchers view it as a potential security threat that developers and businesses must acknowledge, particularly with the increasing interconnectedness and capabilities of AI systems to act on our behalf.

Secure Design: It’s crucial for developers to prioritize security when designing AI systems, implementing standard security measures and being cautious about blindly trusting AI model outputs.

Human Oversight: Maintaining human involvement in decision-making and ensuring that AI systems don’t act without approval can serve as a protective measure against potential risks.

Monitoring: Regularly monitoring AI systems for unusual behaviors, such as excessive prompt repetition, can aid in the early detection of potential attacks.

Comments

No comments yet. Why don’t you start the discussion?

Leave a Reply

Your email address will not be published. Required fields are marked *