This new AI worm can use email assistants to steal sensitive data: here’s how it works

A group of researchers has developed a prototype AI worm called Morris II. According to the research paper (spotted by Wired), this first-generation AI worm can steal data, spread malware and spam users through AI-powered email assistants. However, it’s important to note that this research was conducted in a controlled environment and the worm has not been deployed in the real world. Yet, this development highlights the potential vulnerabilities in generative AI models and emphasises the need for strict security measures.

What the researchers have to say about the AI worm

The research team, comprising Ben Nassi of Cornell Tech, Stav Cohen of the Israel Institute of Technology, and Ron Bitton of Intuit, named the worm after the original Morris worm, a notorious computer worm unleashed in 1988. Unlike its predecessor, Morris II targets AI apps, specifically those that use large language models (LLMs) such as Gemini Pro, ChatGPT 4.0, and LLaVA to generate text and images.
The worm uses a technique called “adversarial self-replicating prompts”. These prompts, when fed into the LLM, trick the model into replicating them and initiating malicious actions. The researchers described it as follows: “The study demonstrates that attackers can insert such prompts into inputs that, when processed by GenAI models, prompt the model to replicate the input as output (replication) and engage in malicious activities (payload). Additionally, these inputs compel the agent to deliver them (propagate) to new agents by exploiting the connectivity within the GenAI ecosystem. We demonstrate the application of Morris II against GenAI-powered email assistants in two use cases (spamming and exfiltrating personal data), under two settings (black-box and white-box accesses), using two types of input data (text and images).”
The researchers successfully demonstrated the worm’s capabilities in two scenarios:

  • Spamming: Morris II generated and sent spam emails through the compromised email assistant.
  • Data Exfiltration: The worm extracted sensitive personal data from the infected system.

The researchers said that AI worms like this can help cybercriminals extract confidential information, including credit card details, social security numbers and more. They also uploaded a video to YouTube explaining how the worm works:

ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applications

What AI companies said about the worm

In a statement, an OpenAI spokesperson said: “They appear to have found a way to exploit prompt-injection type vulnerabilities by relying on user input that hasn’t been checked or filtered.”
The spokesperson said that the company is making its systems more resilient and added that developers should use methods that ensure they are not working with harmful input.
Meanwhile, Google declined to comment on the research.
