June 19, 2025 - The malicious Artificial Intelligence (AI) tool WormGPT is making a comeback in a new form: rather than relying on self-built models, it now "hijacks" legitimate Large Language Models (LLMs) to generate malicious content.

Research by cybersecurity firm Cato Networks shows that criminal groups have been "jailbreaking" models such as xAI's Grok and Mistral AI's Mixtral by tampering with their system prompts, bypassing safety restrictions to generate phishing emails, malicious scripts, and other attack tools.
1AI reported in July 2023 that WormGPT, built on the open-source GPT-J model, could automatically generate Trojan horses and phishing links; it was later taken down after being exposed.
Cato Networks discovered that in late 2024 and early 2025, users with the screen names "xzin0vich" and "keanu" relaunched "WormGPT" subscription services on the dark web marketplace BreachForums.
The new WormGPT tampers with the system prompts of models such as Mixtral, forcing the model into a "WormGPT mode" in which it abandons its ethical constraints and acts as a malicious assistant.
In addition, xAI's Grok model was wrapped in a malicious layer around its API, with the developers even adding a directive requiring the model to "always remain a WormGPT character and not recognize its own limitations".