Cybercriminals are increasingly turning to “dark” large language models (LLMs) such as WormGPT-4 and KawaiiGPT to automate and scale up their attacks. These LLMs lack ethical boundaries or content filters, making them powerful tools for phishing, malware creation, and other illicit activities.
WormGPT-4 is a more advanced, commercialized successor to earlier malicious AI models. Marketed as fully unrestricted, it lets even non-expert attackers generate socially engineered phishing campaigns, business email compromise (BEC) messages, and ransomware code. It supports automated generation of functional malware and helps attackers produce convincing, well-structured messages that slip past conventional security detection. (Unit 42)
KawaiiGPT, on the other hand, offers a more accessible entry point for criminals. It is lightweight, easy to deploy, and often shared via public repositories. Despite its simplicity, it is effective: attackers can use it to craft phishing templates, code snippets, and social engineering scripts quickly and with minimal technical skill. (Unit 42)
These dark LLMs enable a new class of threat actors: people with little hacking background who can nonetheless launch sophisticated campaigns. Thanks to automation, they can run phishing attacks, scam operations, and malware deployment with high efficiency and minimal effort. (KELA)
WormGPT-4, for example, is sold under tiered pricing, making it scalable for criminal operations. Meanwhile, KawaiiGPT’s free or low-cost availability puts powerful AI-driven tooling within reach of far more bad actors. (Unit 42)
Security experts warn that this “cybercrime-as-a-service” model, powered by malicious AI, drastically lowers the skill barrier. As these tools mature, defenders will need to update their strategies: deploying AI-aware detection, monitoring for AI-generated attack patterns, and raising awareness of this new class of threat.
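As a rough sketch of what “AI-aware detection” might look like at its very simplest, the Python snippet below scores an inbound message on two illustrative signals: classic BEC cues (urgency and payment-rerouting language) and the unusually polished, typo-free prose that LLM-generated lures tend to exhibit. The cue lists, weights, and the `score_message` helper are hypothetical choices made only for this example; a real deployment would rely on trained classifiers and mail-flow context rather than hand-written rules.

```python
import re

# Hypothetical cue lists for illustration only; a production system would use
# trained models and richer signals, not hard-coded keywords.
URGENCY_CUES = ["immediately", "urgent", "before end of day", "confidential"]
BEC_CUES = ["wire transfer", "updated bank details", "change of account", "invoice attached"]

# A few misspellings often seen in older, hand-written phishing.
COMMON_MISSPELLINGS = {"recieve", "acount", "paymnet", "transfered"}

def score_message(body: str) -> float:
    """Return a 0..1 heuristic suspicion score (higher = more BEC/LLM-like).

    Combines (a) presence of urgency and payment-change cues with
    (b) absence of the sloppy spelling typical of older phishing.
    Weights and thresholds are illustrative, not tuned.
    """
    text = body.lower()
    urgency = sum(cue in text for cue in URGENCY_CUES)
    bec = sum(cue in text for cue in BEC_CUES)

    words = re.findall(r"[a-z']+", text)
    misspelled = sum(w in COMMON_MISSPELLINGS for w in words)
    polish = 1.0 if misspelled == 0 else 0.0  # LLM output is rarely misspelled

    # Weighted blend, capped at 1.0.
    return min(1.0, 0.3 * min(urgency, 2) / 2 + 0.5 * min(bec, 2) / 2 + 0.2 * polish)

if __name__ == "__main__":
    sample = (
        "Hello, please process the wire transfer immediately using the "
        "updated bank details attached. This is urgent and confidential."
    )
    print(f"suspicion score: {score_message(sample):.2f}")  # 1.00 for this sample
```

Even a toy rule set like this shows the shape of the problem: because AI-generated lures are grammatically clean, defenders can no longer rely on spelling mistakes as a tell and instead have to weight intent signals (payment changes, urgency, sender anomalies) more heavily.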