ChatGPT has an evil twin, and it wants your money.
WormGPT was created by a hacker and is designed for phishing attacks on a larger scale than ever before.
Cybersecurity firm SlashNext confirmed that the “sophisticated AI model” was developed solely with malevolent intent.
“This tool presents itself as a blackhat alternative to GPT models, designed specifically for malicious activities,” security researcher Daniel Kelley wrote on the firm’s website. “Supposedly, WormGPT was trained on a wide range of data sources, concentrating particularly on malware-related data.”
The firm also said that this type of software is just one example of the threat posed by artificial intelligence models built on the GPT-J language model, and that it could cause harm even in the hands of a beginner.
The researchers experimented with WormGPT to gauge how dangerous it could be, asking it to create phishing emails.
“The results were disturbing,” confirmed the cyber expert. “WormGPT produced an email that was not only remarkably persuasive but also strategically astute, showing its potential for sophisticated phishing and BEC [business email compromise] attacks.
“In short, it’s similar to ChatGPT but has no boundaries or ethical constraints,” Kelley added chillingly.

That means AI has made it easy to create convincing phishing emails, so it’s important to stay vigilant when checking your inbox, especially when you’re asked for personal information such as bank details.
Even if an email appears to come from an official sender, watch for anything unusual, such as misspellings in the sender’s address.
People should also be cautious before opening attachments, and avoid clicking anything that says “enable content.”
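For readers who want a concrete sense of what checking a sender’s address can look like in practice, here is a minimal Python sketch of a lookalike-domain check. The domain list, the two-edit threshold, and the sample address are invented for illustration; real mail filters are far more sophisticated.

```python
# Toy example: flag "From" addresses whose domain is a near-miss of one you
# trust (e.g. "paypa1.com" masquerading as "paypal.com").
from email.utils import parseaddr

# Hypothetical list of domains you actually expect mail from.
EXPECTED_DOMAINS = {"paypal.com", "mybank.com"}

def levenshtein(a: str, b: str) -> int:
    """Edit distance, used here to spot one- or two-character lookalikes."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def check_sender(from_header: str) -> str:
    """Return a verdict for the domain in an email's From header."""
    _, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower()
    if domain in EXPECTED_DOMAINS:
        return f"{domain}: matches an expected sender"
    for known in EXPECTED_DOMAINS:
        if levenshtein(domain, known) <= 2:
            return f"{domain}: SUSPICIOUS lookalike of {known}"
    return f"{domain}: unknown sender, treat with caution"

print(check_sender('"PayPal Support" <service@paypa1.com>'))
# -> paypa1.com: SUSPICIOUS lookalike of paypal.com
```

The point of the sketch is the advice itself: a display name like “PayPal Support” proves nothing, so the underlying address, not the label, is what deserves scrutiny.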
Cybercriminals are also trading “jailbreaks” for ChatGPT: carefully crafted prompts that manipulate the chatbot into divulging sensitive information, producing inappropriate content, or executing harmful code.
“Generative AI can create emails with impeccable grammar, making them appear legitimate and reducing the likelihood of them being flagged as suspicious,” Kelley wrote. “The use of generative AI democratizes the execution of sophisticated BEC attacks.
“Even attackers with limited skills can use this technology, making it an accessible tool for a broader spectrum of cybercriminals.”