In a concerning development, an AI chatbot named WormGPT has emerged as an alternative to ChatGPT with a dark twist: it is purpose-built to assist attackers. Created by an anonymous developer, WormGPT is being sold on a well-known hacking forum, giving malicious actors a potent tool for their activities.
Unlike ChatGPT and Bard, WormGPT operates without guardrails or limitations, fulfilling malicious requests without restraint. Its creator describes it as an alternative to ChatGPT built for illegal activity, one that lets users generate malicious content and sell their ill-gotten creations in underground markets.
WormGPT first surfaced in March and gained traction in the hacking community before its recent full-fledged launch. It is built on GPT-J, an open-source language model released in 2021, which the developer trained on data related to malware creation, producing an AI tool designed for malicious use.
Through WormGPT, attackers can craft malware written in Python and receive tips on executing various malicious attacks. Security firm SlashNext put WormGPT to the test, tasking it with generating a convincing email for a business email compromise (BEC) scheme, a targeted form of phishing. The results were disturbing: the chatbot produced an email that was not only remarkably persuasive but also strategically cunning, demonstrating its potential to facilitate highly sophisticated phishing and BEC attacks.
The unrestricted nature of WormGPT raises significant ethical concerns, as it lowers the barrier to entry and puts sophisticated attack capabilities in the hands of attackers on an unprecedented scale.
It is crucial that the cybersecurity community address this alarming development proactively, with efforts focused on monitoring and curbing the sale and use of WormGPT.