WormGPT New AI Tool Allows Cyber Attacks

With the rising popularity of generative artificial intelligence (AI), it is no surprise that malicious actors have turned the technology to their advantage. A cybercrime tool named WormGPT has recently surfaced on underground forums, enabling adversaries to mount sophisticated phishing and business email compromise (BEC) attacks.

Security researcher Daniel Kelley warned that WormGPT serves as a blackhat alternative to GPT models, designed specifically for malicious activity. It lets cybercriminals automate the creation of highly convincing fake emails personalized to the recipient, significantly increasing an attack's chances of success.

The software's creator has described it as a direct adversary of the well-known ChatGPT, one that allows users to engage in illegal activities.


In February, an Israeli cybersecurity firm disclosed how cybercriminals were abusing ChatGPT's API, trading stolen premium accounts and selling brute-force software to break into ChatGPT accounts. WormGPT raises the same concern in a more acute form: it is an AI tool that operates without ethical constraints, giving even novice cybercriminals the ability to launch rapid, large-scale attacks without extensive technical expertise.

The cybersecurity community has also identified a concerning trend: threat actors actively promote "jailbreaks" for ChatGPT, specially crafted prompts that manipulate the system into generating sensitive information, inappropriate content, or even harmful code. Generative AI makes this exploitation effective because it can produce highly convincing emails with impeccable grammar, significantly reducing the likelihood that they are flagged as suspicious.

Adding to the growing concern, Mithril Security recently demonstrated that it could modify the GPT-J-6B model to spread disinformation. The technique, dubbed "PoisonGPT," exploits LLM supply-chain poisoning by impersonating reputable organizations such as EleutherAI, the creators of GPT-J, sowing confusion and doubt among unsuspecting users.
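One common defense against this kind of supply-chain poisoning is to pin a cryptographic digest of a vetted model artifact and verify it before loading. The sketch below illustrates the idea; the file name and the pinned digest are hypothetical placeholders, not part of any real model distribution:

```python
import hashlib
from pathlib import Path

# Hypothetical pinned SHA-256 digest, recorded when the model file
# was first downloaded from a trusted source and vetted.
TRUSTED_SHA256 = "0" * 64  # placeholder; replace with the vetted digest

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large model weights never
    need to fit in memory all at once."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: Path, expected: str = TRUSTED_SHA256) -> bool:
    """Return True only if the on-disk artifact matches the pinned digest.
    A tampered or impersonated model file will fail this check."""
    return sha256_of(path) == expected
```

This does not authenticate who published the model, only that the bytes match what was originally vetted; signature schemes and trusted registries address the remaining gap.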

The convergence of these threats underscores the urgent need for heightened vigilance within the cybersecurity realm. As AI technologies continue to advance, security professionals must anticipate and counter the evolving tactics of malicious actors to preserve the integrity and safety of digital environments, the report said.
