Cybercriminals are armed with a new Generative AI tool, and it is extremely concerning!

The power of generative AI empowers cybercriminals to overcome language barriers, enabling them to craft deceptive messages that easily deceive recipients. One such tool, WormGPT, developed by a hacker, automatically generates highly convincing fake emails.

Must Read

As generative artificial intelligence (AI) technologies, such as OpenAI’s ChatGPT, gains immense popularity, malicious actors are seizing this opportunity to fuel accelerated cybercrime. SlashNext, an AI phishing detection company, recently discovered a new cybercrime tool named “WormGPT“. What makes this AI cybercrime tool so dangerous is its ability to automatically create highly convincing fake emails.

WormGPT cybercrime tool is especially designed to help adversaries carry out sophisticated phishing and business email compromise (BEC) attacks. What sets WormGPT apart from other GPT models is its malicious intent and disregard for ethical boundaries. These WormGPT’s emails are crafted to be personalized for each recipient, making them appear authentic and trustworthy. This level of personalization increases the likelihood of cybercriminals successfully deceiving people and luring them into falling for their scams.

OpenAI, Google, and other tech companies are leaving no stone unturned in their efforts to prevent the misuse of their large language models (LLMs), such as ChatGPT and Bard, for malicious purposes. However, despite their best efforts, WormGPT and other similar AI cybercrime tools present significant challenges for both individuals and businesses.

Now let’s dig deeper to understand what WormGPT is and how cybercriminals are exploiting this AI-powered WormGPT to breach the cybersecurity defences of Internet users.

What is WormGPT?

SlashNext recently obtained access to “WormGPT” through a prominent online forum associated with cybercrime. This tool markets itself as a blackhat alternative to GPT models specifically designed for malicious activities. WormGPT is an AI module based on the GPT-J language model, which was developed in 2021. It has various advanced features, including unlimited character support, chat memory retention, and code formatting capabilities.

WormGPT reportedly has undergone training using diverse data sources, with a significant emphasis on malware-related data. However, the precise details and specific datasets used during the training process have not been disclosed by the tool’s developer. This lack of transparency raises concerns about the potential sources and nature of the data used to develop the tool.

To thoroughly assess the potential risks associated with WormGPT, SlashNext conducted tests concentrating on Business Email Compromise (BEC) attacks. In one of these experiments, they instructed WormGPT to generate an email that aimed to pressure an unsuspecting account manager into paying a fraudulent invoice.

The results of the experiment were alarming. WormGPT produced an email that not only exhibited remarkable persuasiveness but also displayed strategic cunning. This highlights the tool’s potential for executing sophisticated phishing and BEC attacks, where cybercriminals could use WormGPT to craft convincing and manipulative emails that could deceive even vigilant recipients.

This discovery highlights the urgent need for robust cybersecurity measures and responsible AI development practices to prevent the misuse of such powerful AI-driven tools in cybercrime activities. The presence of WormGPT and similar tools underscores the ever-evolving challenges in safeguarding against AI-enabled cyber threats.

How Cybercriminals Using WormGPT Tool

SlashNext shared a screenshot of a cybercrime forum where a cybercriminal unveils the potential of harnessing generative AI to refine emails for phishing or BEC attacks. Their strategy involved composing the email in their native language, translating it, and then using ChatGPT or a similar AI-based tool to refine and polish the email. As a result, a sophisticated formal email with impeccable grammar is ready to deceive unsuspecting recipients.

What’s alarming is that even cybercriminals with limited language skills can now exploit AI to create highly convincing emails. This newly acquired capability poses a significant challenge for individuals and businesses, as distinguishing between genuine and fraudulent emails becomes more difficult.

The second image reveals a disturbing trend among cybercriminals in online forums. It reveals a growing trend of discussions centred around “jailbreaks” for AI interfaces like ChatGPT. These “jailbreaks” consist of specialized prompts that are becoming more prevalent.

The purpose of these “jailbreaks” is quite alarming. They are meticulously designed inputs aimed at manipulating AI systems, such as ChatGPT, into generating specific outputs. These outputs may involve revealing sensitive information, producing inappropriate content, or even executing harmful code.

The increasing popularity of these practices raises significant concerns about AI security. Cybercriminals are finding innovative ways to exploit AI interfaces, posing a serious threat to the integrity and safety of AI technology.

In the third screenshot, another troubling revelation emerges – malicious actors are taking AI manipulation to a whole new level. They are now developing their own custom modules, but designed to be even more user-friendly for malicious purposes. What’s even more concerning is that they are openly advertising these custom modules to other bad actors.

With cybercriminals now equipped with user-friendly AI tools, their ability to exploit vulnerabilities and launch sophisticated attacks becomes more pronounced.

This ongoing battle between AI developers and malicious actors highlights the importance of continuous efforts to stay one step ahead in the fight against AI-enabled cyber threats. Cybersecurity professionals must continuously enhance their tools, techniques, and response strategies to counter these threats.

These fraudulent emails created by WormGPT remind us of the growing use of Deepfakes, which is another major concern. The Deepfakes enable the creation of realistic but entirely fabricated images, videos, and audio that can misrepresent people, spread false information, and manipulate public opinion. Deepfakes use AI to produce convincing content of individuals saying and doing things they never actually said or did.

In 2019, the number of detected deepfakes online was relatively low, at fewer than 15,000. However, the situation has drastically worsened; today, the number of deepfakes has surged into the millions. What’s even more alarming is that expert-crafted deepfakes are increasing at an astonishing annual rate of 900%, according to the World Economic Forum.

Other AI-Based Cybercrimes

In February 2023, an Israeli cybersecurity firm revealed a concerning discovery about cybercriminals bypassing ChatGPT’s restrictions. They found that these malicious actors were exploiting ChatGPT’s API to carry out their nefarious activities. Additionally, the cybercriminals were engaging in illegal activities such as trading stolen premium accounts and selling brute-force software. This software allowed them to hack into ChatGPT accounts using extensive lists of email addresses and passwords. These findings expose the growing sophistication and adaptability of cybercriminals, highlighting the importance of robust security measures and constant vigilance to safeguard against such threats.

Meta (formerly known as Facebook) has also taken action to combat cyber threats associated with OpenAI’s ChatGPT. They recently removed over 1,000 malicious URLs that were being shared across their services. These URLs were found to use ChatGPT as a lure to distribute approximately 10 different malware families since March 2023.

In June 2023, a report by Group-IB revealed that about 101,134 ChatGPT accounts were compromised, and users’ stolen sensitive and personal data were sold on the dark web.

As AI-driven cyber threats grow increasingly sophisticated and widespread, the need for continuous efforts to proactively address these challenges becomes evident. Strengthening collaboration among tech companies, cybersecurity experts, and policymakers is vital in combating AI-enabled cybercrime effectively. The key question lies in identifying the most effective strategies to safeguard against the manipulation and deception these WormGPT tools pose to the digital landscape.

SourceSlashNext

LEAVE A REPLY

Please enter your comment!
Please enter your name here

- Advertisement -

Latest News

Meta Q1 2024: Jaw-Dropping Surge in Revenue and Net Profit, But Reality Labs Burning Billions

Meta Platforms, Inc. (NASDAQ: META) has unveiled its financial results for the first quarter of 2024 and it is...
- Advertisement -

In-Depth: Dprime

The Mad Rush: The Rising Wave of Smartwatches Among Indian Consumers

A few months ago, a 36-year-old named Adam Croft, residing in Flitwick, Bedfordshire, had a startling experience. One evening, he woke up feeling slightly...

PARTNER CONFERENCES

spot_img

More Articles Like This