
FraudGPT: what security leaders need to know in 2026

AI tools like FraudGPT have made convincing phishing attacks accessible to anyone with a subscription. Here's how the threat has evolved since 2023 and why the defense is behavioral, not technical.
Written by: Natalia Bochan

TL;DR

FraudGPT is a subscription-based AI tool sold on the dark web that generates phishing emails, malicious code, and fake websites without ethical guardrails. First detected in 2023, it has evolved in step with AI more broadly: the attacks it powers are now faster, more personalized, and harder to detect. The risk for organizations is not primarily technical. It is behavioral: employees are the entry point these tools are designed to exploit.

Introduction

In July 2023, researchers at Netenrich identified FraudGPT being advertised across dark web marketplaces and Telegram channels. It was not a sophisticated tool by today's standards. What made it significant was the model it represented: AI capability, stripped of safety constraints, sold as a subscription service to anyone willing to pay.

By 2026, that model has matured. The barrier to launching a convincing social engineering attack has dropped significantly. Understanding what FraudGPT is, how it has evolved, and why it changes the calculus for human risk management is now a baseline requirement for security leaders.

What FraudGPT is

FraudGPT is a large language model fine-tuned for criminal use. Unlike commercial AI tools, it has no content moderation layer. It does not add warnings to outputs. It does not refuse requests to generate malicious content.

Its core capabilities include generating phishing emails, writing malicious code, creating fake login pages, and producing scam scripts tailored to specific targets. Subscription pricing starts at approximately $200 per month or $1,700 annually (Netenrich, July 2023, netenrich.com). A related tool, WormGPT, is optimized specifically for business email compromise (BEC) and is available at lower price points.

Trustwave testing documented a meaningful difference between FraudGPT and commercial models: while ChatGPT can produce phishing content under certain prompting conditions, it adds warnings and the output is less convincing. FraudGPT produces persuasive, warning-free content by design (Trustwave, 2023, trustwave.com).

How the threat has evolved since 2023

The original FraudGPT was notable for what it removed: ethical constraints. Tools since then have added capabilities on top of that foundation.

Current AI-assisted attacks combine several techniques. Deepfakes and voice cloning allow attackers to impersonate executives or trusted contacts in real time. Personalization engines analyze publicly available data to tailor messages to specific individuals. Multi-channel delivery means an attack may start as an email, continue via SMS, and conclude with a phone call, all coordinated and automated.

The result is that phishing emails no longer fail the grammar and spelling checks employees are trained to look for. Impersonation attacks are visually and behaviorally indistinguishable from legitimate communication. The attacker no longer needs technical expertise — they need a subscription and a target.

Why this is a human risk problem, not a technical one

FraudGPT and its successors do not bypass firewalls or exploit software vulnerabilities. They exploit human judgment under conditions of uncertainty, time pressure, and misplaced trust.

The 2024 Verizon Data Breach Investigations Report found that 68% of breaches involved a human element (Verizon DBIR 2024, verizon.com). AI-powered social engineering tools are designed precisely to increase the rate at which that human element fails. Better phishing emails produce higher click rates. More convincing impersonation produces more wire transfers. The technical sophistication of the attack is in service of a fundamentally human exploitation.

This means the defensive response cannot be purely technical either. Blocking known malicious domains helps. Spam filters help. But an employee who does not recognize the behavioral patterns of a social engineering attempt will be vulnerable regardless of what technology sits in front of them.

The reframe security leaders need

The instinct when new attack tools emerge is to update technical controls. That is necessary but insufficient here.

FraudGPT-class tools succeed because they manufacture plausibility. The defense is not better filters — it is employees who can recognize manipulation, verify requests through independent channels, and act on that recognition under pressure. That is a behavioral capability, not a technical one. It is developed through realistic simulation, reinforcement over time, and measurement of actual behavioral change, not completion rates.

Organizations that treat this as a training compliance problem will remain exposed. The question is not whether employees have completed a phishing awareness module. It is whether their behavior changes when they encounter a convincing, personalized, AI-generated attack.

What this means in practice

Three priorities follow for security leaders:

First, update your threat model. If your phishing simulations still use the kinds of messages that would have been realistic in 2020, with generic sender addresses, obvious grammar errors, and one-size-fits-all lures, they are not testing the actual threat. Simulations need to reflect the quality of attacks employees will actually face.
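To make that bar concrete, here is a minimal sketch of a simulation template checklist, written in Python for illustration. Every field name and criterion is an assumption about what "reflects the current threat" could mean, not any platform's actual schema.

```python
# Hypothetical checklist for judging whether a phishing simulation template
# tests the current AI-assisted threat rather than the 2020 one.
# Field names and criteria are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SimulationTemplate:
    personalized_to_target: bool  # references the target's real role, projects, or contacts
    clean_language: bool          # no grammar or spelling "tells"
    plausible_sender: bool        # lookalike domain or spoofed trusted contact
    multi_channel: bool           # paired with an SMS or voice follow-up

def reflects_current_threat(template: SimulationTemplate) -> bool:
    """A template that still relies on obvious tells tests the old threat model."""
    return all([
        template.personalized_to_target,
        template.clean_language,
        template.plausible_sender,
        template.multi_channel,
    ])
```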

Second, measure behavior, not completion. Awareness training completion is a compliance metric. Behavioral risk scoring — tracking how individuals respond to realistic simulations across multiple channels over time — is a security metric. These are not the same thing.
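As a rough sketch of the difference, the snippet below computes a recency-weighted behavioral risk score from simulation outcomes. The event fields, the weighting scheme, and the 90-day half-life are all illustrative assumptions, not a reference implementation of any product's scoring.

```python
# Minimal sketch of a recency-weighted behavioral risk score. Field names,
# weights, and the half-life are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timezone
import math

@dataclass
class SimulationEvent:
    channel: str        # e.g. "email", "sms", "voice"
    failed: bool        # True if the employee clicked, replied, or complied
    timestamp: datetime

def risk_score(events: list[SimulationEvent],
               half_life_days: float = 90.0,
               now: datetime | None = None) -> float:
    """Recency-weighted failure rate in [0, 1]; recent failures count more."""
    now = now or datetime.now(timezone.utc)
    weighted_failures = 0.0
    total_weight = 0.0
    for event in events:
        age_days = (now - event.timestamp).total_seconds() / 86400
        # Exponential decay: an event loses half its weight every half-life.
        weight = math.exp(-math.log(2) * age_days / half_life_days)
        total_weight += weight
        if event.failed:
            weighted_failures += weight
    return weighted_failures / total_weight if total_weight else 0.0
```

The half-life is the design choice that separates this from a completion metric: old failures fade, so the score tracks current behavior rather than a one-time training event.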

Third, treat verification behavior as a trainable skill. The most reliable defense against AI-generated impersonation is the habit of independent verification: confirming a request through a separate channel before acting. This behavior can be measured, reinforced, and improved.
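One way that habit could be measured, assuming a simulation platform records one of three hypothetical outcome labels per exercise:

```python
# Illustrative only: the outcome labels and the idea of a per-employee
# "verification rate" are assumptions about how a platform might record behavior.
from collections import Counter

# Hypothetical outcome labels per simulation:
#   "complied" - acted on the request without checking
#   "ignored"  - did not act and did not report
#   "verified" - confirmed the request through an independent channel, or reported it

def verification_rate(outcomes: list[str]) -> float:
    """Fraction of simulations met with independent verification."""
    counts = Counter(outcomes)
    total = sum(counts.values())
    return counts["verified"] / total if total else 0.0

def quarterly_trend(quarters: list[list[str]]) -> list[float]:
    """Verification rate per quarter; reinforcement should show up as a rising series."""
    return [verification_rate(quarter) for quarter in quarters]

# Example: verification improving from 25% to 75% across two quarters.
print(quarterly_trend([
    ["complied", "ignored", "verified", "complied"],
    ["verified", "verified", "ignored", "verified"],
]))  # [0.25, 0.75]
```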

Conclusion

FraudGPT is three years old. The tools it represents are not. They are actively developed, competitively priced, and accessible to attackers with no technical background. The organizations that respond by updating phishing templates and rerunning awareness training will see marginal improvement. Those that build a measurable behavioral defense layer will be materially better positioned.

The attack surface here is human. The defense has to match it.

Act now before attackers do
Unify deepfake simulations, personalized training, and risk analytics into a single platform that builds measurable defense.
Talk to an expert