AI scams explained: how AI-powered fraud works and how enterprises detect it

Key insights

  • AI scams surged 1,210% in 2025, far outpacing the 195% growth in traditional fraud, and projected losses could reach $40 billion by 2027.
  • Seven distinct AI scam types now target enterprises, with deepfake video impersonation, AI voice cloning, and AI-powered Business Email Compromise (BEC) posing the highest organizational risk.
  • Traditional defenses are failing. AI-generated phishing eliminates the grammatical errors, generic messaging, and manual limitations that legacy email filters and awareness training relied on to catch fraud.
  • Behavioral detection fills the gap. Network detection and response (NDR) and identity threat detection and response (ITDR) catch the anomalous network, identity, and data-flow patterns that content-based security tools miss.
  • Layered verification is now mandatory. Dual-approval financial controls, out-of-band verification, and pre-shared code phrases reduce risk when any single communication channel can be synthetically replicated.

AI-powered fraud is no longer a theoretical risk. In 2024 alone, the FBI's Internet Crime Complaint Center (IC3) recorded $16.6 billion in cybercrime losses -- a 33% year-over-year increase -- with AI-enhanced social engineering driving a growing share of those incidents. A single deepfake video call cost engineering firm Arup $25.6 million. AI-generated phishing emails now achieve click-through rates more than four times higher than their human-crafted counterparts. And according to the World Economic Forum's Global Cybersecurity Outlook 2026, 73% of organizations were directly affected by cyber-enabled fraud in 2025.

This guide breaks down how AI scams work, the types security teams encounter most often, the latest loss data, and -- critically -- how enterprises detect and respond to AI-powered fraud when traditional defenses fall short.

What are AI scams?

AI scams are fraud schemes that use artificial intelligence -- including large language models, voice cloning, deepfake video generation, and autonomous AI agents -- to deceive victims at a scale and sophistication that was previously impossible. By automating the work, they remove the human limitations that once made social engineering slow and detectable.

Where traditional scams depended on a human attacker's effort, language skills, and time, AI scams remove those constraints entirely. An attacker no longer needs fluency in the target's language. They no longer need to manually craft individualized messages. And they no longer need hours of preparation for a single attempt.

The 2026 International AI Safety Report found that the AI tools powering these scams are free, require no technical expertise, and can be used anonymously. That combination -- zero cost, zero skill, zero accountability -- explains why AI fraud is growing faster than any other threat category.

Beyond the direct financial losses, AI scams create a "truth decay" effect. As deepfake video, cloned voices, and AI-generated text become indistinguishable from authentic communications, organizations lose the ability to trust any digital interaction at face value. Every video call, voice message, and email becomes suspect.

How AI scams differ from traditional scams

The fundamental shift is speed and quality at scale. Traditional scams relied on human effort and contained detectable flaws -- misspellings, awkward phrasing, generic greetings. AI scams achieve human-quality output at machine speed.

Consider phishing as a baseline. According to IBM X-Force research, AI generates a convincing phishing email in five minutes. A human researcher crafting the same quality email manually takes 16 hours. That represents a 192x speed increase with equivalent or better quality -- meaning a single attacker can now produce in one day what previously required a team of specialists working for months.

The implications compound at scale. AI does not just match human quality. It personalizes each message using data scraped from LinkedIn profiles, corporate filings, and social media. A 2024 study by Brightside AI found that AI-generated phishing emails achieved a 54% click-through rate compared to 12% for traditional phishing -- a 4.5x effectiveness multiplier.

How AI scams work

Understanding the attacker's toolkit is essential for defenders. AI-powered fraud combines multiple technologies into a coordinated attack chain, with each stage leveraging different AI capabilities.

Voice cloning represents one of the most accessible attack vectors. Research from McAfee found that just three seconds of audio can create a voice clone with an 85% accuracy match. As Fortune reported in December 2025, voice cloning has crossed the "indistinguishable threshold" -- meaning human listeners can no longer reliably distinguish cloned voices from authentic ones.

Deepfake video generation has evolved from obvious fakes to real-time interactive avatars. New models maintain temporal consistency without the flicker, warping, or uncanny valley artifacts that earlier detection methods relied on. The Arup case demonstrated that deepfake video participants can fool experienced professionals in live calls.

LLM-powered phishing uses large language models to generate hyper-personalized emails that reference specific organizational details, recent transactions, and individual communication styles. These AI-powered phishing attacks lack the telltale signs that legacy email filters were trained to catch.

Autonomous scam agents represent the latest evolution. According to Group-IB's 2026 research, AI-powered scam call centers now combine synthetic voices, LLM-driven coaching, and inbound AI responders to run fully automated fraud operations at scale.

The AI scam toolchain

AI scam toolchains now combine voice cloning, deepfake video, and dark LLMs into commoditized services costing less than a streaming subscription.

The typical AI scam attack follows five stages:

  1. Reconnaissance -- Attackers scrape public data (social media, corporate filings, conference recordings) to build target profiles and collect voice and video samples.
  2. AI content generation -- Using dark LLMs, voice cloning services, and deepfake generators, attackers create personalized phishing emails, synthetic voice messages, or deepfake video.
  3. Delivery -- AI-generated content reaches targets via email, phone calls, video conferencing platforms, messaging apps, or social media.
  4. Exploitation -- Victims act on the fraudulent communication by transferring funds, sharing credentials, approving access, or installing malicious applications.
  5. Monetization -- Stolen funds move through cryptocurrency exchanges, money mules, or fraudulent investment platforms.

Figure: AI scam attack flow. A five-stage linear process diagram showing how AI scams progress from reconnaissance through AI content generation, delivery, and exploitation to monetization.
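
Each stage also leaves telemetry a defender can watch. The sketch below is a minimal, purely illustrative Python model -- the class, field names, and telemetry labels are hypothetical, not drawn from any cited framework -- that pairs each stage with data sources that could surface it.

    # Illustrative only: pair each attack stage with defender telemetry that can surface it.
    from dataclasses import dataclass

    @dataclass
    class ScamStage:
        name: str                  # stage from the five-step chain above
        attacker_activity: str     # what the attacker does in this stage
        defender_telemetry: list   # data sources that can reveal the activity

    ATTACK_CHAIN = [
        ScamStage("reconnaissance", "scrape public profiles, filings, and recordings",
                  ["attack surface monitoring", "brand and impersonation monitoring"]),
        ScamStage("ai_content_generation", "produce phishing text, cloned voice, deepfake video",
                  ["threat intelligence on dark LLM and deepfake services"]),
        ScamStage("delivery", "send email, place calls, join video meetings",
                  ["email gateway logs", "call metadata", "collaboration platform logs"]),
        ScamStage("exploitation", "victim transfers funds, shares credentials, grants access",
                  ["identity provider logs", "payment workflow audit trail"]),
        ScamStage("monetization", "move funds through exchanges and money mules",
                  ["wire and ACH anomaly monitoring", "fraud team case data"]),
    ]

    for stage in ATTACK_CHAIN:
        print(f"{stage.name}: watch {', '.join(stage.defender_telemetry)}")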

The economics fuel the growth. Group-IB documented synthetic identity kits available for approximately $5 and dark LLM subscriptions ranging from $30--$200 per month. By the end of 2025, an estimated eight million deepfakes existed online -- up from roughly 500,000 in 2023, a roughly sixteenfold increase in two years.

The barriers to entry have effectively disappeared. Anyone with internet access and a small budget can now launch AI-powered social engineering campaigns that would have required state-level resources just five years ago.

Types of AI scams

AI scams now span seven distinct attack vectors, with deepfake video, voice cloning, and AI-powered BEC posing the greatest enterprise risk. The following taxonomy covers both consumer-targeted and enterprise-targeted variants.

Table: AI scam types taxonomy with enterprise risk assessment. Common AI scam types, their attack methods, primary targets, enterprise risk levels, and recommended detection approaches.

Scam type | Attack method | Primary target | Enterprise risk level | Detection approach
Deepfake video scams | AI-generated video impersonating executives in calls or ads | Enterprises, consumers | Critical | Behavioral analytics, out-of-band verification
AI voice cloning (vishing) | Cloned voice used in phone calls impersonating executives or family | Enterprises, consumers | High | Voice biometric anomaly detection, callback verification
AI-generated phishing | LLM-crafted hyper-personalized emails at scale | Enterprises, consumers | High | Behavioral email analysis, NDR, identity monitoring
AI-powered BEC | Multimodal BEC combining email, voice, and video impersonation | Enterprises | Critical | Behavioral analytics, dual-approval controls, ITDR
Synthetic identity fraud | AI-generated fake identities combining real and fabricated data | Financial services, HR | High | Identity analytics, data breach monitoring
AI investment and crypto scams | AI-generated "experts" and fake trading platforms | Consumers, retail investors | Medium | Regulatory verification, platform authentication
AI romance scams (pig butchering) | LLM-powered emotionally intelligent bots at scale | Consumers | Medium | Behavioral pattern recognition, platform reporting

Deepfake video scams have surged 700% in 2025 according to ScamWatch HQ, with Gen Threat Labs detecting 159,378 unique deepfake scam instances in Q4 2025 alone. Enterprise variants include executive impersonation in video calls (as in the Arup case), deepfake ads impersonating financial executives, and deepfake job candidates used by DPRK operatives.

AI voice cloning and vishing have reached industrial scale: major retailers now report more than 1,000 AI-generated scam calls per day. Beyond consumer targeting, attackers use cloned executive voices to authorize fraudulent wire transfers and to impersonate government officials in social engineering campaigns.

AI-generated phishing and spear phishing have reached a tipping point. Analysis from KnowBe4 and SlashNext indicates that 82.6% of phishing emails now contain some AI-generated content, while Hoxhunt reports that 40% of BEC emails are primarily AI-generated. The gap between these figures likely reflects the difference between "any AI assistance" and "fully AI-generated" methodologies.

AI-powered Business Email Compromise drove $2.77 billion in losses across 21,442 incidents in 2024 according to FBI IC3. AI is transforming BEC from email-only attacks into multimodal campaigns combining email, voice, and video to create deeply convincing impersonations.

AI investment and cryptocurrency scams are scaling rapidly. The Check Point "Truman Show" operation deployed 90 AI-generated "experts" in controlled messaging groups, directing victims to install mobile apps with server-controlled trading data. Chainalysis reported $14 billion in crypto scam losses in 2025, with AI-enabled scams proving 4.5x more profitable than traditional fraud.

AI romance scams use large language models to maintain emotionally intelligent conversations at scale. Experian's 2026 Future of Fraud Forecast identifies AI-powered emotionally intelligent bots as a top emerging threat, capable of sustaining dozens of simultaneous "relationships" while adapting tone and personality to each target.

Enterprise-targeted AI scams

Organizations face a concentrated subset of AI scam types that exploit trust relationships and authorization workflows.

Executive impersonation via deepfake targets the highest-value transactions. The Arup incident -- where a finance employee was deceived by an all-deepfake video call including the apparent CFO, resulting in 15 separate transactions totaling $25.6 million -- remains the most prominent case. The fraud was discovered only when the employee later verified the requests directly with corporate headquarters.

Deepfake job candidates represent an emerging and persistent threat. The FBI, DOJ, and CISA have documented DPRK IT worker schemes affecting 136 or more US companies, with operatives earning $300,000+ per year and escalating to data extortion. Gartner predicts one in four candidate profiles could be fake by 2028.

AI-enhanced spear phishing at scale targets entire industry verticals. Brightside AI documented a campaign targeting 800 accounting firms with AI-generated emails referencing specific state registration details, achieving a 27% click rate -- far above the industry average for phishing campaigns.

AI scams by the numbers: 2024--2026 statistics

AI-enabled fraud surged 1,210% in 2025, with projected losses reaching $40 billion by 2027 as AI tools democratize social engineering at scale.

Table: AI scam and deepfake fraud statistics, 2024--2026. Key financial loss, attack volume, and prevalence metrics for AI-powered fraud from authoritative sources.

Metric | Value | Source | Year
Total US cybercrime losses reported to FBI IC3 | $16.6 billion (33% YoY increase) | FBI IC3 Annual Report | 2024
Projected generative AI-enabled fraud losses | $40 billion by 2027 (32% CAGR from $12.3B in 2023) | Deloitte Center for Financial Services | 2024
AI-enabled fraud growth vs. traditional fraud | 1,210% AI-enabled vs. 195% traditional | Pindrop via Infosecurity Magazine | 2026
Organizations affected by cyber-enabled fraud | 73% | WEF Global Cybersecurity Outlook 2026 | 2026
BEC losses reported to FBI IC3 | $2.77 billion across 21,442 incidents | FBI IC3 Annual Report | 2024
Global scam losses (all types) | $442 billion extracted; 57% of adults surveyed were scammed | GASA via ScamWatch HQ | 2025
Crypto scam losses | $14 billion; AI-enabled scams 4.5x more profitable | Chainalysis via PYMNTS | 2025
Deepfakes online | ~8 million (up from ~500,000 in 2023) | DeepStrike via Fortune | 2025
Deepfake scam instances (Q4 2025 alone) | 159,378 unique instances | Gen Threat Labs | 2026
Deepfake incident damages (Q2 2025 alone) | $350 million | Group-IB | 2026
People who encountered AI voice scams | 1 in 4 (77% of victims lost money) | McAfee survey of 7,000 people via NCOA | 2025
Contact center fraud exposure | $44.5 billion | Pindrop via Infosecurity Magazine | 2026
Leaders reporting rising AI-related vulnerabilities | 87% | WEF Global Cybersecurity Outlook 2026 | 2026
Companies reporting increased fraud losses (2024--2025) | Nearly 60% | Experian via Fortune | 2026

Note on data scope: The FBI IC3 figure ($16.6 billion) represents only complaints reported to US law enforcement and should be considered a floor. The GASA figure ($442 billion) represents a global estimate including unreported losses based on a survey of 46,000 adults across 42 countries. Both are accurate for their respective methodologies and scope.

These numbers map directly to organizational cybersecurity metrics that CISOs need for board-level reporting and investment justification.

AI scams in the enterprise: real-world case studies

Enterprise AI scam losses range from a single $25.6 million deepfake incident to billions of dollars in annual BEC losses, and cyber-enabled fraud has now overtaken ransomware as the top CEO concern.

The WEF Global Cybersecurity Outlook 2026 revealed a striking priority disconnect: cyber-enabled fraud overtook ransomware as the top concern for CEOs in 2026, yet ransomware remains the primary focus for most CISOs. Seventy-two percent of leaders identified AI fraud as a top operational challenge, with 87% reporting rising AI-related vulnerabilities.

Arup deepfake video call -- $25.6 million

In January 2024, a finance employee at Arup's Hong Kong office was invited to a video call with what appeared to be the company's CFO and several colleagues. Every participant was a deepfake, generated from publicly available conference footage. The employee authorized 15 separate wire transfers totaling $25.6 million (200 million HKD). The fraud was discovered only when the employee later verified with corporate headquarters through a separate channel.

Lesson learned: Video calls alone cannot be trusted for financial authorization. Organizations must implement out-of-band verification and dual-approval controls for high-value transactions.

DPRK deepfake job candidates

The FBI has documented DPRK IT worker schemes affecting 136 or more US companies. Operatives use deepfake technology to pass video interviews, then earn $300,000+ per year while funneling revenue to North Korea's weapons programs. Some have escalated to data extortion, threatening to release stolen proprietary information. Gartner projects that one in four candidate profiles could be fake by 2028.

Check Point "Truman Show" investment fraud

In January 2026, Check Point researchers exposed an operation using 90 AI-generated "experts" to populate controlled messaging groups. Victims were directed to install a mobile application -- available on official app stores -- that displayed server-controlled trading data showing fabricated returns. The attackers created an entirely synthetic reality to maintain the fraud.

Industry-specific targeting patterns

Different industries face distinct AI scam profiles. Financial services organizations see concentrated BEC, wire fraud, and contact center fraud. One US healthcare provider reported that more than 50% of inbound traffic consisted of bot-driven attacks. Major retailers report receiving more than 1,000 AI-generated scam calls per day. Technology and IT staffing firms face the greatest exposure to deepfake job candidates.

According to Cyble, 30% of high-impact corporate impersonation incidents in 2025 involved deepfakes -- confirming that AI-generated synthetic media has moved from a novelty to a core component of enterprise-targeted fraud. Effective incident response planning must now account for these AI-enabled attack vectors.

Detecting and preventing AI scams

Enterprise AI scam defense requires layered detection spanning behavioral analytics, identity monitoring, and network analysis because AI-generated content increasingly bypasses content-based security controls.

Here is an ordered framework for enterprise AI scam defense:

  1. Deploy behavioral analytics and NDR -- Network detection and response identifies anomalous network patterns associated with AI scam infrastructure, including command-and-control communications, voice synthesis traffic, and unusual data flows.
  2. Implement identity threat detection -- Identity threat detection and response (ITDR) flags anomalous authentication patterns, unusual access requests, and behavioral deviations that indicate compromised or synthetic identities.
  3. Require layered verification controls -- Mandate dual approval for financial transactions through separate communication channels. Establish pre-shared verification phrases for emergency communications. Verify all high-value requests through out-of-band channels (see the sketch after this list).
  4. Upgrade security awareness training -- Shift training focus from spotting grammatical errors to recognizing psychological manipulation, urgency framing, and unusual request contexts. IBM's research on AI social engineering confirms that traditional "spot the typo" training is now ineffective.
  5. Deploy AI-enhanced email security -- Use ML-based email filtering that analyzes behavioral patterns rather than content signatures alone. Microsoft Cyber Signals Issue 9 details how behavioral analysis catches what signature-based tools miss.
  6. Implement multi-factor authentication everywhere -- Ensure MFA covers all access points, with phishing-resistant methods (FIDO2, hardware tokens) prioritized for high-privilege accounts.
  7. Accept deepfake detection limitations -- Content-based deepfake detection is increasingly unreliable as generation quality improves. Gartner predicts that by 2026, 30% of enterprises will consider identity verification solutions unreliable in isolation. Behavioral threat detection provides a critical complementary layer.
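
The verification controls in step 3 can be enforced in code rather than left to policy documents. Below is a minimal sketch, assuming a hypothetical payment-approval service; the class, threshold, and channel names are illustrative, not a prescribed implementation.

    # Illustrative sketch: dual approval plus out-of-band confirmation for high-value payments.
    from dataclasses import dataclass, field

    HIGH_VALUE_THRESHOLD = 50_000  # example policy threshold; set per organization

    @dataclass
    class PaymentRequest:
        amount: float
        beneficiary: str
        requested_by: str
        request_channel: str                           # e.g. "email", "video_call", "phone"
        approvals: dict = field(default_factory=dict)  # approver -> channel used to confirm

        def record_approval(self, approver: str, channel: str) -> None:
            self.approvals[approver] = channel

        def is_authorized(self) -> bool:
            """High-value requests need two approvers other than the requester,
            at least one confirming over a channel different from the request's."""
            if self.amount < HIGH_VALUE_THRESHOLD:
                return len(self.approvals) >= 1
            other_approvers = set(self.approvals) - {self.requested_by}
            out_of_band = any(ch != self.request_channel for ch in self.approvals.values())
            return len(other_approvers) >= 2 and out_of_band

    # A request that arrives on a video call (the Arup scenario) stays blocked until
    # two other people confirm it, at least one of them through a separate channel.
    req = PaymentRequest(25_600_000, "Vendor Ltd", "cfo@example.com", "video_call")
    req.record_approval("controller@example.com", "video_call")
    print(req.is_authorized())   # False: only one approver, no out-of-band confirmation
    req.record_approval("treasurer@example.com", "callback_to_known_number")
    print(req.is_authorized())   # True: two approvers, one confirmed out-of-band

The key design choice is that no single channel -- not even a live video call -- is sufficient on its own to authorize a transfer above the threshold.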

MITRE ATT&CK mapping for AI scam threats

Mapping AI scam techniques to the MITRE ATT&CK framework helps GRC teams and security architects integrate AI fraud risks into existing threat models.

Table: MITRE ATT&CK techniques relevant to AI-powered scams. Mapping of AI scam attack methods to MITRE ATT&CK techniques with detection guidance.

Tactic | Technique ID | Technique name | AI scam relevance | Detection approach
Initial Access | T1566 | Phishing | AI-generated spear phishing emails (T1566.001, T1566.002), AI phishing via messaging services (T1566.003), AI voice cloning vishing (T1566.004) | Behavioral email analysis, NDR anomaly detection, voice biometric monitoring
Reconnaissance | T1598 | Phishing for Information | AI-enhanced information gathering via phishing services (T1598.001), links (T1598.003), and voice calls (T1598.004) | Network traffic analysis, identity monitoring, behavioral analytics
Resource Development | T1588.007 | Obtain Capabilities: Artificial Intelligence | Acquisition of dark LLMs, voice cloning tools, deepfake generators, and autonomous scam agents | Threat intelligence, dark web monitoring, AI tool marketplace tracking
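
For teams that track detection coverage programmatically, the same mapping can be expressed as a small machine-readable structure. The sketch below uses illustrative field names rather than a standard schema, with a simple helper that reports techniques lacking a deployed control.

    # Illustrative coverage check against the ATT&CK techniques listed in the table above.
    AI_SCAM_ATTACK_MAP = {
        "T1566":     {"name": "Phishing",
                      "detections": ["behavioral email analysis", "NDR anomaly detection",
                                     "voice biometric monitoring"]},
        "T1598":     {"name": "Phishing for Information",
                      "detections": ["network traffic analysis", "identity monitoring",
                                     "behavioral analytics"]},
        "T1588.007": {"name": "Obtain Capabilities: Artificial Intelligence",
                      "detections": ["threat intelligence", "dark web monitoring"]},
    }

    def coverage_gaps(deployed_controls: set) -> dict:
        """Return techniques from the map with no matching deployed detection control."""
        return {tid: entry["name"] for tid, entry in AI_SCAM_ATTACK_MAP.items()
                if not deployed_controls & set(entry["detections"])}

    print(coverage_gaps({"NDR anomaly detection", "identity monitoring"}))
    # -> {'T1588.007': 'Obtain Capabilities: Artificial Intelligence'}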

Regulatory landscape for AI fraud

The regulatory environment for AI fraud is tightening rapidly.

  • NIST Cyber AI Profile (IR 8596) -- Published December 16, 2025, this draft framework addresses three areas directly applicable to AI scam defense: securing AI system components, conducting AI-enabled cyber defense, and thwarting AI-enabled cyberattacks (NIST IR 8596).
  • FTC enforcement actions -- The FTC has pursued multiple actions against AI-powered fraud schemes, including cases involving fake AI investment tools that defrauded consumers of at least $25 million.
  • White House AI executive order -- The December 2025 executive order on "Ensuring a National Policy Framework for Artificial Intelligence" establishes a federal AI regulatory framework and creates an AI Litigation Task Force.
  • State AI laws -- Multiple state laws took effect January 1, 2026, including regulations in California and Texas. Colorado S.B. 24-205, addressing AI governance obligations, takes effect in June 2026.

Modern approaches to AI scam defense

The industry is converging on a defense paradigm that uses AI to counter AI. Current approaches include behavioral analytics, identity threat detection, network traffic analysis, AI-powered email security, deepfake detection tools, and evolved security awareness training platforms.

Several trends are shaping the landscape. Unified detection across network, cloud, identity, and SaaS surfaces is replacing siloed tools that only monitor a single attack surface. Real-time interactive deepfakes present challenges that static content analysis cannot solve. And agentic AI -- autonomous AI systems acting on behalf of users -- introduces new fraud vectors where machines manipulate other machines.

Investment signals confirm the urgency. Adaptive Security raised $146.5 million in total funding including OpenAI's first cybersecurity investment, focused specifically on AI-powered social engineering defense. Ninety-four percent of leaders surveyed by WEF expect AI to be the most significant cybersecurity force in 2026.

Key dates for defenders to track: the FTC policy statement deadline on March 11, 2026; the expected FBI IC3 2025 annual report in April 2026; and Colorado S.B. 24-205 implementation in June 2026.

How Vectra AI approaches AI scam detection

Vectra AI's approach centers on detecting the network and identity behaviors that indicate AI-powered scam campaigns have progressed beyond the initial social engineering stage. By monitoring for anomalous command-and-control communications, unusual authentication patterns, and data exfiltration flows associated with AI fraud infrastructure, Attack Signal Intelligence fills the gap where content-based defenses and human judgment increasingly fail against AI-generated attacks. This maps to the assume-compromise philosophy: finding attackers already inside the environment is more reliable than preventing every AI-enhanced social engineering attempt at the perimeter.
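
As a generic illustration of the behavioral principle described above -- baselining an account's normal activity and flagging deviations -- the sketch below is a deliberately simplified example, not a description of Vectra AI's detection logic; the class and event fields are hypothetical.

    # Simplified behavioral baseline: flag authentications outside an account's observed history.
    from collections import defaultdict

    class IdentityBaseline:
        def __init__(self):
            self.seen = defaultdict(set)  # account -> set of (country, application) pairs

        def observe(self, account: str, country: str, app: str) -> None:
            self.seen[account].add((country, app))

        def is_anomalous(self, account: str, country: str, app: str) -> bool:
            return (country, app) not in self.seen[account]

    baseline = IdentityBaseline()
    for _ in range(30):  # learning period: finance user always logs in from one place and app
        baseline.observe("finance-user", "GB", "erp")

    # A first-seen login to the payment system from an unfamiliar country gets flagged.
    print(baseline.is_anomalous("finance-user", "RU", "wire-transfer-portal"))  # True
    print(baseline.is_anomalous("finance-user", "GB", "erp"))                   # False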

Conclusion

AI scams represent the fastest-growing fraud category in cybersecurity, driven by tools that are free, accessible, and anonymous. The data is unambiguous: 1,210% growth in AI-enabled fraud, $40 billion in projected losses by 2027, and 73% of organizations already affected.

The enterprises that defend successfully against this threat share common characteristics. They deploy layered detection across network, identity, and email surfaces rather than relying on any single control. They implement dual-approval financial workflows that do not trust any single communication channel. They train their teams to recognize psychological manipulation patterns rather than grammatical errors. And they accept the assume-compromise reality: AI-generated social engineering will sometimes succeed, making rapid detection and response as critical as prevention.

For security teams evaluating their readiness, the framework is clear. Map your AI scam exposure against the MITRE ATT&CK techniques documented above. Assess whether your current detection stack covers network behavioral anomalies, identity threats, and email-based attacks. And ensure your incident response playbooks account for deepfake, voice cloning, and AI-generated phishing scenarios.

Explore how Vectra AI's platform detects the network and identity behaviors that indicate AI-powered scam campaigns -- catching what content-based defenses miss.

FAQs

How can you tell if a video is a deepfake?

What should you do if you are targeted by an AI scam?

Can AI scam you on the phone?

What is pig butchering?

How do AI romance scams work?

What is synthetic identity fraud?

Can AI create fake websites?