In 2025, a single social engineering attack against a cryptocurrency exchange resulted in $1.5 billion in losses — the largest theft in crypto history. The attacker did not exploit a software vulnerability. They impersonated a trusted open-source contributor, earned a developer's trust, and walked through the front door. This is the reality of modern social engineering. According to Unit 42's 2025 Global Incident Response Report, 36% of all incidents began with social engineering as the initial access vector. The Verizon 2025 Data Breach Investigations Report found that 60% of breaches involved the human element. Attackers are not breaking in — they are being let in.
This guide covers what social engineering is, how it works, the AI-powered techniques redefining the threat in 2025 and 2026, real-world case studies, and how organizations can detect and respond to attacks that bypass every technical control.
Social engineering is the use of psychological manipulation to trick people into performing actions or divulging confidential information, exploiting human trust rather than technical vulnerabilities. In cybersecurity, the term encompasses every attack that targets human decision-making — from a phishing email impersonating an executive to a phone call from a fake IT help desk. The NIST glossary defines it as "an attempt to trick someone into revealing information that can be used to attack systems or networks."
What makes social engineering uniquely dangerous is its position as an umbrella category. It spans digital attacks like phishing, voice-based vishing and text-based smishing, as well as physical techniques like tailgating. It also includes hybrid attacks that chain multiple channels together — an approach that has become the norm in 2025 and 2026.
The numbers are stark. Unit 42's 2025 incident response data shows that social engineering was the initial access vector in 36% of all incidents they investigated. Verizon's 2025 DBIR found the human element present in 60% of breaches. These are not legacy statistics from a decade ago. They reflect the current threat landscape, where even organizations with mature security programs remain vulnerable because their people are the attack surface.
Social engineering is distinct from hacking in an important way. Hacking exploits technical vulnerabilities in systems and software. Social engineering exploits trust, authority, fear, and urgency in people. In practice, most modern attacks combine both. An attacker uses social engineering to obtain credentials, then uses technical exploitation to move laterally. This is why organizations need both prevention and detection — and why insider threats are deeply intertwined with social engineering defense.
Social engineering attacks succeed because they exploit well-documented psychological principles. Robert Cialdini's six principles of influence provide a useful framework for understanding why these attacks work.
These triggers bypass rational decision-making. The Verizon 2025 DBIR found that the median time from a phishing email landing in an inbox to a user clicking the malicious link is just 21 seconds — with data entry beginning 28 seconds later. Technical controls alone cannot compensate for decisions made in under half a minute.
Social engineering attacks follow a predictable lifecycle. Understanding this lifecycle is critical for defenders because it reveals multiple points where attacks can be detected and interrupted.

Consider how the Scattered Spider group operationalized this lifecycle in their 2025 retail campaign. During reconnaissance, they identified IT help desk procedures at major UK retailers including M&S, Co-op, and Harrods. They developed pretexts as employees needing password resets. They engaged help desk staff by phone, using employee details scraped from LinkedIn and corporate directories. The exploitation phase involved obtaining password resets and MFA enrollment changes. From there, they moved laterally through corporate networks, ultimately deploying ransomware with an estimated $300 million in combined impact.
Beyond the lifecycle, attackers employ specific tactical patterns that defenders should recognize.
Authority exploitation remains the most effective trigger. Attackers impersonate C-suite executives, IT departments, legal counsel, and regulatory bodies. Urgency creation follows closely — fabricated deadlines, fake security alerts, and time-limited offers all force targets into rapid action without verification.
Fear appeals have grown more sophisticated. Rather than crude threats, modern attackers reference real security incidents, genuine compliance deadlines, or actual organizational changes to make their scenarios believable.
AI has fundamentally amplified every one of these triggers. Where a human attacker could craft a few dozen personalized pretexts per day, AI-powered tools generate thousands of contextually relevant, grammatically perfect messages in minutes. This shift from craft to industrial scale is the defining change in the 2025–2026 threat landscape.
Social engineering encompasses more than a dozen distinct attack types. Each exploits different trust vectors and delivery channels. The following catalog covers the major categories, with links to dedicated deep-dive pages where available.
Phishing is the most prevalent form of social engineering. It uses deceptive emails, messages, or websites to trick victims into revealing credentials or installing malware. For a comprehensive breakdown, see phishing.
Spear phishing targets specific individuals or organizations with personalized content derived from reconnaissance. See spear phishing for detailed coverage.
Vishing (voice phishing) uses phone calls to manipulate targets. Attackers impersonate IT help desks, bank representatives, or executives to extract credentials or authorize actions. The 2026 CarGurus breach demonstrated vishing's potency — a single voice call yielded SSO credentials that led to 12.4 million records being exfiltrated. Vishing has been professionalized through organized groups recruiting callers at $500 to $1,000 per call (The Hacker News, 2026). For more on this attack type, see vishing.
Smishing (SMS phishing) delivers social engineering via text messages containing malicious links or urgent prompts. The smaller screens of mobile devices make URL inspection harder, which increases click rates. See smishing for a deeper look.
Pretexting involves creating a fabricated scenario to trick a target into providing information or access. Unlike phishing, which often relies on a single message, pretexting typically involves sustained interaction and relationship building. The Bybit heist (2025) was a pretexting operation where the attacker spent 20 days posing as a trusted contributor before executing the theft.
Baiting offers something enticing — a free USB drive, a download, exclusive content — to lure victims into compromising their systems. Digital baiting now includes fake software updates and AI tool installers targeting developers.
Tailgating and piggybacking are physical social engineering techniques where an unauthorized person follows an authorized individual through a secure entrance. These remain relevant in corporate environments, particularly data centers and secure facilities.
Quid pro quo attacks offer a service or benefit in exchange for information. A common example involves attackers posing as technical support offering to fix a problem in exchange for login credentials.
Watering hole attacks compromise websites frequently visited by the target group, turning trusted resources into infection vectors.
Business email compromise (BEC) involves impersonating or compromising business email accounts to authorize fraudulent transfers or redirect payments. The FBI IC3's 2024 report recorded $2.8 billion in BEC losses and 193,407 phishing and spoofing complaints in 2024 alone.
Scareware uses fake security alerts to convince victims their systems are infected, driving them to install malicious software or pay for unnecessary services.
Caption: Common social engineering attack types and how to identify them.
AI has transformed social engineering from a craft practiced by skilled individuals into a scalable industry. This section covers the techniques that most published guidance has yet to address adequately, and the ones that security teams need to understand right now.
AI-generated phishing at scale. Research indicates that 82.6% of phishing emails now incorporate AI-generated content (2025). AI eliminates the grammatical errors and awkward phrasing that once served as reliable detection signals. The Anti-Phishing Working Group recorded over one million phishing attacks in Q1 2025 alone. AI-driven phishing is no longer an emerging trend — it is the baseline.
Phishing-as-a-Service (PhaaS). Subscription platforms costing approximately $200 per month provide AI-generated templates, real-time credential interception via adversary-in-the-middle (AiTM) techniques, and custom phishing kits that sync with live voice calls to bypass multi-factor authentication. Only phishing-resistant authentication methods (FIDO2/passkeys) are effective against these coordinated attacks.
ClickFix campaigns surged 517% in 2025, making them one of the fastest-growing social engineering techniques in the current landscape (Cloud Range Cyber, 2026). The technique tricks users into copying and executing malicious commands, typically by displaying fake browser error messages or update prompts.
In 2026, ClickFix evolved to use DNS-based payload delivery (The Hacker News, 2026), making detection significantly harder. A developer-targeting variant called InstallFix mimics AI tool installers, with at least 20 campaigns targeting AI tools observed in February and March 2026.
From a defensive perspective, organizations should monitor for anomalous DNS TXT record queries and implement endpoint behavioral analysis that detects clipboard-to-command-line execution patterns. The key detection signal is not the social engineering itself but the post-compromise behavior that follows.
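As a minimal sketch of the second signal, the clipboard-to-command-line pattern can be hunted in normalized process-creation telemetry. The field names (`parent_image`, `image`, `command_line`) and the lists of suspicious parents and shells below are illustrative assumptions, not a specific EDR schema; tune both to your environment.

```python
import re

# Hypothetical normalized process-creation events; field names are
# assumptions, not a specific EDR or Sysmon schema.
SUSPICIOUS_PARENTS = {"explorer.exe", "winword.exe", "chrome.exe", "msedge.exe"}
SHELLS = {"powershell.exe", "cmd.exe", "mshta.exe", "wscript.exe"}

# Encoded or download-cradle argument markers often seen after a
# ClickFix-style "paste this command to fix the error" lure.
ENCODED_ARGS = re.compile(
    r"(-enc\b|-encodedcommand\b|frombase64string|iex\b|invoke-expression)",
    re.IGNORECASE,
)

def flag_clickfix_candidates(events):
    """Return events where a browser or Office parent spawns a shell with
    encoded or obfuscated arguments - the post-paste execution pattern
    that typically follows a successful ClickFix lure."""
    hits = []
    for event in events:
        parent = event.get("parent_image", "").lower()
        child = event.get("image", "").lower()
        cmdline = event.get("command_line", "")
        if parent in SUSPICIOUS_PARENTS and child in SHELLS and ENCODED_ARGS.search(cmdline):
            hits.append(event)
    return hits
```

A rule like this is deliberately narrow: it ignores the lure itself and keys on the execution behavior, which survives changes in the social engineering wrapper.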
Deepfake files grew from 500,000 in 2023 to over eight million in 2025 (Cloud Range Cyber, 2026). Voice cloning technology now requires just three seconds of audio to produce a convincing replica, and research indicates that 70% of people cannot distinguish cloned voices from real ones. Industry projections estimate deepfake-related losses will reach $40 billion by 2027.
The Arup $25 million deepfake case (2024) illustrates the threat. Attackers created deepfake video representations of multiple executives during a live video conference call, convincing a finance employee to authorize wire transfers. Video and voice are no longer reliable identity confirmation methods for high-value transactions.
AI scams surged 1,210% in 2025. The vishing-as-a-service model has professionalized these attacks further. The SLH supergroup — formed from the merger of Scattered Spider, Lapsus$, and ShinyHunters — actively recruits vishers at $500 to $1,000 per call (The Hacker News, 2026). These callers use custom phishing kits synced with live conversations to intercept MFA tokens in real time.
Agentic AI social engineering represents the next frontier. Security researchers predict that autonomous AI agents will run full phishing campaigns — from target selection through credential harvesting — without human input by late 2026. The SecurityWeek Cyber Insights 2026 analysis details how these autonomous capabilities are expected to reshape the threat landscape.

The following case studies from 2024 through 2026 demonstrate how social engineering techniques translate into real-world impact. Each incident carries specific defensive lessons.
Bybit cryptocurrency heist — $1.5 billion (February 2025). North Korea's Lazarus Group socially engineered a Safe{Wallet} developer by posing as a trusted open-source contributor (SecurityWeek, 2025). The attacker maintained access for 20 days before manipulating a multisignature wallet transaction. Chainalysis confirmed this as the largest cryptocurrency theft in history. The lesson: supply chain trust must be continuously verified, and contributor access requires behavioral monitoring.
Scattered Spider / SLH retail campaign — ~$300 million (2025). The group targeted M&S, Co-op, and Harrods through IT help desk impersonation, obtaining password resets and MFA changes that led to ransomware deployment (CmdZero, 2025). The FBI issued warnings about the group expanding to target airlines (The Hacker News, 2025). The lesson: help desk procedures need out-of-band identity verification for all password resets and MFA changes, as recommended by CISA advisory AA23-320A.
CarGurus vishing breach — 12.4 million records (January 2026). ShinyHunters used voice phishing to obtain SSO credentials from a CarGurus employee, exfiltrating 12.4 million customer records (BleepingComputer, 2026). The lesson: a single compromised credential from a vishing call can cascade into a massive data breach.
Coinbase insider bribery (2025). Criminals bribed overseas support staff to leak customer data — demonstrating that social engineering extends beyond deception to include financial inducement. The lesson: insider threat monitoring and access controls must cover outsourced and offshore teams.
Signal and WhatsApp diplomatic targeting (2026). Russia-linked actors compromised secure messaging accounts of diplomats and journalists, exploiting trust in encrypted platforms. The lesson: even secure channels are vulnerable when account access relies on social engineering.
The pattern across these incidents is clear. Help desk procedures need out-of-band identity verification. Video and voice are no longer reliable identity confirmation methods. Insider threat detection is part of the social engineering defense model. And supply chain trust must be continuously verified — not assumed.
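The help desk lesson can be encoded as policy logic rather than left to caller judgment. Below is a hedged sketch of such a gate: the action names, verification-channel labels, and the notion of an "organization-initiated" channel are illustrative assumptions, not language from CISA advisory AA23-320A itself.

```python
# Hypothetical help-desk policy gate. Action and channel names are
# illustrative, not taken from any specific ticketing system.
HIGH_RISK_ACTIONS = {"password_reset", "mfa_reset", "mfa_enrollment_change"}

# Only channels the ORGANIZATION initiates count as out-of-band;
# caller ID and knowledge-based answers are spoofable or scrapeable.
TRUSTED_CHANNELS = {"callback_to_hr_number", "in_person"}

def requires_out_of_band_verification(action: str, caller_verified_via: set) -> bool:
    """Return True when the request must be blocked until the caller's
    identity is confirmed over an organization-initiated channel."""
    if action not in HIGH_RISK_ACTIONS:
        return False
    return not (TRUSTED_CHANNELS & caller_verified_via)
```

The design choice is that verification strength is decided by the channel, not the caller's apparent knowledge: Scattered Spider's pretexts worked precisely because scraped employee details satisfied knowledge-based checks.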
The financial scale is unprecedented. The United States lost $16.6 billion to social engineering in 2024, a 33% increase year-over-year. The average global cost of a data breach reached $4.88 million in 2024 (Ponemon Institute). BEC alone caused $2.8 billion in reported losses in 2024 (FBI IC3).
Caption: High-profile social engineering attacks and their defensive takeaways.
Most cybersecurity content on social engineering focuses exclusively on prevention — awareness training, email filters, and policies. Prevention matters, but it is insufficient. The assume-compromise philosophy recognizes that skilled attackers will eventually succeed in manipulating someone. The question becomes: how quickly can you detect and contain the post-compromise activity?
For employees, social engineering red flags include unexpected urgency, authority claims from unknown contacts, unusual requests that bypass normal procedures, and resistance to verification. Training people to recognize these signals has value, but the data on effectiveness is mixed. Training vendors claim that security awareness programs can reduce the phish-prone rate from approximately 30% to under 5%. However, the Verizon 2025 DBIR — an independent, multi-source study — found that phishing click rates remained "unaffected by training." The reality likely sits between these positions. Training is one layer in a defense-in-depth strategy, not a standalone solution.
For security teams, the critical detection signals come after a successful social engineering attack. The Verizon 2025 DBIR found that 85% of social engineering breaches result in credential theft. This means the post-compromise indicators that matter most include anomalous access patterns, unusual identity threat detection and response signals, impossible travel between locations, abnormal privilege escalation, and unexpected lateral movement across the network.
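One of those signals, impossible travel, reduces to simple geometry: flag consecutive logins whose implied speed exceeds anything a traveller could achieve. The sketch below assumes login events carry a timestamp and geolocated coordinates; the 900 km/h threshold (roughly commercial-flight speed) is an illustrative tuning choice.

```python
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass
class Login:
    user: str
    ts: datetime
    lat: float
    lon: float

MAX_PLAUSIBLE_KMH = 900.0  # roughly commercial-flight speed; tune per program

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def impossible_travel(a: Login, b: Login) -> bool:
    """Flag a pair of logins whose implied speed is physically implausible -
    a classic indicator that a stolen credential is in use elsewhere."""
    hours = abs((b.ts - a.ts).total_seconds()) / 3600.0
    if hours == 0:
        return True  # simultaneous logins from two places
    return haversine_km(a.lat, a.lon, b.lat, b.lon) / hours > MAX_PLAUSIBLE_KMH
```

In production this check is usually refined with VPN egress allowlists and per-user travel baselines, since naive geolocation produces false positives; the core speed test stays the same.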
Organizations seeking additional guidance should review CISA's guidance on avoiding social engineering and phishing attacks.
Social engineering maps to specific controls across major compliance and security frameworks. GRC teams can use these mappings to structure their programs and provide audit evidence.
Caption: Social engineering controls across major compliance frameworks.
The cybersecurity industry is converging on a multi-layered approach to social engineering defense. Current solutions include behavioral analytics platforms that detect post-compromise activity, ITDR tools that monitor for credential misuse, zero trust architectures that limit blast radius, and phishing-resistant authentication that eliminates the credential theft vector entirely.
Emerging trends from RSAC 2026 point toward behavioral science integration — applying psychological research to improve both training and detection. The Humanix innovation sandbox finalist demonstrated a people-oriented approach to social engineering detection that treats human behavior as a data source rather than a weakness (SecurityWeek, 2026). The signal-over-noise imperative is also gaining traction. Organizations are moving from alert fatigue to actionable threat signals, prioritizing the behavioral indicators that reveal real attacks rather than flooding analysts with low-fidelity alerts.
Social engineering attacks that succeed result in anomalous identity behavior, lateral movement, and privilege escalation — exactly the post-compromise signals that Attack Signal Intelligence is designed to surface. Vectra AI's assume-compromise philosophy treats successful social engineering as inevitable and focuses on reducing dwell time through behavioral detection across identity, network, and cloud surfaces. The goal is not to prevent every social engineering attempt but to detect the attacker activity that follows within minutes rather than months.
The social engineering threat landscape is evolving faster than at any point in cybersecurity history. Over the next 12 to 24 months, organizations should prepare for several critical developments.
Agentic AI will automate full attack chains. Security researchers predict that by late 2026, autonomous AI agents will execute complete social engineering campaigns — from target selection and OSINT gathering through credential harvesting and initial exploitation — without human involvement. This represents a fundamental shift from tool-assisted human attacks to fully autonomous operations.
Deepfake capabilities will become commodity tools. With projected losses reaching $40 billion by 2027 and deepfake files already exceeding eight million (2025), the technology is rapidly democratizing. Organizations should implement multi-channel verification for any transaction involving video or voice confirmation, and invest in detection tools that analyze media authenticity.
Regulatory pressure will intensify. NIS2 enforcement across the European Union is creating new incident reporting obligations that directly affect social engineering response timelines. Germany's BSI registration deadline of March 2026 signals broader compliance expectations. Organizations should map their social engineering defenses to framework controls now rather than scrambling to comply later.
The vishing-as-a-service economy will mature. The SLH supergroup's recruitment model demonstrates that social engineering is following the same as-a-service trajectory as ransomware. Expect professionalized call centers, specialized phishing kit developers, and tiered service offerings to become the norm. Help desk hardening and out-of-band verification procedures are the most direct countermeasures.
Identity will become the primary battleground. With 85% of social engineering breaches resulting in stolen credentials (Verizon 2025 DBIR), the post-compromise identity layer is where detection matters most. Organizations should prioritize ITDR capabilities, behavioral analytics, and phishing-resistant authentication as their top social engineering defense investments for 2026 and 2027.
Social engineering is not a new problem, but it is a fundamentally transformed one. AI has industrialized deception, making it faster, cheaper, and harder to distinguish from legitimate communication. The case studies from 2024 through 2026 demonstrate that social engineering now causes billion-dollar losses, targeting everyone from help desk staff to C-suite executives to open-source developers.
Prevention remains important — phishing-resistant authentication, out-of-band verification, and awareness training all reduce the attack surface. But the organizations best positioned to survive social engineering attacks in 2025 and 2026 are those that have embraced the assume-compromise mindset. They invest in behavioral analytics, identity monitoring, and post-compromise detection because they understand that someone will eventually be tricked.
The question is not whether social engineering will target your organization. It is whether your detection and response capabilities will find the attacker before they find what they came for.
To learn how behavioral detection and Attack Signal Intelligence surface the post-compromise signals that follow social engineering attacks, explore the Vectra AI platform.
Yes. Social engineering is a category of cyber attack that targets human psychology rather than technical systems. It is recognized as one of the leading initial access vectors in cybersecurity, accounting for 36% of incidents according to Unit 42's 2025 Global Incident Response Report. Unlike traditional hacking, which exploits software or hardware vulnerabilities, social engineering exploits human decision-making — trust, fear, urgency, and authority. Major frameworks including MITRE ATT&CK classify social engineering techniques as attack methods, with dedicated technique IDs for phishing (T1566), phishing for information (T1598), and user execution (T1204). The FBI IC3's 2024 report recorded over 193,000 phishing and spoofing complaints, reinforcing social engineering's status as one of the most active categories of cyber attack globally.
Social engineering manipulates people into revealing information or performing actions, while hacking exploits technical vulnerabilities in systems and software. In practice, most modern attacks combine both approaches. An attacker may use a vishing call to obtain help desk credentials (social engineering), then use those credentials to move laterally through the network and deploy ransomware (technical exploitation). The Scattered Spider retail campaign of 2025 exemplifies this hybrid approach. The group used phone-based social engineering to compromise help desk accounts, then leveraged technical tools for lateral movement and ransomware deployment, causing an estimated $300 million in damages. The distinction matters for defense because it means organizations need both human-layer controls (training, verification procedures) and technical-layer controls (behavioral analytics, endpoint detection, network monitoring).
Yes. Phishing is the most common type of social engineering attack. It uses deceptive emails, messages, or websites to trick victims into revealing credentials, installing malware, or performing unauthorized actions. Phishing accounted for 193,407 complaints to the FBI IC3 in 2024, and the Verizon 2025 DBIR found that phishing or pretexting was involved in 57% of social engineering breaches by external actors. While phishing is the most prevalent form, social engineering is a broader category that also includes vishing (voice calls), smishing (SMS messages), pretexting, baiting, tailgating, and other techniques. Understanding phishing as a subset of social engineering is important because attackers increasingly chain multiple social engineering methods together — for example, sending a phishing email followed by a vishing call to create urgency around clicking a link.
Phishing is the most commonly used type of social engineering, with 193,407 complaints reported to the FBI IC3 in 2024 and over one million attacks recorded by the Anti-Phishing Working Group in Q1 2025 alone. Beyond phishing, the most commonly observed types include pretexting (including business email compromise, which caused $2.8 billion in losses in 2024), vishing (voice phishing, which was used in the 2026 CarGurus breach affecting 12.4 million records), and baiting (offering enticing downloads or physical media). The relative prevalence of each type shifts with attacker innovation. In 2025 and 2026, vishing has seen significant growth due to AI-powered voice cloning and professionalized vishing-as-a-service operations, while ClickFix campaigns — which trick users into executing malicious commands — surged 517% in 2025.
Social engineering is the practice of tricking people into sharing confidential information or taking actions that compromise security. It relies on human psychology — trust, fear, urgency — rather than technical exploits. A simple example: an attacker calls an employee pretending to be from the IT department, claims there is a security emergency, and asks the employee to share their password. The employee complies because the caller sounds authoritative and the situation feels urgent. Social engineering works because humans are wired to respond to authority, help others, and act quickly under pressure. Attackers exploit these natural tendencies across every communication channel — email, phone, text messages, social media, and even in-person interactions. In cybersecurity, social engineering is considered one of the most dangerous attack categories because even the most sophisticated technical defenses cannot prevent an authorized user from voluntarily granting access.
The financial impact of social engineering is substantial and accelerating. The United States lost $16.6 billion to social engineering in 2024, a 33% increase from $12.5 billion in 2023 (FBI IC3). The average global cost of a data breach reached $4.88 million in 2024 (Ponemon Institute). BEC alone caused $2.8 billion in reported losses in 2024 (FBI IC3). Individual incidents can be catastrophic — the Bybit cryptocurrency heist in February 2025 resulted in $1.5 billion in losses from a single social engineering operation targeting one developer. The Scattered Spider retail campaign against M&S, Co-op, and Harrods generated an estimated $300 million in combined impact. Beyond direct financial losses, organizations face regulatory fines, reputational damage, customer churn, and operational disruption. Industry projections estimate deepfake-related losses alone will reach $40 billion globally by 2027.
Social engineering attacks rely on psychological principles that bypass rational decision-making. The core principles include authority (impersonating trusted figures like executives, IT staff, or government officials), urgency (creating artificial time pressure that forces quick action), social proof (claiming others have already complied with the request), scarcity (offering limited-time access or threatening to revoke privileges), and reciprocity (providing something of value before making a request). These triggers exploit well-documented cognitive biases. The Verizon 2025 DBIR found that the median time from a phishing email landing to a user clicking is just 21 seconds — demonstrating how quickly psychological triggers override careful evaluation. Modern attackers amplify these principles with AI. Voice cloning adds authority by mimicking known colleagues. AI-generated messages create personalized urgency at scale. Deepfake video calls provide visual social proof. The combination of psychological expertise and AI tooling makes social engineering attacks in 2025 and 2026 significantly more effective than their predecessors.