Cloud detection and response (CDR) is the runtime discipline that finds and stops active threats inside cloud environments — the layer between posture management and incident response where most modern breaches actually unfold. It is also a category that analysts and vendors are still arguing about. A May 2024 Forrester analysis called CDR a feature, not a market. Two years later, Forrester's Q1 2026 Wave for Cloud-Native Application Protection Solutions evaluated 14 vendors with material CDR coverage. Whatever you call it, the capability is now central to how regulated enterprises detect identity abuse, control-plane attacks, and ephemeral-workload compromise — the breaches that traditional endpoint detection and response and SIEM tooling were never designed to see.
This guide explains what CDR is, how it works, how it differs from EDR, NDR, XDR, CSPM, CWPP, CNAPP, and SIEM, and how it maps to NIS2, NIST SP 800-61 Revision 3, and UK GDPR. We use real cloud breaches — Capital One, Snowflake, UNC6426, the European Commission — to show what CDR signals look like in practice, not in marketing.
Cloud detection and response (CDR) is a continuous-monitoring discipline that detects, investigates, and responds to active threats across the cloud control, data, and management planes. It ingests cloud-native telemetry — CloudTrail, Azure Activity Logs, GCP Audit Logs, runtime process events — and applies behavioral analytics to surface attacks that traditional EDR, SIEM, and posture tools miss.
Cloud needs a new approach because the assumptions that built endpoint security no longer hold. Per Sysdig's 2025 Cloud-Native Security and Usage Report, 60% of containers live less than a minute — there is often no persistent host to put an agent on, and forensic evidence vanishes faster than batch-mode log aggregators can preserve it. Identity has displaced the network as the dominant initial-access vector: roughly 83% of cloud breaches in 2026 begin with identity compromise, and industry threat intelligence research reported a 266% year-over-year surge in cloud-conscious intrusions for 2026. The financial picture matches the operational one: the Ponemon Institute's Cost of a Data Breach research series has consistently shown multi-cloud breaches running roughly $1M higher than on-premises incidents — an average near $5.05M in the most recent editions.
Cloud security monitoring — the broader practice of watching cloud activity for risk and policy violations — overlaps with CDR but is not the same thing. Monitoring describes the telemetry-collection function; CDR is the active-detection-and-response discipline built on top of that telemetry. Adjacent terms you will see in vendor and analyst material include cloud-native detection and response (CNDR) and cloud threat detection and response (CTDR). They mean substantively the same thing.
The category itself is contested. Forrester's May 2024 position was that "cloud detection and response tools do not exist" as a discrete market. Two years later, Forrester's Q1 2026 Wave for Cloud-Native Application Protection Solutions evaluates 14 vendors with material CDR capability — a meaningful drift in framing. Vendor pillar pages from Wiz, Sysdig, and Tenable describe CDR as an emerging discipline within the cloud security family. We treat the underlying capability — runtime threat detection across the three cloud planes — as real and necessary, regardless of how the buying line is drawn.
CDR emerged as a vendor-coined category between roughly 2022 and 2024, as cloud-native breaches outpaced what posture tools (CSPM) and workload-hardening tools (CWPP) could address on their own. Gartner positions CDR within the cloud-native application protection platform (CNAPP) family — a positioning broadly consistent with how most enterprise buyers now structure cloud-security stacks.
The clearest analogy comes from Sysdig: think of CDR as an always-on security camera for the cloud. Endpoint tools tell you what happened on a single machine. Posture tools tell you what your configuration looked like at a point in time. CDR tells you what is happening right now across every plane, every provider, and every workload — and stitches those signals into a single attack story. Operationally, the discipline runs through six steps: collect telemetry across all three planes, normalize and enrich it with identity context, detect behavioral anomalies, correlate signals into stitched attack stories, triage and investigate, and respond to contain the threat.
The third and fourth steps are where CDR earns its keep. Signature-only detection cannot keep up with cloud attacker velocity, and raw alert volume from CloudTrail alone is unmanageable. Behavioral baselines — what does this principal normally do? what does this workload normally talk to? — turn that volume into stitched, high-fidelity incident response signals.
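To make the baseline idea concrete, here is a minimal sketch of per-principal behavioral baselining over simplified CloudTrail-style events. The event shapes, principal names, and action strings are illustrative assumptions, not a real product's detection logic:

```python
from collections import defaultdict

# Hypothetical simplified CloudTrail-style events: (principal, API action)
BASELINE_EVENTS = [
    ("role/ci-deploy", "sts:AssumeRole"),
    ("role/ci-deploy", "cloudformation:CreateStack"),
    ("role/webapp", "s3:GetObject"),
    ("role/webapp", "s3:GetObject"),
]

def build_baseline(events):
    """Map each principal to the set of API actions it normally performs."""
    baseline = defaultdict(set)
    for principal, action in events:
        baseline[principal].add(action)
    return baseline

def flag_anomalies(baseline, new_events):
    """Return events where a principal performs an action outside its baseline."""
    return [
        (principal, action)
        for principal, action in new_events
        if action not in baseline.get(principal, set())
    ]

baseline = build_baseline(BASELINE_EVENTS)
suspicious = flag_anomalies(baseline, [
    ("role/webapp", "s3:GetObject"),         # within baseline, ignored
    ("role/webapp", "iam:CreateAccessKey"),  # never seen before, flagged
])
print(suspicious)  # [('role/webapp', 'iam:CreateAccessKey')]
```

Production systems add volume statistics, time-of-day patterns, and peer-group comparison on top of this set-membership check, but the core move — compare each event to what that principal normally does — is the same.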
The agent-versus-agentless debate is real but increasingly false-binary. Agentless CDR — snapshot-based and API-driven — gives breadth and fast deployment with zero workload friction. It is excellent for control-plane and management-plane visibility, multi-account coverage, and onboarding tempo. Agent-based CDR (typically eBPF or sidecar) gives runtime depth: process-level events, syscall traces, network calls inside containers, and the kind of data-plane forensics that ephemeral workloads otherwise erase. Most modern programs combine both — agentless for breadth, agent-based for depth on critical workloads and Kubernetes security clusters where runtime forensics matters.
Behavior-based detection is now mainstream, not aspirational. Per Sysdig's 2026 reporting, more than 70% of teams use behavior-based detection covering 91% of environments. Auto-termination of suspicious processes rose roughly 140% year-over-year. Only about 2.8% of managed cloud identities are human — the rest are service principals, machine identities, and ephemeral workload tokens, which is exactly why identity-context enrichment is non-negotiable. AI-specific package adoption grew about 25x year-over-year, expanding the supply-chain attack surface that CDR has to watch.
The defender response is "machine-speed parity." Dark Reading has popularized the 5/5/5 benchmark — 5 seconds to detect, 5 minutes to triage, 5 minutes to respond — as the operational target for cloud-conscious adversary scenarios where breakout time is measured in minutes, not hours.
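The 5/5/5 benchmark is easy to operationalize as a per-incident scorecard. The sketch below grades one incident's stage durations against the three targets; the timestamp field names are assumptions for illustration:

```python
from datetime import datetime, timedelta

# 5/5/5 targets: 5 seconds to detect, 5 minutes to triage, 5 minutes to respond
TARGETS = {
    "detect": timedelta(seconds=5),
    "triage": timedelta(minutes=5),
    "respond": timedelta(minutes=5),
}

def grade_incident(timestamps):
    """Compare one incident's stage durations against the 5/5/5 targets.

    timestamps: dict with 'start', 'detected', 'triaged', 'responded' datetimes.
    Returns {stage: True/False}, True meaning the target was met.
    """
    durations = {
        "detect": timestamps["detected"] - timestamps["start"],
        "triage": timestamps["triaged"] - timestamps["detected"],
        "respond": timestamps["responded"] - timestamps["triaged"],
    }
    return {stage: durations[stage] <= TARGETS[stage] for stage in TARGETS}

t0 = datetime(2026, 3, 1, 12, 0, 0)
result = grade_incident({
    "start": t0,
    "detected": t0 + timedelta(seconds=3),    # within the 5-second target
    "triaged": t0 + timedelta(minutes=4),     # within 5 minutes of detection
    "responded": t0 + timedelta(minutes=20),  # 16 minutes after triage: miss
})
print(result)  # {'detect': True, 'triage': True, 'respond': False}
```

Tracked across incidents, the same scorecard turns the benchmark into a trend line rather than a slogan.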
Most CDR confusion clears up the moment you separate the cloud surface into its three planes. This is the architectural anchor every security architect should be able to draw on a whiteboard in 30 seconds.
A useful mental picture: the control plane is the cloud's switchboard, the data plane is the workshop where work actually happens, and the management plane is the front office where identity and governance decisions are made. Attackers need to traverse at least two planes to do meaningful damage. Defenders need visibility into all three to catch them.
The table below maps each plane to its typical telemetry sources, an example detection, and the MITRE ATT&CK technique it surfaces. This mapping is the difference between alert noise and a usable detection backlog.
Table 1: Mapping cloud telemetry sources to the three CDR detection planes.
The 2026 numbers explain why this map matters now. Cloud-attack breakout time has compressed to roughly 29 minutes per Google Cloud's Threat Horizons H1 2026 report. The UNC6426 cluster, which we will return to in the case studies, achieved full AWS administrative access in under 72 hours from a single compromised npm package — the entire attack chain crossed all three planes (control-plane stack creation, management-plane OIDC trust abuse, data-plane S3 enumeration). For more on the control-plane attack surface specifically, see cloud control plane protection.
The most common architectural question in CDR evaluations is "what does this replace?" The honest answer: nothing. CDR's distinctive scope is runtime threat detection across the three cloud planes — a surface that endpoint, network, posture, and workload tools each see only partially. The categories are complementary, and most modern programs deploy several together.
Table 2: How CDR's scope compares to adjacent detection-and-response categories.
This deserves a direct answer. Forrester's May 2024 position was that CDR is not a discrete market — that the capability lives inside CNAPP, SIEM, and adjacent platforms and does not warrant a separate buying line. The argument was reasonable in 2024 and has weakened since. Forrester's Q1 2026 Wave for Cloud-Native Application Protection Solutions evaluated 14 vendors with material CDR coverage, which complicates a strict "feature, not category" reading. Hyperscaler consolidation reinforces the drift: Google completed its $32B acquisition of Wiz on March 11, 2026, per TechCrunch's coverage of the close, bringing material CDR capability inside a hyperscaler portfolio.
Our reading: the underlying capability is real, necessary, and not adequately covered by EDR, NDR, CSPM, CWPP, or SIEM alone. Whether you buy it as a discrete product or as a leg of CNAPP depends on stack composition. Greenfield programs typically consolidate into CNAPP. Mature programs with strong existing SIEM and EDR investments often run CDR as a discrete layer that feeds the rest of the stack.
Abstract category arguments are easier to settle with concrete attacks. The four breaches below show what CDR signals look like across the three planes, and what their absence cost the affected organizations.
Case 1 — Capital One (July 2019, retrospective). Misconfigured AWS WAF combined with server-side request forgery (SSRF) let an attacker abuse an EC2 instance metadata IAM role to enumerate and exfiltrate roughly 106M U.S. and Canadian credit-card applicant records from S3. The attack timeline ran for months before discovery. CDR signal: anomalous data-plane and control-plane behavior — a single IAM principal pulling unusual S3 read volumes, originating from an EC2 instance whose normal behavior never included bulk S3 enumeration. This is exactly the cross-plane signal that behavioral analytics on CloudTrail, paired with AWS threat detection, is designed to surface.
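A volume-spike check is one simple way to express that Capital One-style signal. The sketch below flags a principal whose read count far exceeds its own history; the principal name, counts, and threshold multiplier are illustrative assumptions:

```python
# Hypothetical hourly S3 GetObject counts per principal (the baseline)
history = {"role/ec2-webapp": [12, 9, 15, 11, 8, 14]}

def is_volume_anomaly(principal, current_count, history, multiplier=10):
    """Flag a principal whose current read volume exceeds `multiplier` times
    its historical hourly mean. A principal with no history at all is also
    flagged, since an unknown identity doing bulk reads is itself suspicious."""
    past = history.get(principal)
    if not past:
        return True
    mean = sum(past) / len(past)
    return current_count > multiplier * mean

# Bulk enumeration: tens of thousands of reads against a baseline near 11/hour
print(is_volume_anomaly("role/ec2-webapp", 25_000, history))  # True
```

Real detections would add object-count and byte-volume dimensions plus source-instance context, but a months-long exfiltration at this scale clears even a crude threshold.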
Case 2 — Snowflake (2024). Credential reuse (some traceable to infostealer logs from 2020) plus missing customer-side MFA enforcement led to roughly 165 customer organizations being affected. The Cloud Security Alliance's 2024 Snowflake retrospective is the authoritative public analysis. CDR signal: anomalous-geo logins, elevated query volumes, and the absence of MFA on high-privilege SaaS auth surfaces — all detectable via management-plane identity telemetry. The lesson is that credential theft outcomes are a SaaS-tier CDR concern, not just an endpoint one.
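Both Snowflake-era signals — never-seen geography and missing MFA on privileged accounts — are cheap to check from management-plane login telemetry. The account names, country codes, and field shapes below are illustrative assumptions:

```python
# Hypothetical identity telemetry: countries each account has logged in from,
# and which accounts carry high privileges
KNOWN_GEOS = {"svc_reporting": {"US", "CA"}}
HIGH_PRIVILEGE = {"svc_reporting"}

def flag_login(user, country, mfa_used):
    """Return the reasons a SaaS login should be flagged: a geography the
    account has never used, or a high-privilege login without MFA."""
    reasons = []
    if country not in KNOWN_GEOS.get(user, set()):
        reasons.append("new-geo")
    if user in HIGH_PRIVILEGE and not mfa_used:
        reasons.append("no-mfa-high-priv")
    return reasons

print(flag_login("svc_reporting", "MD", mfa_used=False))
# ['new-geo', 'no-mfa-high-priv']
```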
Case 3 — UNC6426 (Q1 2026). This is the cleanest three-plane case study in the public record. A compromised npm package (QUIETVAULT) — a textbook supply chain attack — stole GitHub tokens. An overly broad GitHub-to-AWS OIDC trust policy let those tokens assume an AWS IAM role. CloudFormation was used to create an IAM stack with CAPABILITY_NAMED_IAM, which granted the attacker AWS administrative access in under 72 hours. The attacker then enumerated S3, terminated EC2 and RDS instances, and decrypted application keys. The chain is documented in The Hacker News' coverage of the UNC6426 npm supply-chain attack, the Cloud Security Alliance's OIDC trust-chain abuse briefing, and Google Cloud's Threat Horizons H1 2026 report. CDR signal: STS token issuance from a non-CI principal combined with CloudFormation CAPABILITY_NAMED_IAM stack creation outside the change window — a stitched control-plane and management-plane storyline that no single tool category catches alone.
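The UNC6426 chain suggests a concrete detection rule: CloudFormation stack creation requesting CAPABILITY_NAMED_IAM from a principal outside the approved CI set, or outside the change window. The sketch below encodes that rule; the window, the CI role name, and the event-dict shape are illustrative assumptions, not CloudTrail's exact record format:

```python
from datetime import datetime, time

CHANGE_WINDOW = (time(2, 0), time(4, 0))          # hypothetical approved UTC window
CI_PRINCIPALS = {"role/github-actions-deploy"}     # hypothetical approved CI roles

def is_suspicious_stack_creation(event):
    """Flag CreateStack calls that request CAPABILITY_NAMED_IAM from a
    non-CI principal, or land outside the approved change window."""
    if event["eventName"] != "CreateStack":
        return False
    if "CAPABILITY_NAMED_IAM" not in event.get("capabilities", []):
        return False
    t = event["eventTime"].time()
    outside_window = not (CHANGE_WINDOW[0] <= t <= CHANGE_WINDOW[1])
    non_ci = event["principal"] not in CI_PRINCIPALS
    return outside_window or non_ci

event = {
    "eventName": "CreateStack",
    "capabilities": ["CAPABILITY_NAMED_IAM"],
    "eventTime": datetime(2026, 1, 14, 13, 37),   # mid-day, outside the window
    "principal": "role/stolen-oidc-session",      # not an approved CI role
}
print(is_suspicious_stack_creation(event))  # True
```

A stitched CDR detection would additionally correlate this with the upstream STS token issuance from a non-CI source, which is what turns two medium-severity alerts into one high-confidence storyline.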
Case 4 — European Commission Europa.eu (March 2026). A Trivy supply-chain compromise produced a five-day adversary dwell window during which roughly 91.7 GB was exfiltrated from EU-hosted AWS infrastructure. CERT-EU's blog on the European Commission cloud breach, BleepingComputer's coverage, and Help Net Security's reporting document the timeline. CDR signal: control-plane API anomalies sustained over a five-day window — the kind of pattern that posture-only tools cannot see and that batch-mode SIEM correlation typically misses without cloud-native context.
Table 3: Incident timeline and CDR signals across recent cloud breaches.
Ephemeral workloads break the assumptions of traditional digital forensics. When 60% of containers live less than a minute, post-hoc evidence collection is often impossible. The operational answer is real-time capture and immutable preservation: collect process and network telemetry as it happens, extend audit-log retention for control-plane and IAM events, preserve cloud snapshots at the moment of detection, and maintain chain of custody in immutable storage.
The seven defensive practices below are aggregated from cross-vendor pillar guidance and tied to specific MITRE ATT&CK techniques where applicable. Frame them as defensive controls, not vendor checklists.
The 83% identity-origin breach figure (mapped to MITRE ATT&CK T1078.004 and T1552) is the whole reason identity threat detection and response and identity-based attack detection and containment are now adjacent to CDR.
Table 4: Seven defensive practices that make cloud detection and response operationally effective.
For a deeper treatment of the underlying detection discipline, see threat detection and the MITRE ATT&CK Cloud Matrix. For a complementary use-case framing, TechTarget's CDR use-cases analysis is a useful neutral reference. Industry frameworks worth aligning against include the OWASP Cloud-Native Application Security Top 10 and the ENISA Cloud Security Guide.
CDR is now a compliance capability as much as a security one. The 24-hour and 72-hour clocks in current EU and UK regulation are not satisfiable with batch-mode log review and manual triage.
Table 5: How CDR capabilities map to major cloud incident-response regulations.
Three forces are reshaping CDR over the next 12 to 24 months. First, AI-first cloud defense is moving from concept to product — agentic remediation that detects an anomaly and contains it without human intervention is now visible across the leading CNAPP entrants in the Q1 2026 Forrester Wave. Second, hyperscaler consolidation accelerated meaningfully with the close of Google's $32B Wiz acquisition on March 11, 2026; independent CDR vendors will need to differentiate on signal quality and integration neutrality across multi-cloud. Third, defenders are shifting toward machine-speed parity with AI-assisted attackers — Sysdig's 2026 reporting shows behavior-based detection and auto-termination as standard practice, not differentiation.
For sub-5-FTE security teams, the implication is that some form of managed detection and response is increasingly the right answer for cloud — the operational tempo required to meet 5/5/5 and 24-hour-NIS2 clocks is hard to sustain in-house at small team sizes. The category outlook is consolidation: CDR as a discrete buying line will continue for organizations with mature CNAPP stacks; it will collapse into platform CNAPP for organizations starting greenfield. Either way, the runtime-detection capability is non-optional. Industry positioning summaries — including Medium's overview of Gartner CDR positioning and vendor framework references such as Skyhawk's CDR best-practices framework — offer additional context.
Vectra AI's approach to cloud detection and response reflects an "assume compromise" philosophy applied to the cloud: continuous observability across the three planes, AI-driven Attack Signal Intelligence™ to cut noise and reveal stitched attack storylines, and informed action that contains active threats before lateral movement takes hold. The aim is not more alerts — it is the right signal at machine speed. Independent IDC analysis of the Vectra AI platform found greater than 90% MITRE ATT&CK technique coverage and 391% ROI with a 6-month payback. For AWS-specific runtime coverage, see Vectra AI CDR for AWS.
Cloud detection and response is the runtime layer that turns cloud telemetry into stitched attack stories — not another product category competing with EDR, NDR, or SIEM, but the discipline that finally makes the three cloud planes legible to security operations. The shift from posture-only and endpoint-only thinking is no longer optional. Identity is the dominant initial-access vector. Workloads are ephemeral. Breakout time is measured in minutes. Regulators are imposing 24-hour clocks. The organizations that will hold up under those conditions are the ones treating runtime cloud detection as a first-class capability, however they choose to package it.
If you are building or refreshing a cloud-detection program, start with the three-plane mental model, map your existing telemetry against it, and identify the planes where you have detection rather than just logging. Pair CDR with behavioral analytics, feed it into your existing SIEM and extended detection and response workflows, and align the detection coverage to the MITRE ATT&CK Cloud Matrix so audit conversations have concrete evidence to work from. To go deeper on adjacent disciplines, see identity threat detection and response, AWS threat detection, and Kubernetes security in the related topics below. For a methodology view, the Vectra AI platform page (linked above) describes how Attack Signal Intelligence™ approaches the same problem.
A managed CDR service makes sense when the security team is small (fewer than five FTEs), runs 24/7 cloud workloads, lacks dedicated cloud-detection skills, or needs to meet 24-hour incident-reporting timelines under NIS2 or UK GDPR. The right candidates trade some signal customization for around-the-clock coverage and faster mean time to respond. The economic case usually pivots on three things: the cost of a 24/7 in-house cloud SOC in the current talent market, the regulatory penalty exposure under NIS2 (up to €10M or 2% of global turnover for essential entities), and the operational tempo required to hit the 5/5/5 benchmark. For teams already at capacity on EDR and SIEM, adding cloud-conscious detection in-house often takes 9 to 12 months; managed services compress that to weeks. The trade-off is signal customization — managed providers run their own detection logic, which is usually a feature for under-resourced teams and a constraint for teams with mature in-house detection engineering.
Evaluate against seven criteria: coverage across all three cloud planes (control, data, management); multi-cloud and SaaS reach (AWS, Azure, GCP, Kubernetes, and the SaaS surfaces where identity-origin breaches start); identity-context enrichment (the 83% identity-origin figure is the entire reason this matters); behavioral analytics versus signature-only; integration with existing SIEM, SOAR, EDR, and NDR; ephemeral-workload forensics support including eBPF runtime depth; and automation depth with human-in-the-loop guardrails for high-risk actions. Pressure-test the criteria with a real attack chain — the UNC6426 OIDC-to-CloudFormation chain is a useful evaluation scenario because it traverses all three planes and surfaces the gaps between posture-only tools and runtime-detection-and-response tools. Insist on detection latency benchmarks expressed against the 5/5/5 standard, not vendor-defined SLAs.
CDR is designed to detect cloud-account abuse (T1078.004), credential theft (T1552), lateral movement across cloud (T1021), misconfiguration exploitation, container escape, cryptojacking, OIDC trust-policy abuse, anomalous CloudFormation IAM-stack creation with CAPABILITY_NAMED_IAM, anomalous bulk data export from S3 or Redshift, ransomware staging in cloud workloads, and identity-driven SaaS account takeover. The Capital One, Snowflake, UNC6426, and European Commission cases above each illustrate at least three of these categories. In aggregate, the MITRE ATT&CK Cloud Matrix is the canonical reference for the threat surface CDR is built to cover.
Both readings are defensible. Gartner positions CDR within the CNAPP family — runtime detection alongside CSPM (posture) and CWPP (workload hardening). Forrester's May 2024 analysis treated CDR as a feature set rather than a discrete category. In practice, CDR is the runtime-detection-and-response leg of CNAPP — complementary to CSPM and CWPP, not duplicative. Whether you buy them as one platform or separately depends on stack maturity. Greenfield programs typically consolidate into CNAPP for procurement and operational simplicity. Mature programs with strong existing SIEM, EDR, and detection-engineering investments often run CDR as a discrete layer that feeds the rest of the stack, because the integration neutrality is more valuable than the platform consolidation.
Cloud workload protection (CWP, often referred to as CWPP) hardens individual workloads — vulnerability scanning, configuration enforcement, runtime protection at the host or container level. CDR detects and responds to active threats across the broader cloud surface, including the control and management planes that CWPP does not see. CWPP asks "is this workload safely configured and patched?" CDR asks "is something actively bad happening across my cloud right now?" Most modern programs deploy both. CWPP is an upstream hygiene control; CDR is a downstream detection-and-response capability. They show up together inside CNAPP precisely because each addresses a different layer of the cloud-security problem.
The capability is real and necessary — runtime threat detection across the three cloud planes is not adequately covered by EDR, NDR, CSPM, CWPP, or SIEM alone. Whether it remains a discrete buying line depends on stack composition. With Forrester evaluating 14 vendors in its Q1 2026 CNAPP Wave and major hyperscaler M&A — Google's $32B Wiz close in March 2026 — the category boundary is consolidating into CNAPP, but the underlying capability is non-optional. The right framing for an architecture review is "do we have runtime detection across our three cloud planes?" not "do we have a product called CDR?"
Capture process events and network telemetry in real time — eBPF-based agents are the current best option for runtime depth on critical workloads. Extend cloud-provider audit-log retention beyond default 30-90-day windows, particularly for control-plane and IAM events, because long-dwell intrusions can outrun defaults. Preserve cloud-snapshot evidence at the time of detection (EBS, Azure managed-disk, GCP persistent-disk snapshots) and tag those snapshots as forensic artifacts so automated cleanup does not destroy them. Maintain a forensic chain of custody using cloud-provider-immutable storage primitives — S3 Object Lock in compliance mode, Azure Immutable Blob storage. Treat ephemeral workloads as evidence-perishable by default: if you have not captured the data in real time, it is usually gone.
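The snapshot-tagging step is worth pinning down, because untagged snapshots are exactly what lifecycle cleanup deletes. The sketch below builds a request dict shaped like boto3's `ec2.create_snapshot` keyword arguments, tagging the snapshot as a forensic artifact; the tag keys and incident-ID scheme are assumptions for illustration, and the actual API call is left out so the helper stays self-contained:

```python
from datetime import datetime, timezone

def forensic_snapshot_request(volume_id, incident_id):
    """Build a CreateSnapshot request that marks the snapshot as a forensic
    artifact, so automated cleanup jobs that filter on tags will skip it.
    The dict mirrors boto3's ec2.create_snapshot keyword arguments."""
    captured_at = datetime.now(timezone.utc).isoformat()
    return {
        "VolumeId": volume_id,
        "Description": f"Forensic capture for incident {incident_id}",
        "TagSpecifications": [{
            "ResourceType": "snapshot",
            "Tags": [
                {"Key": "forensic-artifact", "Value": "true"},
                {"Key": "incident-id", "Value": incident_id},
                {"Key": "captured-at", "Value": captured_at},
            ],
        }],
    }

req = forensic_snapshot_request("vol-0abc123", "IR-2026-0042")
print(req["TagSpecifications"][0]["Tags"][0])
# {'Key': 'forensic-artifact', 'Value': 'true'}
```

In practice the same tags should drive two policies: an exclusion rule in snapshot-lifecycle cleanup, and a retention rule that copies the artifact into immutable storage for chain of custody.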