MITRE ATLAS explained: The complete guide to AI security threat intelligence

Key insights

  • MITRE ATLAS catalogs 15 tactics, 66 techniques, and 46 sub-techniques specifically targeting AI and machine learning systems as of October 2025.
  • The October 2025 framework update added 14 new agentic AI techniques through collaboration with Zenity Labs, addressing autonomous AI agent security risks.
  • ATLAS complements rather than competes with OWASP LLM Top 10 and NIST AI RMF — use all three for comprehensive coverage.
  • Approximately 70% of ATLAS mitigations map to existing security controls, making integration with current SOC workflows practical.
  • Free tools including ATLAS Navigator and Arsenal enable immediate threat modeling and red teaming capabilities.

Organizations deploying artificial intelligence face a new frontier of security threats that traditional frameworks were never designed to address. According to SecurityWeek's Cyber Insights 2025, AI-assisted cyber attacks surged 72% in 2025 alone. This escalation demands a structured approach to understanding and defending against adversarial threats to AI systems. Enter the MITRE ATLAS framework — the first comprehensive adversarial ML knowledge base designed specifically to catalog how attackers target machine learning and AI systems.

For security teams already familiar with MITRE ATT&CK, ATLAS (sometimes referenced as Atlas MITRE in search) provides a natural extension into AI security territory. This guide delivers everything security analysts, SOC leaders, and AI engineers need to operationalize ATLAS against adversarial AI attacks — from framework fundamentals to practical detection strategies.

What is MITRE ATLAS?

MITRE ATLAS is a globally accessible adversarial ML knowledge base that documents adversary tactics, techniques, and procedures (TTPs) specifically targeting artificial intelligence and machine learning systems. Often referred to as the MITRE ATLAS adversarial AI knowledge base, it provides security teams with a structured approach to understanding, detecting, and defending against AI-specific threats. Modeled after the widely adopted MITRE ATT&CK framework, the MITRE ATLAS framework serves as the definitive machine learning security framework for threat modeling. The acronym stands for Adversarial Threat Landscape for Artificial-Intelligence Systems.

As of October 2025, the framework contains 15 tactics, 66 techniques, 46 sub-techniques, 26 mitigations, and 33 real-world case studies according to the official MITRE ATLAS CHANGELOG. This represents significant growth from earlier versions, driven by the rapid evolution of AI threats.

Adversarial machine learning — the study of attacks on machine learning systems and defenses against them — encompasses four main attack categories as documented by NIST: evasion, poisoning, privacy, and abuse attacks. ATLAS organizes these attack patterns into a matrix structure that security practitioners can immediately put to use.

MITRE created ATLAS to address a critical gap in the security landscape. While ATT&CK effectively catalogs threats to traditional IT and OT infrastructure, it lacks coverage of attacks that exploit the unique characteristics of machine learning systems. ATLAS fills this void by providing the same rigorous, community-validated approach to AI threat intelligence.

The framework also connects to MITRE D3FEND, which provides defensive countermeasures that organizations can map against ATLAS techniques.

ATLAS vs MITRE ATT&CK: Key differences

Understanding the distinction between ATLAS and ATT&CK helps security teams determine when to apply each framework.

Table: Comparison of MITRE ATT&CK and MITRE ATLAS frameworks

Aspect MITRE ATT&CK MITRE ATLAS
Primary focus Traditional IT/OT adversary behaviors AI/ML-specific adversary behaviors
Tactic count 14 tactics (Enterprise) 15 tactics (14 inherited + 2 AI-specific)
Technique count 196+ techniques 66 techniques
Unique tactics None AI-specific ML Model Access, ML Attack Staging
Target systems Endpoints, networks, cloud ML models, training pipelines, LLMs
Case studies Groups and software profiles 33 AI-specific incident analyses
Best for Endpoint/network threat modeling AI system threat modeling

ATLAS inherits 13 tactics from ATT&CK — including Reconnaissance, Initial Access, Execution, and Exfiltration — but applies them specifically to AI contexts. The two AI-specific tactics unique to ATLAS are:

  • ML Model Access (AML.TA0004): Describes how adversaries gain access to target ML models through inference APIs or direct artifact access
  • ML Attack Staging (AML.TA0012): Covers how adversaries prepare attacks targeting ML models, including training data poisoning and backdoor insertion

Security teams should use both frameworks together for comprehensive coverage — ATT&CK for traditional infrastructure threats and ATLAS for AI-specific attack vectors.

How ATLAS works: Framework structure and the MITRE ATLAS matrix

The MITRE ATLAS official knowledge base organizes threat intelligence using the same matrix structure that made ATT&CK successful. Understanding this structure enables effective threat detection and AI threat modeling.

The MITRE ATLAS matrix (sometimes called the MITRE framework matrix for AI) displays tactics as columns and techniques as rows. Each cell represents a specific method adversaries use to achieve tactical goals against AI systems. This visual organization allows security teams to quickly identify coverage gaps and prioritize defenses.

The framework components work together:

  1. Tactics answer the "why" — the adversary's goal at each attack stage
  2. Techniques answer the "how" — specific methods to achieve tactical goals
  3. Sub-techniques provide granular detail on technique variations
  4. Mitigations describe defensive measures that counter specific techniques
  5. Case studies document real-world attacks mapped to ATLAS TTPs

ATLAS data is available in STIX 2.1 format, enabling machine-readable integration with security tools and platforms. This standardized format supports automated ingestion into threat intelligence platforms and SIEM systems.

The framework receives regular updates through community contributions and MITRE's ongoing research. The October 2025 update through Zenity Labs collaboration added 14 new agent-focused techniques, demonstrating the framework's active evolution.

Understanding tactics, techniques, and procedures (TTPs)

Tactics, techniques, and procedures (TTPs) form the core vocabulary of threat-informed defense. In ATLAS:

  • Tactics represent adversary goals at each phase of an attack against AI systems. The 15 ATLAS tactics span from initial reconnaissance through ultimate impact.
  • Techniques describe the specific actions adversaries take to achieve tactical goals. Each technique has a unique identifier in the format AML.TXXXX.
  • Sub-techniques break down techniques into more specific variations. For example, prompt injection (AML.T0051) includes sub-techniques for direct and indirect injection methods.
  • Procedures appear in case studies, showing exactly how real-world attackers implemented specific techniques.

This hierarchy enables progressively detailed threat modeling. Teams can start with tactic-level coverage analysis and drill down to specific techniques based on their AI system's exposure.

The 15 ATLAS tactics and key techniques

ATLAS organizes 66 techniques across 15 tactics that span the complete adversarial lifecycle. This comprehensive breakdown addresses a significant content gap identified in competitor analysis — no existing guide covers all tactics with detection-focused guidance.

Table: Complete list of 15 MITRE ATLAS tactics with key techniques

Tactic ID Tactic Name Key Techniques Detection Focus
AML.TA0001 Reconnaissance Discover ML Artifacts, Discover ML Model Ontology, Active Scanning Monitor for model architecture probing
AML.TA0002 Resource Development Acquire Public ML Artifacts, Develop Adversarial ML Attack Capabilities Track adversarial tooling emergence
AML.TA0003 Initial Access ML Supply Chain Compromise, Prompt Injection (AML.T0051) Audit supply chain, input validation
AML.TA0004 ML Model Access Inference API Access, ML Artifacts Access API access logging, artifact integrity
AML.TA0005 Execution User Execution, LLM Plugin Compromise Plugin security monitoring
AML.TA0006 Persistence Modify AI Agent Configuration Configuration change detection
AML.TA0007 Privilege Escalation Exploit through ML System ML system boundary monitoring
AML.TA0008 Defense Evasion Adversarial Perturbation, LLM Meta Prompt Extraction Model behavior anomaly detection
AML.TA0009 Credential Access Credentials from AI Agent Configuration Agent config access monitoring
AML.TA0010 Discovery Discover AI Agent Configuration Enumeration attempt detection
AML.TA0011 Collection Data from AI Services, RAG Database Retrieval Data access pattern analysis
AML.TA0012 ML Attack Staging Poison Training Data (AML.T0020), Backdoor ML Model Training data integrity monitoring
AML.TA0013 Exfiltration Exfiltration via ML Inference API, Exfiltration via AI Agent Tool Invocation API usage anomaly detection
AML.TA0014 Impact Denial of ML Service, Evade ML Model, Spamming ML System Service availability monitoring

Reconnaissance through Initial Access (AML.TA0001-AML.TA0003)

The attack lifecycle begins with reconnaissance, where adversaries gather information about target ML systems. Key techniques include:

  • Discover ML Artifacts: Adversaries search public repositories, documentation, and APIs to understand model architectures and training data
  • ML Supply Chain Compromise: Attackers target supply chain attacks by inserting malicious code or data into ML pipelines
  • Prompt Injection (AML.T0051): Adversaries craft malicious inputs to manipulate LLM behavior — this maps to OWASP LLM01

ML Model Access and Execution (AML.TA0004-AML.TA0005)

These AI-specific tactics describe how adversaries interact with and exploit ML models:

  • Inference API Access: Gaining access to model prediction interfaces enables reconnaissance and attack staging
  • LLM Plugin Compromise: Exploiting vulnerable plugins extends attacker capabilities within AI systems

Persistence through Defense Evasion (AML.TA0006-AML.TA0008)

Threat actors maintain access and avoid detection through:

  • Modify AI Agent Configuration (October 2025 addition): Attackers alter agent settings to maintain persistence
  • Adversarial Perturbation: Crafting inputs that cause models to misclassify while appearing normal to humans

Collection through Impact (AML.TA0009-AML.TA0014)

Later-stage tactics focus on achieving adversary objectives:

  • RAG Database Retrieval: Extracting sensitive information from retrieval-augmented generation systems
  • Poison Training Data (AML.T0020): Data poisoning corrupts training data to manipulate model behavior — a critical data exfiltration vector
  • Exfiltration via AI Agent Tool Invocation (October 2025 addition): Leveraging agent tool access to extract data

Understanding lateral movement patterns helps security teams track how attackers progress through these tactics.

ATLAS tools ecosystem

ATLAS provides free, practical tools that transform the framework from documentation into actionable security capabilities. This tools ecosystem addresses a major content gap — few competitors cover these resources comprehensively.

Table: MITRE ATLAS official tools ecosystem

Tool Purpose URL Key Features
ATLAS Navigator Matrix visualization and annotation atlas.mitre.org Custom layers, coverage mapping, export capabilities
Arsenal Automated adversary emulation github.com/mitre-atlas/arsenal CALDERA plugin, technique implementation, red team automation
AI Incident Sharing Community threat intelligence ai-incidents.mitre.org Anonymized incident reports, vulnerability database
AI Risk Database Incident and vulnerability repository ai-incidents.mitre.org Searchable incidents, CVE integration

ATLAS Navigator walkthrough

The ATLAS Navigator provides an interactive web interface for visualizing the framework matrix. Security teams use Navigator for:

  1. Coverage mapping: Create custom layers showing which techniques your security controls address
  2. Threat modeling: Highlight relevant techniques based on your AI system's architecture
  3. Gap analysis: Identify techniques without corresponding detection capabilities
  4. Reporting: Export visualizations for stakeholder communication

Navigator integrates with the ATT&CK Navigator, enabling unified views across both frameworks. Teams already using ATT&CK Navigator will find the ATLAS interface immediately familiar.

Arsenal for AI red teaming

In March 2023, Microsoft and MITRE announced collaboration on Arsenal — a CALDERA plugin enabling automated adversary emulation against AI systems. Arsenal implements ATLAS techniques without requiring deep machine learning expertise.

Key capabilities include:

  • Pre-built adversary profiles based on ATLAS tactics
  • Automated attack chain execution for purple team exercises
  • Results mapped directly to ATLAS technique IDs
  • Integration with existing CALDERA deployments

Arsenal supports threat hunting by validating detection coverage against realistic attack simulations. For incident response teams, Arsenal helps understand attacker capabilities and test response procedures.

AI Incident Sharing Initiative

The AI Incident Sharing Initiative enables organizations to share and learn from AI security incidents. This community-driven platform provides:

  • Anonymized incident reports with ATLAS technique mapping
  • Searchable database of AI vulnerabilities and attacks
  • Integration with CVE and CWE AI Working Groups
  • Trend analysis across reported incidents

This intelligence feeds directly into ATLAS updates, ensuring the framework reflects current threat patterns.

Framework comparison: ATLAS vs OWASP LLM Top 10 vs NIST AI RMF

Security teams often ask which AI security framework to adopt. The answer: use all three for complementary coverage. This comparison helps teams understand when to apply each framework, addressing a common PAA question.

Table: AI security framework comparison: ATLAS vs OWASP vs NIST AI RMF

Framework Focus Audience Best For
MITRE ATLAS Adversary TTPs for AI systems Security operations, threat hunters Threat modeling, detection development, red teaming
OWASP LLM Top 10 LLM application vulnerabilities Developers, AppSec engineers Secure development, code review, vulnerability assessment
NIST AI RMF AI risk governance Risk managers, compliance teams Organizational governance, regulatory compliance

According to Cloudsine's framework analysis, these frameworks serve different phases of the AI security lifecycle:

  • Development phase: OWASP LLM Top 10 guides secure coding practices
  • Operations phase: ATLAS informs threat modeling and detection strategies
  • Governance phase: NIST AI RMF structures risk management and compliance

Crosswalk table: Mapping across frameworks

Table: Framework crosswalk for common AI vulnerabilities

Vulnerability ATLAS Technique OWASP LLM NIST AI RMF Function
Prompt injection AML.T0051 LLM01 Map, Measure
Data poisoning AML.T0020 LLM03 Manage
Supply chain ML Supply Chain Compromise LLM05 Govern
Model theft Model Extraction LLM10 Manage

Understanding vulnerabilities across all three frameworks enables comprehensive coverage. Teams should map their AI assets to relevant techniques in each framework.

SOC integration and operationalization

Integrating ATLAS into security operations requires mapping techniques to detection capabilities and workflows. According to ThreatConnect's SOC integration guide, approximately 70% of ATLAS mitigations map to existing security controls. The remaining 30% require new AI-specific controls.

Steps for SOC integration:

  1. Inventory AI assets: Document all ML models, training pipelines, and AI-enabled applications
  2. Map techniques to assets: Identify which ATLAS techniques apply based on your AI architecture
  3. Assess current coverage: Use Navigator to visualize existing detection capabilities
  4. Prioritize gaps: Focus on high-impact techniques relevant to your environment
  5. Develop detection rules: Create SIEM rules and alerts for priority techniques
  6. Establish baselines: Define normal behavior for AI system telemetry
  7. Integrate with workflows: Add ATLAS context to alert triage and investigation procedures
  8. Review quarterly: Update threat models as ATLAS evolves

Detection rule mapping

Effective detection requires mapping ATLAS techniques to specific log sources and detection logic.

Table: Example detection mapping for priority ATLAS techniques

ATLAS Technique Log Source Detection Logic Priority
Prompt Injection (AML.T0051) Application logs, API gateway Unusual input patterns, injection signatures Critical
Data Poisoning (AML.T0020) Training pipeline logs Data distribution anomalies, provenance violations High
ML Inference API Exfiltration API access logs, cloud security logs High-volume queries, unusual access patterns High
Model Extraction Inference API logs Systematic queries probing model boundaries Medium

Network detection and response capabilities complement application-layer detection. User and entity behavior analytics (UEBA) helps identify anomalous access patterns to AI systems.

Metrics and coverage tracking

Track these metrics to measure ATLAS operationalization:

  • Technique coverage: Percentage of relevant techniques with detection rules
  • Detection latency: Time from attack execution to alert generation
  • False positive rate: Alert accuracy for AI-specific detections
  • Threat model currency: Days since last ATLAS-informed update

Quarterly threat model reviews ensure coverage keeps pace with framework updates and emerging threats.

Case studies and lessons learned

ATLAS includes 33 case studies documenting real-world attacks against AI systems. Analyzing these incidents provides actionable defensive insights that go beyond theoretical threat modeling.

iProov deepfake case study analysis

In November 2025, MITRE ATLAS published a case study documenting deepfake attacks against mobile KYC (Know Your Customer) liveness detection systems. According to Mobile ID World's coverage, this attack targeted banking, financial services, and cryptocurrency platforms.

Attack chain progression:

Reconnaissance -> Resource Development -> Initial Access -> Defense Evasion -> Impact

  1. Reconnaissance: Attackers gathered target identity information from social engineering via social media profiles
  2. Resource Development: Adversaries acquired face-swap AI tools (Faceswap, Deep Live Cam)
  3. Initial Access: OBS virtual camera injection bypassed physical camera requirements
  4. Defense Evasion: AI-generated deepfakes defeated liveness detection algorithms
  5. Impact: Successful fraudulent account creation and identity verification bypass

Defensive recommendations:

  • Implement multi-modal verification beyond facial recognition
  • Deploy device attestation to detect virtual camera injection
  • Monitor for signs of synthetic media in biometric captures
  • Establish enhanced liveness detection with depth sensing

This case study demonstrates how attackers combine social engineering with AI tools to defeat security controls, potentially leading to data breaches.

Cylance endpoint product bypass

The HiddenLayer analysis of ATLAS case study AML.CS0003 documents how researchers bypassed an ML-based endpoint security product:

  • Attackers used adversarial perturbation techniques to craft malware that evaded detection
  • The attack demonstrated model evasion without knowledge of the underlying model architecture
  • Defensive lessons include model diversity and input validation for ML-based security tools

Detecting and preventing AI threats

AI security threats require specialized detection approaches that go beyond traditional security controls. With a 72% surge in AI-assisted attacks in 2025, organizations need proactive defense strategies.

Defense checklist for AI security:

  • [ ] Implement input validation and sanitization for all LLM interactions
  • [ ] Deploy prompt injection detection at the application layer
  • [ ] Establish training data provenance and integrity monitoring
  • [ ] Monitor inference API access patterns for anomalies
  • [ ] Audit AI agent configurations and permissions regularly
  • [ ] Integrate AI-specific alerts with existing SOC workflows
  • [ ] Conduct regular AI red team exercises using Arsenal
  • [ ] Subscribe to AI threat intelligence feeds

Organizations should align AI security investments with both phishing prevention (AI-generated phishing is rising rapidly) and ransomware defense (AI enables more sophisticated attacks).

LLM-specific threat deep dive

Large language models face unique attack vectors that traditional security cannot address. ATLAS catalogs these threats systematically.

Table: LLM threat types with ATLAS mapping and detection methods

Threat Type ATLAS Technique Detection Method Mitigation
Direct prompt injection AML.T0051.001 Input pattern analysis Input sanitization, instruction hierarchy
Indirect prompt injection AML.T0051.002 Content source validation Data source controls, sandboxing
LLM jailbreaking AML.T0051 Output behavior monitoring Guardrails, output filtering
Context window manipulation AML.T0051 Context length monitoring Context limits, summarization
RAG poisoning AML.T0060 Document integrity checks Source verification, access controls

Recent CVEs demonstrate these threats in practice:

  • CVE-2025-32711 (EchoLeak): According to Hack The Box analysis, this Microsoft Copilot vulnerability enabled zero-click data exfiltration through prompt injection combined with prompt reflection
  • CVE-2025-54135/54136 (CurXecute): Per BleepingComputer reporting, the Cursor IDE's MCP implementation allowed remote code execution via prompt injection

Identity threat detection and response capabilities help detect credential theft attempts through LLM exploitation.

Agentic AI security considerations

The October 2025 ATLAS update specifically addresses autonomous AI agents — systems that can take actions, access tools, and persist context across sessions. New techniques include:

  • AML.T0058 AI Agent Context Poisoning: Injecting malicious content into agent memory or thread context
  • AML.T0059 Activation Triggers: Embedding triggers that activate under specific conditions
  • AML.T0060 Data from AI Services: Extracting information through RAG database retrieval
  • AML.T0061 AI Agent Tools: Exploiting agent tool access for malicious purposes
  • AML.T0062 Exfiltration via AI Agent Tool Invocation: Using legitimate tool calls to extract data

Security principles for AI agents:

  1. Apply least privilege to all agent tool permissions
  2. Implement human-in-the-loop for sensitive operations
  3. Monitor agent configuration changes continuously
  4. Validate MCP server configurations and connections
  5. Establish agent behavior baselines for anomaly detection

According to CISA's December 2025 AI/OT guidance, organizations should embed oversight and failsafes for all AI systems operating in critical environments.

Modern approaches to AI security

The AI security landscape evolves rapidly, with regulatory pressure and industry collaboration driving framework adoption. Organizations must prepare for both emerging threats and compliance requirements.

The MITRE Secure AI Program, supported by 16 member organizations including Microsoft, CrowdStrike, and JPMorgan Chase, focuses on expanding ATLAS with real-world observations and expediting AI incident sharing.

Regulatory developments:

  • EU AI Act: GPAI (General Purpose AI) obligations became active in August 2025, requiring adversarial testing for systemic-risk AI systems and cybersecurity protection against unauthorized access
  • CISA guidance: The December 2025 multi-agency publication addresses AI security in operational technology environments

AI security threats 2025 trends show continued acceleration, with 87% of organizations reporting AI-powered cyberattack exposure according to industry research.

How Vectra AI approaches AI security threats

Vectra AI's Attack Signal Intelligence methodology applies behavior-based detection principles that align with ATLAS framework objectives. By focusing on attacker behaviors rather than static signatures, organizations can detect the techniques cataloged in ATLAS — from prompt injection attempts to data exfiltration via inference APIs — across hybrid cloud environments.

This approach enables security teams to identify and prioritize real AI-related threats while reducing alert noise. Network detection and response combined with identity threat detection provides visibility across the attack surface that AI threats now target.

More cybersecurity fundamentals

FAQs

What is MITRE ATLAS?

How does MITRE ATLAS differ from MITRE ATT&CK?

How many tactics and techniques are in MITRE ATLAS?

What is prompt injection in MITRE ATLAS?

How do I use MITRE ATLAS for threat modeling?

What tools does MITRE ATLAS provide?

How does MITRE ATLAS compare to OWASP LLM Top 10?