As organizations integrate AI across mission-critical systems, the attack surface grows beyond traditional IT and cloud environments. ATLAS fills the gap by documenting AI-specific attack scenarios, real adversary behaviors in the wild and mitigation strategies tailored to AI-enabled environments.
What is the MITRE ATLAS framework?
MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) is a living, publicly accessible knowledge base that catalogs adversary tactics and techniques targeting AI-enabled systems. Modeled after the widely adopted MITRE ATT&CK® framework, ATLAS is tailored to the unique threats, vulnerabilities, and risks posed by artificial intelligence technologies.
ATLAS is a curated resource built on:
- Real-world attack observations
- AI red team demonstrations
- Security research from government, industry, and academia
It captures how adversaries target AI and machine learning systems, including behaviors, techniques, and tools that are either specific to or adapted for AI contexts.
From weaponization to infrastructure abuse: a shift in attacker focus
The overwhelming majority of activity involves threat actors using generative AI to accelerate their campaigns through tasks such as:
- Writing phishing emails or social engineering scripts
- Researching vulnerabilities or techniques
- Troubleshooting malware or tool development
- Generating content for scams or fake identities But a more concerning pattern is gaining attention: attackers are using evolving tactics where the exploitation of cloud-based LLM resources—such as AWS Bedrock and Azure AI Foundry—has become a growing vector for profit and abuse.

Public LLMs can generate text or assist with scripting, but cloud-hosted models are deeply integrated into high-value enterprise workflows. These systems offer adversaries access to compute, sensitive data, and trusted execution environments.

Why attack cloud-hosted AI instead of free, open-source models?
Cloud platforms like AWS Bedrock and Azure AI Foundry are attractive targets because they:
- Expose multi-tenant infrastructure: A vulnerability in shared components can impact multiple customers.
- Provide access to confidential enterprise data: Especially when tied to RAG (retrieval-augmented generation) workflows, which enhance LLM responses by retrieving and injecting enterprise data from connected knowledge sources.
- Enable abuse of trust and identity integrations: Cloud identities and IAM roles can be leveraged for privilege escalation or lateral movement.
- Cost money to operate: Attackers can exploit this by hijacking compute resources (LLMjacking).
Abusing these platforms allows adversaries to operate with higher stealth and return on investment compared to using open-source or public APIs.
Why adversaries target Generative AI infrastructure
Attackers targeting cloud-based GenAI services are motivated by three primary objectives: financial gain, data exfiltration, and destructive intent. Their success depends heavily on how they gain initial access to enterprise infrastructure.
1. Financial Gain
Attackers may seek to hijack enterprise accounts or exploit misconfigured infrastructure to run unauthorized inference jobs—a tactic known as LLMjacking. They can also abuse cloud AI services for free computation or monetized model access.
2. Exfiltration
Sophisticated adversaries aim to extract proprietary models, training data, or sensitive enterprise documents accessed via RAG (retrieval-augmented generation) systems. Inference APIs can also be abused to leak data over time.
3. Destruction
Some actors seek to degrade system performance or availability by launching denial-of-service attacks, poisoning training pipelines, or corrupting model outputs.

While misuse of public LLMs enables some attacker activity, the strategic advantages of targeting enterprise GenAI infrastructure (data access, scalability, and trust exploitation) make it a more attractive and impactful vector.
How MITRE ATLAS helps you understand and map these threats
MITRE ATLAS provides a structured view of real-world tactics and techniques used against AI systems, which maps directly to risks seen in platforms like AWS Bedrock, Azure AI, and other managed GenAI services.
ATLAS covers a wide range of techniques beyond those related to cloud-hosted AI infrastructure, including AI model reconnaissance, poisoning open-source models or datasets, LLM plugin compromise, and many others.
Here are some examples of specific techniques a threat actor could leverage in the process of attacking cloud-hosted AI infrastructure:
Initial access
- Valid Accounts (AML.T0012): Adversaries often acquire legitimate credentials via phishing campaigns, credential stuffing, or supply chain breaches.
- Exploit Public-Facing Applications (AML.T0049): Poorly secured or exposed endpoints (e.g., RAG assistants or APIs) can be used to gain initial footholds into GenAI systems.

AI Model Access
- AI Model Inference API Access (AML.T0040): Gaining direct access to inference APIs to run unauthorized queries or workloads.
- AI-Enabled Product or Service (AML.T0047): Targeting enterprise software integrated with GenAI to manipulate output or extract internal data.
Execution
- LLM Prompt Injection (AML.T0051): Injecting malicious inputs that subvert guardrails or logic, especially in RAG or workflow-integrated systems.
Privilege Escalation/Defense Evasion
- LLM Jailbreak (AML.T0054): Bypassing model controls to unlock restricted functions or generate harmful content.

Discovery
- Discover AI Model Family (AML.T0014): Identifying model architecture or vendor characteristics to tailor attacks.
- Discover AI Artifacts (AML.T0007): Locating logs, prompt histories, or datasets that reveal system internals.

Collection & Exfiltration
- Data from Information Repositories (AML.T0036): Harvest structured/unstructured data retrieved through RAG or embedded in AI services.
- Exfiltration via AI Inference API (AML.T0024): Slowly extract data by abusing the inference layer.
- LLM Data Leakage (AML.T0057): Trigger unintentional data leakage through carefully crafted queries.
Impact
- Denial of AI Service (AML.T0029): Overload or disrupt AI endpoints to degrade availability.
- External Harms (AML.T0048): Cause financial, reputational, legal, or physical harm by abusing AI systems or manipulating AI output in critical applications.

These techniques illustrate how attackers exploit every layer of AI infrastructure: access, execution, data, and impact. MITRE ATLAS provides the mapping needed for SOC teams to prioritize detections, validate defenses, and red team effectively across enterprise AI environments.
How Vectra AI maps to MITRE ATLAS
Vectra AI maps its detection logic to MITRE ATLAS to help SOC teams identify:
- Identities suspiciously accessing GenAI platforms such as AWS Bedrock and Azure AI Foundry
- Attempts to evade defenses and impede investigations of GenAI abuse
- Anomalous usage of GenAI models consistent with cloud account compromise
In addition to surfacing these behaviors, Vectra’s AI prioritization agent amplifies analyst focus by raising the risk profiles of identities associated with suspicious access and enablement of GenAI models. Because Vectra AI delivers agentless, identity-driven detection across hybrid cloud and SaaS environments, it is uniquely positioned to detect threats that traditional tools might miss-especially in AI-integrated workflows.
Overview of our current visibility into the techniques and sub techniques defined in MITRE's ATLAS framework (as published on 17 Mar 2025).
The narrative is changing. Generative AI is no longer just a tool in the hands of attackers; it is now a target. And as enterprise adoption grows, so too will the sophistication of these attacks. By using frameworks like MITRE ATLAS and deploying solutions like the Vectra AI Platform, security teams can stay ahead of evolving threats and ensure AI delivers value without compromising integrity.