Securing Cloud AI Deployments: Insights from MITRE ATLAS and the Need for AI-Driven Defense

June 18, 2025
Zack Abzug
Data Science Manager

As organizations integrate AI across mission-critical systems, the attack surface grows beyond traditional IT and cloud environments. ATLAS fills this gap by documenting AI-specific attack scenarios, adversary behaviors observed in the wild, and mitigation strategies tailored to AI-enabled environments.

What is the MITRE ATLAS framework?

MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) is a living, publicly accessible knowledge base that catalogs adversary tactics and techniques targeting AI-enabled systems. Modeled after the widely adopted MITRE ATT&CK® framework, ATLAS is tailored to the unique threats, vulnerabilities, and risks posed by artificial intelligence technologies.

ATLAS is a curated resource built on:

  • Real-world attack observations
  • AI red team demonstrations
  • Security research from government, industry, and academia

It captures how adversaries target AI and machine learning systems, including behaviors, techniques, and tools that are either specific to or adapted for AI contexts.

From weaponization to infrastructure abuse: a shift in attacker focus

The overwhelming majority of adversarial AI activity observed to date involves threat actors using generative AI to accelerate their campaigns through tasks such as:

  • Writing phishing emails or social engineering scripts
  • Researching vulnerabilities or techniques
  • Troubleshooting malware or tool development
  • Generating content for scams or fake identities

But a more concerning pattern is gaining attention: attacker tactics are evolving, and the exploitation of cloud-based LLM resources—such as AWS Bedrock and Azure AI Foundry—has become a growing vector for profit and abuse.
Figure: Amazon Bedrock is a service that helps users build and scale generative AI applications. It's a fully managed service that supports deployment of foundation models from Anthropic, Meta, and more.

Public LLMs can generate text or assist with scripting, but cloud-hosted models are deeply integrated into high-value enterprise workflows. These systems offer adversaries access to compute, sensitive data, and trusted execution environments.

Figure: Meme illustrating that attackers prefer abusing cloud-hosted GenAI over simply using GenAI for generic tasks.

Why attack cloud-hosted AI instead of free, open-source models?

Cloud platforms like AWS Bedrock and Azure AI Foundry are attractive targets because they:

  • Expose multi-tenant infrastructure: A vulnerability in shared components can impact multiple customers.
  • Provide access to confidential enterprise data: Especially when tied to RAG (retrieval-augmented generation) workflows, which enhance LLM responses by retrieving and injecting enterprise data from connected knowledge sources (a minimal example of such a call appears below).
  • Enable abuse of trust and identity integrations: Cloud identities and IAM roles can be leveraged for privilege escalation or lateral movement.
  • Cost money to operate: Attackers can exploit this by hijacking compute resources (LLMjacking).

Abusing these platforms allows adversaries to operate with higher stealth and return on investment compared to using open-source or public APIs.
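To make the RAG point above concrete, here is a minimal sketch of what a retrieval-augmented query against an Amazon Bedrock knowledge base looks like with boto3. The knowledge base ID, model ARN, and prompt are placeholders, and the call assumes the Bedrock Knowledge Bases retrieve_and_generate API; the takeaway is that retrieval runs with the calling identity's permissions, so whoever controls that identity can pull answers grounded in enterprise documents.

```python
import boto3

# Hypothetical RAG call against an Amazon Bedrock knowledge base.
# The knowledge base ID, model ARN, and prompt below are placeholders.
client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = client.retrieve_and_generate(
    input={"text": "Summarize our internal incident-response runbook."},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "EXAMPLEKB123",  # placeholder knowledge base ID
            "modelArn": (
                "arn:aws:bedrock:us-east-1::foundation-model/"
                "anthropic.claude-3-sonnet-20240229-v1:0"
            ),
        },
    },
)

# The generated answer is grounded in documents retrieved with the caller's
# permissions -- which is exactly why a compromised identity with this access
# can read enterprise data it was never meant to see.
print(response["output"]["text"])
```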

Why adversaries target Generative AI infrastructure

Attackers targeting cloud-based GenAI services are motivated by three primary objectives: financial gain, data exfiltration, and destructive intent. Their success depends heavily on how they gain initial access to enterprise infrastructure.

1. Financial Gain

Attackers may seek to hijack enterprise accounts or exploit misconfigured infrastructure to run unauthorized inference jobs—a tactic known as LLMjacking. They can also abuse cloud AI services for free computation or monetized model access.
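As a rough illustration (not a description of any particular product's detection logic), a defender can approximate LLMjacking detection by watching who is invoking Bedrock models. The sketch below queries CloudTrail for recent InvokeModel events and flags principals outside a known baseline; the baseline set, region, and one-day lookback are assumptions for the example, and it presumes InvokeModel calls are visible in your CloudTrail event history.

```python
import json
from datetime import datetime, timedelta, timezone

import boto3

# Illustrative baseline of principals expected to call Bedrock (placeholder values).
KNOWN_PRINCIPALS = {"ml-platform-role", "genai-app-service"}

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")
start_time = datetime.now(timezone.utc) - timedelta(days=1)

paginator = cloudtrail.get_paginator("lookup_events")
pages = paginator.paginate(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "InvokeModel"}],
    StartTime=start_time,
)

for page in pages:
    for event in page["Events"]:
        detail = json.loads(event["CloudTrailEvent"])
        # Keep only Bedrock events (covers both the bedrock and bedrock-runtime endpoints).
        if not detail.get("eventSource", "").startswith("bedrock"):
            continue
        principal = event.get("Username") or detail.get("userIdentity", {}).get("arn", "unknown")
        if principal not in KNOWN_PRINCIPALS:
            print(
                f"Possible LLMjacking: {principal} invoked a Bedrock model "
                f"from {detail.get('sourceIPAddress')} at {event['EventTime']}"
            )
```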

2. Exfiltration

Sophisticated adversaries aim to extract proprietary models, training data, or sensitive enterprise documents accessed via RAG (retrieval-augmented generation) systems. Inference APIs can also be abused to leak data over time.

3. Destruction

Some actors seek to degrade system performance or availability by launching denial-of-service attacks, poisoning training pipelines, or corrupting model outputs.

While misuse of public LLMs enables some attacker activity, the strategic advantages of targeting enterprise GenAI infrastructure (data access, scalability, and trust exploitation) make it a more attractive and impactful vector.

How MITRE ATLAS helps you understand and map these threats

MITRE ATLAS provides a structured view of real-world tactics and techniques used against AI systems, which maps directly to risks seen in platforms like AWS Bedrock, Azure AI, and other managed GenAI services.

ATLAS covers a wide range of techniques beyond those related to cloud-hosted AI infrastructure, including AI model reconnaissance, poisoning open-source models or datasets, LLM plugin compromise, and many others.

Here are some examples of specific techniques a threat actor could leverage in the process of attacking cloud-hosted AI infrastructure:

Initial Access

Figure: Dark Web Ecosystem for LLMjacking — Compromised accounts and AWS IAM keys are sold through underground services, allowing buyers to run inference workloads on cloud-hosted LLMs at the victim's expense.
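When a key is suspected of having been leaked or sold, a quick first triage step is to check where and when it was last used. Below is a minimal sketch using the standard IAM API; the access key ID is a placeholder.

```python
import boto3

# Quick triage sketch: given an access key suspected of being leaked or sold,
# check when, with which service, and in which region it was last used.
iam = boto3.client("iam")

resp = iam.get_access_key_last_used(AccessKeyId="AKIAEXAMPLEKEYID0000")  # placeholder key ID
last_used = resp["AccessKeyLastUsed"]

print(f"User: {resp.get('UserName')}")
print(
    f"Last used: {last_used.get('LastUsedDate')} "
    f"via {last_used.get('ServiceName')} in {last_used.get('Region')}"
)
```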

AI Model Access

Execution

Privilege Escalation/Defense Evasion

Figure: After reconnaissance confirms which LLM models are enabled, attackers disable prompt logging to hide their tracks. With logging turned off, the system no longer records their illicit prompts, making it difficult to see what they are doing with the hijacked models.
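A simple, concrete control against this evasion step is to continuously verify that model invocation logging is still configured and to treat its removal as a high-severity signal. A minimal sketch using the Bedrock logging-configuration API in boto3 (the region is a placeholder):

```python
import boto3

# Minimal audit sketch: alert if Bedrock model invocation logging has been removed.
bedrock = boto3.client("bedrock", region_name="us-east-1")

config = bedrock.get_model_invocation_logging_configuration()
logging_config = config.get("loggingConfig")

if not logging_config:
    # No logging configuration at all: prompts and completions are not being recorded.
    print("ALERT: model invocation logging is disabled in this region")
else:
    destinations = [key for key in ("cloudWatchConfig", "s3Config") if key in logging_config]
    print(f"Invocation logging enabled; destinations: {destinations}")
```

Pairing this check with an alert on DeleteModelInvocationLoggingConfiguration events in CloudTrail catches the moment logging is switched off, rather than discovering it on the next audit pass.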

Discovery

Figure: Attack Flow for Unauthorized Model Activation — Once inside a cloud environment using compromised IAM keys, adversaries perform reconnaissance to discover foundation models, enable them, and initiate inference—incurring cost and risk for the victim.
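The reconnaissance half of this flow is easy to reproduce and, importantly, just as easy to observe. The sketch below uses the same ListFoundationModels call an intruder would use for discovery to enumerate what is invocable on demand in a region; the region is a placeholder, and the corresponding CloudTrail event gives defenders a hook for baselining which identities normally perform this discovery.

```python
import boto3

# Enumerate foundation models in a region -- the same discovery step an intruder
# with stolen IAM keys would perform before enabling models and running inference.
bedrock = boto3.client("bedrock", region_name="us-east-1")

summaries = bedrock.list_foundation_models()["modelSummaries"]
on_demand = [
    m["modelId"]
    for m in summaries
    if "ON_DEMAND" in m.get("inferenceTypesSupported", [])
]

print(f"{len(on_demand)} foundation models are invocable on demand in this region")

# Defender's angle: this call surfaces in CloudTrail as a ListFoundationModels event,
# so an identity that has never touched Bedrock suddenly issuing it is worth flagging.
```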

Collection & Exfiltration

Impact

Figure: Here the attacker again starts by exploiting a stolen IAM credential, followed by reconnaissance. Next, the attacker can test whether guardrails are in place for the enabled models. With sufficient permissions, they can even tamper with the guardrails—especially custom ones—making the models more exploitable.
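Guardrail tampering requires write access to the guardrail resources, which means a periodic audit of what exists and when it last changed can surface it. The sketch below compares the guardrails present in a region against an expected baseline; the baseline names are placeholders and the check is intentionally simplistic (no pagination, single region).

```python
import boto3

# Hypothetical baseline of guardrail names the security team expects to exist.
EXPECTED_GUARDRAILS = {"prod-pii-filter", "prod-jailbreak-block"}

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Single page for brevity; a real audit would paginate and cover every region in use.
found = {g["name"]: g for g in bedrock.list_guardrails().get("guardrails", [])}

# A missing guardrail may indicate deletion; a recent update may indicate tampering.
for name in sorted(EXPECTED_GUARDRAILS - found.keys()):
    print(f"ALERT: expected guardrail '{name}' not found")

for name, guardrail in found.items():
    print(
        f"{name}: status={guardrail.get('status')}, "
        f"last updated {guardrail.get('updatedAt')}"
    )
```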

These techniques illustrate how attackers exploit every layer of AI infrastructure: access, execution, data, and impact. MITRE ATLAS provides the mapping needed for SOC teams to prioritize detections, validate defenses, and red team effectively across enterprise AI environments.

How Vectra AI maps to MITRE ATLAS

Vectra AI maps its detection logic to MITRE ATLAS to help SOC teams identify:

  • Identities suspiciously accessing GenAI platforms such as AWS Bedrock and Azure AI Foundry
  • Attempts to evade defenses and impede investigations of GenAI abuse
  • Anomalous usage of GenAI models consistent with cloud account compromise

In addition to surfacing these behaviors, Vectra's AI prioritization agent amplifies analyst focus by raising the risk profiles of identities associated with suspicious access and enablement of GenAI models. Because Vectra AI delivers agentless, identity-driven detection across hybrid cloud and SaaS environments, it is uniquely positioned to detect threats that traditional tools might miss, especially in AI-integrated workflows.

  • Initial Access (AML.TA0004): Valid Accounts, Exploit Public-Facing Application, Phishing
  • AI Model Access (AML.TA0000): AI Model Inference API Access, AI-Enabled Product or Service
  • Execution (AML.TA0005): LLM Prompt Injection
  • Privilege Escalation (AML.TA0012): LLM Jailbreak
  • Defense Evasion (AML.TA0007): LLM Jailbreak
  • Discovery (AML.TA0008): Discover AI Model Family, Discover AI Artifacts
  • Collection (AML.TA0009): Data from Information Repositories
  • Exfiltration (AML.TA0010): Exfiltration via AI Inference API, LLM Data Leakage
  • Impact (AML.TA0011): Denial of AI Service, External Harms

Overview of our current visibility into the techniques and sub-techniques defined in MITRE's ATLAS framework (as published on 17 Mar 2025).

The narrative is changing. Generative AI is no longer just a tool in the hands of attackers; it is now a target. And as enterprise adoption grows, so too will the sophistication of these attacks. By using frameworks like MITRE ATLAS and deploying solutions like the Vectra AI Platform, security teams can stay ahead of evolving threats and ensure AI delivers value without compromising integrity.
