AI systems—especially those powered by large language models (LLMs)—are no longer just helping employees be more productive. They’re also creating new risks that traditional security tools weren’t designed to detect.
These threats don’t come in the form of malware or phishing attachments. They emerge through manipulated model behavior, identity misuse, and hard-to-see activity that unfolds inside cloud services and trusted workflows.
That’s why security teams are turning to frameworks like MITRE ATLAS, the AI Risk Repository, and the OWASP Top 10 for LLM Applications. These resources help organizations understand how attackers are targeting AI and where security blind spots still exist.
The AI attack surface is exploding—and most SOCs can’t see it
AI is now part of day-to-day business operations. Tools like copilots, intelligent assistants, and AI-powered search systems are being used to improve customer support, automate analysis, and streamline development.
But this shift has expanded the attack surface.
In our recent blog on Securing Cloud AI Deployments, we outlined how attackers are already abusing platforms like AWS Bedrock and Azure AI. Techniques such as hijacking compute resources, injecting malicious prompts, and abusing cloud identities are not theoretical—they are already happening. Many of these threats are documented in MITRE ATLAS, which tracks real-world adversary behavior against AI systems.
AI is no longer just a tool being misused. It is now the target itself.
Two additional frameworks add critical context:
- The AI Risk Repository, led by MIT, catalogs more than 770 risks related to how AI is built, deployed, and used. Over 65% of these threats happen after deployment, where visibility and control are often weakest.
- The OWASP Top 10 for LLM Applications outlines the top vulnerabilities specific to language models, including prompt injection, sensitive information disclosure, and excessive agency (a simple illustration follows below).
These frameworks share one message: today’s threats are unfolding inside trusted systems and workflows, not outside the perimeter.
Traditional tools miss them because they aren’t designed to look in the right places.
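To make the OWASP risks above concrete, here is a minimal sketch of a naive, static defense: screening a chatbot’s response for obvious signs of a leak, such as an echoed system prompt or credential-shaped strings, before it reaches the user. Every name, pattern, and prompt in it is hypothetical, and the example cuts both ways: a check like this is easy to write and just as easy for a determined attacker to slip past, which is exactly why these frameworks point toward behavioral detection.

```python
# Sketch: naive static output screening for an LLM-backed chatbot.
# The system prompt and patterns below are hypothetical; a check like
# this catches only the clumsiest leaks and is easy to evade.
import re

SYSTEM_PROMPT = "You are the internal HR assistant. Never reveal employee records."

LEAK_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),        # AWS access key ID shape
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # US SSN-shaped string
]

def looks_safe(response: str) -> bool:
    """Return True if the model response shows no obvious sign of leakage."""
    # Crude check for the model echoing its own instructions back to the user.
    if SYSTEM_PROMPT.lower()[:40] in response.lower():
        return False
    # Crude check for credential- or PII-shaped strings in the output.
    return not any(pattern.search(response) for pattern in LEAK_PATTERNS)

if __name__ == "__main__":
    leaked = "Sure! My instructions say: You are the internal HR assistant. Never reveal employee records."
    print(looks_safe(leaked))                                 # False: system prompt echoed back
    print(looks_safe("HR policy allows 20 vacation days."))   # True
```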

Three frameworks, one warning: AI creates a new type of risk
Understanding how AI changes your threat model means learning from three distinct but complementary perspectives:
- MITRE ATLAS maps the real techniques used by attackers, including abusing AI inference APIs and bypassing model restrictions.
- The AI Risk Repository highlights where things can go wrong across the entire AI lifecycle, from development to deployment.
- The OWASP Top 10 for LLM Applications focuses on how LLMs can be exploited or manipulated in ways that were never part of traditional IT systems.
These aren’t simply academic tools—they explain what your SOC might already be missing. AI systems behave differently. They make decisions, respond dynamically, and interact with data and users in ways that challenge conventional security logic.
What ties all these frameworks together is the need for real-time, behavior-based detection. Static rules, signatures, or lists of known bad actors are no longer enough.
Where traditional security tools fall short
Many organizations rely on tools that were built to find malware, detect unusual traffic, or block unauthorized access. These tools are important—but they weren’t designed for AI-specific risks.
Here’s what most legacy detection tools don’t catch:
- A valid user running unauthorized jobs on a GenAI service like AWS Bedrock, using up compute resources and racking up hidden costs (see the sketch below).
- A chatbot being manipulated to leak sensitive information through a series of carefully crafted prompts.
- A subtle change in training data that leads an AI model to give biased or harmful outputs down the line.
These kinds of incidents don’t look suspicious to tools focused on files or endpoints. They appear as normal API calls, ordinary user sessions, or trusted application behavior—until it’s too late.
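To illustrate the first scenario, here is a minimal sketch of the kind of query a SOC would need just to see that activity: pulling recent Bedrock invocation events from AWS CloudTrail and flagging calls made off-hours or by identities outside an approved list. The approved-identity names and the business-hours window are assumptions made for illustration, and the sketch presumes Bedrock API calls are recorded in CloudTrail under the event source shown; treat it as a starting point for visibility, not a complete detection.

```python
# Sketch: surface off-hours or unexpected Bedrock model invocations from CloudTrail.
# Assumptions (for illustration only): Bedrock invocations appear in CloudTrail with
# eventSource "bedrock.amazonaws.com", and only the identities in EXPECTED_IDENTITIES
# are approved to run inference.
import json
from datetime import datetime, timedelta, timezone

import boto3

EXPECTED_IDENTITIES = {"genai-app-role", "ml-platform-service"}  # hypothetical names
BUSINESS_HOURS = range(8, 19)  # 08:00-18:59 UTC; adjust for your organization

cloudtrail = boto3.client("cloudtrail")
pages = cloudtrail.get_paginator("lookup_events").paginate(
    LookupAttributes=[
        {"AttributeKey": "EventSource", "AttributeValue": "bedrock.amazonaws.com"}
    ],
    StartTime=datetime.now(timezone.utc) - timedelta(days=1),
)

for page in pages:
    for event in page["Events"]:
        detail = json.loads(event["CloudTrailEvent"])
        user = detail.get("userIdentity", {}).get("arn", "unknown")
        hour = event["EventTime"].hour
        if event["EventName"].startswith("InvokeModel") and (
            hour not in BUSINESS_HOURS
            or not any(name in user for name in EXPECTED_IDENTITIES)
        ):
            print(f"review: {user} called {event['EventName']} at {event['EventTime']}")
```

Even this narrow check depends on knowing which identities should be talking to the model in the first place, which is itself an identity-visibility problem.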
Putting AI threat frameworks into action
Understanding the risks is only the beginning. The real challenge for security leaders is operationalizing those insights: making them part of day-to-day detection and response.
Today, most SOCs cannot confidently answer:
- Who is accessing GenAI services across the organization?
- Are any accounts behaving unusually, such as running large inference jobs late at night? (See the sketch after this list.)
- Are any AI systems being manipulated in ways that could lead to data exposure or policy violations?
- Can we detect if guardrails on an AI model are being bypassed?
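Answering the second question, for example, comes down to comparing each identity’s activity against its own history rather than against a fixed threshold. The sketch below shows that idea in its simplest form over a hypothetical table of daily inference-call counts per identity; a real deployment would draw these counts from cloud audit logs and use far richer features than a single z-score.

```python
# Sketch: flag identities whose GenAI inference volume deviates sharply
# from their own recent baseline. The counts below are hypothetical; in
# practice they would come from cloud audit logs or an identity platform.
from statistics import mean, pstdev

daily_inference_calls = {  # identity -> inference calls per day, last 7 days
    "alice@corp.example": [12, 9, 14, 11, 10, 13, 210],          # sudden spike
    "build-pipeline-svc": [400, 395, 410, 405, 398, 402, 401],   # steady heavy user
}

for identity, counts in daily_inference_calls.items():
    history, today = counts[:-1], counts[-1]
    baseline = mean(history)
    spread = pstdev(history) or 1.0   # avoid dividing by zero for a flat history
    z_score = (today - baseline) / spread
    if z_score > 3:                   # illustrative threshold, not a tuned detection
        print(f"review: {identity} ran {today} inference calls today "
              f"(baseline ~{baseline:.0f}, z={z_score:.1f})")
```

A one-off script like this answers one question, for one service, on one day; sustaining that coverage across every identity, model, and cloud is where most SOCs fall short.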
The Vectra AI Platform addresses these challenges by mapping its behavioral detection logic directly to MITRE ATLAS. This alignment helps SOC teams surface high-risk activity that traditional tools often overlook, such as:
- Suspicious access to GenAI platforms by users or identities not typically associated with such services
- Attempts to evade logging or monitoring during GenAI model interaction
- Unusual usage patterns that suggest account compromise or model abuse
Vectra’s AI-driven prioritization engine further enhances analyst productivity by automatically raising the risk profile of identities involved in GenAI activity, helping teams focus on what matters most.
Because the platform delivers agentless, identity-first visibility across hybrid cloud and SaaS environments, it is uniquely positioned to detect AI-related threats without requiring changes to the models or infrastructure. This makes it especially effective in production environments where GenAI is deeply integrated into workflows.
AI is no longer just a tool: it’s a target
As enterprise adoption of AI accelerates, attackers are adapting quickly. The techniques outlined in MITRE ATLAS are no longer niche; they are becoming common tactics for exploiting modern AI systems.
By aligning your security program with these frameworks and deploying solutions like the Vectra AI Platform, your team can move from passive visibility to proactive detection. You’ll gain the context needed to detect AI threats in real time, protect cloud-hosted LLMs, and reduce the risk that your GenAI investments become entry points for attackers.
🎧 Want more on GenAI risks in Microsoft 365?
Listen to our Hunt Club Podcast episodes to hear how Vectra AI closes visibility gaps in Copilot deployments:
- Threat Briefing: How Attackers Are Bypassing SharePoint Security Using Copilot
- Threat Briefing: Copilot for M365 Attack Surface
- Product Briefing: Detecting Attacker Abuse of Microsoft Copilot for M365
Explore our self-guided demo or connect with our team to learn how Vectra AI can help you defend the future of your enterprise.