I was halfway through a post-Christmas coffee when the first message came in.
Not a “Merry Christmas.” Not a backlog reminder.
A CVE alert.
Within minutes, Slack lit up. Analysts who were supposed to be offline were suddenly back online. Dashboards refreshed. Exposure assessed. Systems identified. Isolation. Patching. All of it, in a hurry.
That’s how 2026 really starts for security teams.
Not with resolutions.
Not with strategy decks. But with a reminder that attackers don’t take holidays.
Now, as we start this new year, it’s time to think about New Year’s resolutions. Personally, that usually means less coffee and more sleep. Professionally, it means something else entirely: predictions!
Spoiler alert: 2025 was already all about AI. But if you think we’ve hit peak hype (or peak impact), you haven’t been paying attention.
2026 is going to be even bigger. The momentum hasn’t slowed. If anything, it’s accelerating. AI is unlocking new capabilities almost daily, on both sides of the battlefield. And while that’s exciting… it’s also deeply uncomfortable.
So, let’s talk about what’s coming.
Prediction #1: AI-Powered Offense Is About to Get Bigger (and Very Scary)
Let’s start with the obvious warning.
Offensive security with AI is going to give defenders a hard time.
In 2025, we crossed an important threshold: for the first time, a fully autonomous, AI-driven penetration tester—XBOW—reached #1 on HackerOne. That alone should make everyone in security pause. This wasn’t a proof-of-concept. It was a signal.
Throughout the year, we’ve seen an explosion of AI red team research and tooling around:
· Custom LLMs
· Single-agent systems
· Multi-agent architectures
· Long-term memory workarounds for LLM context limits and similar constraints (one common pattern is sketched below)
And the pace hasn’t slowed. In fact, some of these systems are now competitive with human professionals, and on certain tasks they outperform them. Paired with ever-smarter models, these tools are becoming increasingly mature and sophisticated, allowing AI to be applied across every phase of the kill chain.
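To make that memory bullet concrete: a common long-term memory workaround is to keep a rolling summary of older turns plus a verbatim window of recent ones, so an agent can keep operating far past the model’s context limit. Here is a minimal sketch of the idea, assuming nothing beyond standard Python; the summarize function is a placeholder for a real LLM call, and none of the names map to any specific framework’s API.

```python
# Minimal sketch of a rolling-summary memory for a long-running agent.
# summarize() is a placeholder for a real LLM call.

def summarize(text: str, max_chars: int = 500) -> str:
    """Placeholder: a real implementation would ask an LLM to compress this."""
    return text[:max_chars]  # naive truncation keeps the sketch self-contained

class RollingMemory:
    def __init__(self, window: int = 10, budget_chars: int = 4000):
        self.window = window          # how many recent turns stay verbatim
        self.budget = budget_chars    # rough stand-in for the context limit
        self.summary = ""             # compressed memory of everything older
        self.recent: list[str] = []   # verbatim recent turns

    def add(self, turn: str) -> None:
        self.recent.append(turn)
        # When the verbatim window grows too large, fold the oldest turns
        # into the running summary instead of dropping them outright.
        while self.recent and (len(self.recent) > self.window
                               or self._size() > self.budget):
            oldest = self.recent.pop(0)
            self.summary = summarize(self.summary + "\n" + oldest)

    def context(self) -> str:
        """What actually gets sent to the model on each step."""
        return ("[memory summary]\n" + self.summary
                + "\n[recent turns]\n" + "\n".join(self.recent))

    def _size(self) -> int:
        return len(self.summary) + sum(len(t) for t in self.recent)

memory = RollingMemory(window=3)
for i in range(20):
    memory.add(f"turn {i}: observation and action details...")
print(memory.context())  # stays bounded no matter how long the run gets
```

It is a toy, but the underlying trade is the one that matters: perfect recall exchanged for unbounded runtime.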
At the same time, we’ve seen sobering research from large AI vendors themselves. Anthropic, for example, published detailed findings showing how their models have already been misused in the wild, from malware development to social engineering at scale (here and here; both are worth reading to understand what we are facing today).
Guardrails help. But they are not robust enough yet, and open-source models often lack them entirely. The result?
AI-driven attacks are:
· Easier to execute (lower skill barrier)
· More complex (multi-stage, multi-domain)
· More evasive (polymorphic payloads, adaptive behavior)
· Autonomous (self-driven, end to end)
· Adaptive (adjusting tactics as defenses respond)
· Relentless (they don’t sleep; they just keep trying)
· Massively scalable
As AI-driven attacks evolve and become increasingly efficient, we can expect both the volume of attacks and the speed of execution to increase significantly. The time between initial access and a breach will likely be even shorter than it was in 2025.
Adversaries no longer need a team. They can stand up an army of agents. This reinforces something defenders already know but can no longer ignore: AI-driven detection, triage, and prioritization are no longer optional. They are critical for handling the growing volume of attacks at machine speed.
Speed and scale matter. And humans alone cannot keep up!
Prediction #2: The AI SOC Is Real
AI agents didn’t quietly arrive in the SOC in 2025 — they exploded.
The emergence of agentic architectures, combined with the rise of Model Context Protocol (MCP), fundamentally changed what’s possible. AI agents can now connect to tools, query live data, enrich context, and even take action. At Vectra AI, we leaned into this reality by releasing MCP servers for our platforms (here and here), because if AI is going to help operate the SOC, it needs first-class, programmatic access to security data.
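To ground this, here is a minimal sketch of what such an MCP server can look like, using the FastMCP interface from the official MCP Python SDK. The get_open_detections tool and its in-memory data are hypothetical stand-ins, not Vectra’s actual API. The point is the shape: an agent can discover and call security tools programmatically.

```python
# Minimal sketch of an MCP server exposing security data to AI agents,
# using FastMCP from the official MCP Python SDK (pip install mcp).
# The tool and its data are hypothetical stand-ins, not Vectra's API.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("soc-demo")

# Fake in-memory "detection store" so the sketch runs on its own.
DETECTIONS = [
    {"id": "det-001", "host": "web-01", "type": "C2 beacon", "severity": "high"},
    {"id": "det-002", "host": "db-02", "type": "credential misuse", "severity": "medium"},
]

@mcp.tool()
def get_open_detections(min_severity: str = "medium") -> list[dict]:
    """Return open detections at or above the given severity."""
    order = {"low": 0, "medium": 1, "high": 2}
    floor = order.get(min_severity, 1)
    return [d for d in DETECTIONS if order[d["severity"]] >= floor]

if __name__ == "__main__":
    # Speaks MCP over stdio; any MCP-aware client or agent can now
    # discover this tool and call it programmatically.
    mcp.run()
```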
In 2025, we saw:
· Established SIEM and SOAR vendors repositioning around “AI-driven SOC” narratives
· New startups emerging as pure-play AI SOC platforms
· Growing CISO and board-level interest driven by the never-ending problem of alert volume, investigation time, and analyst burnout/turnover.
The need is obvious, and according to Gartner’s 2025 Cybersecurity Innovations survey, 46% of organizations plan to start using AI agents in security operations in 2026.
But here’s the catch.
Adoption will lag maturity — not because the technology isn’t ready, but because trust, governance, and skills are not.
I’ve spoken with enough security leaders over the past year to see the pattern clearly. Everyone agrees AI belongs in the SOC. Very few are comfortable getting started, let alone letting it operate autonomously in production.
And that hesitation is justified.
Building, deploying, securing, and maintaining agentic SOC infrastructure is hard. MCP is being adopted faster than it is being secured. Almost every SaaS provider now exposes some form of MCP capability, and AI agents are being granted access to tools that were never designed with agentic use in mind. In 2025, we already saw the warning signs: the Asana MCP vulnerability was a wake-up call.
Boards are now asking uncomfortable, but necessary, questions:
· What permissions do our AI agents actually have?
· Who can audit their actions?
· What’s the blast radius if an agent is compromised?
· Can we even see which tools they’re calling?
This is where the conversation shifts.
In 2026, AI agent governance becomes its own security problem — and its own product category.
We’re already seeing early signals:
· Startups like RunLayer and Aira Security focusing on MCP visibility and control
· Capabilities centered on:
o Auditing MCP server connections
o Monitoring agent tool calls
o Enforcing least privilege for AI agents (a pattern sketched just below)
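None of this requires exotic technology. Here is a minimal sketch of the least-privilege pattern behind those capabilities: every tool call an agent makes passes through a policy gate that checks an explicit allowlist, rate-limits usage, and audits every attempt. The policy format and names are illustrative, not any vendor’s product.

```python
# Minimal sketch of a least-privilege gate for agent tool calls.
# Policy format and names are illustrative, not any specific product's API.
import json
import time

# Explicit allowlist: which tools each agent identity may call, and how often.
POLICY = {
    "triage-agent": {
        "get_open_detections": {"max_calls_per_min": 30},
        "enrich_ip": {"max_calls_per_min": 60},
    },
    # Deliberately absent: no agent may call "isolate_host" without a human.
}

AUDIT_LOG = []       # in production this would be an append-only store
_call_times = {}     # (agent, tool) -> recent call timestamps

def gated_call(agent_id: str, tool_name: str, tool_fn, **kwargs):
    """Run tool_fn only if policy allows it; audit every attempt either way."""
    now = time.time()
    allowed = tool_name in POLICY.get(agent_id, {})
    if allowed:
        recent = [t for t in _call_times.get((agent_id, tool_name), [])
                  if now - t < 60]
        allowed = len(recent) < POLICY[agent_id][tool_name]["max_calls_per_min"]
        _call_times[(agent_id, tool_name)] = recent + [now]
    AUDIT_LOG.append({"ts": now, "agent": agent_id, "tool": tool_name,
                      "args": kwargs, "allowed": allowed})
    if not allowed:
        raise PermissionError(f"{agent_id} may not call {tool_name} right now")
    return tool_fn(**kwargs)

# Usage: agents never touch tools directly, only through the gate.
def enrich_ip(ip: str) -> dict:
    return {"ip": ip, "reputation": "unknown"}  # stub enrichment backend

print(gated_call("triage-agent", "enrich_ip", enrich_ip, ip="203.0.113.7"))
print(json.dumps(AUDIT_LOG[-1]))
```

Notice that denied calls get audited too. In agent governance, the attempts that didn’t happen are often the most useful signal.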
Just like cloud security had to mature rapidly, agent infrastructure will need:
· Guardrails
· Policy enforcement
· Observability
· Compliance by design
If 2025 was about connecting AI agents to everything, 2026 will be about regaining control.
For platforms like Vectra AI, this evolution matters deeply. The Vectra Platform already sits on a wealth of high-fidelity data: attack signals, network metadata, identity behaviors, investigation context. Making all of that data accessible programmatically and efficiently to LLMs isn’t just an architectural choice; it’s a prerequisite for a functional AI SOC. Check out my blog from a couple of weeks ago on LLM efficacy for SOC use cases!
The AI SOC is real, but in 2026, governance will separate experimentation from production, and ambition from resilience.
Prediction #3: Shadow AI and Agentic IAM Redefine the Attack Surface
Two challenges quietly exploded in 2025, and they are not going away:
· Agentic IAM
· Shadow AI
Employees are deploying copilots, LLMs, and autonomous agents—often without security involvement. These systems access data, make decisions, and act on behalf of users in ways traditional IAM was never designed to handle.
The impact is profound.
Identity is no longer just human vs. machine.
It’s an agent acting on behalf of… something else.
Shadow AI isn’t just a governance problem.
It’s an attack-surface multiplier.
If you’re still modeling threats as:
user → device → application
you’re already behind.
In 2026, security teams will be forced to:
· Track agent identities explicitly
· Understand what data agents can access and move
· Detect misuse that doesn’t look like compromise (see the sketch below)
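What does tracking agent identities explicitly look like in practice? Here is one possible shape: a minimal sketch of an identity record with an explicit delegation chain back to a human principal, plus scoped data access. The schema is hypothetical; no IAM standard defines this today, which is rather the point.

```python
# Minimal sketch of an explicit agent identity with a delegation chain.
# Field names are illustrative, not any IAM product's schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str
    acts_on_behalf_of: str      # a human principal or another agent_id
    data_scopes: frozenset      # what the agent may read or move
    expires_at: str             # agent credentials should be short-lived

def delegation_chain(agent: AgentIdentity, registry: dict) -> list:
    """Walk the 'on behalf of' links until we reach a non-agent principal."""
    chain = [agent.agent_id]
    principal = agent.acts_on_behalf_of
    while principal in registry:        # the principal is itself an agent
        chain.append(principal)
        principal = registry[principal].acts_on_behalf_of
    chain.append(principal)             # the human (or service) at the root
    return chain

registry = {
    "report-bot": AgentIdentity("report-bot", "copilot-7",
                                frozenset({"crm:read"}),
                                "2026-01-02T00:00:00Z"),
    "copilot-7": AgentIdentity("copilot-7", "alice@example.com",
                               frozenset({"crm:read", "mail:send"}),
                               "2026-01-02T00:00:00Z"),
}

# Misuse that doesn't look like compromise: an agent using a scope that
# nothing upstream in its delegation chain was ever granted.
print(delegation_chain(registry["report-bot"], registry))
# -> ['report-bot', 'copilot-7', 'alice@example.com']
```

With a chain like this, misuse becomes detectable even when no credential was stolen: an agent exercising a scope that nothing upstream in its chain was ever granted.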
Underestimating this shift would be… an understatement.
And once again, full network visibility becomes the safety net—the one place where “unknown unknowns” eventually show themselves.
Prediction #4: Hybrid, Multi-Vector Attacks Become the Default
Finally, let’s bring it all together.
Attackers don’t care about:
· Your org chart
· Your tool boundaries
· Your SOC silos
They see one giant attack surface.
This isn’t new—but in 2025, it became undeniable.
We saw groups like Scattered Spider, Storm-0501, APT41, and Volt Typhoon execute campaigns that blended:
· Cloud and on-prem compromise
· Identity abuse and malware
· Fraud techniques and intrusion tactics
Ransomware groups are evolving into fully automated extortion platforms that:
· Discover targets automatically
· Chain exploits
· Encrypt and exfiltrate
· Apply DDoS pressure when needed
Several forces are driving this shift:
· Ubiquitous hybrid environments with fragmented visibility
· AI-assisted orchestration of complex campaigns
· Living-off-the-land and credential abuse that blends into normal activity
Prediction for 2026:
Visibility and defense must be unified across the entire kill chain.
Siloed tools—and siloed teams—won’t survive this shift.
Hybrid attacks demand hybrid thinking.
Final Thought
As we head into 2026, one thing is certain: the security problem is no longer just technical. It’s architectural.
AI is reshaping offense, defense, identity, and operations at the same time. The old assumptions—clear perimeters, human-paced investigations, static trust models—are breaking down faster than most organizations can update their playbooks.
Success in this next phase won’t come from adopting more tools. It will come from building security programs that are observable, adaptable, and resilient by design—even when parts of them are run by machines.
AI is here. The question is whether your security strategy has caught up.