What’s Next for the Enterprise After Two GenAI Tidal Waves?

April 28, 2026
Oliver Tavakoli
Chief Technology Officer

The Spring of 2026

Anyone from 2020 time-traveling to attend RSA 2026 would come away believing that the only thing that matters in cybersecurity is GenAI. At this year’s RSA, the GenAI narrative sounded like this:

  • The leadership team within most organizations has been pressing for the rapid adoption of AI. The orthodox belief is now that any organization that fails to do so will find itself at a material disadvantage to its competitors — classic FOMO.
  • This pressure has created the same dynamic we saw with cloud: adoption has raced ahead with little thought given to threat models and appropriate security controls.
  • CISOs are chasing this runaway train, trying to bring it under control before insecure GenAI deployments cause too many self-inflicted wounds.
  • In NIST terms, it’s all about identifying, protecting and detecting GenAI (especially AI agents) right now — respond, recover and govern will come later.
  • Meanwhile, GenAI turns out to be great tech for automating all sorts of security tasks, powering the “AI SOC.” AI-enhanced workflows in general were being highlighted at many of the RSA booths.

Arriving home from this RSA conference on March 27, 2026, CISOs could confidently focus on (a) finding and protecting their organization’s use of AI and (b) leveraging AI to improve their security workflows.

The respite lasted exactly 11 days. On April 7, 2026, Anthropic announced to the world the existence of the Mythos Preview model — a model so powerful and dangerous that it could only be made available to a small number of carefully selected partners in a program called Glasswing. It turns out that Mythos (we’ll use this short name) is, among other things, particularly adept at finding vulnerabilities in code, crafting exploits for them, and creating patches for them. The idea of Glasswing is to give important infrastructure and security providers a head start, so that they can patch their products before these capabilities become more broadly available to individuals whose motives might be different.

Of course, it took only seven days for OpenAI to follow suit — announcing the availability of GPT-5.4-Cyber on April 14, 2026. Access to this model is not as limited as access to Mythos — it is restricted to members of OpenAI’s TAC (Trusted Access for Cyber) program, which is substantially more permissive than Glasswing.

So, access to these models will likely grow and efforts to keep “bad guys” from getting their hands on them will ultimately fail. And while Anthropic and OpenAI may have a lead, other frontier models are likely to achieve similar breakthroughs in the coming months — more genies will escape from more bottles.

Where does this leave us?

Security professionals are left to simultaneously deal with this dual threat:

  • How to find and secure all the AI in their organization while simultaneously adopting AI to radically transform their own security workflows?
  • How to deal with the implications of Mythos (and its ilk): with access to the right model, the vulnerabilities in their systems can easily be discovered and exploited, and the best way to reduce that risk is to patch everything, everywhere, all at once.

So, we’re in for a rocky year or two, or three.  

But the story that some people are selling is that on the far side of this chaos, all software will be secure by design (each release will have been scanned by magical models and cleansed of all vulnerabilities prior to being shipped) and we will reach a state of secure-software nirvana. This narrative is flawed in several ways:

  1. Frontier models get better

So, if you’ve scanned all your software with Mythos v8 and then Mythos v9 comes out, the code that is currently deployed at all your customers (and was secure when it shipped) is no longer considered secure, because Mythos v9 can find things that v8 could not. Each time a more advanced model comes out, there will be a mad rush to patch everything, everywhere, all at once.

  2. Looking for vulnerabilities and patching them is an economic activity

Even considering only one version of a frontier model, spending $10,000 worth of tokens will find more vulnerabilities and produce more working exploits than spending $5,000 on the same task. So as a software vendor, how much token spend to find and patch vulnerabilities is enough? Whatever token spend a software vendor chooses, an attacker could choose to use twice as much — or could use a fraction of that token spend and concentrate it on a tiny area of the product.
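This economic argument can be made concrete with a toy model. The sketch below is purely illustrative — the curve, the numbers, and the `half_spend` parameter are assumptions chosen to show diminishing returns, not measurements of any real model.

```python
def expected_vulns_found(token_spend_usd: float,
                         total_vulns: int = 200,
                         half_spend: float = 5_000.0) -> float:
    """Hypothetical diminishing-returns model: the fraction of the
    codebase's vulnerabilities found saturates as spend grows."""
    return total_vulns * token_spend_usd / (token_spend_usd + half_spend)

for spend in (5_000, 10_000, 20_000):
    print(f"${spend:>6}: ~{expected_vulns_found(spend):.0f} vulns found")
```

Under these made-up parameters, doubling the vendor's spend from $5,000 to $10,000 raises the find count from ~100 to ~133 — but an attacker who doubles spend again still finds more, and an attacker who concentrates a smaller budget on one module can out-search the vendor locally. There is no spend level at which the vendor can declare victory.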

  3. Code vulnerabilities are not the only problem

Social engineering represents a huge attack surface that has little to do with code quality and safety. How’s your phishing training program going? Reached anything near 100% success yet?

Exploiting misconfigured systems also has little to do with code quality. Who has ever implemented Conditional Access policies in Entra ID and believed they got them exactly right, leaving no unintentional corner cases that could be exploited?

Finally, there are many cases where a feature has been in the market for years, used as its creators intended, before some bright individual figures out how to combine it with social engineering and make it useful in attacker tradecraft. Then a flurry of activity follows to try to make it safe again.

The best case for the future is one in which offensive and defensive cybersecurity reach a new equilibrium.  

The intervening time is particularly dangerous given the pace of change being imposed on defenders. Think of each area undergoing scrutiny as a race between you and your adversaries. Even if you are very skilled, the larger the number of races you run, the more likely you are to lose some of them.
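The race analogy above is just compound probability, and a quick calculation makes the point. The 99% win rate is an assumed figure for illustration, and the races are treated as independent.

```python
def p_lose_at_least_one(p_win: float, races: int) -> float:
    """Probability of losing at least one of `races` independent
    races, given a per-race probability of winning of `p_win`."""
    return 1.0 - p_win ** races

# Even a defender who wins 99% of individual races loses somewhere
# once enough races are being run simultaneously.
print(f"10 races:  {p_lose_at_least_one(0.99, 10):.0%}")   # ~10%
print(f"100 races: {p_lose_at_least_one(0.99, 100):.0%}")  # ~63%
```

A 99%-per-race defender facing 100 concurrent races loses at least one of them nearly two times out of three — which is why the volume of change matters as much as defender skill.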

And the rapid pace of AI adoption will further complicate things. AI agents will proliferate and sitting out the AI revolution won’t be an option. Can you find all these agents? Are they using their own non-human identities, or are users supplying them with tokens so the agents can perform tasks on their behalf? Can these agents be easily convinced to do something that was never in their original design? The one truism in cybersecurity is that new tech (particularly the complicated kind) is hard to secure, because we don’t yet understand how to secure it and because the available controls prove inadequate.

What matters now, and how can Vectra AI help?

For the foreseeable future, most of your software stack ought to be considered moderately insecure. The first model-led vulnerability scanning and patching effort will create an initial deluge of patches — and Project Glasswing notwithstanding, there’s no guarantee that your software vendor will beat the bad guys in every such race.

So, you will focus on what you can control.  

  • You will patch as soon as humanly possible when patches become available (easier said than done given the large number of patches).  
  • You will tighten down policy. Call it Zero Trust. Call it micro-segmentation. Remove unnecessary exposure. Identify your AI agents and box them in as much as possible.
  • You will manage your attack surface. What systems are reachable from the outside? Are they as secure as they can be right now? Can a string of text crafted by an attacker outside your perimeter easily reach an AI agent and be fed to an LLM?
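The attack-surface question in the last bullet can be turned into a simple triage query over an asset inventory. The sketch below is a minimal illustration — the `Asset` fields and the inventory entries are hypothetical, standing in for whatever CMDB or discovery data you actually have.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    internet_reachable: bool   # reachable from outside the perimeter?
    feeds_text_to_llm: bool    # does untrusted input reach an LLM/agent?

# Hypothetical inventory; names and flags are illustrative only.
inventory = [
    Asset("support-chatbot", internet_reachable=True,  feeds_text_to_llm=True),
    Asset("internal-wiki-bot", internet_reachable=False, feeds_text_to_llm=True),
    Asset("public-website",   internet_reachable=True,  feeds_text_to_llm=False),
]

# Highest-priority review: externally reachable AND wired into an LLM,
# i.e. a plausible prompt-injection path from outside the perimeter.
hotspots = [a.name for a in inventory
            if a.internet_reachable and a.feeds_text_to_llm]
print(hotspots)  # ['support-chatbot']
```

However crude, even a two-flag query like this surfaces the systems where an attacker-crafted string can reach an agent from outside, which is where boxing-in efforts should start.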

Given that prevention will be far harder during this period — and that the illusion of perfect security a few years from now is just an illusion — resilience will rely heavily on your ability to detect bad things and to stop them before they go “boom”.

Vectra AI observes multiple attack surfaces (on-premises, multi-cloud, identity, SaaS, edge, IoT/OT), alerts you to the attacker behaviors that naturally follow a successful exploit, and supplies you with a source of truth (was one GB of data sent to the outside from system X?) — this will be invaluable in the years ahead. In a world where vulnerabilities are many, exploits are variable, and stopping all of them is not possible, identifying durable attacker behavior is still the best way to stop attacks that get past your initial lines of defense before they do you real harm.

What about 2030 (or some future date)?

As noted above, after a period of relatively large amounts of change (which tends to benefit attackers), we will reach a new equilibrium. It will once again be somewhat difficult to break in. We will have code with far fewer obviously exploitable bugs in it. We will have locked down policies and reduced the attack surface. We may be using Mythos v7 to make sure our posture has no obvious holes. And maybe Mythos v8 will be a key technology in our much more automated SOC.

But there will still be a SOC. Because attackers will still get in. And we will still need clear signal which flags potential attacks. And the network and identity systems will still be the source of truth. And NDR spanning on-premises, multi-cloud, identity, SaaS, edge, IoT/OT will still be critical to ensuring that small incursions do not turn into large breaches.

Have questions about Claude Mythos and Project Glasswing? Start here: Help over Hype: Claude Mythos, Project Glasswing and the Real Questions CISOs Want Answered
