The idea that any type of technology would autonomously make decisions and complete tasks on behalf of humans isn’t an easy pill to swallow, at least not without some understanding of what exactly the tech is doing. Perhaps that’s why it seems that every tech trade show, meetup, and conference is focused on topics relating to AI and, more recently, agentic AI. We talked a bit about what AI agents mean in cybersecurity in my last post, but today I’d like to go beyond the buzzwords and into how AI is quickly becoming the right tool for defenders tasked with stopping modern cyberattacks, especially when it’s applied to the right problem.
To do that, let’s raise some broad threat detection and response questions and discuss where both agentic AI and Gen AI fit, because why not use the two buzziest of buzzwords for this exercise? As a reminder, AI agents (agentic AI) are capable of performing tasks on behalf of a user, while Gen AI refers to AI focused on creating content such as text or images. An LLM (Large Language Model), for example, is a type of Gen AI that can generate text.
Can Gen AI help defenders detect and stop modern cyberattacks faster?
According to the CrowdStrike 2025 Global Threat Report, the average time from infiltration to when attackers begin moving laterally inside a network is 48 minutes, down from 62 minutes in 2024. Attackers are getting faster. So when I think about whether Gen AI can help detect and stop a modern cyberattack, my initial reaction is “no,” considering that it’s difficult to see how an LLM, for example, would speed up detection through content generation, at least on the surface. However, it turns out that Gen AI is proving to be a valuable resource to the folks building detection models.
As Matt Silver, VP of Data Science at Vectra AI, explains in the podcast Quantify Your AI Force Multiplier: Entities and Detection, Gen AI can be used to learn representations of benign security data, which can be useful for “training detectors downstream.” Basically, threat detections are built to catch very specific attacker behaviors, but part of doing that accurately is also knowing which behaviors aren’t malicious. Because Gen AI can process large data sets, we no longer have to build data sets from scratch for detection modeling, which can be extremely time-consuming. Now we’re able to leverage massive amounts of existing cybersecurity data and apply self-supervised pretraining to build detection models at a much faster rate. If we think about how fast modern cyberattacks move, there’s certainly value in removing any latency we can from the detection engineering process.
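To make the idea concrete, here’s a minimal sketch of “learn what benign looks like, then score new events by how far they fall from it.” Everything here is a toy stand-in: the telemetry is random synthetic data, the feature names are made up, and a simple linear autoencoder (via SVD) stands in for a real self-supervised model. This is not Vectra’s method, just an illustration of the pretraining-then-detection pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical benign telemetry: rows = events, cols = features
# (bytes sent, session length, etc.). Purely illustrative data.
benign = rng.normal(loc=0.0, scale=1.0, size=(500, 8))

# Self-supervised "pretraining": learn a compact representation of
# benign behavior (here, a linear autoencoder built from the SVD).
mean = benign.mean(axis=0)
_, _, vt = np.linalg.svd(benign - mean, full_matrices=False)
components = vt[:3]  # keep a low-dimensional "benign subspace"

def reconstruction_error(event):
    """Distance of an event from the learned benign representation."""
    centered = event - mean
    projected = centered @ components.T @ components
    return float(np.linalg.norm(centered - projected))

# Downstream detection: events that reconstruct poorly (i.e. don't
# resemble benign behavior) are candidates for a detector to flag.
benign_event = rng.normal(0.0, 1.0, size=8)
odd_event = rng.normal(6.0, 1.0, size=8)  # shifted far from training data
print(reconstruction_error(benign_event) < reconstruction_error(odd_event))
```

In practice the representation would come from a far richer model trained on real security data, but the division of labor is the same: pretraining captures “normal,” and detectors are built on top of that representation rather than from scratch.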
Why are defenders adding agentic AI into their tool kits for stopping modern cyberattacks?
There’s that buzzing again. But if I put myself in a security analyst’s shoes, this is probably where I make sure to tune into the conversation. Similar to how I can use Gen AI as a writer and content person to do some or all of the grunt work that comes along with putting a piece of written content together (had I used it on this post, it would have been out sooner), agentic AI is pulling up alongside analysts and saying, “Hi, want me to take a look at those three thousand or so alerts and let you know which ones you need to address?”
Of course there’s a lot more to it on the backend, which this podcast covers in detail, but AI agents are really about handling the things you can’t get to, don’t want to do, or would like to offload because your time and expertise could be used more effectively somewhere else. Defenders can use AI agents to determine which detections or alerts are tied to specific hosts or accounts, so you know which are relevant. Agents can also stitch together detections across network, identity, and cloud surfaces so you know which ones are related, or even deliver an urgency rating so you know which activity poses the biggest risk to the organization. AI agents get defenders to detection context faster, which means getting closer to stopping attacks instead of spending cycles on manual triage.
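The correlation-and-ranking part of that workflow can be sketched in a few lines. The alert fields, severity scale, and urgency formula below are invented for illustration (not any vendor’s schema or scoring logic); the point is the shape of the task an agent automates: group alerts by entity, note how many attack surfaces are involved, and rank what deserves attention first.

```python
from collections import defaultdict

# Illustrative alerts; field names and severity values are hypothetical.
alerts = [
    {"host": "web-01", "surface": "network",  "severity": 3},
    {"host": "web-01", "surface": "identity", "severity": 4},
    {"host": "db-02",  "surface": "cloud",    "severity": 2},
]

def triage(alerts):
    """Stitch alerts together per host and score urgency.

    Toy heuristic: start from the worst alert on the host, and bump the
    score for each additional surface (network/identity/cloud) involved.
    """
    by_host = defaultdict(list)
    for alert in alerts:
        by_host[alert["host"]].append(alert)

    ranked = []
    for host, items in by_host.items():
        surfaces = {a["surface"] for a in items}
        urgency = max(a["severity"] for a in items) + len(surfaces) - 1
        ranked.append({"host": host, "surfaces": sorted(surfaces),
                       "urgency": urgency})
    # Highest urgency first: what the analyst should look at now.
    return sorted(ranked, key=lambda r: r["urgency"], reverse=True)

for entry in triage(alerts):
    print(entry["host"], entry["surfaces"], entry["urgency"])
```

Here web-01 rises to the top because related activity spans two surfaces, which is exactly the kind of cross-surface stitching described above, just without the manual spreadsheet work.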
It’s interesting to think about all the tools, apps, or programs we use in our jobs, whatever those jobs are. Most of us probably use the same tools we’ve had for years, and when we do introduce a new one, it’s because we feel it will make us better at our jobs and the value outweighs the cost or time investment of adapting to something new. For all the fancy buzzwords around AI right now, it’s just a tool we can use to help us do our jobs, and depending on the outcomes we seek, it might just be the right one for the job.
For more conversations about AI in cybersecurity, check out the AI in Action show on the Vectra AI YouTube channel.