We know it’s there with us. It’s keeping track of things, learning our tendencies and assisting with various tasks, but it can still be difficult to pinpoint exactly what the AI technologies we use and live with are doing. The conversation gets a lot more interesting, though, when we look at the possibilities available and how they’re actually impacting our day-to-day lives and even the way we work. For this discussion, we’re mostly talking about what AI agents mean for those of us working in cybersecurity — a topic that also highlights the path we’re on as humans co-existing with AI and how we use it. But before we get into all that, we should probably consider updating the dictionary definition of ‘agent’ to something like:
A person, business or artificial intelligence authorized to act on another’s behalf.
Maybe we even add “service animal” to that definition? Can a dog or another type of service animal act on another’s behalf? Consider this scenario: your dog is trained to collect any dirty laundry it sees throughout the house and place it in the laundry room. Your dog is now the laundry agent, which you authorized. Now, when you leave laundry on the floor, anyone who has a problem with that can take it up with the laundry agent.
While Merriam-Webster considers my request, here’s a definition from IBM more specific to AI agents:
An artificial intelligence (AI) agent refers to a system or program that is capable of autonomously performing tasks on behalf of a user or another system by designing its workflow and utilizing available tools. -IBM
Simple enough, especially once you break down how an AI agent (or any agent) would be assigned to carry out certain tasks. However, if you read some of the recent articles on AI agents — and there are a lot — you’ll quickly see that plenty of questions are being raised. Some articles suggest that “no one can clearly define an AI agent.” Maybe that’s just a clever headline from Fortune, but it does a good job of getting us past what we call something and closer to the point of the discussion: how it can actually help us and what we can do with it. Other articles, like this one in Wired, raise questions about AI agents by asking — “how much should we let them do?”
All valid considerations. However, one of the interesting realizations when digging into AI agents across cybersecurity is that they give us a more defined glimpse into the impact AI is having — and can have — across our workflows. We recently learned a lot about AI adoption across cybersecurity in Vectra AI’s 2024 State of Threat Detection and Response report, but you have to get beyond the adoption numbers to learn how AI is actually making an impact and which tasks it’s helping complete.

AI Agents in Cybersecurity
So, what are we talking about with AI agents in cybersecurity? Let’s look at it from a threat detection, investigation, and response point of view, where the faster a defender can see and stop an attack, the better off their organization will be. On average, defenders receive 3,832 security alerts per day, according to the 2024 State of Threat Detection and Response report. That number is actually down from the year prior, but when you think about what it means in terms of addressing each individual alert — it’s an outrageous ask. Put it this way: if you had nearly 4,000 emails in your inbox each day, how many would you answer? It’s really not that surprising that the practitioners who participated in the study reported being able to respond to only 38% of those alerts on average. This means that as an industry, we’re leaving potentially real incidents unaddressed because we don’t have the bandwidth. Enter AI agents.
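To put those two figures from the report together, here’s the quick arithmetic on what a 38% response rate to 3,832 daily alerts leaves behind:

```python
# Figures from the 2024 State of Threat Detection and Response report
alerts_per_day = 3832
response_rate = 0.38

# Alerts that go unaddressed every single day
unaddressed = round(alerts_per_day * (1 - response_rate))  # roughly 2,376 alerts
```

That’s well over two thousand alerts a day that never get a human look.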

How do AI agents help defenders see and stop attacks?
If we zero in on how security practitioners divide up tasks during the average workday, we start to see some areas where AI agents become useful. For example, according to Vectra AI’s Security Team Efficiency Benchmark, security practitioners spend 18.4% of their day investigating false positives and 27.7% of their day managing alerts. This particular study collected responses from 538 practitioners to understand which tasks take up time during the day, but for this discussion it’s also useful for seeing where an AI agent might make sense. Interestingly, the study also found that a 10-hour workday was the norm for the average six-person team. In this scenario, where would an AI agent earn its keep? There are a number of ways, but AI agents can help remove much of the manual work associated with alerts and, perhaps most importantly, elevate the attack signal that teams receive from threat detection and response tools, so they know which events pose the biggest risk. Let’s get into how they work.
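Translating those benchmark percentages into hours against the 10-hour workday the study reported makes the opportunity concrete:

```python
# Figures from Vectra AI's Security Team Efficiency Benchmark
workday_hours = 10          # average workday reported in the study
false_positive_share = 0.184  # share of the day spent investigating false positives
alert_mgmt_share = 0.277      # share of the day spent managing alerts

fp_hours = workday_hours * false_positive_share   # about 1.84 hours
alert_hours = workday_hours * alert_mgmt_share    # about 2.77 hours
combined = fp_hours + alert_hours                 # roughly 4.6 hours per day
```

Nearly half of every practitioner’s day goes to exactly the kind of repetitive alert work an agent could take on.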

Can AI agents help reduce the time spent on false positive security alerts?
Every false positive alert needs to be triaged to determine its relevance, either by a human analyst or through some form of automation (if available), which is why we’re seeing close to a fifth of an analyst’s day being spent on false positives. We need to know if something is malicious or benign — which can be a highly manual process requiring time and expertise. But does it have to be? The ability to apply AI to triage isn’t anything new, but capabilities keep getting better, and now with AI triage agents in the mix, security teams can easily offload triage duties. That means using AI to evaluate alerts and separate normal network behavior from what’s likely malicious, or to help determine which detections are security relevant based on entity (host or account) importance.
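As a minimal sketch of that triage logic — with made-up entity names, a toy baseline, and simplified rules, not Vectra AI’s actual implementation — the two signals described above (learned normal behavior and entity importance) might combine like this:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    entity: str             # host or account the alert fired on
    behavior: str           # e.g. "bulk_file_read", "port_scan"
    entity_importance: int  # 1 (low) to 5 (critical), from an asset inventory

# Hypothetical learned baseline: behaviors considered normal per entity
NORMAL_BEHAVIORS = {
    "svc-backup": {"bulk_file_read"},  # a backup account reads many files nightly
}

def triage(alert: Alert) -> str:
    """Label an alert: benign (matches baseline), escalate, or review."""
    if alert.behavior in NORMAL_BEHAVIORS.get(alert.entity, set()):
        return "benign"      # normal network behavior for this entity
    if alert.entity_importance >= 4:
        return "escalate"    # security relevant: anomalous on a critical entity
    return "review"          # anomalous but lower-stakes; queue for an analyst
```

A real triage agent learns these baselines and weights rather than hard-coding them, but the division of labor is the same: the agent clears what matches normal behavior, so analysts only see what doesn’t.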
Can AI agents help manage the high amount of security alerts teams receive?
It’s not just the number of alerts teams receive (3,832 each day) that makes things impossible; the complexity of modern networks spanning data centers, campuses, remote workers, clouds, identities and more makes stitching together alerts across each surface unrealistic without the right technology. Attackers thrive in these environments because of the latency introduced by the effort it takes to stitch together siloed alerts coming in from every possible direction. Correlating detections or alerts across various surfaces isn’t a new concept; however, AI agents make it easy because defenders no longer need to look at every alert across each individual surface. For example, an alert in AWS could be connected to an alert in Entra ID because they are associated with the same identity — AI would know this and automatically build an attack profile that includes both alerts, or any number of alerts associated with that identity from any surface within your environment. This cuts the number of alerts teams have to address.
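The AWS-plus-Entra ID example above boils down to grouping by a shared key. A minimal sketch, with hypothetical alert records and field names:

```python
from collections import defaultdict

# Alerts arriving from different surfaces, each tagged with an identity
alerts = [
    {"surface": "AWS",      "identity": "j.doe", "detection": "unusual_api_calls"},
    {"surface": "Entra ID", "identity": "j.doe", "detection": "suspicious_sign_in"},
    {"surface": "Network",  "identity": "svc-x", "detection": "port_scan"},
]

def build_attack_profiles(alerts):
    """Group alerts from every surface under the identity they share."""
    profiles = defaultdict(list)
    for alert in alerts:
        profiles[alert["identity"]].append(alert)
    return dict(profiles)
```

Here three siloed alerts collapse into two attack profiles, and the AWS and Entra ID alerts for the same identity land in one profile a defender can read as a single story.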
Can AI agents help you stop a cyberattack?
Even with fewer alerts, the information most useful to defenders will always be knowing which alert(s) signal an attack that’s actually occurring. As we mentioned earlier, defenders need to be able to see and stop attacks, which is something AI agents can now help prioritize. An AI agent for attack prioritization can take detections of all types of attacker behaviors seen across an environment, factor in things like how fast an attack is moving along with the techniques being used, and deliver urgency ratings for all the security alerts within an environment. An AI prioritization agent can essentially take everything happening across an environment into account and rank those events by what poses the biggest risk. Defenders then have all of the context about each alert in one place, already triaged and stitched together, so they can use their expertise to investigate further if needed or move forward with stopping the attack.
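To make the two inputs named above concrete — techniques in play and speed of progression — here is a toy urgency score. The weights and formula are invented for illustration, not how any production prioritization agent actually scores:

```python
# Illustrative weights: later-stage attacker behaviors score higher
TECHNIQUE_WEIGHTS = {"recon": 1, "lateral_movement": 3, "exfiltration": 5}

def urgency(profile):
    """Score an attack profile by technique severity and how fast it's moving.

    profile: {"techniques": [...], "events_per_hour": float}
    """
    severity = sum(TECHNIQUE_WEIGHTS.get(t, 1) for t in profile["techniques"])
    speed_factor = 1 + profile["events_per_hour"] / 10  # faster attacks rank higher
    return severity * speed_factor

def rank(profiles):
    """Return attack profiles ordered most urgent first."""
    return sorted(profiles, key=urgency, reverse=True)
```

A slow reconnaissance scan and a fast-moving lateral-movement-to-exfiltration chain both generate alerts; the ranking puts the chain at the top of the queue so defenders act on the biggest risk first.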
For defenders, AI agents are quickly becoming not just a way to clean up false positives and manage alerts, but a way to gain an accurate attack signal that can be used to see and stop attacks without the latency introduced by doing it all manually. And besides all that good stuff, who doesn’t want to say the words, “take that up with my agent”?
To get more details about AI agents and how Vectra AI uses them, watch the podcast: Accelerating Threat Detection with AI Agents.
Or
Read about how security teams are gaining real business outcomes from AI, including:
- 52% more potential threats identified
- 51% less time spent monitoring and triaging alerts
- 60% less time spent assessing and prioritizing alerts