Security vendors throw around “artificial intelligence” (AI) and “machine learning” (ML) so casually that to many practitioners the terms have lost meaning. Worse, some overpromise to the point that it discredits the space overall. (One vendor that likes to claim fully autonomous detection and response comes to mind!)
Despite the overhype, a simple truth remains — AI is an incredibly powerful tool that helps security teams find and stop modern attacks early. This includes the ability to stop the ransomware and ransomOps attacks that are top of mind today.
In this blog series, we’ll go beyond buzzwords to explain the foundations, methodologies, and — most importantly — outcomes that AI-driven threat detection and response offer.
Beyond the Buzzwords
“AI is BS, but ML is great.” This sentiment isn’t rare. But what do the terms even mean, and how are they different?
“Artificial intelligence” was coined in the 1950s to describe any system that can approximate human thinking. General AI refers to an intelligent agent that can learn any intellectual task that a human can. Think HAL 9000 from “2001: A Space Odyssey.” Many jump straight to this definition, but it remains almost entirely in research labs today.
Virtually all commercial applications are Applied AI: systems that solve a specific problem, e.g., translating a language, driving a car, recognizing a face, or detecting a security threat. All security AI is Applied AI.
AI systems can be built using a variety of techniques. Some of the earliest, called expert systems, were basically giant CASE statements. Today, the most common approach to building AI is machine learning — algorithms that look at a bunch of data and learn to make predictions about newly arriving data.
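To make the “giant CASE statement” idea concrete, here is a toy expert system in Python. Every rule is hand-written by a human; nothing is learned from data. The event fields and thresholds are hypothetical, chosen only for illustration, and do not reflect any vendor’s actual detection logic.

```python
def expert_system_verdict(event: dict) -> str:
    """Return a threat verdict by walking a fixed list of hand-coded rules.

    This is the expert-system approach: a chain of if/elif branches
    (a giant CASE statement). An ML approach would instead learn the
    decision boundaries from example data.
    """
    if event.get("failed_logins", 0) > 10:
        return "brute-force"
    elif event.get("bytes_out", 0) > 1_000_000_000:
        return "exfiltration"
    elif event.get("port") == 4444:
        return "suspicious-port"
    else:
        return "benign"

print(expert_system_verdict({"failed_logins": 25}))          # brute-force
print(expert_system_verdict({"bytes_out": 2_000_000_000}))   # exfiltration
print(expert_system_verdict({"port": 443}))                  # benign
```

The weakness is clear: an attacker who stays just under every hand-picked threshold is never flagged, and each new attack method needs a new rule. Machine learning replaces the hand-written branches with decision boundaries inferred from data.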
There are many, many different algorithms and ML techniques available. No one technique is inherently better than another — it all depends on the problem that’s being solved and the data that’s available. In fact, we use over 50 different ML techniques in our products, each chosen to optimize the security outcome for our users.
Supervised learning techniques operate on labeled data. Train a model with a group of pictures labeled “cats” and “dogs,” and it can determine whether a picture it hasn’t seen yet is of a cat or a dog.
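A minimal sketch of supervised learning, using one of the simplest possible techniques: nearest-centroid classification. The 2-D feature vectors below are stand-ins for the “cat” and “dog” pictures in the example; real image features would be far higher-dimensional, and this is not how any particular product works.

```python
from collections import defaultdict
from math import dist  # Euclidean distance (Python 3.8+)

def train(samples):
    """samples: list of (features, label) pairs — the labeled training data.
    Returns a mapping of label -> centroid (the mean point of that class)."""
    groups = defaultdict(list)
    for features, label in samples:
        groups[label].append(features)
    return {
        label: tuple(sum(coord) / len(points) for coord in zip(*points))
        for label, points in groups.items()
    }

def predict(centroids, features):
    """Assign an unseen point the label of the nearest class centroid."""
    return min(centroids, key=lambda label: dist(centroids[label], features))

# Labeled training data: the supervision comes from the labels.
labeled = [((1.0, 1.2), "cat"), ((0.8, 1.0), "cat"),
           ((4.0, 3.8), "dog"), ((4.2, 4.1), "dog")]
centroids = train(labeled)
print(predict(centroids, (1.1, 0.9)))  # cat
print(predict(centroids, (3.9, 4.0)))  # dog
```

The key property is that the labels drive everything: the model can only ever answer with categories it was shown during training.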
Unsupervised learning techniques are useful and necessary when there is no labeled data available. Unsupervised techniques find structure in data, clustering similar things even if they can’t label them. Train a model with a bunch of animal pictures and it will cluster together the lions, the zebras, and the bears, even though it won’t be able to say what any of those clusters are.
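For contrast, here is a minimal sketch of unsupervised learning: a bare-bones k-means clustering loop. No labels are provided; the algorithm only groups nearby points together, and it has no idea what each group “is”. The naive initialization and toy 2-D points are assumptions for illustration.

```python
from math import dist  # Euclidean distance (Python 3.8+)

def kmeans(points, k, iters=20):
    """Group points into k clusters with no labels: repeatedly assign each
    point to its nearest centroid, then move each centroid to the mean of
    its assigned points."""
    centroids = points[:k]  # naive deterministic initialization
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: dist(centroids[i], p))
            clusters[nearest].append(p)
        centroids = [
            tuple(sum(coord) / len(pts) for coord in zip(*pts)) if pts
            else centroids[i]  # keep old centroid if a cluster goes empty
            for i, pts in enumerate(clusters)
        ]
    return clusters

# Two obvious groups of unlabeled 2-D points.
points = [(0.1, 0.2), (0.2, 0.1), (0.0, 0.0),
          (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
group_a, group_b = kmeans(points, k=2)
print(sorted(group_a))
print(sorted(group_b))
```

The output is two clusters of three points each, but nothing in the result says which cluster is which: attaching meaning to the clusters is left to a human (or to a separate labeled step).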
So…Why Does Vectra Use “AI”?
Not all AI is ML, and not all ML is AI. So, where’s the line? At one level the distinction is moot: what really matters is security outcomes. But since we have leaned in on using “AI,” we’ll explain.
It’s quite simple: our system bakes in the expertise of our security researchers and analysts. The system “thinks” like they do. This is at the core of our security-led methodology. Each detection model starts with a security researcher defining the problem statement: the attack method that we need to be able to find. The models are then specifically designed from the ground up, from data to algorithm to outcomes, to accurately find that attack method. When we build prioritization algorithms, they’re designed to replicate the priority calls that our analyst team makes when reviewing incidents.
Not every system takes this approach. In fact, we don’t know of any other vendor who does it this way. It’s expensive and hard, but we feel strongly that it is the best way to deliver the outcomes that our customers deserve. And we proudly call it “AI.”
In our next blog, we’ll dive deeper into the detection methodology to contrast Vectra’s unique security-led approach with the math-led, basic anomaly approach that other vendors use.
In the meantime, you can see how we do it with the Cognito Platform.