Not All AI is Created Equal

February 1, 2022
Kevin Kennedy
Senior Vice President of Products

In the first blog in this series, we decoded some of the AI/ML buzzwords and explained why Vectra proudly embraces the “AI” term for what we do. Now, we’re going to take a closer look at the methodologies for applying AI to threat detection and response. Specifically, we’ll contrast the security-led approach that Vectra has pioneered with the math-led approach that other vendors use. You can also find an in-depth explanation of how Vectra uses data science and AI to detect and stop attackers in the white paper, The AI Behind Vectra AI.

Security-led vs Math-led Detection

Two main approaches are used to apply AI/ML to threat detection: security-led and math-led.

Math-led Approach Detects Basic Anomalies

The math-led approach starts with data scientists generating basic statistics and novelty measures about the environment. How common or rare is a destination, domain or IP? What is the typical count of SMB connections per hour for this IP? Has this account ever logged in from this IP? Security researchers then use these stats to build hundreds of rules — really signatures — that have a basic anomaly component.
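To make this concrete, here is a minimal Python sketch of the kind of baseline statistics such an approach computes. The flow record shape and field names are illustrative assumptions, not any vendor's actual schema:

```python
from collections import defaultdict

# Hypothetical flow records: (source_host, destination, SMB connections this hour)
flows = [("hostA", "203.0.113.7", 12),
         ("hostB", "203.0.113.7", 3),
         ("hostA", "198.51.100.9", 40)]

# Prevalence: how many distinct internal hosts talk to each destination?
seen_by = defaultdict(set)
for src, dest, _ in flows:
    seen_by[dest].add(src)
prevalence = {dest: len(srcs) for dest, srcs in seen_by.items()}

# Baseline: typical SMB connection count per hour for each source host.
smb_counts = defaultdict(list)
for src, _, n in flows:
    smb_counts[src].append(n)
smb_baseline = {src: sum(ns) / len(ns) for src, ns in smb_counts.items()}
```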

For example, the heart of a command-and-control (C2) tunnel model will look for multiple connections in a given time period to a destination that's both new for this network and rare (not many systems communicate with it). There will likely be tens of these models, each varying the connection-rate, rarity and newness thresholds in an attempt to balance coverage and noise.
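Building on the statistics above, one such rule reduces to a handful of thresholds. The values and names here are entirely hypothetical, a caricature of the pattern rather than any real product's model:

```python
# Hypothetical thresholds; real products ship tens of variants of these.
MIN_CONNS_PER_HOUR = 20     # "beaconing" rate
MAX_PREVALENCE = 3          # destination seen by few hosts => "rare"
NEWNESS_WINDOW_DAYS = 2     # first seen recently => "new"

def c2_tunnel_rule(conns_per_hour, hosts_seen, first_seen_days_ago):
    """Flag periodic traffic to destinations that are both new to this
    network and globally rare."""
    return (conns_per_hour >= MIN_CONNS_PER_HOUR
            and hosts_seen <= MAX_PREVALENCE
            and first_seen_days_ago <= NEWNESS_WINDOW_DAYS)
```

Note that an attacker who simply waits out the newness window slips past the third condition entirely, which is exactly the evasion described next.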

At first blush, this sounds like a reasonable approach. Sure, the result is a patchwork of a LOT of narrow models for the team to understand, maintain and tune. And it can be defeated by an attacker who contacts the destination and then waits a couple of days, but how often will that happen? Plus, it can be defeated by common techniques like domain fronting, which evades detection by sidestepping the rarity requirement. These limitations are troubling, but maybe we can still convince ourselves that it's an acceptable tradeoff.

The real issue, though, is that even with so many constraints in place, these models remain very noisy. More filters must be added. Filters equal blind spots. Real models from well-known vendors in this space filter out all destinations in common clouds AND all sources that are mobile/tablet devices, infrastructure (routers, firewalls, etc.), or IoT devices like IP phones. These massive blind spots must be added just to reduce noise enough for the system to be viable.
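In code terms, those filters are exclusion lists bolted onto the rule above. The entries below are hypothetical, but every one represents traffic the model simply never evaluates:

```python
# Hypothetical noise filters of the kind described above.
# Every entry is a blind spot: matching traffic is never evaluated.
COMMON_CLOUD_DESTS = {"AWS", "Azure", "GCP"}
EXCLUDED_SOURCE_TYPES = {"mobile", "tablet", "router",
                         "firewall", "ip_phone"}

def should_evaluate(dest_provider, source_type):
    return (dest_provider not in COMMON_CLOUD_DESTS
            and source_type not in EXCLUDED_SOURCE_TYPES)
```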

We don’t think that’s good enough.

Security-led Approach Finds Attacker Methods

The security-led approach flips things on their head. Rather than starting with statistics, it starts by understanding the problem that needs to be solved. This means security researchers define the important attack methods that we need to be able to detect, aligned with frameworks such as MITRE ATT&CK and D3FEND. It's important to note that this is NOT about focusing on specific tools or exploits, but rather the underlying methods at play. Methods change very slowly over time, making them stable anchors for detection.

Once the attack method is clearly defined, security research and data science partner closely to build an accurate detection model: evaluating the data, picking the right ML approach for the data and problem, building, testing and refining on an ongoing basis to ensure accuracy.

In the security-led methodology, the C2 tunnel problem is solved with a recurrent neural network, specifically a long short-term memory (LSTM) network, trained on tens of thousands of samples of tunnel traffic from many different tools doing many different things, along with a large corpus of non-tunnel traffic. The LSTM actually learns (this is deep learning) what a tunnel is in any network, using any tool. It works even on encrypted traffic and has no blind spots based on either the destination or the type of source system.
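For readers who want a feel for the shape of such a model, here is a purely illustrative sketch. Vectra has not published its model internals; the framework choice, feature set and dimensions below are assumptions. It shows how an LSTM can classify a connection from sequence metadata (such as message sizes and timing) that survives encryption:

```python
import torch
import torch.nn as nn

class TunnelLSTM(nn.Module):
    """Binary sequence classifier: does this connection's traffic pattern
    look like a hidden tunnel? Input features might be per-message sizes
    and inter-arrival times, metadata available even for encrypted traffic."""
    def __init__(self, n_features=4, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, seq_len, n_features)
        _, (h_n, _) = self.lstm(x)     # final hidden state summarizes the sequence
        return torch.sigmoid(self.head(h_n[-1]))

model = TunnelLSTM()
loss_fn = nn.BCELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One hypothetical training step on labeled tunnel / non-tunnel sequences.
x = torch.randn(32, 100, 4)               # 32 connections, 100 messages each
y = torch.randint(0, 2, (32, 1)).float()  # 1 = tunnel, 0 = benign
opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
```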

Counterintuitively, using a security-led methodology actually frees up the data science team to unleash the big guns: specialized approaches for specialized problems, like using a recurrent neural network for tunnel detection. The math-led approach, on the other hand, forces the data scientists to the least common denominator, using simple, generic statistical anomalies that the security side can plug into a bunch of different models without a deep understanding of what they do.

Security-led detection requires more cross-domain expertise, specialized data, more time, and a platform with the flexibility to run this type of model at speed. This is why many vendors shy away from it, even though it delivers better security outcomes.

Why it Matters

The usefulness of threat detection comes down to speed. That doesn't just mean the system must operate in near real time (although that's important); it also means the system needs to detect useful things without burying the operator in noise.

At the heart of it, the security-led approach provides better coverage for what matters (reduces blind spots!) while creating far fewer alerts for teams to sift through — an 85% reduction according to one customer who recently switched from a well-known math-led vendor to Vectra.

In our next blog, we’ll look at the coverage difference between security-led and math-led on the most important security problem today — ransomware (or more accurately, ransomOps).

 

Gain more insight into the security-led approach from Vectra and how it stacks up against our competitors.

And, make sure to read the free white paper, The AI Behind Vectra AI, to see how data science and AI can give defenders an edge over cyberattackers.
