The Good, the Bad and the Anomaly

November 8, 2017
Hitesh Sheth
President and CEO

This blog was originally published on LinkedIn.

The security industry is rampant with vendors peddling anomaly detection as the cure-all for cyberattacks. This is grossly misleading.

The problem is that anomaly detection over-generalizes: all normal behavior is good, all anomalous behavior is bad, with no room for gradations or context. With anomaly detection, the distinction between user behaviors and attacker behaviors is nebulous, even though the two are fundamentally different.

Consider this: People do what it takes to get their jobs done—reading email while overseas on vacation, logging in at 3 a.m. when they wake up inspired, downloading new sets of files for a start-up project. Sometimes, this well-meaning behavior can appear suspicious.
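To make that over-generalization concrete, here is a minimal sketch of a naive anomaly detector. The login data, the z-score test, and the threshold are all hypothetical, chosen only to illustrate the point, not how any particular product works:

```python
# A minimal sketch of why "anomalous" is not "malicious".
# All data, names, and thresholds here are hypothetical.
import statistics

# Hour-of-day for one employee's recent logins (mostly business hours).
login_hours = [9, 9, 10, 8, 9, 11, 10, 9, 8, 10, 9, 9]

mean = statistics.mean(login_hours)
stdev = statistics.stdev(login_hours)

def is_anomalous(hour, threshold=3.0):
    """Naive z-score test: flag anything far from the employee's norm."""
    z = abs(hour - mean) / stdev
    return z > threshold

# The employee wakes up inspired and logs in at 3 a.m.
print(is_anomalous(3))   # True -- flagged, though the behavior is benign
# An attacker reusing stolen credentials logs in at 10 a.m.
print(is_anomalous(10))  # False -- blends in, so the detector stays silent
```

The benign 3 a.m. login gets flagged while the attacker's mid-morning login sails through. That is exactly the gap between anomalous and malicious.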

At the same time, sophisticated cyber attackers are adept at mimicking accepted practices and blending in with normal behaviors. Consequently, anomaly detection tools are more likely to flag good employees doing their jobs in slightly uncustomary ways than to identify and expose an attacker.

Can you say “false positive”?

You can compare anomaly detection to the law-enforcement practice of stop-and-frisk. In New York City in 2015, for example, 99.5% of stop-and-frisk searches turned up no gun. Tens of thousands of searches, and only a handful of weapons.

To paraphrase one observation, stop-and-frisk makes up for its inaccuracy by being resource-intensive and inefficient. Contrast that with T-ray detection, which unobtrusively and instantly captures a thermal image of a person. If there is a concealed weapon, the T-ray image shows a cold gun shape against the warm body.

To wrap up the analogy, anomaly detection employs the cybersecurity equivalent of racial profiling, flagging anything that’s generically different.

Vectra, on the other hand, is the T-ray: it uses AI to separate overly general, easily misleading anomalous behaviors from the salient, highly specific identifiers of attacker behavior.

Anomaly detection vendors require cybersecurity analysts to scrutinize every suspicious event, real or not. That approach is the antithesis of “where there’s smoke, there’s fire.” When it comes to anomalous behavior, there is plenty of smoke with no fire, and security analysts must chase every wisp, burning up time and money on false leads while remaining blind to the real threats.
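The arithmetic behind that wisp-chasing is unforgiving. Here is a back-of-envelope sketch; the event volume, attack prevalence, and detector rates below are assumptions for illustration, not measurements of any real product:

```python
# Base-rate math behind the "smoke with no fire" problem.
# Every number below is hypothetical, chosen only to illustrate the effect.
events_per_day = 100_000      # network events a team might see daily
attack_rate = 0.0001          # 1 in 10,000 events is truly malicious
true_positive_rate = 0.99     # the detector catches 99% of real attacks
false_positive_rate = 0.02    # and flags 2% of benign events as "anomalous"

attacks = events_per_day * attack_rate
benign = events_per_day - attacks

true_alerts = attacks * true_positive_rate
false_alerts = benign * false_positive_rate

precision = true_alerts / (true_alerts + false_alerts)
print(f"Alerts per day: {true_alerts + false_alerts:.0f}")   # ~2,010
print(f"Real attacks among them: {true_alerts:.0f}")         # ~10
print(f"Precision: {precision:.1%}")  # ~0.5% -- almost all smoke, no fire
```

With these assumptions, roughly 2,000 alerts a day contain about ten real attacks: around 99.5% of the alerts are false, right in stop-and-frisk territory.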

Who has the time and money for that? More importantly, who wants to stake their intellectual property and company reputation on such a flawed approach?

The inside job

The indicators for insider threats can be just as misleading. Yes, there have been some high-profile hacks that featured anomalous behavior, such as the reported leak of classified information by Edward Snowden. But the clear majority of insider attacks succeeded because they blended in with normal behaviors, and they were discovered only long after extensive damage was done.

In the fraudulent-account scandal at Wells Fargo, employees appeared to be doing their jobs—and doing them a bit ‘too well’ as it turned out. They knew and used the standard processes. They used their credentials appropriately. They didn’t overstep their access or authorization.

Advanced cyberattacks behave the same way. They blend in, and unless the security team looks for attacker behaviors rather than just generalized anomalies, it has no realistic chance of getting ahead of these attacks.

And that is the ‘ugly’ truth about generalized anomaly detection.