Finding Signals in Security's White Noise

By: Mike Banic, VP of Marketing
April 22, 2014

A customer recently shared her perspective on the growing security white noise – a term she uses to describe the increasingly high volume of alerts coming from defense-in-depth security products. To punctuate her point, she pulled up a recent Wall Street Journal blog post in which Gartner analyst Avivah Litan described a client who receives over 135,000 security alerts a day. As Avivah aptly stated, "It becomes like the car alarms going off in a parking lot – no one takes them seriously because generally there are too many false car alarms."

Looking back at the Bloomberg BusinessWeek coverage of the Target breach, the article focused on multiple security alerts triggered by the malware used to initiate the attack. While these alerts were marked as high priority, it is easy to imagine that an enterprise the size of Target was receiving hundreds or thousands of security alerts of varying priority, all of which contributed to the white noise.

A second customer chimed in with another good point: "Security products give off false positives. Too many false positives can result in alerts being ignored when they shouldn't be."

A new approach to finding signals amid the white noise is host-based detection reporting. Here is the basic thesis: a targeted attack unfolds in multiple phases, creating several different anomalous behaviors on several hosts (e.g., servers, laptops, tablets, smartphones). Each behavior could result in a discrete detection and increase the total amount of white noise. However, all of the detections related to a single host could instead be aggregated and used to prioritize the hosts that pose the greatest business risk.
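To make the thesis concrete, here is a minimal sketch in Python of per-host aggregation. The Detection fields and behavior names are hypothetical illustrations for this post, not a description of any product's actual implementation.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Detection:
    host: str          # identifier of the affected host (hostname or IP)
    behavior: str      # e.g., "port_scan", "hidden_tunnel" (hypothetical labels)
    timestamp: float   # when the anomalous behavior was observed

def aggregate_by_host(detections):
    """Fold discrete detections into one report per host,
    ordered by time of detection."""
    reports = defaultdict(list)
    for d in detections:
        reports[d.host].append(d)
    for host_report in reports.values():
        host_report.sort(key=lambda d: d.timestamp)
    return reports
```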

Take two different hosts, for example. The first may be under the remote control of an external server, may have performed port scans, and may have established a hidden tunnel to an external server. A second host may be communicating with a server it located by looking up a machine-generated domain name, and clicking on an advert 1,000 times per second.

Each host should have a single report aggregating its total behavior by time of detection and by type of activity. The first host would be prioritized as a higher business risk because its behaviors represent a targeted attack, including reconnaissance and exfiltration – very bad stuff. The second host would be a lower business risk, since its behavior indicates a more opportunistic botnet infection that can be cleaned up when time allows.
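Building on the Detection sketch above, one way to turn a host report into a business-risk ranking is to weight targeted-attack behaviors (remote control, reconnaissance, exfiltration) more heavily than opportunistic botnet behaviors. The weights below are illustrative assumptions only.

```python
# Hypothetical weights: behaviors typical of a targeted attack
# score far higher than opportunistic botnet behaviors.
BEHAVIOR_WEIGHTS = {
    "remote_access": 30,   # host under the control of an external server
    "port_scan": 25,       # internal reconnaissance
    "hidden_tunnel": 40,   # likely exfiltration channel
    "dga_lookup": 10,      # machine-generated domain lookup
    "click_fraud": 5,      # rapid-fire ad clicking
}

def risk_score(report):
    # Unknown behaviors still count, but only minimally.
    return sum(BEHAVIOR_WEIGHTS.get(d.behavior, 1) for d in report)

# The two example hosts:
host1 = [Detection("host1", b, t) for t, b in
         enumerate(["remote_access", "port_scan", "hidden_tunnel"])]
host2 = [Detection("host2", b, t) for t, b in
         enumerate(["dga_lookup", "click_fraud"])]

print(risk_score(host1))  # 95 -> targeted attack, investigate first
print(risk_score(host2))  # 15 -> opportunistic botnet, clean up later
```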

The result is a better signal-to-noise ratio for those two example hosts. There could be dozens upon dozens of discrete alerts for each, and they would only create white noise; no single alert tells a story by itself. However, the aggregation and correlation of the alerts for each host tells an easily discernible story about both the targeted attack and the opportunistic one. More importantly, each new behavior detected is added to the existing host report and changes the risk level that the attack on that host represents.
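Continuing the same hypothetical sketch, folding a new detection into a host's existing report and recomputing its score is all it takes for the host's priority to shift as an attack progresses.

```python
def add_detection(reports, detection):
    """Append a newly detected behavior to the host's existing
    report and return the host's updated risk score."""
    reports[detection.host].append(detection)
    return risk_score(reports[detection.host])

# If the botnet-infected host later opens a hidden tunnel,
# its score jumps from 15 to 55 and it moves up the triage queue.
```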

If you are willing to invest two more minutes to understand this further, watch this video on how Vectra works.