Detecting Cyberattacks Before They Succeed

November 18, 2022
Stijn Rommens
Director Security Engineering, Vectra AI

Successful Cyberattacks Still on the Rise

Reading through Verizon's 2022 Data Breach Investigations Report, it becomes obvious – and saddening – that 2021 was once again a year of considerable successful cyberattacks:

  • We saw breaches leveraging zero-day vulnerabilities and critical CVEs, including multiple ones in MS Exchange (Hafnium) and in Log4j.
  • The year after SolarWinds, quite a few supply-chain attacks occurred once again, such as the one on Kaseya.
  • Many breaches, however, were still caused by simple errors and misconfigurations.

The result or "end-game" of such breaches – very often linked with an Advanced Persistent Threat (APT), or should we today say a "highly evasive" APT – is to steal data or to plant ransomware. Quite often, in fact, both are leveraged in parallel, the so-called "double extortion."

A closer look at ransomware specifically shows that Verizon observed a 13% year-over-year increase, making ransomware responsible for 25% of all cybersecurity incidents.

As to the modus operandi of an APT, we see that such an attack mostly, if not always, turns out to be a phased operation. This raises the question of why there's still so much focus on the outcome of an attack, ransomware, rather than on the similarities that can be observed on the way to that "end-game."

On top of that, all companies that get breached already have a mix of security solutions deployed, ranging from the network to the endpoint, leveraging signatures and the "new gold standard," AI/ML.

The Problem of Preventive Cybersecurity Strategies

So, why is this still happening over and over again? Why can't we detect and prevent such attacks earlier in the sequence, before they succeed?

Let's look at some research figures to add context. For example, consider the number of Common Vulnerabilities and Exposures (CVEs) we must deal with:

  • 20k+ new CVEs were disclosed last year, up about 10% from the previous year.
    According to Tanium, the oldest one discovered was 21 years old and surfaced in SNMPv2, which is still widely used to manage devices on an IP network.
  • More than 10% of these CVEs are rated as "critical". That's 2,000+, or over 165 per month. Even if not all of them are applicable to your environment, you still have to deal with many of them.
  • In Q1 of 2022 alone, the National Vulnerability Database (NVD), which tracks CVEs, already recorded over 8,000 CVEs (up 25% over the same period last year).

The stats above are impressive, as they indicate the impossible task of researching, detecting, and patching all of these in your environment. This is simply not feasible because of the sheer quantity, but also because of the risk of taking critical systems offline long enough to patch them. Not to mention the systems you can't patch at all because you dare not touch them …

The problem of preventive and corrective strategies becomes even more obvious when you start looking at attacks that actually occurred, leveraging these CVEs:

  • 75% of the attacks in 2020 used vulnerabilities that were more than two years old, Check Point reported in its 2021 Cyber Security Report.
    18% were even at least 7 years old …
  • Most of these critical CVEs had been weaponized or exploited long before the CVE was published and made available to threat hunters.
    According to Palo Alto Networks, about 80% of public exploits were published before the corresponding CVE went public.
    On average, the exploit appeared 23 days before the CVE was published.
  • Finally, the Mean Time To Remediation (MTTR) is still around 58 days, according to Edgescan.
    And don't forget to add the average of 23 days by which exploits preceded CVE publication (a rough calculation of the resulting exposure window follows below).
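To make that exposure window concrete, here is a minimal back-of-the-envelope sketch in Python. It simply adds the two averages cited above; the sequential-timeline assumption and the variable names are mine, not anything taken from the cited reports.

    # Rough, illustrative estimate of the exposure window, using the averages cited above.
    # Assumption (mine): exploit publication, CVE publication, and remediation happen in sequence.

    DAYS_EXPLOIT_BEFORE_CVE = 23  # average lead time of public exploits before the CVE (Palo Alto Networks)
    DAYS_MTTR_AFTER_CVE = 58      # mean time to remediation once the CVE is out (Edgescan)

    exposure_window = DAYS_EXPLOIT_BEFORE_CVE + DAYS_MTTR_AFTER_CVE
    print(f"Average exploit-to-remediation window: {exposure_window} days")  # roughly 81 days

In other words, on average a known-exploitable flaw can sit unaddressed for well over two months – ample time for a phased attack to unfold.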

A sober look at these data points reveals, I dare say, that signature-based approaches, either to prevent threats or detect them (after the fact), are simply not enough to keep your estate free of cyberattacks.

It's clear that many initial breaches are not detectable by mainline preventive capabilities – either because the threats are not yet known or because they can evade the existing preventive controls.

In addition, recent research by the Berlin-based SRLabs revealed that EDR evasion is a real issue and can now be streamlined. They point out that it is no longer a "craft." Their conclusion is quite stunning and should be read as a wake-up call: "Overall, EDRs add about 12% or 1 week of hacking effort when compromising a large corporation, based on the typical execution time of a red team exercise."

In other words, EDR is not the holy grail: It's no more than a required component of a layered security strategy.

Let's Overhaul Our Best Practices in Cybersecurity

To mitigate the problem, we need to overhaul our cybersecurity best practices:

  • Zero-trust concepts are not enough, not least because of the complexity involved in fully deploying them.
  • Layered security is a must.
  • Preventive capabilities alone are also insufficient: There's always going to be at least one hole in the net.
  • One must invest in a network detection capability that attackers cannot easily evade, combined with a comprehensive orchestration practice to respond.

It's a must for every company concerned about business continuity to look at effective data science as a tool for scaling its cybersecurity operations practices and ensuring their effectiveness.

Data science, however, cannot easily process an overload of threat indicators. The economical way to apply it is not to churn through as many indicators as possible, but to get the real value out of it by surfacing only a limited number of highly valuable and trustworthy security incidents.

That is the only way to find meaningful incidents before they become a breach. It's actually very simple.

Everyone knows the challenge of finding a needle in a haystack. Working your way through the hay is your limiting factor, even if data science can help you process more hay. But wouldn't it be better not to produce a haystack to process in the first place, so that you see the needle lying in front of you?

Leveraging data science means not focusing on the building blocks, like finding a zero-day vulnerability or an exploit that uses one. Those are too numerous and never-ending. The better choice is to use the potential of data science to understand and see what an attacker could do once they apply one of those vulnerabilities to land on a system. In other words, use it to identify tactics, techniques, and procedures (TTPs), and apply it in conjunction with a good framework, like MITRE ATT&CK.

The actions or tactics available to an attacker after the zero-day are not numerous, but they are remarkably consistent.
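As a thought experiment, here is a minimal Python sketch of what TTP-level prioritization could look like: group detections by entity and rank entities by how much of the attack progression their detections cover. The host names, tactic labels, and scoring are purely illustrative assumptions on my part, not Vectra's actual detection logic.

    # Illustrative only: prioritize entities whose detections span several attack phases,
    # instead of alerting on every individual indicator.
    from collections import defaultdict

    # Rough ordering of MITRE ATT&CK tactics an intrusion tends to progress through.
    TACTIC_ORDER = [
        "initial-access", "execution", "persistence", "privilege-escalation",
        "discovery", "lateral-movement", "collection", "command-and-control", "exfiltration",
    ]

    # Hypothetical detections: (host, ATT&CK tactic observed on that host).
    detections = [
        ("hr-laptop-7", "discovery"),
        ("hr-laptop-7", "lateral-movement"),
        ("hr-laptop-7", "command-and-control"),
        ("print-server", "discovery"),        # a single noisy scan, probably benign
        ("db-server-2", "collection"),
        ("db-server-2", "exfiltration"),
    ]

    # Group detections by host.
    tactics_by_host = defaultdict(set)
    for host, tactic in detections:
        tactics_by_host[host].add(tactic)

    def progression_score(tactics):
        # The more distinct phases observed on one entity, the more it looks like a phased attack.
        return sum(1 for t in TACTIC_ORDER if t in tactics)

    # Rank entities so that analysts see the few hosts with a coherent attack trail first.
    for host, tactics in sorted(tactics_by_host.items(),
                                key=lambda kv: progression_score(kv[1]), reverse=True):
        print(f"{host}: {progression_score(tactics)} phase(s) -> {sorted(tactics)}")

The point of the toy ranking is that a handful of detections lining up along the attack progression on a single entity deserves far more analyst attention than thousands of isolated indicators.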

Like Hansel, who laid a trail of pebbles to guide himself and Gretel back home, an attacker too leaves a trail within your network. Vectra can help to identify that trail and turn those pebbles into breadcrumbs, so that the attacker gets lost and can be identified.

Conclusion

Don't just try to prevent the attacker's end-game; rather, focus on detecting and blocking the path being created before that end-game can even happen.

Sources:

https://www.verizon.com/business/resources/reports/dbir/

https://www.comparitech.com/blog/information-security/cybersecurity-vulnerability-statistics/

https://arstechnica.com/information-technology/2022/08/newfangled-edr-malware-detection-generates-billions-but-is-easy-to-bypass/