The Russian proverb Trust, but verify (Доверяй, но проверяй) was made popular by President Ronald Reagan in the context of nuclear disarmament discussions with the then-Soviet Union.
Since then it has stuck around both in and outside of political discussions. Trust is what we extend when we can’t be absolutely certain, and we often have to extend it in order for business to get done.
In infosec, the concept of “zero trust” has become a hot topic over the last couple of years. A zero-trust architecture fundamentally distrusts all entities in a network and does not allow any access to resources until an entity has been authenticated and authorized to use that specific resource, i.e., trusted.
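The zero-trust principle above can be sketched as a default-deny access check, evaluated per entity, resource and action. This is a minimal illustration, not any particular product’s implementation; all names (`AUTHENTICATED`, `POLICY`, the entities and resources) are hypothetical.

```python
# Minimal zero-trust gate sketch: every request is denied unless the entity
# is both authenticated and authorized for the specific resource and action.

AUTHENTICATED = {"alice"}                         # entities with a verified identity
POLICY = {("alice", "payroll-db"): {"read"}}      # (entity, resource) -> allowed actions

def is_allowed(entity: str, resource: str, action: str) -> bool:
    """Default-deny: trust is extended only per entity, resource and action."""
    if entity not in AUTHENTICATED:
        return False                              # unauthenticated -> no access at all
    return action in POLICY.get((entity, resource), set())

print(is_allowed("alice", "payroll-db", "read"))   # True
print(is_allowed("alice", "payroll-db", "write"))  # False: not authorized for this action
print(is_allowed("bob", "payroll-db", "read"))     # False: never authenticated
```

Note that the check runs once, at access time, which is exactly the limitation discussed next.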
However, this single point-in-time approach to security gating is fundamentally flawed: cyberattackers inside a network can abuse the trust extended by leveraging credential theft, privilege escalation, or policy misconfiguration to gain further access to resources. This was demonstrated earlier this year by the RobbinHood ransomware attacks against the cities of Baltimore and Greenville, N.C.
According to Gartner, “security and risk management leaders need to embrace a strategic approach where security is adaptive, everywhere, all the time.” Gartner calls this strategic approach “continuous adaptive risk and trust assessment,” or CARTA: “With a CARTA strategic approach, we must architect for digital business environments where risk and trust are dynamic and need to be assessed continuously after the initial assessment is performed.”
“Once allowed into our systems and data,” Gartner continues, “these entities – users, application processes, machines and so on – will interact with our systems and data, and all of these interactions must be monitored and assessed for risk and trust as they happen.” *
In our opinion, this is the “…but verify” part of the trust model. If the trust drops or the risk increases to a threshold requiring a response, access to the capabilities extended should adapt accordingly. Ultimately, the network itself holds the immutable truth of what’s really going on.
The Cognito platform has recently been updated with something we call Privileged Access Analytics (PAA), which we think resonates well with the CARTA framework. PAA continuously monitors the behaviors of users, hosts and services.
But rather than relying on the granted privilege of an entity or being agnostic to privilege, the Cognito platform focuses on how entities actually utilize their privileges within the network, i.e. observed privilege.
This viewpoint is similar to how attackers observe or infer the interactions between entities before they continue an attack. As a result, Cognito delivers a continuous, real-time assessment of the privilege levels of users, hosts and services, along with threat scores, certainty and risk prioritization. PAA is currently available across the entire Cognito platform – Cognito Stream, Cognito Detect and Cognito Recall.
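The idea of observed privilege – scoring entities by what they actually touch rather than by what they were formally granted – can be illustrated with a toy sketch. The resource weights, entity names and scoring rule below are assumptions for illustration only, not Vectra’s actual model.

```python
from collections import defaultdict

# Hypothetical sensitivity weights for resources on the network.
RESOURCE_WEIGHT = {"domain-controller": 10, "finance-share": 5, "printer": 1}

def observed_privilege(interactions):
    """Score each entity by the most sensitive resource it actually interacts
    with, regardless of the privileges it was formally granted."""
    scores = defaultdict(int)
    for entity, resource in interactions:
        scores[entity] = max(scores[entity], RESOURCE_WEIGHT.get(resource, 0))
    return dict(scores)

# A service account touching a domain controller reveals far more privilege
# than a user printing documents, whatever their granted roles say.
log = [("svc-backup", "domain-controller"),
       ("jdoe", "printer"),
       ("jdoe", "finance-share")]
print(observed_privilege(log))  # {'svc-backup': 10, 'jdoe': 5}
```

A real system would of course derive sensitivity and scores continuously from traffic rather than from a static table; the point is that the score follows observed behavior, not granted policy.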
In addition to PAA, the Cognito platform continuously monitors hosts for network-level anomalies, such as hosts performing internal reconnaissance, lateral movement and data exfiltration. Combining this with PAA makes Cognito uniquely capable of providing complete visibility into all traffic – from cloud to enterprise.
* Gartner, “Seven Imperatives to Adopt a CARTA Strategic Approach,” Neil MacDonald, 10 April 2018
Marcus Hartwig is a senior product marketing manager at Vectra. He has been active in the areas of IAM, PKI and enterprise security for more than two decades. His past experience includes product marketing at Okta, co-founding a cybersecurity professional services company, and managing a security product company – a combination that has left him passionate about all parts of product marketing, design and delivery.