In-depth analysis

Threat detection and response in cloud environments

How threat detection differs from traditional environments

Cloud environments change fundamental assumptions about how to perform threat detection and response. New infrastructure and deployment models create environments with new security models and attack surfaces. In particular, shared responsibility with a cloud service provider (CSP) creates potential threat visibility gaps across the attack lifecycle.

On-premises deployments involve data centers that leverage a virtualized infrastructure owned by the enterprise. In this model, the enterprise is responsible for the entire security stack, from physical devices to data. The dynamics of the cloud are different. Take infrastructure-as-a-service (IaaS) as an example: virtual data centers replicate existing internal data centers, but physical segregation of hardware is not possible, so security zones must be created with hypervisor-level capabilities.

Further, when choosing between managing the infrastructure in a private or public cloud, most organizations find themselves with a hybrid cloud, a combination of the private and public cloud with shared resources and distribution components. Usually, the critical back-end infrastructure is private and the access is public.

In this environment, the highly dynamic inventory of cloud workloads means systems come and go in seconds. A heavy focus on automation amplifies the potential for human error in system configurations. Everything in the cloud is moving toward API-based data access, so traditional approaches to monitoring traffic flow no longer apply. Because of these dynamics, confidential information can be exposed to other users or to the CSP, since tenants have no control over the underlying hardware. Controls must instead be applied using encryption and external key management designed for virtual environments.
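
The shift to API-based access also changes what monitoring looks like. As a minimal sketch, assuming a CloudTrail-style JSON audit format (the sample events and field names here are illustrative, not taken from any specific CSP), a tenant can scan audit events for risky patterns such as console logins without multi-factor authentication:

```python
import json

# Hypothetical sample of CloudTrail-style audit events.
SAMPLE_EVENTS = json.loads("""[
  {"eventName": "ConsoleLogin", "userIdentity": {"userName": "alice"},
   "additionalEventData": {"MFAUsed": "Yes"}},
  {"eventName": "ConsoleLogin", "userIdentity": {"userName": "bob"},
   "additionalEventData": {"MFAUsed": "No"}},
  {"eventName": "CreateUser", "userIdentity": {"userName": "alice"}}
]""")

def logins_without_mfa(events):
    """Return user names that logged in to the console without MFA."""
    return [
        e["userIdentity"]["userName"]
        for e in events
        if e.get("eventName") == "ConsoleLogin"
        and e.get("additionalEventData", {}).get("MFAUsed") != "Yes"
    ]

print(logins_without_mfa(SAMPLE_EVENTS))  # ['bob']
```

The same pattern extends to any API action the audit trail records; the point is that detection shifts from packet inspection to log analysis.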

Most critically, the introduction of multiple access and management capabilities creates variability that adds significant risk to cloud deployments. It is difficult to manage, track, and audit administrative actions when those users can access cloud resources from inside or outside the corporate environment.


Attackers have two avenues to compromise cloud resources. The first is traditional: gain access to systems inside the enterprise network perimeter, then perform reconnaissance and escalate privileges to an administrative account that has access to cloud resources. The second bypasses all of the above by simply compromising the credentials of an administrator account that has remote administrative capabilities or CSP administrative access. This variability in administrative access models means that new security threats will come through unregulated endpoints used for managing cloud services. Unmanaged devices used to administer infrastructure expose organizations to threat vectors like web browsing and email.

When the main administrative account is compromised, the attacker does not need to escalate privileges or maintain access to the enterprise network because the main administrative account can do all that and more. How does the organization ensure proper monitoring of misuse of CSP administrative privileges?

Organizations need to review how the system administration and ownership of the cloud account is handled.

  • How many people are managing the main account?
  • How are passwords managed and how is authentication performed?
  • Who is reviewing the security of this important account?

Figure: The cyberattack lifecycle in the network and cloud

Most importantly, how does an organization monitor for the existence and misuse of administrative credentials? A lack of visibility into the back-end CSP management infrastructure means cloud tenant organizations must identify misuse of CSP access within their own environments when that access is used as a means of intrusion.
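
One simple way to monitor for such misuse is to flag privileged API calls that originate outside known corporate networks. This is a hypothetical sketch: the action names, log fields, and network ranges are illustrative assumptions, not a definitive detection:

```python
import ipaddress

# Hypothetical allow-list of corporate egress networks; in practice this
# would come from the organization's asset inventory.
CORPORATE_NETS = [ipaddress.ip_network("203.0.113.0/24")]

# Illustrative set of privileged actions worth watching.
ADMIN_ACTIONS = {"AssumeRole", "AttachUserPolicy", "CreateAccessKey"}

def suspicious_admin_calls(events):
    """Flag privileged API calls made from outside corporate networks."""
    flagged = []
    for e in events:
        if e["eventName"] not in ADMIN_ACTIONS:
            continue
        src = ipaddress.ip_address(e["sourceIPAddress"])
        if not any(src in net for net in CORPORATE_NETS):
            flagged.append((e["user"], e["eventName"], e["sourceIPAddress"]))
    return flagged

events = [
    {"user": "ops-admin", "eventName": "AssumeRole",
     "sourceIPAddress": "203.0.113.10"},   # inside corporate range
    {"user": "ops-admin", "eventName": "CreateAccessKey",
     "sourceIPAddress": "198.51.100.7"},   # outside corporate range
]
print(suspicious_admin_calls(events))
```

A real deployment would enrich this with user baselines and MFA context, but even a coarse check surfaces administrative access from unexpected locations.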

Attack lifecycle in the cloud


Analysis of a real cloud attack

The APT10 group has been credited with a tactical campaign known as Operation Cloud Hopper, a global series of sustained attacks against CSPs and their customers. These attacks aimed to gain access to sensitive intellectual property and customer data. US-CERT noted that a defining characteristic of Operation Cloud Hopper was that, upon gaining access to a CSP, the attackers used the cloud infrastructure to hop from one cloud tenant to another, gaining access to sensitive data in a wide range of government and industrial entities in healthcare, manufacturing, finance and biotech in at least a dozen countries.

Figure: The Vectra cyberattack lifecycle

The Cloud Hopper attack lifecycle

In Operation Cloud Hopper, attackers initially used phishing emails to compromise accounts with access to CSP administrative credentials. This is the most common method of infection for any attack and is still the easiest way of getting initial access to a network. The attacker would leverage malware designed to collect the necessary credentials to pivot directly into the CSP and client managed infrastructure.

Once access to the management infrastructure was attained, PowerShell could be used inside client-managed infrastructure for command-line scripting to perform reconnaissance and gather information used for lateral movement to additional systems.
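
Detections for this stage often start with command-line heuristics. The sketch below flags PowerShell invocations containing keywords commonly associated with encoded or download-and-execute activity; the keyword list is a simplified assumption, and real detections weigh many more signals than a regex:

```python
import re

# Crude keyword heuristic for PowerShell command lines often seen in
# post-compromise reconnaissance and execution (illustrative, not complete).
SUSPICIOUS = re.compile(
    r"-enc(odedcommand)?\b|downloadstring|invoke-expression|iex\b|bypass",
    re.IGNORECASE,
)

def score_commandline(cmdline):
    """Return True if the command line matches a suspicious keyword."""
    return bool(SUSPICIOUS.search(cmdline))

print(score_commandline("powershell -NoProfile -Enc aQBlAHgA"))  # True
print(score_commandline("powershell Get-ChildItem C:\\Logs"))    # False
```

Keyword matching alone produces false positives in automation-heavy environments, which is why behavioral context matters.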

The attackers continued to leverage compromised credentials to cross security boundaries, effectively using cloud service providers as a stepping stone to the corporate data of multiple organizations. To ensure persistent connectivity to the cloud infrastructure in the event an administrative account stopped working, the attackers installed remote access trojans that used command-and-control sites spoofing legitimate domains.
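
Spoofed domains of this kind can sometimes be surfaced with simple string-similarity checks against an allow-list of legitimate domains. A minimal sketch, using a hypothetical allow-list and the standard library's `difflib`:

```python
from difflib import SequenceMatcher

# Hypothetical allow-list of an organization's legitimate domains.
LEGIT_DOMAINS = ["example.com", "login.example.com"]

def looks_spoofed(domain, threshold=0.85):
    """Flag domains that closely resemble, but do not match, legitimate ones."""
    for legit in LEGIT_DOMAINS:
        if domain == legit:
            return False  # exact match is legitimate
        if SequenceMatcher(None, domain, legit).ratio() >= threshold:
            return True   # near-miss lookalike
    return False

print(looks_spoofed("examp1e.com"))  # True  ('1' substituted for 'l')
print(looks_spoofed("example.com"))  # False (exact match)
```

Production systems typically add homoglyph tables and newly-registered-domain feeds on top of raw similarity scores.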

These were open-source, off-the-shelf malware families used in many attacks, such as Poison Ivy and PlugX. Many of the systems compromised with remote access were non-mission-critical, which allowed the attackers to continue lateral movement while avoiding detection by system administrators. The final stage of Operation Cloud Hopper was exfiltration of intellectual property: data was collated, compressed and exfiltrated from the CSP infrastructure to infrastructure controlled by the attackers.
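
Exfiltration of collated, compressed data often shows up as an outsized spike in outbound volume from a single host. A rough sketch of that idea, with made-up baselines and flow records:

```python
from collections import defaultdict

# Hypothetical per-host historical baselines of outbound bytes per day.
BASELINE_BYTES = {"10.0.0.5": 2_000_000, "10.0.0.9": 1_500_000}

def exfil_suspects(flows, multiplier=10):
    """Flag hosts whose outbound volume far exceeds their baseline."""
    totals = defaultdict(int)
    for src, dst, nbytes in flows:
        totals[src] += nbytes
    return [
        host for host, total in totals.items()
        if total > multiplier * BASELINE_BYTES.get(host, 0)
    ]

flows = [
    ("10.0.0.5", "198.51.100.7", 50_000_000),  # large compressed upload
    ("10.0.0.9", "203.0.113.8", 1_000_000),    # within normal range
]
print(exfil_suspects(flows))  # ['10.0.0.5']
```

Volume alone is a weak signal; pairing it with destination reputation and timing makes it far more useful.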

As CSPs take on responsibilities from tenants in the managed infrastructures, the amount of control and visibility those cloud tenants maintain diminishes. APT10 took advantage of this diminished visibility and leveraged credentials and systems that had access to both CSP and enterprise infrastructures.

Because cloud tenants have no visibility or control in the CSP infrastructure itself, it is a formidable challenge to monitor and detect attackers who access one system and then quickly pivot within the CSP infrastructure to another. The complexity of hybrid environments that span CSPs and on-premises systems makes it difficult to adequately address problems like stolen credentials or lateral movement from a cloud tenant to a CSP and then to a second cloud tenant. One careless and inattentive cloud tenant can increase the risk for other tenants who exercise greater diligence.

Key takeaways

In the APT10 Operation Cloud Hopper attack, the method of initial intrusion was cloud-specific, but the attack behaviors within those cloud environments are consistent with behaviors found in private clouds and physical data centers.

This is because all attacks must follow a certain attack lifecycle to succeed, especially when the goal is data exfiltration. Preventing a compromise is increasingly difficult, but detecting the behaviors that occur, from command and control to data exfiltration, is not. More importantly, when an attack is carried out in hours rather than days, the time to detect becomes even more critical.

A key takeaway from the shared responsibility model is that regardless of the data center model deployed – infrastructure, platform or software as a service – the enterprise organization is always responsible for data, endpoints, accounts, and access management.

Managing access
While CSPs need to ensure their own access management and controls that limit access to cloud tenant environments, tenants themselves must assume this can be compromised and focus on learning the who, what, when and where of access management.

Properly assigning user access rights helps by reducing instances of shared credentials so cloud tenants can focus on how those credentials are used. Resource access policies can also reduce opportunities for movement between the CSP infrastructure and cloud tenants.
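
Reducing shared credentials is easier to enforce when it can be measured. As a hedged sketch (the record format and key identifiers are hypothetical), map each access key to the distinct sources that used it and flag any key seen from more than one identity:

```python
from collections import defaultdict

def shared_credentials(usage_records):
    """Find access keys used from multiple distinct source identities."""
    sources = defaultdict(set)
    for key_id, source in usage_records:
        sources[key_id].add(source)
    return {k: s for k, s in sources.items() if len(s) > 1}

records = [
    ("key-1", "alice-laptop"),
    ("key-1", "build-server"),  # same key reused from another identity
    ("key-2", "bob-laptop"),
]
print(shared_credentials(records))  # {'key-1': {'alice-laptop', 'build-server'}}
```

Once each credential maps to exactly one identity, anomalous use of that credential becomes attributable to a specific person or system.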

Detect and respond
When it comes to cloud and on-premises monitoring, it is necessary to monitor both as well as determine how to correlate data and context from both into actionable information for security analysts. Monitoring cloud-deployed resources by cloud tenants is essential to increase the ability to detect lateral movement from the CSP infrastructure to tenant environments and vice versa. Coordinating with the CSP – as well as CSP coordination with cloud tenants – can provide a powerful combination of information that can increase the likelihood of detecting the post-compromise activities.
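
Correlating cloud and on-premises data can be as simple as joining events for the same account within a short time window. A minimal illustration, with hypothetical event records and field names:

```python
from datetime import datetime, timedelta

def correlate(cloud_events, onprem_events, window=timedelta(minutes=10)):
    """Pair cloud and on-premises events for the same account within a window."""
    pairs = []
    for c in cloud_events:
        for o in onprem_events:
            if c["account"] == o["account"] and abs(c["time"] - o["time"]) <= window:
                pairs.append((c["account"], c["action"], o["action"]))
    return pairs

t = datetime(2024, 1, 1, 12, 0)
cloud = [{"account": "svc-admin", "action": "AssumeRole", "time": t}]
onprem = [{"account": "svc-admin", "action": "RDP logon",
           "time": t + timedelta(minutes=3)}]
print(correlate(cloud, onprem))  # [('svc-admin', 'AssumeRole', 'RDP logon')]
```

Even this naive join can reveal a cloud role assumption immediately followed by interactive access on-premises, exactly the cross-boundary movement discussed above.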

More importantly, visibility into attacker behaviors is dependent on the implementation of proper tools that can leverage cloud-specific data.

Security operations
Knowing and managing the infrastructure as a part of due diligence should help to identify systems and operations that are compromised by malware implants like those used in Operation Cloud Hopper.

Changes to production systems can be difficult to detect. But when visibility is available in the cloud infrastructure, it is much easier to detect attacker behaviors in compromised systems and services that are clearly operating outside of expected specifications. Ideally, security operations teams will have solid information about expectations for that infrastructure, so deviations from normal activity are more likely to identify malware and its activity.
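
One way to operationalize "deviations from normal activity" is a per-host process baseline: anything observed that is not in the baseline becomes a candidate for investigation. The host and process names below are purely illustrative:

```python
# Hypothetical expected processes per host, built from due-diligence inventory.
BASELINE = {"web-01": {"nginx", "sshd"}, "db-01": {"postgres", "sshd"}}

def deviations(observed):
    """Return, per host, observed processes absent from the baseline."""
    return {
        host: procs - BASELINE.get(host, set())
        for host, procs in observed.items()
        if procs - BASELINE.get(host, set())
    }

observed = {
    "web-01": {"nginx", "sshd", "powershell"},  # unexpected on this host
    "db-01": {"postgres", "sshd"},
}
print(deviations(observed))  # {'web-01': {'powershell'}}
```

The quality of such a detection depends entirely on how well the baseline reflects what the infrastructure is actually supposed to run.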
