When it comes to cyber security, the old adage of ‘doing the simple things well’ is more relevant today than ever before. Three simple principles will help you build a strong foundation and prevent future crises: least privilege, attack surface minimisation, and defence in depth.
These principles have been around for decades, but they hold true now more than ever, because we live in an increasingly cloud-orientated environment where we need to be vigilant at all times.
Least privilege is a fundamental principle of information security. The fewer privileges a user has, the fewer opportunities there are for an attacker to exploit. When designing your systems, always ask yourself "what is the minimum amount of access required for this user to perform their job?" Restricting users (or applications) to the absolute minimum they need to do their job reduces the risk of accidental or malicious damage, and also minimises the impact of a successful attack.
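The "minimum access required" question can be made concrete with a default-deny permission check. The sketch below is purely illustrative; the role and permission names are hypothetical, not taken from any particular product:

```python
# Minimal default-deny permission model (illustrative only; role and
# permission names are hypothetical).
ROLE_PERMISSIONS = {
    "invoice-clerk": {"invoices:read", "invoices:create"},
    "auditor": {"invoices:read", "reports:read"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Grant only permissions explicitly assigned to the role; deny everything else."""
    return permission in ROLE_PERMISSIONS.get(role, set())
```

The key design choice is that an unknown role, or a permission outside the role's job, is denied by default rather than granted by omission.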
Attack surface minimisation is the practice of reducing the number of points through which an attacker could enter or extract data. This can be done by removing unnecessary features and components from systems, hardening systems against attack, and applying least privilege principles at the application or service level.
Defence in depth is often overlooked, yet relying solely on a single security mechanism is a very risky strategy. Attackers are very creative and will often find ways to circumvent a security measure if it is their only obstacle. By using multiple layers of security, you create a series of obstacles that the attacker must overcome, which makes it more difficult, time-consuming, and expensive for them to achieve their goals.
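The layering idea can be sketched as a series of independent checks, each of which can block a request on its own, so bypassing any single control is not enough. The layer names and thresholds below are illustrative assumptions, not a real product's controls:

```python
# Defence in depth as independent layers: a request is admitted only if
# every layer accepts it. Layer logic here is deliberately simplistic.
def firewall(req: dict) -> bool:
    return req.get("port") == 443           # only expected traffic

def auth(req: dict) -> bool:
    return req.get("token") == "valid"      # stand-in for real authentication

def input_validation(req: dict) -> bool:
    return len(req.get("payload", "")) < 1024

LAYERS = [firewall, auth, input_validation]

def admit(request: dict) -> bool:
    """Admit a request only if every independent layer accepts it."""
    return all(layer(request) for layer in LAYERS)
```

An attacker who circumvents the firewall still faces authentication and validation; each layer bought the defender time and raised the cost of the attack.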
These three principles form the foundation of good cyber security practice, and following them will go a long way towards protecting your organisation from cyber threats. Implementing them may not always be easy, but it is well worth the effort to keep your critical assets and data safe.
The recent Log4Shell zero-day vulnerability in Apache Log4j 2 really demonstrated the futility of the ongoing cycle: vulnerabilities are found in software, those vulnerabilities are exploited, and organisations lurch into crisis mode to mitigate and patch. This is a never-ending reactive cycle, and you will always be behind the curve. Looking closely at this and other major vulnerabilities of recent years, it is striking that simply adhering to long-established, overarching security principles will normally provide mitigation in itself, allowing the affected organisation to patch at their leisure as part of a standard operational cycle rather than in a major crisis. Clearly there will always be exceptions to the rule, but 95% of the time the approach provides a firm foundation for an organisation, allowing for far greater breach resilience. The fundamental principles of least privilege, attack surface minimisation and defence in depth should always be a key consideration when designing an organisational security strategy.
The main element that struck me when reviewing the Log4Shell exploit was that the vulnerable asset had to be able to download the malicious code directly from the Internet. Returning to the overarching security principles, and to attack surface minimisation and least privilege in particular, you really have to question why the server and application in question would be allowed to download any content, or to communicate outbound over any protocol, directly to the entire Internet. Surely this functionality never formed part of the functional design requirements for the service in question? So why is it there?
I fear that in our modern world there is now a default assumption that, whatever the technology asset, it simply has to be able to access the Internet. Whilst this is appropriate for home networks, where a device's ability to update its software regularly protects the home, it isn't appropriate for a business enterprise network. Despite this, many businesses continue to operate this way, which drastically increases the likelihood of their being compromised via a vulnerability like Log4Shell or another traditional attack vector.
There is a paradox here: in most businesses, Internet access for staff web browsing is tightly controlled and closely monitored by web proxies, content filtering and malware detection devices, yet within the data centre or cloud environments (which usually contain the organisation's most valuable and critical assets), servers quite often enjoy more or less unfettered (and unnecessary) access to the Internet. For any organisation that is serious about securing its critical assets, it is essential that server Internet access is restricted to only the functionality required for the role the server performs (e.g. firewall rules on source / destination / protocol). It should go without saying that "web browsing", via a proxy or otherwise, should never be allowed from a business-critical server!
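The source / destination / protocol approach amounts to a default-deny egress policy: every outbound connection must match an explicit rule or it is dropped. A minimal sketch of the logic, with hypothetical destinations (a real deployment would express this in firewall rules rather than application code):

```python
# Default-deny egress policy for a server, mirroring the
# source / destination / protocol approach. Rule values are hypothetical.
EGRESS_ALLOW = {
    # (destination, port, protocol) tuples the server's role actually needs
    ("repo.internal.example", 443, "tcp"),  # internal package mirror
    ("10.0.0.53", 53, "udp"),               # internal DNS resolver only
}

def egress_permitted(dest: str, port: int, proto: str) -> bool:
    """Allow an outbound connection only if it matches an explicit rule."""
    return (dest, port, proto) in EGRESS_ALLOW
```

Under a policy like this, a Log4Shell-style callout to an arbitrary Internet host simply fails, and the vulnerability is mitigated before any patch is applied.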
Accepting that attack surface minimisation and least privilege are a "work in progress" for many organisations, defence in depth really does become more critical.
In my own experience it is rare for any technology or security team to have 100% confidence in their knowledge of the business environment. CMDBs and asset/software inventories are rarely complete, and governance failures over the years often lead to orphaned assets, or to assets being introduced without the knowledge and oversight of the security teams.
As these assets are effectively "unknowns", the normal practices around host security controls (antivirus, Endpoint Detection and Response (EDR), etc.), hardening and monitoring will be absent. It is therefore an absolute necessity to be able to monitor the business and cloud network environments holistically for threats using a capable Network Detection and Response (NDR) system. This not only allows you to spot the assets you may not previously have been aware of (driving better accuracy in your CMDB); it also allows you to detect and visualise attack scenarios across your organisation's disparate network-attached assets and their associated user context. Ultimately it provides the depth to your defences, so when the inevitable happens and an attacker does manage to penetrate them, you are in a much better place to contain and eradicate the threat at pace!
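The CMDB-accuracy benefit is essentially a reconciliation exercise: anything observed on the wire that is absent from the inventory is an "unknown" worth investigating. A minimal sketch, with illustrative addresses:

```python
# Reconcile hosts observed by network monitoring against the CMDB.
# Addresses are illustrative; a real NDR feed would supply the observed set.
def unknown_assets(observed_hosts: set[str], cmdb_hosts: set[str]) -> set[str]:
    """Hosts visible on the network but missing from the asset inventory."""
    return observed_hosts - cmdb_hosts

observed = {"10.1.2.10", "10.1.2.11", "10.1.2.99"}
cmdb = {"10.1.2.10", "10.1.2.11"}
# unknown_assets(observed, cmdb) → {"10.1.2.99"}
```

Each host this surfaces is either a gap in the CMDB to be corrected or a genuinely rogue asset to be investigated; either outcome improves your visibility.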
Unfortunately, in security things are rarely clear cut; there is no way to ensure that an organisation is 100% secure. But by thinking and acting in terms of the principles outlined here, and taking full control of the part of the risk management equation that you can control, you can be very confident that you will rarely be critically exposed, and crises should be few and far between, even in the face of the latest ‘zero-day’!