Zero Day Vulnerabilities: How to Overcome the Fear of the Unknown


Are undiscovered vulnerabilities being exploited in your network at this moment? Because the unthinkable could happen to anyone, anytime, Mark Kedgley, CTO at Netwrix, emphasizes the need to put effective measures in place to spot indicators of compromise, block attacks and respond to incidents.

When a new Zero Day threat is revealed, it’s tempting to try to work out when Day Zero actually was. The first news the general public receives is the release of the patch, but that means somebody fell victim to the exploit days or even weeks earlier. Sometimes we get more insight from whichever researcher first analyzed the threat. 

But the real concern is not when Day Zero was but what was happening before Day Zero. How long had adversaries been exploiting the vulnerability before it was discovered? This troubling ‘known unknown’ leads right to another vexing question: How many yet-undiscovered vulnerabilities are being exploited in our network right now? It’s rather like swimming in the ocean knowing there are great white sharks out of sight below us — it won’t be long before one of them gets hungry! 

The Danger Is Real

Think that it won’t happen to you? Think again. There will always be an endless supply of traditional ‘exploit-that-we-only-just-discovered’ vulnerabilities, as built-in functions get misused and abused when the wrong people get their hands on them. Moreover, Solarigate and Hafnium showed us that a deliberately engineered Zero Day threat can be unleashed at any time.

Think of the Death Star in Star Wars. It was formidably defended, with not just laser guns but squadrons of tie fighters to intercept and destroy attackers before they even got close, and backed up with forcefields for protection, just in case! And yet the infamous thermal exhaust port presented the rebels with a critical weak spot — a Zero Day vulnerability that was exploited to devastating effect.

Will a Typical Security Posture Help?

Trying to defend against future threats that have yet to take shape presents a particularly challenging problem. To address it most effectively, cybersecurity experts recommend a comprehensive, layered approach to security, based on a cybersecurity framework such as the NIST Cybersecurity Framework (CSF). 

The great thing about a security framework is that it gives you strategies for mitigating, counteracting and remediating a broad spectrum of cyber threats. It helps you increase the cyber resilience of your IT environment, even in the face of Zero Day exploits, which by definition have not been seen before. 

Indeed, early detection and containment are critical for limiting the depth of an incursion and the opportunity for data theft or disruption. Research shows that the detection time for a breach is around 160 days, but data is usually exfiltrated within the first few days. In other words, you could be the victim of a smash-and-grab data theft months before you have any idea that your systems were compromised.


Change Control Is Vital

Accordingly, to deal with the unknown, it is essential to spot malicious activity as soon as possible. The key to success is gaining visibility into and control over changes in your IT systems. 

What changes are important to track and control in order to detect unknown threats? We must be able to expose what are known as indicators of compromise (IoCs). For example, IoCs for ransomware, APTs or trojan malware include new or modified files in our systems, changes to registry settings, the launching of new processes, and the opening of new network ports.

But visibility into changes isn’t sufficient; we also need control over them. IT systems experience a huge volume of changes every day, and most of them are not IoCs. For example, applying valid patches to a system will result in the same kind of changes as we would get with a cyberattack. Accordingly, we need to be able to distinguish between intended, positive changes and unexpected, unwanted changes.

The only way to get the necessary level of forensic detail and context to distinguish between these two types of changes is with file integrity monitoring (FIM). It is not enough to just know that, say, a system file has changed on a device. We need to know when it changed, its state before and after the change, and who made the change. Moreover, was this a planned change that is related to an approved request in our ITSM system? Ideally, we would also reference file reputation data to determine, for example, whether the file is signed and known to be part of an official publisher patch, as well as whether this same file has been seen on other systems.
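The FIM workflow described above — baseline the files you care about, then classify each change as approved or suspect — can be sketched in a few dozen lines. This is a minimal, illustrative example, not a production FIM product: the `build_baseline`, `diff_baseline` and `approved` names are my own, and a real tool would also capture who made each change and consult file reputation data.

```python
import hashlib
import os


def hash_file(path):
    """Return the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def build_baseline(root):
    """Record a hash and modification time for every file under root.

    The mtime is kept for forensic context (when the change happened),
    while the hash is what we compare to detect tampering.
    """
    baseline = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            baseline[path] = {
                "sha256": hash_file(path),
                "mtime": os.path.getmtime(path),
            }
    return baseline


def diff_baseline(old, new, approved=frozenset()):
    """Classify file changes; anything not approved is a potential IoC.

    'approved' stands in for reconciliation against an ITSM change
    request or a known-good publisher patch manifest.
    """
    changes = []
    for path, info in new.items():
        if path not in old:
            changes.append(("added", path))
        elif info["sha256"] != old[path]["sha256"]:
            changes.append(("modified", path))
    for path in old:
        if path not in new:
            changes.append(("deleted", path))
    return [c for c in changes if c[1] not in approved]
```

In practice the interesting output is the filtered list: a modified system binary that matches no approved change ticket is exactly the kind of unexpected, unwanted change that deserves immediate investigation.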

Of course, change control delivers more value than only breach detection. It also helps us maintain a secure state that blocks breaches from happening in the first place, by identifying configuration drift so it can be remediated before it can be exploited. 

A Word About Patches

Vulnerabilities arise either from poor configuration choices or from unforeseen flaws in software products. An attacker only needs to know how to exploit a vulnerability; whether it is already known to a wider audience is irrelevant. Patch management is a big topic in its own right. In large-scale, mixed-platform technology estates, both assessing the need for patches and deploying them become major IT processes. 

Many organizations still run traditional, network-based scans of all systems, hitting the endpoints with a long, automated series of tests to probe for vulnerabilities. This takes the form of first enumerating a software inventory or version catalogue and then simulating known exploit methods.

With close to 200,000 known vulnerabilities catalogued in the National Vulnerability Database, and new ones arriving every hour of every day, comprehensively identifying the need for patches has become a resource- and time-intensive operation. In response, the latest generation of patch assessment technologies makes better use of passive discovery methods to lighten the scanning load: the so-called ‘scan-less’ scan. Instead of re-discovering installed software and versions every time a scan runs, a scan-less scan maintains a real-time inventory, sharply reducing the discovery workload. 

If we can keep an accurate inventory of products and versions, we can immediately flag new vulnerabilities, whether they arise from a new software deployment on an endpoint or from a newly disclosed exploit affecting software that is already installed. This allows for a more targeted patching approach – patch only as required – and a more real-time ability to stay one step ahead of new exploits as soon as they become known.
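The matching step behind inventory-driven flagging is simple in principle: compare each installed version against the version in which each advisory was fixed. The sketch below assumes toy data formats of my own invention (a host-to-products inventory and a list of advisory tuples); a real implementation would consume an NVD or vendor feed and handle vendor-specific version schemes.

```python
def parse_version(v):
    """Turn a dotted version string like '2.4.1' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))


def flag_vulnerable(inventory, advisories):
    """Match an installed-software inventory against known advisories.

    inventory:  {hostname: {product: installed_version}}
    advisories: list of (product, fixed_in_version, advisory_id) tuples
    Returns (host, product, installed_version, advisory_id) for every
    host running a version older than the fixed release.
    """
    findings = []
    for host, products in inventory.items():
        for product, fixed_in, advisory_id in advisories:
            installed = products.get(product)
            if installed and parse_version(installed) < parse_version(fixed_in):
                findings.append((host, product, installed, advisory_id))
    return findings
```

Because the inventory is maintained continuously, a newly published advisory can be matched against every endpoint in seconds — which is the ‘patch only as required’ targeting the article describes.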

Conclusion

Accepting that new threats are inevitable and that the unthinkable could happen to us is uncomfortable, but it also motivates us to take action to put in place the additional measures we need to effectively block attacks, spot IoCs and respond to incidents. By implementing FIM, you can be ready for threats — even those that don’t yet exist.


Image Source: Shutterstock

