Guarding individual computer systems and organizational networks against malicious software and intrusions by unauthorized users and applications begins with solid perimeter and endpoint defenses and an effective method of access control.
Though opinions differ as to which is best, two approaches dominate in the bid to restrict and regulate access to vital system and network resources and infrastructure. In this article, we compare blacklisting and whitelisting and examine the differences and benefits of each.
What is Blacklisting?
Just like a database of known and suspected terrorists maintained by border control authorities, or the roster of card counters and dice switchers barring certain customers from your local casino, a computer blacklist details known malicious or suspicious entities that shouldn’t be granted access or execution rights on a system or network.
These “entities” would typically include malicious software such as viruses, Trojans, worms, spyware, keyloggers, and other forms of malware. But depending on the environment and the scope of application, blacklisted entities might extend to include users, business applications, processes, IP addresses, and organizations known to pose a threat to an enterprise or individual.
Blacklisting has traditionally been deployed as a key element in anti-virus and security software suites, typically in the form of a “virus database” of known digital signatures, heuristics, or behavior characteristics associated with viruses and malware that have been identified in the wild.
Note the emphasis on “known” threats. Virus signatures and other forms of blacklisting rely on security intelligence and experience of attack vectors, exploits, vulnerabilities, and malware currently doing the rounds – and for which counter-measures are already known or developed. Against unknown menaces like zero-day threats (which have yet to be discovered and isolated by security professionals), blacklisting is of very limited or no value.
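The signature-matching idea described above can be sketched in a few lines of Python. This is a simplified illustration, not how any particular anti-virus product works: the "virus database" here is a hypothetical set of SHA-256 digests derived from a stand-in sample, whereas real products combine many signature formats with heuristic and behavioral analysis.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical "virus database": in practice these digests would come from a
# vendor's threat-intelligence feed; here one is derived from a stand-in sample.
MALWARE_SAMPLE = b"stand-in bytes for a known piece of malware"
KNOWN_BAD_HASHES = {sha256_of(MALWARE_SAMPLE)}

def is_blacklisted(data: bytes) -> bool:
    # Default-allow: anything whose digest is NOT on the list runs freely,
    # which is exactly why unknown (zero-day) threats slip through.
    return sha256_of(data) in KNOWN_BAD_HASHES

print(is_blacklisted(MALWARE_SAMPLE))        # flagged: matches a known signature
print(is_blacklisted(b"brand-new malware"))  # not flagged: unknown, so allowed
```

Note the default-allow posture: the check only ever says "block"; everything it has never seen is waved through.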
But limitations aside, blacklisting has been a popular strategy for years and remains an active option for modern enterprise security.
Advantages of Blacklisting
One of the principal advantages of blacklisting lies in the simplicity of its principle: You identify everything bad that you don’t want getting into or operating on your system, exclude it from access, then allow the free flow of everything else. It has been and continues to be the basis on which signature-based anti-virus and anti-malware software operates.
For users, it’s traditionally been a low-maintenance option, as responsibility for compiling and updating a blacklist of applications or entities falls to the software itself and its related databases, or to some form of third-party threat intelligence/service provider.
It’s a threat-centered approach whose effectiveness depends on how well and how often the blacklist and its associated responses are refreshed and updated, which in turn depends on the volume of threats a system has to deal with. With an estimated 2 million new pieces of malware emerging each month, keeping a blacklist current now requires gathering threat intelligence from millions of devices and endpoints via cloud-based services.
What is Whitelisting?
Application whitelisting turns the blacklist logic on its head: You draw up a list of acceptable entities (software applications, email addresses, users, processes, devices, etc.) that are allowed access to a system or network, and block everything else. It’s based on a “zero trust” principle which essentially denies all, and allows only what’s necessary.
The simplest whitelisting techniques used for systems and networks identify applications based on their file name, size, and directory path. But the U.S. National Institute of Standards and Technology (NIST), an agency of the Commerce Department, recommends a stricter approach: a combination of cryptographic hash techniques and digital signatures linked to the manufacturer or developer of each component or piece of software.
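The difference between the two identification methods can be shown with a hedged sketch (all file names and contents below are invented for illustration). A file renamed and padded to match an approved entry defeats a name-and-size check, but not a cryptographic hash check of the kind NIST recommends (in its SP 800-167 guide to application whitelisting):

```python
import hashlib

GENUINE_APP = b"genuine application bytes"

# Two hypothetical whitelist entries for the same approved application:
# one keyed naively by (name, size), one keyed by SHA-256 digest.
APPROVED_NAME_SIZE = {("app.exe", len(GENUINE_APP))}
APPROVED_HASHES = {hashlib.sha256(GENUINE_APP).hexdigest()}

def allowed_by_name_size(name: str, data: bytes) -> bool:
    return (name, len(data)) in APPROVED_NAME_SIZE

def allowed_by_hash(data: bytes) -> bool:
    return hashlib.sha256(data).hexdigest() in APPROVED_HASHES

# An impostor crafted to carry the same file name and byte length.
impostor = b"malicious, same length!!!"

print(allowed_by_name_size("app.exe", GENUINE_APP))  # allowed
print(allowed_by_name_size("app.exe", impostor))     # allowed: check is fooled
print(allowed_by_hash(GENUINE_APP))                  # allowed
print(allowed_by_hash(impostor))                     # blocked: digest differs
```

Any change to the file's contents changes its digest, so the hash-keyed whitelist cannot be fooled by renaming or size padding alone.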
At the network level, compiling a whitelist begins by constructing a detailed view of all the tasks that users need to perform, and the applications or processes they need, to perform them. The whitelist might include network infrastructure, sites and locations, all valid applications, authorized users, trusted partners, contractors, services, and ports. Finer-grained details may drill down to the level of application dependencies and software libraries (DLLs, etc.), plugins, extensions, and configuration files.
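The default-deny logic behind such a network whitelist can be sketched as follows. All users, applications, and port numbers here are illustrative, and a real policy engine would evaluate many more attributes, but the shape of the decision is the same: deny unless every attribute is explicitly approved.

```python
# Hypothetical whitelist sets for one small network segment.
ALLOWED_USERS = {"alice", "bob"}
ALLOWED_APPS = {"erp-client", "mail"}
ALLOWED_PORTS = {443, 993}

def permit(user: str, app: str, port: int) -> bool:
    """Zero-trust style check: everything is denied unless each
    attribute of the request appears on its whitelist."""
    return (
        user in ALLOWED_USERS
        and app in ALLOWED_APPS
        and port in ALLOWED_PORTS
    )

print(permit("alice", "mail", 993))     # permitted: every attribute approved
print(permit("mallory", "mail", 993))   # denied: unknown user
print(permit("alice", "torrent", 443))  # denied: unapproved application
```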
Whitelisting for user-level applications could include email (filtering for spam and unapproved contacts), programs and files, and approved commercial or non-commercial organizations registered with Internet Service Providers (ISPs).
In all cases, whitelists must be kept up to date, and administrators must give consideration both to user activity (e.g., what applications they’re allowed to install or run) and user privileges (i.e., making sure that users aren’t granted inappropriate combinations of access rights).
Third-party whitelisting services exist and are sometimes employed by enterprises seeking to ease the management burden that’s associated with the process. These services are often reputation-based, using technology to give ratings to software and network processes based on their age, digital signatures, and rate of occurrence.
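A toy version of such a reputation rating might combine the three factors mentioned above into a single score. The weights and thresholds here are entirely made up for illustration; commercial services use far richer, proprietary models.

```python
def reputation_score(age_days: int, signed: bool, prevalence: int) -> float:
    """Toy heuristic: older, digitally signed, widely seen software
    scores higher (0.0 to 1.0). Weights are arbitrary examples."""
    score = 0.0
    score += min(age_days / 365.0, 1.0) * 0.4  # up to 0.4 for a year of age
    score += 0.3 if signed else 0.0            # 0.3 for a valid signature
    score += min(prevalence / 10_000, 1.0) * 0.3  # up to 0.3 for prevalence
    return score

# A two-year-old, signed application seen on 50,000 endpoints scores high;
# a day-old, unsigned binary seen once scores near zero.
print(reputation_score(730, True, 50_000) >= 0.7)  # True
print(reputation_score(1, False, 1) < 0.1)         # True
```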
Blacklisting vs Whitelisting – Benefits of Whitelisting
From a security perspective, it’s easier (and in many ways, makes more sense) to put a blanket ban on everything, and just let in the chosen few. If only authorized users are allowed access to a network or its resources, the chances of malicious intrusion are drastically reduced. And if only approved software and applications are allowed to run, the chances of malware gaining a grip on the system are likewise minimized.
In fact, NIST recommends the use of whitelisting in high-risk security environments, where the integrity of individual or connected systems is critical and takes precedence over any restrictions that users might suffer in their choice of, or access to, software.
Whitelisting is also a valued option in corporate or industrial environments where working conditions and transactions may be subject to strict regulatory compliance regimes. Strict controls on access and execution are possible in environments where standards and policies need to be periodically reviewed for audit or compliance purposes.
Blacklisting vs Whitelisting – Which is Better?
Given that blacklists are restricted to known variables (documented malware, etc.), and that malware variants are continually being designed to evade behavior- or signature-based modes of detection, there’s a feeling in many circles that whitelisting represents the more sensible approach to information security.
This is despite the time, effort, and resources that must be spent compiling, monitoring, and updating whitelists at the enterprise level – and the need to guard against efforts by cybercriminals to compromise existing whitelisted applications (which would still have the go-ahead to run) or to design applications or network entities with file names and sizes identical to approved ones.
As always in such debates, there are also those who favor a “best of both worlds” scenario, with a blacklisted approach to security software for malware and intrusion detection and eradication, operating in tandem with a whitelisted policy governing access to the system or network as a whole.