Penetration Test Remediation Approach
Penetration tests are a fact of life for most of us who find ourselves forever cursed to work in the field of IT and, generally speaking, they are a very good thing, as they allow you to verify that you are following industry best practice. However, the results of the test, and the consequent list of corrective actions, are often less than a model of clarity.
In my experience, there is no relationship between the cost of your penetration test and the quality of the report, and the task of correcting the identified issues quickly becomes laborious and, consequently, is often never really completed. Note also that, unlike a car MOT test, a retest after a penetration test usually carries a substantial cost. Incidentally, while there is often a requirement to have an annual penetration test - usually to pass some sort of security certification - there is often no need to actually pass it. It’s an interesting world.
In an ideal world, the output from a penetration test would be a list of named network objects with the security vulnerabilities and the necessary patches identified. Unfortunately, that seems to be too much work for many providers of penetration tests – there is, after all, no established standard for what constitutes a penetration test – and what you often get is a long, ungrouped list of IP addresses and an even longer list of missing patches that may or may not actually be missing.
So, an organised approach is needed to sort out the list of doom that you paid so much money for into something that can be turned into a plan.
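As a starting point for that organised approach, the ungrouped list can be collapsed into one entry per host so you can see which machines carry the most findings. A minimal sketch, assuming the report can be exported as a simple CSV of (ip, finding) pairs - the column names and sample data below are illustrative, not any real report format:

```python
import csv
import io
from collections import defaultdict

# Hypothetical excerpt of an ungrouped findings export, one (ip, finding)
# pair per row -- layout and contents are assumptions for illustration.
RAW_FINDINGS = """\
ip,finding
10.0.1.15,Missing KB4012212
10.0.2.30,SSLv3 enabled
10.0.1.15,SMBv1 enabled
10.0.2.30,Missing KB4012212
10.0.1.15,SSLv3 enabled
"""

def group_by_host(report_text: str) -> dict:
    """Collapse an ungrouped list of (ip, finding) rows into one entry per host."""
    grouped = defaultdict(list)
    for row in csv.DictReader(io.StringIO(report_text)):
        grouped[row["ip"]].append(row["finding"])
    return dict(grouped)

if __name__ == "__main__":
    for host, findings in sorted(group_by_host(RAW_FINDINGS).items()):
        print(f"{host}: {len(findings)} finding(s)")
```

Even this trivial grouping tends to reveal that the "hundreds of findings" headline figure is really a handful of issues repeated across a handful of hosts.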
- For servers and workstations especially, check for superseded patches. In one recent penetration test report that I reviewed, it turned out that almost 20% of the "missing" patches had actually been superseded. We aren’t talking recent stuff like the KB4034658 debacle but patches that were superseded five years ago.
- For network devices that are shown to have vulnerabilities, check that the recommendations regarding firmware upgrades are actually valid. It is perfectly possible for the test report to recommend upgrading, say, version 11.1 to version 11.3 when the identified vulnerability recurs in version 11.4. As a rule of thumb, go for the latest version and thus avoid having to obtain emergency downtime twice in as many days.
- Track down everything that has an automated fix and group those fixes together, as the chances are that everything can be done with one reboot. If, for example, a number of outdated encryption algorithms are found on a group of Windows servers then the fix will likely be a registry setting (or ten) and they can all be applied at once using group policies.
- If you don’t have a test environment, then you will need a sacrificial lamb. This is, for obvious reasons of self-preservation, never ever the HR or Payroll departments so take out your frustrations upon PR or Marketing as they will likely be on a coffee break for most of the working week.
- Verify that things that should be working, such as AV signature updates and security patching, are actually working - your WSUS server might simply be out of disk space, which is why the last two months’ patches are missing. Don’t get me started on network monitoring…
- Identify those devices that will never be capable of meeting modern security standards. A lot of organisations have some antique servers that were virtualised because no one could figure out how to migrate an application elsewhere. In such cases, all you can really do by way of mitigation is to segment these devices into their own network and protect them with access control lists. It would also be possible to use some sort of application proxy, like that big expensive load balancer you bought in the last financial year, to stop any direct contact with what would otherwise be an easily compromised server or device.
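To put numbers on the superseded-patch point in the first bullet: the check is just a filter of reported KBs against a supersedence map. The map below is hand-made and purely illustrative - in practice it would come from the Microsoft Update Catalog or your patch-management tooling, not from me:

```python
# Illustrative supersedence map: reported KB -> the newer KB that replaced it.
# These mappings are examples for the sketch, not an authoritative catalogue.
SUPERSEDED_BY = {
    "KB3011780": "KB4012212",
    "KB2992611": "KB4012212",
}

def still_relevant(reported_kbs: list) -> list:
    """Drop any reported KB that a newer patch has superseded."""
    return [kb for kb in reported_kbs if kb not in SUPERSEDED_BY]

if __name__ == "__main__":
    report = ["KB3011780", "KB4012212", "KB2992611"]
    print(still_relevant(report))
```

Twenty lines of filtering can shorten the remediation list considerably before anyone touches a server.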
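The "registry setting (or ten)" fix for outdated encryption algorithms usually means the SCHANNEL protocol keys. A minimal sketch that emits a .reg file disabling legacy protocols, ready to be imported via a group policy startup script - the protocol list is an example, so check your own report before disabling anything:

```python
# Disabling SCHANNEL protocols on Windows is done per protocol and per role
# (Server/Client) under this well-known registry path.
BASE = (r"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control"
        r"\SecurityProviders\SCHANNEL\Protocols")

# Example list only -- disable what your findings actually call out.
LEGACY_PROTOCOLS = ["SSL 2.0", "SSL 3.0", "TLS 1.0"]

def build_reg_file(protocols: list) -> str:
    """Render a .reg file that disables each protocol for both roles."""
    lines = ["Windows Registry Editor Version 5.00", ""]
    for proto in protocols:
        for role in ("Server", "Client"):
            lines.append(f"[{BASE}\\{proto}\\{role}]")
            lines.append('"Enabled"=dword:00000000')
            lines.append('"DisabledByDefault"=dword:00000001')
            lines.append("")
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_reg_file(LEGACY_PROTOCOLS))
```

Push the resulting file out through group policy and you get the whole batch applied with the single reboot mentioned above.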
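The out-of-disk-space WSUS failure mode is cheap to check for before it eats two months of patches. A rough sketch - the content path and the 20 GB threshold are assumptions to adjust for your own deployment:

```python
import shutil

# Assumed WSUS content directory and free-space threshold; both are
# illustrative values, not defaults from any particular installation.
WSUS_CONTENT = r"D:\WSUS\WsusContent"
MIN_FREE_GB = 20

def free_space_ok(path: str, min_free_gb: int) -> bool:
    """Return True if the volume holding `path` has at least min_free_gb free."""
    usage = shutil.disk_usage(path)
    return usage.free >= min_free_gb * 1024**3

if __name__ == "__main__":
    try:
        status = "OK" if free_space_ok(WSUS_CONTENT, MIN_FREE_GB) \
            else "LOW - updates may silently stall"
        print(f"{WSUS_CONTENT}: {status}")
    except FileNotFoundError:
        print(f"{WSUS_CONTENT} not found - check the server, not just the report")
```

Run it from a scheduled task and you find out about the full disk before the penetration testers do.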
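For the segmentation option in the last bullet, the access control list writes itself once you know the few flows the legacy hosts legitimately need. A sketch that generates Cisco-style extended ACL lines - the hosts, subnets, port and syntax here are all illustrative:

```python
# Hypothetical legacy hosts and the one flow they still need
# (an app subnet reaching them on the SQL Server port, as an example).
LEGACY_HOSTS = ["10.9.0.10", "10.9.0.11"]
ALLOWED_FLOWS = [("10.0.5.0 0.0.0.255", 1433)]

def build_acl(name: str) -> list:
    """Permit only the known-good flows to each legacy host, drop and log the rest."""
    rules = [f"ip access-list extended {name}"]
    for host in LEGACY_HOSTS:
        for source, port in ALLOWED_FLOWS:
            rules.append(f" permit tcp {source} host {host} eq {port}")
    rules.append(" deny ip any any log")
    return rules

if __name__ == "__main__":
    print("\n".join(build_acl("LEGACY-SEGMENT-IN")))
```

The `deny ... log` at the end doubles as free evidence for the retest: anything still trying to talk to the antiques directly shows up in the logs.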
Once you have figured out what actually needs to be fixed, you can create a plan, a schedule and all the necessary change requests. By far the longest part of the process is figuring out what to do; the deployment plan itself is usually straightforward. And, to drag something out from the forever dull land of project management, remember that if you fail to plan then you have planned to fail. True.