As promised, this follow-up post will outline what I mean by each of the ‘resolutions’ I highlighted.
- Patch. Everything. On time.
- Protect your hosts. Do application whitelisting.
- No admin rights for anyone who can access production data.
- No one with admin rights can access data.
- Role Based Access.
- Segregate your networks.
- If you create code, do solid code assurance.
- Test and Audit.
1. Patch. Everything. On time.
Sounds simple, right? It should be, but in many companies it seems it isn’t. From my experience there seem to be two main drivers for so many companies failing this most basic of maintenance tasks:
- Systems that must have almost 100% uptime, with no, or ill-defined, patching windows and processes. This goes hand in hand with these solutions being incorrectly designed: if a system must always be ‘up’, design it in such a way that components can be taken out of service to be patched and maintained (or indeed if they fail).
- Incorrect ownership and drivers for the patching process. In many organisations it seems to be the security team who drive the need to apply ‘security’ patches. This needs to be turned around. Any system in production must be patched and maintained as part of BAU (business as usual). Systems / solutions should never be handed over into production without clear ownership and agreed processes for maintaining them, and this must include patching. Security then becomes an assurance function: their scans / checks confirm that the process is being correctly followed, as well as highlighting any gaps.
If you see these issues in your organisation, make 2016 the year you address them; don’t be the next business in the headlines hacked via systems that have not been patched for months!
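The assurance role described above can be sketched in a few lines. The inventory below is entirely hypothetical (real data would come from your patch management or scanning tooling), and the 30-day policy window is an assumed example, not a recommendation:

```python
from datetime import date, timedelta

# Hypothetical inventory: host name -> date its patches were last applied.
# In practice this would be fed by your patch management / scanning tools.
INVENTORY = {
    "web-01": date(2016, 1, 10),
    "db-01": date(2015, 9, 2),
    "app-03": date(2016, 1, 28),
}

def overdue_hosts(inventory, today, max_age_days=30):
    """Return hosts whose last patch date falls outside the policy window."""
    cutoff = today - timedelta(days=max_age_days)
    return sorted(host for host, patched in inventory.items() if patched < cutoff)

print(overdue_hosts(INVENTORY, today=date(2016, 2, 1)))  # ['db-01']
```

A report like this gives the security team something concrete to hand back to the system owners, rather than owning the patching itself.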
2. Protect your hosts. Do application whitelisting.
With the ever more porous nature of our networks and perimeters, coupled with the insider threat, phishing and the like, protecting our hosts is becoming ever more critical.
AV (Anti-Virus / Malware) is not dead, but it is clearly not enough on its own. Indeed, you will struggle to find any host protection product that only does AV these days. Ensure all your hosts, both servers and user endpoints, are running a solid, up-to-date and centrally managed host protection solution (or solutions) providing anti-malware, host IPS (Intrusion Prevention System), host firewalling and ideally FIM (File Integrity Monitoring).
I’m gradually trying to change people’s language from AV / Anti-Malware to Host Protection, as I think this covers both the requirement and many of the solutions far better.
In addition to this I would strongly recommend the use of an application whitelisting solution, as this can provide a key defence in preventing any unapproved (or malicious) software from running. As well as preventing malware, these solutions have the added benefit of helping to maintain a known environment, running only known and approved software.
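The core idea behind whitelisting is a default-deny decision keyed on something the attacker cannot trivially forge, such as a cryptographic hash of the binary. A minimal sketch, with an entirely hypothetical whitelist (real products manage these lists centrally and hook into the O/S execution path):

```python
import hashlib

def sha256_hex(data):
    """Hash a file's contents; the whitelist is keyed on this digest."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical whitelist: digests of the only software approved to run.
approved_script = b"#!/bin/sh\necho ok\n"
WHITELIST = {sha256_hex(approved_script)}

def may_execute(file_bytes):
    """Deny by default: run only files whose hash is on the whitelist."""
    return sha256_hex(file_bytes) in WHITELIST

print(may_execute(approved_script))           # True
print(may_execute(b"#!/bin/sh\nrm -rf /\n"))  # False
```

Note that any change to an approved file, legitimate or malicious, changes its hash, which is exactly why these solutions also help maintain a known environment.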
3. No admin rights for anyone who can access production data. No one with admin rights can access data.
This is something I am currently championing as a great way to reduce the risk to your organisations data.
This may be harder for very small organisations, but for medium and larger ones, think about the different roles your teams have.
How many people who need to access key data, e.g. via production applications, need to have administrative rights on their end user systems, or on the production systems?
Conversely, how many of the system administrators who maintain systems and databases etc. need access to the actual production data in order to perform their duties?
One of the most common ways malware gets a foothold is via users with administrative privileges. So if we prevent any user with these elevated privileges from having access to data, then if they or their systems are compromised, the risk of data loss, or of damage to data integrity, is massively reduced.
While it may seem a substantial challenge to prevent administrators from having access to data, there are at least a couple of obvious options.
Some host protection solutions claim to have separation-of-duties capabilities that control who can access data, rather than relying solely on O/S (Operating System) permissions. I have not tested these myself, though.
Various companies offer transparent encryption solutions with their own set of ACLs, managed independently of the O/S permissions. These can be managed by, for example, the security team to ensure only approved business users can access data, while still permitting administrators to perform their role.
4. Role Based Access.
This one should hopefully require minimal explanation. Each type of user should have a defined role, with associated system permissions allowing them to access the data and perform the tasks their role requires.
This ensures people can only access the data they are supposed to, and nothing more. The principle of ‘least privilege’ must be adhered to when creating roles and applying permissions, ensuring everyone can perform their duties but cannot carry out tasks outside of those that are approved.
This can be backed up by using some form of IAM (Identity and Access Management) solution, although be careful about over-complicating things if your organisation is not large or complex enough to warrant a cumbersome IAM solution.
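At its core, role-based access with least privilege is just a mapping from roles to the minimal set of permissions each needs, with everything else denied. A sketch with hypothetical role and permission names:

```python
# Hypothetical role definitions: each role grants only the permissions
# needed for its duties (least privilege), nothing more.
ROLES = {
    "analyst":  {"report.read"},
    "operator": {"report.read", "job.run"},
    "dba":      {"db.maintain"},
}

def can(role, permission, roles=ROLES):
    """A user may act only if their role explicitly grants the permission."""
    return permission in roles.get(role, set())

print(can("analyst", "report.read"))  # True
print(can("analyst", "job.run"))      # False
```

Note the default: an unknown role, or a permission no role grants, is simply denied.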
5. Segregate your networks.
In addition to external firewalls preventing access from outside your organisation, internal networks must be segregated as well.
When designing your networks, think carefully about which systems need to talk to each other, and on which ports.
For example, do your end user systems all need to access all of the production environments? Or do some of your teams need access to some production systems and only on specific application ports?
This point can be linked with the host protection one above as host firewalls can be used to further prevent unauthorised access to systems. Most servers do not need to connect to all other servers in the same zone as them. Host firewalls can be used to restrict servers from connecting to other servers they do not need to, without requiring an overly complex network design.
Strong network and system segregation will help prevent the spread of any malware or malicious users within the organisation’s IT estate, and thus help ensure data is not removed or changed.
6. If you create code, do solid code assurance.
The OWASP top 10 has changed little for several years (look it up if you are not familiar). Applications consistently have known and well understood vulnerabilities. These same vulnerabilities are consistently exploited by malicious people.
If you create applications, ensure the code goes through rigorous manual and automated code reviews. Ensure the application is thoroughly tested against not just the business’s functional requirements, but also the non-functional requirements from the security team.
Finally, before the application or substantial change goes live ensure it is penetration / security tested by experts.
Performing all these checks does not guarantee your application cannot be hacked, but it will ensure it is not an easy target. Ideally these steps should be key, non-negotiable parts of your organisation’s SDLC (Software / System Development Life Cycle).
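To make the OWASP point concrete: injection has sat at or near the top of the list for years, and the fix is well understood. A sketch of the safe pattern using Python’s built-in sqlite3 module (the table and data are hypothetical):

```python
import sqlite3

# Hypothetical example database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user(name):
    # Parameterised query: the driver treats `name` strictly as data,
    # so input like "' OR '1'='1" cannot alter the SQL statement.
    cur = conn.execute("SELECT name FROM users WHERE name = ?", (name,))
    return cur.fetchall()

print(find_user("alice"))        # [('alice',)]
print(find_user("' OR '1'='1"))  # [] - the injection attempt matches nothing
```

This is exactly the kind of well-understood vulnerability that manual review, automated scanning and penetration testing should all catch, which is why skipping those steps is inexcusable.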
7. Test and Audit.
Once you have the basics in place, you need to ensure they are being successfully applied. This is where the assurance part of the security team’s role comes into play. Whether it is supporting the SDLC processes or scanning systems for outstanding patches, the security team can, and must, assure that the agreed tasks and processes are being adhered to.
This step is critical to the ongoing success of the previous items; the effort and expertise required to complete it should not be underestimated.
Hopefully this has provided some clarity and context to my previous post and made my intent clear. Let me know.
In some following posts I’ll start talking about some of the really fun and intelligent things you can start doing once the basics are in place!