The blessing and curse of PCI-DSS

This is a post I have been meaning to write for a while, as I have been pondering the benefits vs. challenges of various standards / legislation.  I’m not thinking about the challenges of implementing PCI-DSS (Payment Card Industry Data Security Standard), more the challenges of working in environments where compliance trumps security.  As per the title, this post will focus on PCI-DSS, but I think most of the issues will likely apply to various standards / regulations that are subject to compliance audits of some sort.

On the positive (blessing) side PCI-DSS is mostly a good standard, enforcing things like encryption in transit over public networks, separation of duties, minimising access to card data etc.  It has forced some level of security practice onto companies that may previously have had relatively lax controls in place.  The standard has also considerably raised the profile of security / meeting security requirements within many organisations.

On the negative (curse) side PCI-DSS is seen by many organisations as the be-all and end-all of security, despite the fact that it is the bare minimum you have to achieve in order to be permitted to handle / process card data.  In addition it focuses almost solely on card data, ignoring concerns around things like PII (Personally Identifiable Information).  This leads to a focus on ‘box-ticking’ compliance, rather than designing secure systems from the ground up, which would by definition be compliant with most (any?) sensible standards.

With the movement towards a more continuous monitoring style proposed for the latest release of PCI-DSS, the focus on obtaining compliance yearly may be something we are moving away from.  However, this will do little to address companies’ attitudes towards broader security, or the belief that obtaining and maintaining PCI-DSS compliance means systems are completely secure.

On balance I think standards / regulations like PCI-DSS are a good thing as they force companies to at least achieve some minimal levels of security.  The challenge for security professionals is to get project teams and the wider business to accept that these standards are the bare minimums.  Considerably more secure designs / solutions need to be implemented if we want to actually meet our duty of care to our customers whose data we hold and process.

What are your thoughts?

How successful have you been in moving to security being ‘front and centre’, with compliance with regulations being a by-product of this, rather than compliance itself being the focus?




13 Security Myths Busted.. My thoughts.

I was recently sent a link to an article covering what were described as ’13 security myths – busted’ and asked my opinion.  As it was a fairly light and interesting read, I thought I would share the article and my thoughts;

The original article can be found here;

Have a read of the myths and why they think they are myths, read my thoughts below, and it would be great to hear your thoughts.

1. AV – Possibly not super efficient, but I think still necessary – they kind of mix apples and oranges with the targeted attack comment, as AV is not designed for that, but it still prevents the vast majority of malware and general attacks.  Possibly in an environment where literally no one runs with admin privileges and there is strong whitelisting you could do without AV, but generally I’d say it is still relevant and required.

2. This one is hard to know as there is so much FUD around.  It is clear in many circumstances (Stuxnet, Chinese APT, US government espionage etc.) that governments are investing huge sums of money and employing extremely bright people to attack and defend in cyber land.  I suspect much will never be known, as the NSA / MI6 / <insert secret government money pit here> are by definition very secretive.  Remember all the speculation around the NSA’s ability to crack encryption in the past..

3. Totally agree – just look at most businesses and the trouble they have getting control of authentication via AD / IAM.  However, many are moving in the right direction, so maybe soon we’ll have everything in IAM and / or AD..

4. I think this one proves itself incorrect in the text – risk management is needed; you just need to work on understanding your adversaries and the actual risks you face, which includes understanding their motivations and the value they place on your data and IP.

5. This I totally agree with.  I have already highlighted that I don’t really like the fact we as an industry use the term ‘best practice’ all over our standards and policy documents etc. – who defines what it is?  Is it best in any specific environment, with its particular support skill sets and technology stack etc.?

6. Half agree – they are a fact of life; however, you can have effective responses and strategies around privilege control and application controls etc. to massively mitigate the risks these pose.

7. I can’t comment on this one directly, but most national infrastructures are inadequately protected and tend to rely on old legacy systems for many of their functions, so this is probably true in the UK for much supporting infrastructure as well.

8. Completely agree with this.  Compliance is a useful checklist, but compliance with standards should be a by-product of good secure design and processes, not something we strive for as a product in itself.  It provides a driver, but is very much the wrong focus if you want to be secure rather than just compliant.

9. Agree – CISO may own security policy and strategy etc., but security is everyone’s problem and everyone should be accountable for performing their duties with security and security policies in mind.  I’m a big fan of security awareness training as a regular thing to help educate people and keep security at the forefront of the way we do business.

10. This has likely been true, in the same way as Mac / Linux are ‘safer’ than Windows, as mobile has not been the focus of as much malicious attention and has not been carrying as much functionality and valuable data.  This is rapidly shifting though, as we rely more and more on mobile devices for everything from banking to shopping to actual business.  So I think this one is rapidly becoming, if not already, a myth.

11. Agree – you can likely never be 100% secure if you want to have a life or business online.  I think it was an American who coined ‘eternal vigilance is the price of liberty’.  We should work to be secure, but freedom, both individually and as a business, is too important and hard won to give up.  Obviously some personal freedoms to do whatever you want with corporate devices have to be given up, but I think my point stands as a general concept.  As the guy in the article says (and I do above), work to understand your adversaries, their motivations and tools.

12. Agree with this one also – continuous monitoring, trending and learning are key to understanding and preventing, or at least capturing, today’s advanced long-term threats such as APTs.

13. I agree with this final one as well, and have actually blogged about this before.  We live in an ‘assume you have been or will be breached’ world.  Put the detective measures and controls in place to ensure you rapidly detect and minimise the damage from any breach.  Read last year’s Verizon data breach report..

It would be great to hear your thoughts on this light article.


Exploit vulnerabilities rather than just report on ‘hypothetical’ issues

While doing some general reading recently I came across an article entitled “Why aren’t you using Metasploit to expose Windows vulnerabilities?”.  This reminded me of something I have discussed with people a few times: the benefits of actually proving and demonstrating how vulnerabilities can be exploited, rather than just relying on metrics from scanners.

Don’t get me wrong, vulnerability / patch scanners are incredibly useful for providing an overall view of the status of an environment;

– Are patches being deployed consistently across the environment in a timely manner?

– Are rules being obeyed around password complexity, who is in the administrators group, and whether machines and users are located in the correct places in the LDAP database etc.?

– Are software and O/S versions and types in line with the requirements / tech stack?

– etc..
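To make the idea concrete, here is a minimal Python sketch of the kind of environment-wide compliance view these scanners give you.  The CSV export format, field names and policy thresholds are all invented for illustration – real scanners each have their own export formats, so treat this purely as a sketch of the idea:

```python
import csv
import io

# Hypothetical scanner export -- the columns and data are made up
# for illustration, not taken from any real product.
SAMPLE_EXPORT = """host,missing_patches,days_since_last_patch,local_admins
web01,0,12,2
web02,3,95,2
db01,1,40,7
"""

# Example policy thresholds -- these would come from your own standards.
MAX_PATCH_AGE_DAYS = 30
MAX_LOCAL_ADMINS = 3

def summarise(export_csv):
    """Return (host, reasons) pairs for hosts breaching policy."""
    breaches = []
    for row in csv.DictReader(io.StringIO(export_csv)):
        reasons = []
        if int(row["days_since_last_patch"]) > MAX_PATCH_AGE_DAYS:
            reasons.append("patching overdue")
        if int(row["local_admins"]) > MAX_LOCAL_ADMINS:
            reasons.append("too many local admins")
        if reasons:
            breaches.append((row["host"], reasons))
    return breaches

for host, reasons in summarise(SAMPLE_EXPORT):
    print(f"{host}: {', '.join(reasons)}")
```

This is exactly the sort of roll-up that is useful for compliance reporting – but note that it says nothing about whether any of these hosts could actually be compromised, which is the gap discussed below.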

The output from these scanners is also useful and extensively used in providing compliance / regulatory type report data confirming that an environment is ‘correctly’ maintained.

Where these scans fall short is in two main areas;

1. They do not provide a real picture of the actual risk any of the identified vulnerabilities pose to your organisation, in your configuration, with your policies and rules applied.

2. Due to point 1 they may either not create enough realisation of the risks for senior management to put enough priority / emphasis on remediating them, or they may cause far too much fear due to the many vulnerabilities identified that may or may not be exploitable.

In order to provide a concrete demonstration of how easy (or difficult) it is to exploit identified vulnerabilities, and also demonstrate to management how these reported vulnerabilities can actually be exploited, using tools such as Core Impact, Canvas or Metasploit in addition to just scanning for vulnerabilities is key.
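One simple way to fold those exploitation results back into your reporting is to re-rank scan findings so that issues you have actually demonstrated as exploitable float to the top, rather than sorting on raw severity scores alone.  A minimal sketch of the idea in Python – the findings, scores and the set of ‘verified exploitable’ CVEs below are invented for illustration:

```python
# Invented example findings -- the CVE IDs use the real format, but the
# hosts, scores and exploitability results are made up for illustration.
scan_findings = [
    {"host": "web02", "cve": "CVE-2017-0144", "cvss": 8.1},
    {"host": "db01",  "cve": "CVE-2014-6271", "cvss": 9.8},
    {"host": "web01", "cve": "CVE-2016-2183", "cvss": 7.5},
]

# CVEs for which a working exploit was actually demonstrated in *your*
# environment, e.g. via a Metasploit module that gained a session.
verified_exploitable = {"CVE-2017-0144", "CVE-2014-6271"}

def prioritise(findings, verified):
    """Sort findings so demonstrably exploitable issues come first,
    then by CVSS score (highest first) within each group."""
    return sorted(
        findings,
        key=lambda f: (f["cve"] not in verified, -f["cvss"]),
    )

for f in prioritise(scan_findings, verified_exploitable):
    tag = "EXPLOITED" if f["cve"] in verified_exploitable else "unproven"
    print(f'{f["host"]}: {f["cve"]} (CVSS {f["cvss"]}) [{tag}]')
```

The point of a report like this is the conversation it enables: “we actually got a shell on this box” carries far more weight with senior management than a severity number from a scanner.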

Tools like Canvas and Core Impact are commercial offerings with relatively high price tags; Metasploit, however, is open source and free to use in both Windows and *nix environments.  It even has a GUI!  So there is no excuse for not actually testing some key vulnerabilities identified by your scans, then demonstrating the results to senior management and even other IT staff to increase awareness.

Metasploit can be found here;

Where it can be downloaded for free.  Should you wish to contribute to its success there are also paid-for versions.

The key message here is: don’t stop using the standard patch / vulnerability scans, as these are key to providing a picture of the entire environment and providing assurance of compliance to policies.  However, these should be supplemented with actually exploiting some key vulnerabilities, to provide evidence of the actual risk in your environment rather than just the usual ‘arbitrary code execution’ or similar statement related to the potential vulnerability.  This will put much more weight behind your arguments for improving security.