Cloud Security Alliance – Security Guidance v3 released

The Cloud Security Alliance (CSA) has released the long-awaited version 3 of the ‘Security Guidance for Critical Areas of Focus in Cloud Computing’.  This is the first update to the guidance since version 2.1 was released in 2009, and it is a major overhaul that brings the guidance up to date with the new and fast-moving world that is ‘cloud’ computing.

In addition to updating all of the existing domains of the guidance, version 3 adds Domain 14 – Security as a Service (SecaaS). This is the domain I have contributed to extensively, and it has its basis in the white paper whose publication I co-chaired a few months ago.

As an overview, version 3 comprises the following domains in the context of cloud security;

Section I. Cloud Architecture

–          Domain 1: Cloud Computing Architectural Framework

Section II. Governing in the Cloud

–          Domain 2: Governance and Enterprise Risk Management

–          Domain 3: Legal Issues: Contracts and Electronic Discovery

–          Domain 4: Compliance and Audit Management

–          Domain 5: Information Management and Data Security

–          Domain 6: Interoperability and Portability

Section III. Operating in the Cloud

–          Domain 7: Traditional Security, Business Continuity, and Disaster Recovery

–          Domain 8: Data Centre Operations

–          Domain 9: Incident Response

–          Domain 10: Application Security

–          Domain 11: Encryption and Key Management

–          Domain 12: Identity, Entitlement, and Access Management

–          Domain 13: Virtualization

–          Domain 14: Security as a Service

The guidance can be freely downloaded from the CSA website here;

https://cloudsecurityalliance.org/research/initiatives/security-guidance/

It is relatively long, but it covers a lot of what you need to know about cloud security and the things you need to consider if you are planning to move your data to a ‘cloud’ type service.

K

Homomorphic Encryption – Saviour of the cloud? Ready for prime time?

Homomorphic encryption has been around for a while (in fact it has been debated for around 30 years), but most cryptosystems that are homomorphic are only partially homomorphic, which limits their use in enabling real-world distributed systems, including cloud based ones.

I’ll start by briefly describing what the term homomorphic means when used to describe a cryptosystem: a cryptosystem is homomorphic if a mathematical operation can be performed on the encrypted data to produce an encrypted output that, when decrypted, gives the same result as if the operation had been performed on the plaintext.

I’m sure you can see how this removes one of the main barriers to the adoption of cloud computing.  An efficient, proven and thoroughly tested homomorphic encryption system would potentially revolutionise the view of cloud computing security.  Currently it is easy to send data to and from the cloud in a secure, encrypted manner; however, if any computation is to be carried out on this data it has to be unencrypted at some point.  When the data is unencrypted in the cloud, the risk that employees of the cloud provider, and potentially other customers, could access the data becomes a real concern.  It is this risk that is one of the key road blocks to companies moving their data to the cloud.

Additionally, some legal / regulatory rules prevent certain unencrypted data types, such as personally identifiable information (PII), from leaving countries / regions such as the EU.  A system that enabled data to remain encrypted would potentially get around these regulatory issues and allow data to be housed in the cloud (many cloud providers have data centres located in various global locations and cannot guarantee where data will reside.   In fact this is one of the benefits of the cloud – the high level of redundancy and resilience provided by multiple data centres in geographically diverse locations).

Some existing algorithms are partially homomorphic, meaning they are homomorphic with regard to one, or maybe a couple, of operations.  For example, the RSA algorithm is homomorphic with regard to multiplication.
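To make that multiplicative property concrete, here is a tiny Python sketch using toy ‘textbook’ RSA numbers. It is illustrative only – real RSA deployments use large keys and randomised padding, which breaks this simple property in practice;

```python
# Toy demonstration of RSA's multiplicative homomorphism (textbook RSA only).

p, q = 61, 53                        # small example primes
n = p * q                            # public modulus
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (modular inverse, Python 3.8+)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

a, b = 7, 12
c_product = (encrypt(a) * encrypt(b)) % n

# Decrypting the product of the ciphertexts gives the product of the plaintexts.
print(decrypt(c_product))            # 84
assert decrypt(c_product) == (a * b) % n
```

Multiplying the ciphertexts yields a valid encryption of the product of the plaintexts, without the private key ever touching the data – exactly the sort of property a fully homomorphic scheme generalises to arbitrary computation.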

IBM published some research in this area in 2009, proposing a fully homomorphic system; the work is linked to from here;

http://domino.research.ibm.com/comm/research_projects.nsf/pages/security.homoenc.html

Currently, fully homomorphic systems are too new and not yet practical enough to be implemented for production systems.  For any cryptographic algorithm to be recommended it needs considerably more time to be peer reviewed and tested by security and encryption researchers, to allow a reasonable level of assurance that there are no attacks that could be used to decrypt the data.  In terms of practicality, with the currently proposed homomorphic encryption systems the complexity grows enormously as the number of operations you need to perform on the encrypted data increases.  This leads to a massive increase in the computational power required to run the system – a non-trivial increase that will not be solved by Moore’s law anytime in the near future.

So homomorphic encryption has now been proven to be possible, which is a huge step forwards, and the work done by people like Craig Gentry and the teams at IBM and MIT must be hugely applauded.

Microsoft researchers published a paper in May of this year (2011) titled ‘Can Homomorphic Encryption be Practical?’ that can be found here;

http://research.microsoft.com/apps/pubs/default.aspx?id=148825

This provides an overview of a proposed partially homomorphic implementation, along with thoughts on how it could be made fully homomorphic and how its efficiency could be improved.  The page also contains some useful links to cloud and lattice-based cryptography.

However, the reality is that we need several more years for a broader range of cryptographers to examine the cryptosystem and be assured it is secure, and for further work to go into making the system much more efficient.

These are definitely interesting times, and over the next few years I would hope to see homomorphic cryptosystems removing some of today’s key barriers to the adoption of cloud computing services!

K

USAF Predator control systems compromised by malware

Following on from very high profile targeted attacks such as the Stuxnet worm, which was used to target Siemens supervisory control and data acquisition (SCADA) systems such as those used in Iranian nuclear facilities;

http://www.google.co.uk/search?aq=f&gcx=c&sourceid=chrome&ie=UTF-8&q=stuxnet

and the RSA security breach that impacted many businesses earlier this year;

http://blogs.rsa.com/rivner/anatomy-of-an-attack/

it has emerged that some USAF (United States Air Force) computer systems have been infected by malware.

While the reports state that it is likely to be just a keylogger, and not something that is co-opting control of armed military drones, this should be seen as yet another wake-up call – any network-attached systems, or any systems that allow storage devices (e.g. USB drives) to be connected, are vulnerable to attack by malware.  I am sure from reading the previous section you have realised that this means pretty much every computer system.

Details can be found here;

http://nakedsecurity.sophos.com/2011/10/10/malware-compromises-usaf-predator-drone-computer-systems/

One particularly worrying comment from the story is that they are not sure whether the malware has been properly wiped from the systems, and that it keeps coming back.  Best practice is always to do a clean rebuild of any infected machines, especially something as critical as this!

In short, if high profile security vendors and supposedly secure military computers can be successfully attacked and gaps exploited, this should be a wake-up call to anyone who does not yet take the security of their systems and data seriously.

Oh, and if in any doubt – reinstall, don’t keep trying to clean the malware from the system!

K

Security as a Service – Defined Categories of Service 2011 white paper published!

The first officially published work from the recently formed Cloud Security Alliance – Security as a Service (SecaaS) working group is now available.  This is a great first step, as we have identified the key categories of service that can / will make up security as a service.

This document can be found here;

https://cloudsecurityalliance.org/wp-content/uploads/2011/09/SecaaS_V1_0.pdf

I’m personally very proud of this work as I am the co-chair of the Security as a Service working group, which has meant bringing together input from multiple streams of experts working from many global locations and different time zones.  I have also had to arbitrate any disagreements around content and ensure all experts who wanted to participate were able to provide their input.

In addition to coordinating the various inputs and running steering meetings, I also provided input into various categories where extra detail was required and wrote most of one category that wasn’t picked up by the various experts who volunteered to help.

Our next steps are still to be finalised, but they are likely to include;

– Finalising the version of the document that will be put forward towards an ISO standard

– Working on getting SecaaS added as the 14th domain of the official Cloud Security Alliance guidance

– Creating implementation guidance and examples for those looking to implement various SecaaS solutions

Watch this space and / or check in on the Cloud Security Alliance web site for progress updates.

K

NSA releases home network security best practices guide

The NSA has released an excellent guide titled ‘Best practices for keeping your home network secure’.  This covers the obvious things like;

– securing your O/S (Windows and Mac are covered) via patching and using current software etc.

– home network security via wireless encryption, strong passwords and DNS settings.

The guidance then goes further to cover areas including;

– Email best practices

– Social network site use

– Password management

– Travelling with mobile devices.

Overall, while this is unlikely to be new information for those familiar with IT security, it is great guidance for those not working in this area.  I’d highly recommend you share it with your friends and family and help them understand the advice, as it will improve their home and general on-line IT security / safety.

K

A Geek’s Guide to Digital Forensics

I came across this presentation on the GoogleTechTalks YouTube channel – A Geek’s Guide to Digital Forensics;

http://www.youtube.com/user/GoogleTechTalks#p/u/2/rPd-HiEvhhw

This is a very interesting talk that highlights some great open source tools for imaging hard drives and dealing with hard drive errors during imaging.  It also highlights The Sleuth Kit by Brian Carrier, an excellent piece of open source software for analysing file systems, which even has a graphical interface called Autopsy.

Several other tools and their uses are also discussed – check that talk out 🙂
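Not something from the talk itself, but a routine companion step to any imaging work is hashing the acquired image so its integrity can be demonstrated later. A minimal sketch using only the Python standard library (the file name is a placeholder);

```python
import hashlib

def hash_image(path, algorithm="sha256", chunk_size=1024 * 1024):
    """Hash a (potentially very large) disk image in chunks."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    # Record this value alongside the image and re-check it before any analysis.
    print(hash_image("evidence_disk.dd"))
```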

K

Exploit vulnerabilities rather than just report on ‘hypothetical’ issues

While doing some general reading recently I came across an article entitled “Why aren’t you using Metasploit to expose Windows vulnerabilities?”.  This reminded me of something I have discussed with people a few times: the benefits of actually proving and demonstrating how vulnerabilities can be exploited, rather than just relying on metrics from scanners.

Don’t get me wrong, the use of vulnerability / patch scanners is incredibly useful for providing an overall view of the status of an environment;

– Are patches being deployed consistently across the environment in a timely manner?

– Are rules being obeyed around password complexity, who is in the administrators group, and whether machines and users are located in the correct places in the LDAP database etc.?

– Are software and O/S versions and types in line with the requirements / tech stack?

– etc.

The output from these scanners is also useful, and extensively used, in providing compliance / regulatory type report data confirming that an environment is ‘correctly’ maintained.

Where these scans fall short is in two main areas;

1. They do not provide a real picture of the actual risk any of the identified vulnerabilities pose to your organisation, in your configuration, with your policies and rules applied.

2. Due to point 1, they may either not create enough realisation of the risks for senior management to put enough priority / emphasis on remediating them, or they may cause far too much fear due to the many identified vulnerabilities that may or may not actually be exploitable.

In order to provide a quantitative demonstration of how easy (or difficult) it is to exploit identified vulnerabilities, and also to demonstrate to management how these reported vulnerabilities can actually be exploited, using tools such as Core Impact, Canvas or Metasploit in addition to just scanning for vulnerabilities is key.

Tools like Canvas and Core Impact are commercial offerings with relatively high price tags; Metasploit, however, is open source and free to use in both Windows and *nix environments. It even has a GUI!  So there is no excuse for not actually testing some key vulnerabilities identified by your scans, then demonstrating the results to senior management and even other IT staff to increase awareness.

Metasploit can be found here;

http://www.metasploit.com/

where it can be downloaded for free.  Should you wish to contribute to its success there are also paid-for versions.
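As a rough illustration of how scan results and the exploitation step can be tied together, here is a minimal, hypothetical triage sketch. The CSV format and the CVE-to-module mapping are assumptions you would replace with your own scanner’s export and the modules relevant to your own findings;

```python
# Hypothetical triage: shortlist scanner findings that have a known Metasploit
# module, so they can be demonstrated in a controlled test.
import csv

# Example entry only -- populate with modules relevant to your own findings.
CVE_TO_MODULE = {
    "CVE-2008-4250": "exploit/windows/smb/ms08_067_netapi",
}

def shortlist(scan_csv_path):
    """Assumes a CSV export with 'host' and 'cve' columns."""
    candidates = []
    with open(scan_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            module = CVE_TO_MODULE.get(row["cve"].strip().upper())
            if module:
                candidates.append((row["host"], row["cve"], module))
    return candidates

if __name__ == "__main__":
    for host, cve, module in shortlist("scan_results.csv"):
        print(f"{host}: {cve} -> candidate for demonstration with {module}")
```

The shortlist is only a starting point – the value comes from actually running the relevant exploits against representative systems (with permission!) and showing the results.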

The key message here is: don’t stop using the standard patch / vulnerability scans, as these are key to providing a picture of the entire environment and providing assurance of compliance with policies.  However, they should be supplemented by actually exploiting some key vulnerabilities to provide evidence of the actual risk in your environment, rather than just the usual ‘arbitrary code execution’ or similar statement related to the potential vulnerability.  This will put much more weight behind your arguments for improving security.

K

Simplicity

In preparation for doing the Enterprise and Solution Architecture course and exam from the Chartered Institute for IT, I have started reading the book ‘Simple Architectures for Complex Enterprises’ by Roger Sessions, from Microsoft Press.

While this is primarily a book focused on solution and enterprise architecture, the main point it focuses on is the often overlooked one of simplicity.  The basic premise is that the simplest solution that meets the requirements is the best solution.

The vast majority of IT projects, even those that appear relatively straightforward, run over time or over budget, or both.  This is despite rigorous project processes (e.g. SDLC) and well understood architectural frameworks (e.g. Zachman, TOGAF).

The reason for this is that none of the project processes or architectural frameworks directly address complexity.  They do provide much needed rigour around how projects are run, and ensure that architects and the business can talk the same language via the use of agreed frameworks, both of which add great value, but neither of which prevents unnecessary complexity from creeping into solutions.

In addition to both increased cost and time to delivery, overly complex solutions are much harder to;

Maintain – which component is causing the issue when troubleshooting? Will patching one part impact others?

Secure – simple solutions are easy to test and make secure from the outset; complex solutions are likely insecure from the outset and near impossible to fully understand and secure further down the line.

A further post will follow covering some of the techniques outlined in the book around understanding complexity and eliminating it from solutions to even complex problems.

In the meantime, keep it simple and remember: just because your business or the problem you are trying to solve is complex, that does not mean the solution needs to be complicated!

K

PCI-DSS Virtualisation Guidance

In what was obviously a response to my recent blog post stating more detailed guidance would be helpful (yes, I am that influential!), the ‘PCI Security Standards Council Virtualisation Special Interest Group’ has just released the ‘PCI DSS Virtualisation Guidelines’ Information Supplement.

This can be found here;

https://www.pcisecuritystandards.org/documents/Virtualization_InfoSupp_v2.pdf

This is a welcome addition to the PCI-DSS as it makes the requirements for handling card data in a virtual environment much clearer. The use of the recommendations in this document, along with the reference architecture linked to in my previous post, will provide a solid basis for designing a PCI-DSS compliant virtual environment.

The document itself is in three main sections. These comprise;

– ‘Virtualisation Overview’, which outlines the various components of a virtual environment, such as hosts, hypervisor, guests etc., and under what circumstances they come into scope of the PCI-DSS.

– ‘Risks for Virtualised Environments’, which outlines the key risks associated with keeping data safe in a virtual environment, including the increased attack surface of having a hypervisor, multiple functions per system, in-memory data potentially being saved to disk, and guests of different trust levels on the same host, along with procedural issues such as a potential lack of separation of duties.

– ‘Recommendations’; this section is the meat of the document and will be of main interest to most of the audience, as it details the PCI’s recommended actions and best practices to meet the DSS requirements. This is split into four sections;

– General – covering broad topics such as evaluating risk, understanding the standard, restricting physical access, defence in depth, hardening etc.  There is also a recommendation to review other guidance such as that from NIST (National Institute of Standards and Technology), SANS (SysAdmin Audit Network Security) etc. – this is generally good advice for any situation where a solid understanding of how to secure a system is required.

– Recommendations for Mixed Mode Environments –

This is a key section for most businesses, as the reality for most of us is that a mixed mode environment (where guests in scope of PCI-DSS and guests not hosting card data reside on the same hosts and virtual environment via acceptable logical separation) is the best option for gaining the maximum benefits from virtualisation.  This section is rather shorter than expected, with little detail other than many warnings about how difficult true separation can be.  On a brighter note, it does clearly say that as long as separation of PCI-DSS guests and non PCI-DSS guests can be configured (and, I would imagine, audited) then this mode of operating is permitted.  Thus, by separating the virtual networks and segregating the guests into separate resource pools, along with the use of virtual IPS appliances and likely some sort of auditing (e.g. a netflow monitoring tool), it should be very possible to meet the DSS requirements in a mixed mode virtual environment.
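As a rough sketch of how that separation could be checked on an ongoing basis, the following hypothetical example flags any resource pool that mixes in-scope and out-of-scope guests. The inventory export, its format and the field names are all assumptions for illustration, not taken from the guidance or from any particular platform’s API;

```python
# A minimal, hypothetical separation check. It assumes the guest inventory can
# be exported as JSON, e.g. [{"name": "web01", "resource_pool": "pci",
# "pci_scope": true}, ...] -- the format and field names are illustrative only.
import json
from collections import defaultdict

def mixed_scope_pools(inventory_path):
    with open(inventory_path) as f:
        guests = json.load(f)

    scopes_per_pool = defaultdict(set)
    for guest in guests:
        scopes_per_pool[guest["resource_pool"]].add(bool(guest["pci_scope"]))

    # A pool containing both True and False mixes in-scope and out-of-scope guests.
    return [pool for pool, scopes in scopes_per_pool.items() if len(scopes) > 1]

if __name__ == "__main__":
    for pool in mixed_scope_pools("guest_inventory.json"):
        print(f"WARNING: resource pool '{pool}' mixes PCI and non-PCI guests")
```

The same idea could obviously be applied per virtual network or per host rather than per resource pool.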

– Recommendations for Cloud Computing Environments –

This section outlines various cloud scenarios, such as Public / Private / Hybrid, along with the different service offerings such as IaaS (Infrastructure as a Service), PaaS (Platform as a Service) and SaaS (Software as a Service).  Overall it is highlighted that in many cloud scenarios it may not be possible to meet PCI-DSS requirements, due to the complexities around understanding where the data resides at all times, multi tenancy etc.

– Guidance for Assessing Risks in Virtual Environments –

This is a brief section outlining areas to consider when performing a risk assessment; these are fairly standard and include defining the environment and identifying threats and vulnerabilities.

Overall this is a useful step forward for the PCI-DSS as it clearly shows that the PCI are moving with the times and understand that the use of virtual environments can indeed be secure, provided they are well managed, correctly configured and audited.

If you want to make use of virtualisation for the benefits of consolidation, resilience and management etc., and your environment handles card data, this document along with the aforementioned reference architecture should be high on your reading list.

K

PCI-DSS compliance in a virtual environment

Version 2 of the PCI-DSS (Payment Card Industry Data Security Standard), which was released in October of last year (2010), finally added some much needed, if limited, clarification around the use of virtualised environments.

This change / clarification is an addition to section 2.2.1 of the standard, adding the statements;

Note: Where virtualization technologies are in use, implement only one primary function per virtual system component.

And

2.2.1.b If virtualization technologies are used, verify that only one primary function is implemented per virtual system component or device

While this does not clarify how to set up a virtual environment that handles card data in order to meet PCI-DSS, it does at least make it very clear that the use of virtual environments is acceptable and can meet the standard.

This removes the previous confusion around the acceptability of using virtualisation to host environments dealing with card data, which stemmed from the statement in version one of the standard that each server must have only a single function.  By definition the physical hosts in a virtualised environment host multiple guests (the virtual servers) and thus have multiple functions.
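As a purely illustrative sketch (not taken from the standard) of what ‘one primary function per virtual system component’ means in practice, consider a made-up guest inventory in which one guest carries two roles;

```python
# Purely illustrative: each virtual system component (guest) should implement
# only one primary function. The inventory and the function tags are made up
# for the example.
guests = {
    "web01":  ["web server"],
    "db01":   ["database"],
    "util01": ["web server", "database"],   # breaks the one-function-per-guest idea
}

for name, functions in guests.items():
    if len(functions) > 1:
        print(f"{name}: multiple primary functions {functions} -- consider splitting into separate guests")
```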

Despite not having as much detail as many had hoped, this is a great step forward given the ever increasing adoption of virtualisation to reduce costs and make better use of server hardware.

This has also opened the door to the possibility of using cloud based services to provide a PCI-DSS compliant architecture.  During some recent research into virtual architectures that will meet the requirements of PCI-DSS 2, I came across this work from a combination of companies to provide a reference architecture for PCI-DSS compliance in a cloud based scenario;

http://info.hytrust.com/pci_reference_architecture_x1.html

The above links to both a webinar providing an overview of the work undertaken, and a white paper detailing the actual reference architecture.

The architecture design was undertaken by Cisco, VMware, Savvis, Coalfire and HyTrust, and while the solution is understandably made up of the products and services offered by those companies, it clearly outlines a solution that you can adapt for your needs, making use of similar solutions that fit with your company’s tech stack.  As such this is a highly recommended read for anyone involved in designing or auditing solutions that need to be PCI-DSS compliant.

K