Simplicity

In preparation for taking the Enterprise and Solution Architecture course and exam from the Chartered Institute for IT, I have started reading the book ‘Simple Architectures for Complex Enterprises’ by Roger Sessions, from Microsoft Press.

While this is primarily a book about solution and enterprise architecture, its main focus is the often overlooked principle of simplicity.  The basic premise is that the simplest solution that meets the requirements is the best solution.

The vast majority of IT projects, even those that appear relatively straightforward, run over time or over budget, or both.  This is despite rigorous project processes (e.g. SDLC) and well understood architectural frameworks (e.g. Zachman, TOGAF).

The reason for this is that none of the project processes or architectural frameworks directly address complexity.  They do provide much needed rigour around how projects are run, and ensure that architects and the business can talk the same language via the use of agreed frameworks, both of which add great value, but neither of which prevents unnecessary complexity from creeping into solutions.

In addition to increased cost and time to delivery, overly complex solutions are much harder to;

Maintain – which component is causing the issue when troubleshooting? Will patching one part impact others?

Secure – Simple solutions are easy to test and make secure from the outset; complex solutions are likely insecure from the outset and near impossible to fully understand and secure further down the line.

A further post will follow covering some of the techniques outlined in the book around understanding complexity and eliminating it from solutions to even complex problems.

In the meantime, keep it simple, and remember: just because your business or the problem you are trying to solve is complex, that does not mean the solution needs to be complicated!

K

PCI-DSS Virtualisation Guidance

In what was obviously a response to my recent blog post stating more detailed guidance would be helpful (yes I am that influential!) the ‘PCI Security Standards Council Virtualisation Special Interest Group’ have just released the ‘PCI DSS Virtualisation Guidelines’ Information Supplement.

This can be found here;

https://www.pcisecuritystandards.org/documents/Virtualization_InfoSupp_v2.pdf

This is a welcome addition to the PCI-DSS as it makes the requirements for handling card data in a virtual environment much clearer.  The use of the recommendations in this document, along with the reference architecture linked to in my previous post, will provide a solid basis for designing a PCI-DSS compliant virtual environment.

The document itself is in three main sections. These comprise;

– ‘Virtualisation Overview’, which outlines the various components of a virtual environment, such as hosts, hypervisor, guests etc., and under what circumstances they fall within scope of the PCI-DSS.

– ‘Risks for Virtualised Environments’, which outlines the key risks associated with keeping data safe in a virtual environment, including the increased attack surface of having a hypervisor, multiple functions per system, in-memory data potentially being saved to disk, guests of different trust levels on the same host etc., along with procedural issues such as a potential lack of separation of duties.

– ‘Recommendations’; this section is the meat of the document and will be of most interest to the majority of the audience, as it details the PCI’s recommended actions and best practices to meet the DSS requirements. This is split into four sections;

– General –

Covering broad topics such as evaluating risk, understanding the standard, restricting physical access, defence in depth, hardening etc.  There is also a recommendation to review other guidance, such as that from NIST (the National Institute of Standards and Technology) and SANS (SysAdmin, Audit, Network, Security) – this is generally good advice for any situation where a solid understanding of how to secure a system is required.

– Recommendations for Mixed Mode Environments –

This is a key section for most businesses, as the reality for most of us is that a mixed mode environment (where guests in scope of the PCI-DSS and guests not hosting card data reside on the same hosts and virtual environment, separated by acceptable logical controls) is the best option in order to gain the maximum benefit from virtualisation.  This section is rather shorter than expected, with little detail other than many warnings about how difficult true separation can be.  On a brighter note, it does clearly say that as long as separation of PCI-DSS guests and non PCI-DSS guests can be configured (and, I would imagine, audited) then this mode of operating is permitted.  Thus, by separating the virtual networks and segregating the guests into separate resource pools, along with the use of virtual IPS appliances and likely some sort of auditing (e.g. a netflow monitoring tool), it should be very possible to meet the DSS requirements in a mixed mode virtual environment.
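To make the segregation idea concrete, the kind of audit described above could be sketched as a simple script; the inventory, pool and VLAN names here are entirely hypothetical, and a real check would pull this data from the virtualisation platform’s API:

```python
# Hypothetical guest inventory: placement of each virtual machine.
inventory = {
    "payments-db":  {"pool": "pci-pool",     "network": "pci-vlan"},
    "web-frontend": {"pool": "general-pool", "network": "dmz-vlan"},
    "card-gateway": {"pool": "pci-pool",     "network": "pci-vlan"},
}

# Guests known to be in scope of the PCI-DSS.
pci_guests = {"payments-db", "card-gateway"}

def segregation_violations(inventory, pci_guests,
                           pci_pool="pci-pool", pci_network="pci-vlan"):
    """Report guests that break the in-scope / out-of-scope separation."""
    violations = []
    for guest, placement in inventory.items():
        in_pci_zone = (placement["pool"] == pci_pool
                       or placement["network"] == pci_network)
        if guest in pci_guests and not in_pci_zone:
            violations.append(f"{guest}: in scope but outside the PCI zone")
        if guest not in pci_guests and in_pci_zone:
            violations.append(f"{guest}: out of scope but inside the PCI zone")
    return violations

print(segregation_violations(inventory, pci_guests))  # → []
```

Running something like this regularly (alongside the netflow monitoring mentioned above) would give an auditable record that the logical separation is actually holding.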

– Recommendations for Cloud Computing Environments –

This section outlines various cloud scenarios such as Public / Private / Hybrid, along with the different service offerings such as IaaS (Infrastructure as a Service), PaaS (Platform as a Service) and SaaS (Software as a Service).  Overall it is highlighted that in many cloud scenarios it may not be possible to meet the PCI-DSS requirements, due to the complexities around understanding where the data resides at all times, multi-tenancy etc.

– Guidance for Assessing Risks in Virtual Environments –

This is a brief section outlining areas to consider when performing a risk assessment; these are fairly standard and include defining the environment and identifying threats and vulnerabilities.

Overall this is a useful step forward for the PCI-DSS as it clearly shows that the PCI are moving with the times and understanding that the use of virtual environments can indeed be secure providing it is well managed, correctly configured and audited.

If you want to make use of virtualisation for the benefits of consolidation, resilience, management etc., and your environment handles card data, then this document, along with the aforementioned reference architecture, should be high on your reading list.

K

 

PCI-DSS compliance in a virtual environment

Version 2 of the PCI-DSS (Payment Card Industry Data Security Standard), which was released in October of last year (2010), finally added some much needed, if limited, clarification around the use of virtualised environments.

This change / clarification is an addition to section 2.2.1 of the standard, adding the statements;

Note: Where virtualization technologies are in use, implement only one primary function per virtual system component.

And

2.2.1.b If virtualization technologies are used, verify that only one primary function is implemented per virtual system component or device

While this does not clarify how to set up a virtual environment that handles card data to meet PCI-DSS it does at least make it very clear that the use of virtual environments is acceptable and can meet the standard.

This removes the previous confusion around the acceptability of using virtualisation to host environments dealing with card data, which stemmed from the statement in version one of the standard that each server must have only a single function.  By definition the physical hosts in a virtualised environment host multiple guests (the virtual servers) and thus have multiple functions.
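The new 2.2.1.b check quoted above lends itself to a simple audit script; the component inventory and function tags below are purely hypothetical, for illustration:

```python
# Hypothetical mapping of virtual system components to their primary functions.
components = {
    "vm-web-01":  ["web server"],
    "vm-db-01":   ["database"],
    "vm-util-01": ["dns", "mail relay"],  # two primary functions: fails 2.2.1.b
}

def non_compliant_components(components):
    """Return components that do not implement exactly one primary function."""
    return [name for name, functions in components.items()
            if len(functions) != 1]

print(non_compliant_components(components))  # → ['vm-util-01']
```

The point being that the requirement now applies per virtual system component (guest), not per physical host, which is what makes virtualisation workable under the standard.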

Despite not having as much detail as many had hoped this is a great step forward given the ever increasing adoption of virtualisation to reduce costs and make better use of server hardware.

This has also opened the door to the possibility of using cloud based services to provide a PCI-DSS compliant architecture.  During some recent research into virtual architectures that will meet the requirements of PCI-DSS 2, I came across this work from a combination of companies to provide a reference architecture for PCI-DSS compliance in a cloud based scenario;

http://info.hytrust.com/pci_reference_architecture_x1.html

The above links to both a webinar providing an overview of the work undertaken, and a white paper detailing the actual reference architecture.

The architecture design was undertaken by Cisco, VMware, Savvis, Coalfire and HyTrust, and while the solution is understandably made up of the products and services offered by those companies, it clearly outlines a solution that you can adapt for your needs, making use of similar products that fit with your company’s tech stack.  As such this is a highly recommended read for anyone involved in designing or auditing solutions that need to be PCI-DSS compliant.

K

Security as a Service – Category and Threat Definitions

We are currently in phase one of producing the Security as a Service guidance documentation;

–          Agreeing and documenting categories of service and their definitions

–          Agreeing and documenting categories of threats and their definitions

So far the top categories of service are;

    1. IAM
    2. DLP
    3. Secure Web Gateway
    4. Vulnerability Assessments
    5. Pen Testing
    6. Intrusion Detection
    7. Encryption
    8. Log Management

With several further categories in the mix.  We will be looking to consolidate the above categories, and the others identified, into sensible, easy to understand groupings.  For example, it is likely that ‘vulnerability assessment’ and ‘pen testing’ will become a single category.

The top categories of threat identified are currently;

    1. Data Loss / Leakage
    2. Traffic Hijacking
    3. Unauthorized Access
    4. Denial of Service
    5. Application Vulnerabilities

With about forty further ideas being assessed in the same way as for categories of service.

Should you have any ideas please do let me know either by posting a comment on this blog or by mailing me on LinkedIn, any assistance is greatly welcomed!

K

 

Cloud Security as a Service RSA conference presentation

An overview of the Cloud Security as a Service (SecaaS) working group goals, outputs and proposed timeline was presented at the RSA conference on the 14th of February.  This has been recorded for posterity and uploaded to YouTube.  The presentation can be found here;

http://www.youtube.com/watch?v=fzejQuSR_xU

This gives a great update on one of the things I’ll be working on during the next few months.  Check the video out, feel free to ask me any questions you have, and of course, if interested, get involved and provide feedback via the surveys mentioned in the presentation.

K

Cloud Security Alliance – Security as a Service

For those interested in cloud security options, I am currently on the steering committee for the Security as a Service (SecaaS) working group.  In this instance I mean how cloud computing can be used to secure everything, including both cloud and non-cloud based IT, rather than how to secure cloud computing (paraphrased from Jim Reavis).

If you are not familiar with the Cloud Security Alliance I suggest you check out their site, it is a great resource for all things cloud security related and can be found here;

http://www.cloudsecurityalliance.org/

The purpose of the specific SecaaS working group is to;

 – Identify consensus definitions of what Security as a Service means

 – Categorise the different types of Security as a Service

 – Provide guidance to organisations on reasonable implementation practices

The site specific to the SecaaS work can be found here;

http://www.cloudsecurityalliance.org/secaas.html

Proposed timelines for the work we produce are for;

 – Categories of service to be defined by late April.

 – Draft SecaaS Guidance, mid-May.

 – SME Guide, mid-July.

 – Final Draft SecaaS Guidance, mid-September.

This should be a great piece of work so I will keep you updated with our progress.

K

Cloud; Barriers to adoption

My second post relating to cloud computing will focus, at a high level, on what currently seem to be the major barriers to wider adoption of cloud computing by businesses.

Future posts will likely go into more detail around technical threats such as side channel attacks (e.g. trying to connect to the target guest server from another guest known to be on the same host) and cartography (“mapping” the target environment by methods such as traffic sniffing and analysis), but this one will focus on providing a high level overview of the risks and fears around moving to the ‘cloud’.

It is already clear that in many instances the elasticity (the ability to scale up and down on demand), resilience and cost of cloud services versus hosting them internally can offer clear benefits to businesses.  So why then are many businesses reluctant to move completely, or even partially, into the cloud?

Outside of any general resistance to change, the main concerns are around security and regulatory requirements.

When infrastructure and applications are hosted internally you intrinsically feel that your data, and that of your customers, is safer.  Outside of potential ‘insider’ threats, data on your servers in your server room is inside your company’s perimeter, no matter how porous this may be, protected by your firewall(s), AV (Anti-Virus) and DLP (Data Leakage Protection) tools, trusted staff and company policies.  Even when the data leaves site it is likely on managed, and hopefully encrypted, tapes, or sent via a managed, and hopefully encrypted, network link to a DR / BCP site.

Now when you move to using the cloud in some way, your systems and data are hosted elsewhere, potentially moving across multiple physical servers or even datacentres outside of your control.  This movement, along with the environment being shared by other companies (e.g. multiple businesses may have guests on the same physical host), is the primary driver of fear around the security of systems in the cloud.  Using the cloud also obviously shares various concerns with other forms of hosting / co-location around third party access to data etc.

Hand in hand with security are regulatory / compliance concerns that also stem from the above features of using the cloud;

–          Who can audit the systems and overall cloud?

–          Does the data move across state boundaries (e.g. does it leave the UK or the EU?)

–          Who could potentially access the data?

–          What happens in a disaster recovery scenario?

–          How can you move to another provider? (Vendor lock-in concerns)

–          How is the data deleted from the cloud? (Data retention / incomplete deletion concerns)

Various measures exist to mitigate the risks; these include –

–          Procedural; ensuring due diligence is carried out prior to engaging the vendor, and that contracts are in place to ensure adherence to legal / regulatory requirements.

–          Security checks; regular penetration tests and other security checks of the vendor’s systems and facilities should be carried out, and any issues identified remediated within agreed time frames.

–          Encryption; ensure all sensitive data (ideally all data if possible) is encrypted in transit and at rest – this prevents prying and mitigates the risk of data not being deleted.

–          Authentication; ensure all systems in the cloud utilise strong authentication methods to prevent unauthorised access.

The ENISA (European Network and Information Security Agency) report titled ‘Cloud Computing Security Risk Assessment’ neatly sums up the benefits of cloud and the security concerns;

The key conclusion of this paper is that the cloud’s economies of scale and flexibility are both a friend and a foe from a security point of view. The massive concentrations of resources and data present a more attractive target to attackers, but cloud-based defences can be more robust, scalable and cost-effective.

K

Real security – Safety vs. Liberty

Reading Bruce Schneier’s Crypto-Gram from December 2010, I found it echoes conversations I have had many times: how much of the extra checks and surveillance we go through at airports etc. actually improves our safety, and how much is for appearance, to make us feel like governments are taking action?

Read the article here:

http://www.schneier.com/crypto-gram-1012.html

These same sentiments can and should (must) be applied to IT security in the workplace as well.  Too often it is easy to be swayed by the hype of the latest products and fear of risks that are in reality extremely unlikely to actually occur.  Rational security and a clear understanding of the actual risk should be the drivers for any security requirements.

In a given scenario the cost of implementing the security measure (technology and process costs) should not be greater than the likelihood of issue X occurring (e.g. once in 10 years) multiplied by the total cost if the issue does occur (lost business, reputational damage etc.).
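This rule of thumb is essentially the classic annualised loss expectancy (ALE) calculation; the figures below are purely illustrative:

```python
def annualised_loss_expectancy(occurrences_per_year, cost_per_occurrence):
    """Expected cost of an issue per year (ALE)."""
    return occurrences_per_year * cost_per_occurrence

# Illustrative only: issue X occurs roughly once in 10 years and costs
# £200,000 per occurrence (lost business, reputational damage etc.).
ale = annualised_loss_expectancy(1 / 10, 200_000)
print(ale)  # → 20000.0

# A control costing £50,000 per year is hard to justify against this risk,
# as it costs more than the expected annual loss.
annual_control_cost = 50_000
print(annual_control_cost > ale)  # → True
```

Of course the hard part in practice is estimating the likelihood and impact honestly, which is exactly where the hype mentioned below distorts things.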

This situation is not helped by the security industry itself; it must be remembered that the companies selling IT security products and services are in the business of selling those products and services!  In order to do this it is in their interests to hype the risks and generate a culture of fear.

Of course I am in no way suggesting that there are not a myriad of threats from viruses / worms / trojans etc. to organised crime, botnets and of course the insider threat.  But these should be assessed in a balanced and rational manner that seeks to understand the risk to the actual system and data that is being protected.

This brings me back around to my favourite topic (read: soapbox); requirements and architecture / design.  I firmly believe that making the right design choices early in a system’s life-cycle will minimise security risks and also minimise the challenges associated with securing the system further down the line.  This is one of the main reasons I moved into architecture from working purely in IT security, as so many of the issues we solve in security every day could be resolved / designed out with proper consideration at the design phase of implementing a system / solution.

K