ISSAP – Information Systems Security Architecture Professional

So, I recently received confirmation from ISC2 (the International Information Systems Security Certification Consortium) that I passed the ISSAP exam.  This is a security architecture concentration taken in addition to the CISSP (Certified Information Systems Security Professional) certification.

While I believe this should be a worthwhile addition to my CISSP, and of course my CV, as well as helping progress my current role, I felt I should write a post about my preparation for the exam.

As with the CISSP, the best way to be prepared is to have a solid grounding in the subject matter – e.g. IT security and technical / solutions architecture.  Indeed, several years of industry experience is a prerequisite for obtaining these certifications.

Also as with the CISSP, I chose to cover the bulk of the revision using the ISC2 recommended course text.  For the CISSP I used the well regarded Shon Harris ‘CISSP All-in-One Exam Guide’, which was well written and very comprehensive.

For the ISSAP I used the ISC2 Official Study Guide to the CISSP-ISSAP.  Currently this is the only book written specifically for the ISSAP exam that claims to cover all aspects of it.  Personally I found the book very badly written and hard to read.  The first chapter must have used the phrase ‘Confidentiality, Integrity, Availability’ in almost every sentence; yes, we all know that the CIA triad is important and is what we are aiming for, but there is no need to repeat it so often.

Other sections of the book only skimmed over areas that were quite heavily covered in the exam.

In short, if you did not already have a very solid grounding and experience in the areas covered by the exam, this official guide would be nowhere near enough to pass.  Obviously ISC2 may argue that you are supposed to have industry experience, but that experience does not necessarily include all the areas covered in the exam, such as specific components of the Common Body of Knowledge or other specific standards.

If you are a CISSP involved in designing secure architectures, then this certainly seems like a worthwhile certification to go for.  I would advise some supplementary reading covering the Common Body of Knowledge and something like ‘Enterprise Security Architecture’, along with, of course, a solid background in both security and architecture.

As an aside, I am a firm believer that study and / or involvement in IT related work such as creating white papers, contributing to open source etc. is not only a great way to improve your skills and knowledge, but also essential to show current and future employers that you are genuinely passionate about what you do, rather than it just being a job.

K

Simplicity

In preparation for the Enterprise and Solution Architecture course and exam from the Chartered Institute for IT, I have started reading the book ‘Simple Architectures for Complex Enterprises’ by Roger Sessions, from Microsoft Press.

While this is primarily a book focused on solution and enterprise architecture, its main point is the often overlooked one of simplicity.  The basic premise is that the simplest solution that meets the requirements is the best solution.

The vast majority of IT projects, even those that appear relatively straightforward, run over time, over budget, or both.  This is despite rigorous project processes (e.g. the SDLC) and well understood architectural frameworks (e.g. Zachman, TOGAF).

The reason for this is that none of these project processes or architectural frameworks directly addresses complexity.  They do provide much needed rigour around how projects are run, and they ensure that architects and the business can talk the same language via the use of agreed frameworks.  Both add great value, but neither prevents unnecessary complexity from creeping into solutions.

In addition to increased cost and time to delivery, overly complex solutions are much harder to:

Maintain – which component is causing the issue when troubleshooting? Will patching one part impact others?

Secure – simple solutions are easy to test and make secure from the outset; complex solutions are likely insecure from the outset and near impossible to fully understand and secure further down the line.

A further post will follow covering some of the techniques the book outlines for understanding complexity and eliminating it from solutions, even to complex problems.

In the meantime, keep it simple, and remember: just because your business or the problem you are trying to solve is complex, that does not mean the solution needs to be complicated!

K

PCI-DSS Virtualisation Guidance

In what was obviously a response to my recent blog post stating more detailed guidance would be helpful (yes I am that influential!) the ‘PCI Security Standards Council Virtualisation Special Interest Group’ have just released the ‘PCI DSS Virtualisation Guidelines’ Information Supplement.

This can be found here:

https://www.pcisecuritystandards.org/documents/Virtualization_InfoSupp_v2.pdf

This is a welcome addition to the PCI-DSS as it makes the requirements for handling card data in a virtual environment much clearer.  The use of the recommendations in this document, along with the reference architecture linked to in my previous post, will provide a solid basis for designing a PCI-DSS compliant virtual environment.

The document itself is in three main sections. These comprise:

– ‘Virtualisation Overview’, which outlines the various components of a virtual environment, such as hosts, hypervisor, guests etc., and under what circumstances they come into scope of the PCI-DSS.

– ‘Risks for Virtualised Environments’, which outlines the key risks associated with keeping data safe in a virtual environment, including the increased attack surface of having a hypervisor, multiple functions per system, in-memory data potentially being saved to disk, and guests of different trust levels on the same host, along with procedural issues such as a potential lack of separation of duties.

– ‘Recommendations’; this section is the meat of the document and will be of main interest to most of the audience, as it details the Council’s recommended actions and best practices for meeting the DSS requirements. It is split into four sections:

– General – covering broad topics such as evaluating risk, understanding the standard, restricting physical access, defence in depth, hardening etc.  There is also a recommendation to review other guidance such as that from NIST (the National Institute of Standards and Technology), SANS (SysAdmin, Audit, Network, Security) etc. – this is generally good advice for any situation where a solid understanding of how to secure a system is required.

– Recommendations for Mixed Mode Environments –

This is a key section for most businesses, as the reality for most of us is that a mixed mode environment (where guests in scope of PCI-DSS and guests not hosting card data reside on the same hosts and virtual environment, with acceptable logical separation) is the best option for gaining the maximum benefits from virtualisation.  The section is rather shorter than expected, with little detail beyond many warnings about how difficult true separation can be.  On a brighter note, it does clearly state that as long as separation of PCI-DSS guests from non-PCI-DSS guests can be configured, and I would imagine audited, then this mode of operating is permitted.  Thus, by separating the virtual networks and segregating the guests into separate resource pools, along with the use of virtual IPS appliances and likely some form of auditing (e.g. a NetFlow monitoring tool), it should be very possible to meet the DSS requirements in a mixed mode virtual environment.
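To make the auditing side of this concrete, below is a minimal Python sketch of the kind of cross-zone check a NetFlow-style monitoring tool performs.  The zone subnets and flow records are entirely hypothetical; this illustrates the idea and is not taken from the PCI guidance itself.

import ipaddress

# Hypothetical zones - in practice these would reflect your separated
# virtual networks / resource pools, not hard-coded values.
PCI_ZONE = ipaddress.ip_network("10.10.0.0/24")      # guests in PCI-DSS scope
NON_PCI_ZONE = ipaddress.ip_network("10.20.0.0/24")  # out-of-scope guests

def zone(ip):
    addr = ipaddress.ip_address(ip)
    if addr in PCI_ZONE:
        return "PCI"
    if addr in NON_PCI_ZONE:
        return "non-PCI"
    return "unknown"

# Example flow records (source, destination) as exported by a collector.
flows = [
    ("10.10.0.5", "10.10.0.9"),  # PCI to PCI - expected
    ("10.20.0.7", "10.10.0.5"),  # crosses zones - must traverse the virtual IPS
]

for src, dst in flows:
    if {zone(src), zone(dst)} == {"PCI", "non-PCI"}:
        print(f"Cross-zone flow {src} -> {dst}: verify it passed the virtual IPS")

A real tool would of course consume live flow exports and alert rather than print, but the principle – proving that in-scope and out-of-scope guests only ever communicate via controlled paths – is the same.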

– Recommendations for Cloud Computing Environments –

This section outlines various cloud scenarios, such as public / private / hybrid, along with the different service offerings such as IaaS (Infrastructure as a Service), PaaS (Platform as a Service) and SaaS (Software as a Service).  Overall it highlights that in many cloud scenarios it may not be possible to meet PCI-DSS requirements, due to the complexities around understanding where the data resides at all times, multi-tenancy etc.

– Guidance for Assessing Risks in Virtual Environments –

This is a brief section outlining areas to consider when performing a risk assessment; these are fairly standard and include defining the environment and identifying threats and vulnerabilities.

Overall this is a useful step forward for the PCI-DSS, as it clearly shows that the PCI Council is moving with the times and understands that virtual environments can indeed be secure, providing they are well managed, correctly configured and audited.

If you want to make use of virtualisation for the benefits of consolidation, resilience, management etc., and your environment handles card data, this document, along with the aforementioned reference architecture, should be high on your reading list.

K

 

PCI-DSS compliance in a virtual environment

Version 2 of the PCI-DSS (Payment Card Industry Data Security Standard), released in October of last year (2010), finally added some much needed, if limited, clarification around the use of virtualised environments.

This change / clarification is an addition to section 2.2.1 of the standard, adding the statements:

Note: Where virtualization technologies are in use, implement only one primary function per virtual system component.

And

2.2.1.b If virtualization technologies are used, verify that only one primary function is implemented per virtual system component or device

While this does not clarify how to set up a virtual environment that handles card data to meet PCI-DSS, it does at least make it very clear that the use of virtual environments is acceptable and can meet the standard.
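As a simple illustration of how this requirement might be checked, here is a minimal Python sketch that flags guests implementing more than one primary function.  The inventory, guest names and functions are entirely hypothetical; a real assessment would work from your actual VM estate.

# Hypothetical inventory mapping each guest to its primary function(s).
inventory = {
    "web01": ["web server"],
    "db01": ["database"],
    "util01": ["dns", "file share"],  # two primary functions - breaches 2.2.1
}

for guest, functions in inventory.items():
    if len(functions) > 1:
        print(f"{guest}: multiple primary functions {functions} - review against 2.2.1")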

This removes the previous confusion around the acceptability of using virtualisation to host environments dealing with card data, which stemmed from the statement in version one of the standard that each server should have only a single function.  By definition, the physical hosts in a virtualised environment host multiple guests (the virtual servers) and thus have multiple functions.

Despite not having as much detail as many had hoped, this is a great step forward, given the ever increasing adoption of virtualisation to reduce costs and make better use of server hardware.

This has also opened the door to the possibility of using cloud based services to provide a PCI-DSS compliant architecture.  During some recent research into virtual architectures that will meet the requirements of PCI-DSS 2, I came across this work from a group of companies providing a reference architecture for PCI-DSS compliance in a cloud based scenario:

http://info.hytrust.com/pci_reference_architecture_x1.html

The above links to both a webinar providing an overview of the work undertaken and a white paper detailing the actual reference architecture.

The architecture design was undertaken by Cisco, VMware, Savvis, Coalfire and HyTrust, and while the solution is understandably made up of the products and services offered by those companies, it clearly outlines a solution that you can adapt to your needs, using similar solutions that fit your company’s tech stack.  As such it is a highly recommended read for anyone involved in designing or auditing solutions that need to be PCI-DSS compliant.

K

Cheap IOPS, Expensive Gigabytes…

Recently we implemented a fast storage solution to meet the needs of a growing (and horribly non-relational and un-normalised) database.  While actually a very simple solution from an architectural standpoint, it is one of those products that performs exactly as advertised, and it has impressed us immensely with its performance, so I wanted to write briefly about it should anyone reading this need a fast server based storage solution.

The products in question are from a company called Fusion-IO and are called ioDrives; the same drives are also available through HP, who re-brand them as IO Accelerator cards.

The title of this post was actually taken from a conversation with one of the guys from Fusion-IO when we were evaluating the performance of their cards.  He highlighted that what they effectively do is ‘make IOPS cheap and Gigabytes expensive’.

IOPS = Input/Output Operations Per Second – basically how many read or write operations the device (hard disk / SSD / array) can perform each second.
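To make the definition concrete, below is a minimal, single-threaded Python sketch of what measuring random read IOPS involves.  It is purely illustrative and no substitute for a proper tool like SQLIO: the file path is hypothetical, it is Unix-only (os.pread), and operating system caching can inflate the numbers considerably.

import os
import random
import time

# Minimal IOPS sketch. Point PATH at a large pre-created test file
# (hypothetical path) so random offsets actually span the device.
PATH = "/data/testfile.bin"
BLOCK = 8 * 1024   # 8 KB per operation, a typical database page size
DURATION = 10      # seconds to run for

fd = os.open(PATH, os.O_RDONLY)
size = os.fstat(fd).st_size
ops = 0
start = time.time()
while time.time() - start < DURATION:
    # Read one block from a random block-aligned offset
    offset = random.randrange(size // BLOCK) * BLOCK
    os.pread(fd, BLOCK, offset)
    ops += 1
os.close(fd)
print(f"~{ops / DURATION:,.0f} read IOPS (single thread, caching not controlled)")

Real benchmarks add multiple threads, deeper queue depths and a mix of writes, which is exactly what tools such as SQLIO let you control.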

The published performance statistics for the cards were very impressive, but obviously we wanted to prove them for ourselves, in our environment, with our hardware, and with a simulation of our typical workload (the mix of reads and writes we typically see from the application).

The cards used in our testing, and subsequently in our production environment, are the 640GB MLC cards, details of which can be found here:

http://www.fusionio.com/products/iodriveduo/

We tested the performance in an HP DL580 G7 with four 8-core Xeon CPUs @ 2.26GHz and 128GB RAM, using the SQLIO Disk Subsystem Benchmark Tool from Microsoft.

The results certainly met our expectations: we saw in excess of 90,000 IOPS from a mixed read / write configuration via the SQLIO tool in our real world scenario.  This is particularly impressive given the use of very ‘normal’ off the shelf hardware components.
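For a sense of scale, at an assumed 8 KB transfer size (an assumption on my part – the block size used is not recorded here), 90,000 IOPS equates to roughly 90,000 × 8 KB ≈ 700 MB/s of sustained random I/O.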

We were comfortable enough with the results that we recommended the use of these cards for a critical production system in a mirrored (RAID 1 across 2 cards) configuration, and as mentioned have since implemented this solution with great success.

It is worth noting that the cards have a considerable amount of built in resilience and redundancy, so not all implementations would require mirrored cards; in fact, according to the vendor, most implementations are not RAID 1 and they rarely, if ever, see any issues.

Before signing off this post I should mention my colleague Ben Cox, our local DBA extraordinaire, as he actually ran the tests and documented the outputs.

K

Architecture in turbulent times – part 2

This post follows on from the previous ‘Architecture in turbulent times’ post, covering some of the shifting demands on architects and further highlighting ways we can add value even during this tougher economic period.

The first thing to do is ensure you understand the shifting demands on the architect, and the business; doing this will ensure you remain at the centre of major IT decisions, either by making or advising on those decisions.

These changing demands may include areas such as;

– Reducing project costs and doing more with existing, or even fewer, resources (as per the previously mentioned increasing efficiencies), with many businesses having a strong focus on reducing or managing Opex (operating expenses).

– Maintaining and encouraging talent.  This may sound strange in the current environment, but keeping talented employees, and providing them with a career path and training, is key during tough times.  Use this as an opportunity to train, mentor and encourage others in your department.

– Outsourcing – in line with innovation and cost efficiency, are there repeatable processes that could be outsourced? Do new technologies and services, such as Platform as a Service, enable the outsourcing of technology to manage or reduce costs?

Use and develop your skills in areas including;

– Negotiation and inspiration – these will enable you to gain buy-in for your vision and plans, and get people motivated to drive changes forward.

– Problem solving / issue assessment – the ability to resolve problems quickly, and where required tactically, is more important than ever, and the architect’s ability to do this while looking holistically at the bigger picture is where we can add great value.

– Understanding the business and its processes is, as always, a key component of the architect’s role.  We love technology, but it is the understanding of business requirements, and the ability to provide the simplest and most cost effective technical solutions to those requirements, rather than the technology itself, that is critical.

We need to think more tactically while still maintaining a holistic view of our business and the environment in which it trades (e.g. relevant regulatory considerations).  In this way the role of the architect remains key, ensuring that current technical solutions meet today’s business requirements around optimisation and simplification, while remaining flexible enough to allow the business to grow and capitalise on any improvements in the economy.

I am thinking of the new focus as being tactically strategic, or strategically tactical! 

K

Architecture in turbulent times – part 1

Given the current economic climate, it seemed timely to give some thought to the role of the architect during this period of uncertainty.  The role of the architect is traditionally a relatively strategic one.  While we all get into the nitty gritty of technical detail on a fairly regular basis (how often likely depends on where your role sits in the spectrum from technical roles to enterprise roles), one of the primary focuses is usually to strategically align IT with the course of the business, to ensure interoperability across IT projects and solutions, and to define and lead delivery against technology roadmaps.

This all works very well in a relatively stable environment where the business is growing and the economy is understood.  In the current economy, many businesses do not have longer term plans or roadmaps, as the environment has been so unstable and has changed at such a fast rate.

So in this environment how is the architect to add value?

I believe that rather than this being a negative time for architects, it actually offers a huge opportunity for innovation, adding value, and growing your own reputation and that of your role within the organisation!

Move away from looking at long term strategic goals (where possible, of course, keep these in mind), and instead focus on using sound architectural practices (e.g. patterns) and on ways to improve efficiency and drive near term revenue.

Think in terms of creating new efficiencies in areas such as:

– Improving business process flows

– Ensuring solutions are as flexible as possible to meet changing business needs

– Ensuring the architecture is flexible and will scale both up and down (e.g. using Service Orientated Architecture principles)

– Optimising your infrastructure – e.g. by virtualising servers and desktops where possible, and thin provisioning storage

– Delivering user focused solutions – working with the business to understand how they work, and ensuring delivered applications allow them to work as efficiently as possible

– Enabling / facilitating collaborative working

– Working with the business and your BI (Business Intelligence) team to maximise the value derived from the data warehouse and associated reporting architecture.

In short, if you as an architect focus on driving efficiencies and cost effective new capabilities, these can be very rewarding and interesting times.  Part 2 to follow.

K