2011 review

As is often the tradition, I thought I would start the year with a couple of posts covering an overview of some key points from the last year, and some planned projects for this year.

As I am sure you have guessed, this post will be a brief review of 2011 from a study / career / research perspective.

2011 was a pretty busy year with cloud security research, masters work and finally realising my previous role was no longer offering much (if any) challenge, culminating in a move to a new role at the end of the year / start of 2012.

From a study perspective I completed two more MSc modules;

– Wireless mobile and ad-hoc networking

– Secure systems programming

Assuming I pass the secure systems programming module (final piece of coursework was completed 9/1/12) there is ‘just’ the project left to complete in order to finish my masters.

Also on the study front, I achieved a couple of certifications;

– ISSAP (Information Systems Security Architecture Professional).  This is a secure architecture addition to the CISSP (Certified Information Systems Security Professional).

– British Computer Society Enterprise and Solutions Architecture certificate.

So all in all a successful and reasonably productive year from a study / certification perspective, especially if I have managed to pass the secure coding module!

From a career perspective, I had been looking around within my previous company for a little while, but decided that I was stagnating in my previous role, so it was time to look outside in order to move on.  The good news is I was successful, being offered a considerably improved role as a Senior Systems Architect with Canada Life that I started 3/1/12.  I’ll update on how this is going and any non-proprietary technologies / projects I am working on in upcoming posts.

From a research / general learning perspective 2011 was the year of the cloud.  As anyone who has read this blog knows I have been very involved in work defining Security as a Service (SecaaS) with the Cloud Security Alliance, chairing the research group on this topic.  This has resulted in a paper being published and SecaaS being added as a new domain to the CSA guidance.

I’ll follow this post with one detailing some of my plans and projects for 2012.

K


Choosing the right project(s)

Choosing the right projects to focus limited resources on is clearly key to the success of any business.

When projects / programs are prioritised in your business (as in most businesses), is this always done using the best and most objective methods available?  How are they chosen in your organisation?  How are the chosen projects and programs then prioritised against each other?

Most organisations will no doubt claim to have a very organised and agreed approach to this process, based around business priorities and the clear business benefits of each project being considered.  If you look more closely, though, the reality is often very different, with prioritisation driven by questions like these;

– Which project is sponsored by the most senior individual in the organisation?

– Which project is being pushed by the most aggressive sponsors / individuals?

– Which project has the best sales pitch (e.g. best presentation)?

– Which project is being pushed by the sponsors / individuals with the best political connections in the organisation?

– Which project will provide the greatest return on investment (ROI)?

I am sure you are thinking that ROI sounds like a reasonable basis for choosing projects, and used 100% impartially it can be.  However, it is easy to manipulate ROI figures, and most ROI statements such as “will save xx millions” have little supporting, reproducible evidence.  Also, in reality how many organisations thoroughly calculate the ROI on a project after it is completed, and hold those who made the statements accountable for their accuracy?
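
As a quick illustration of how sensitive ROI figures are to the assumptions behind them, here is a minimal sketch in Python (the numbers are entirely made up) showing how a modest error in the claimed savings turns an impressive ROI into a marginal one;

# ROI = (benefit - cost) / cost, expressed as a percentage
def roi(benefit, cost):
    return 100 * (benefit - cost) / cost

cost = 500_000                # hypothetical project cost
print(roi(2_000_000, cost))   # claimed saving of 2m -> 300% ROI
print(roi(600_000, cost))     # realised saving of 0.6m -> 20% ROI

Unless someone goes back after delivery and measures the realised benefit, the 300% figure is never challenged.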

In addition to the above thoughts on how projects are chosen, it is also clear that the more projects an organisation has to choose from, the less time it is likely to be able to put into correctly choosing the best ones.

One logical approach to the process of choosing and prioritising the best projects for your organisation is that of value graph analysis.  Interestingly this process has come up twice recently, in the book ‘Simple Architectures for Complex Enterprises’ and on the recent ISEB Enterprise and Solution Architecture course I attended.

The idea of Value Graph Analysis is that it allows you to impartially take into account factors such as the risks of doing or not doing the project, the cost of doing the project, the potential returns, and the time and resource requirements to complete it.

While the included factors in a graph can be tailored, both sources that highlighted this approach suggested the same set of default / typical factors;

– Market Drivers – what market reasons support the project?

– Cost – what is the project cost?

– Organisational Risk – what are the risk factors the project addresses?

– Financial Value – what are the financial benefits of doing the project?

– Organisational Preparedness – how ready is the organisation to complete the project?

– Team Readiness – how ready is the proposed project team to complete the project?

– Status Quo – what are the outcomes / impacts of not doing the project?

The output of assessing all the above factors is the Value Graph, an example of which is shown below as a spider graph;

[Figure: example value graph, shown as a spider diagram]

Values closer to the edge of the graph are considered positive.  Aside from ensuring a wide range of key inputs are included in the prioritisation process, a key advantage is that Value Graphs, especially in the spider graph representation, make projects easy to compare: priorities can be defined simply by laying the graphs for the candidate projects side by side.
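
To make the comparison concrete, here is a minimal sketch in Python of scoring two candidate projects across the standard factors (the factor list is from above; the scores are entirely made up);

FACTORS = ["Market Drivers", "Cost", "Organisational Risk", "Financial Value",
           "Organisational Preparedness", "Team Readiness", "Status Quo"]

# Scores from 0 (poor) to 10 (strong); higher means closer to the edge of the graph
project_a = dict(zip(FACTORS, [8, 4, 7, 9, 5, 6, 7]))
project_b = dict(zip(FACTORS, [5, 8, 4, 5, 9, 8, 3]))

# A crude overall figure for a first-pass ranking: the mean score across factors
def overall(scores):
    return sum(scores.values()) / len(scores)

for name, scores in [("A", project_a), ("B", project_b)]:
    print(f"Project {name}: overall {overall(scores):.1f}")

In practice you would plot each set of scores as a spider graph rather than collapsing it to a single number, as the shape of the graph (e.g. a project that scores well everywhere except Team Readiness) is exactly the insight a single ROI figure hides.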

I recommend checking these out; creating Value Graphs for your projects will enable clear and logical prioritisation and will definitely benefit your organisation in the long term!

K

BCS / ISEB Certificate in Enterprise and Solution Architecture

This week I attended the BCS (British Computer Society, which refers to itself as ‘The Chartered Institute for IT’) ISEB (Information Systems Examinations Board) ‘Certificate in Enterprise and Solution Architecture – Intermediate’ four day course and exam.  I’ll share my thoughts and impressions of both.

On the first day my hopes of the week being useful were actually low as, like many of these courses, the main purpose was clearly learning by rote the facts required to pass the exam.  While this did indeed turn out to be true, the course proved a lot more useful than expected.

This was due to a combination of factors;

– the instructor / tutor we had was very knowledgeable about the various architecture frameworks / ontologies such as TOGAF and Zachman.

– interaction with peers from a variety of industries and backgrounds.  As with any course / conference this is one of the key benefits, as it gives you the opportunity to gain a wider viewpoint and see how developers, business analysts and others from different industries view architecture / business issues and what their concerns are.

– as the exam itself is largely about architect roles, frameworks and how they link together, the course provides a good insight into, and overview of, some of the most common frameworks and how the different terminologies they use relate to each other.

If you want to know more about the course and topics covered, or just want to gain a greater insight into enterprise and solution architecture terminology, then the web site maintained by our tutor is a great starting point;

http://grahamberrisford.com

Which also gives a clue as to his name.  If you do want to do the course and are UK / London based, I’d recommend choosing a course with him instructing if you can, as he has many years’ experience both in IT and with the course material.  Graham also has some strong ideas and opinions, which made for some great classroom debates.

I’d recommend this course to anyone wanting to improve their knowledge of enterprise / solution architecture frameworks, tools and terminology whether this is to aid a career in architecture, or just to better understand the concerns and considerations of the architects you work with.  Don’t get me wrong, the overall material is pretty dry as is the case with many courses around frameworks and terminology etc. but overall this course was well worthwhile.

Onto the exam – there is not a lot to say here; it is a one hour, forty question multiple choice affair.  If you have paid attention in class and have a reasonable understanding of the reference model (the pdf can be freely downloaded from the BCS web site, or use the slides covering it from Graham’s site), you should find the exam pretty easy – he says, not yet having received confirmation of passing it!

K

Partitioning Architecture for Simplicity

One of the key learnings from the book ‘Simple Architectures for Complex Enterprises’ is around partitioning of functions in order to simplify the overall architecture.

Partitioning in this sense is a mathematical term for splitting up the area in question into sections (partitions) such that any item is in one and only one partition, and that every item in a given partition shares the feature or features that define membership of that partition.

An example would be dividing store stock by price e.g. items at £5, items at £10 and items at £15.  All items in the items at £5 partition would share the property of costing £5 and be found in only the £5 partition and no other partitions.

Partitions can be chosen arbitrarily as per the above example (which may be a valid way for a store to partition its goods for stock taking), but when talking about technical architectures for a business, more thought needs to go into how to partition the architecture, both to enable simplification and to create valid and usable partitions.

Within an architecture, it transpires that the best way (or one of the best ways) to partition a system is to look at the functionality of its components and then assess whether each component’s functionality is autonomous with regard to the other components, or synergistic.  That is, do the components in question have a mutual dependency on each other?  If so, they have a synergistic relationship; if not, they can be considered autonomous of each other.

By separating an enterprise architecture up in this way we end up with a group of partitions where all components in each partition have synergistic relationships to each other, but components in separate partitions are autonomous in relation to each other.
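
This grouping can also be computed mechanically.  Treating each synergistic pair as an edge in a graph, the partitions described above are simply the connected components.  A minimal sketch in Python (the component names are hypothetical, purely for illustration);

from collections import defaultdict

# Each pair represents two components with a mutual (synergistic) dependency
synergies = [("billing", "invoicing"), ("invoicing", "ledger"),
             ("web-ui", "session-store")]
components = {"billing", "invoicing", "ledger",
              "web-ui", "session-store", "reporting"}

# Build an undirected dependency graph
graph = defaultdict(set)
for a, b in synergies:
    graph[a].add(b)
    graph[b].add(a)

# Depth-first search collecting each connected component (= one partition)
def partition(components, graph):
    seen, parts = set(), []
    for start in components:
        if start in seen:
            continue
        stack, part = [start], set()
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            part.add(node)
            stack.extend(graph[node] - seen)
        parts.append(part)
    return parts

print(partition(components, graph))
# e.g. [{'billing', 'invoicing', 'ledger'}, {'web-ui', 'session-store'}, {'reporting'}]

Components that end up alone (like ‘reporting’ here) are autonomous and can be changed, deployed and secured independently of everything else.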

In addition to being likely the best way to partition an architecture for simplicity while maintaining a manageable number of partitions, this method also translates well to the use of Service Orientated Architectures (SOA).

K

ISSAP – Information Systems Security Architecture Professional

So, I recently received confirmation from the ISC2 (International Information Systems Security Certification Consortium) that I passed the ISSAP exam.   This is a secure architecture concentration in addition to the CISSP (Certified Information Systems Security Professional) certification.

I believe this should be a worthwhile addition to my CISSP, and of course my CV, while also helping progress my current role, so I felt I should write a post about my preparation for the exam.

As with the CISSP (Certified Information Systems Security Professional) the best way to be prepared is to have a solid grounding in the subject matter – e.g. IT security and technical / solutions architecture.  Indeed several years of industry experience is a prerequisite for obtaining these certifications.

Also as with the CISSP, I chose to cover the bulk of the revision using the ISC2 recommended course text.  For the CISSP I used the well regarded Shon Harris ‘CISSP All-in-One Exam Guide’, which was well written and very comprehensive.

For the ISSAP I used the ISC2 Official Study Guide to the CISSP-ISSAP, currently the only book specifically for the ISSAP exam that claims to cover all aspects of it.  Personally I found this book to be very badly written and hard to read.  The first chapter must have used the phrase ‘Confidentiality Integrity Availability’ in almost every sentence; yes, we all know that CIA is important and what we are aiming for, but there is no need to repeat it so often.

Other sections of the book only skimmed over areas that were quite heavily covered in the exam.

In short, if you did not already have a very solid grounding and experience in the areas covered by the exam, this official guide would not be anywhere near enough to pass it.  Obviously the ISC2 may argue that you are supposed to have industry experience, but that experience does not necessarily include all the areas covered in the exam, such as specific components of the Common Body of Knowledge or other specific standards.

If you are a CISSP involved in designing secure architectures then this certainly seems like a worthwhile certification to go for.  I would advise doing some supplementary reading covering the Common Body of Knowledge and something like ‘Enterprise Security Architecture’ along with of course a solid background in both security and architecture.

As an aside, I am a firm believer that study and / or involvement in IT related work such as creating white papers, contributing to open source etc. is not only a great way to improve your skills and knowledge, but also essential to show current and future employers that you are genuinely passionate about what you do rather than it just being a job.

K

Simplicity

In preparation for doing the Enterprise and Solution Architecture course and exam from the Chartered Institute for IT, I have started reading the book ‘Simple Architectures for Complex Enterprises’ by Roger Sessions, from Microsoft Press.

While this is primarily a book focused on solution and enterprise architecture, the main point it makes is the often overlooked one of simplicity.  The basic premise is that the simplest solution that meets the requirements is the best solution.

The vast majority of IT projects, even those that appear relatively straightforward, run over time or over budget, or both.  This is despite rigorous project processes (e.g. SDLC) and well understood architectural frameworks (e.g. Zachman, TOGAF).

The reason for this is that none of the project processes or architectural frameworks directly address complexity.  They do provide much needed rigour around how projects are run, and ensure that architects and the business can talk the same language via the use of agreed frameworks, both of which add great value, but neither of which prevents unnecessary complexity from creeping into solutions.

In addition to increased cost and time to delivery, overly complex solutions are much harder to;

– Maintain – which component is causing the issue when troubleshooting?  Will patching one part impact others?

– Secure – simple solutions are easy to test and make secure from the outset; complex solutions are likely insecure from the outset and near impossible to fully understand and secure further down the line.

A further post will follow covering some of the techniques outlined in the book around understanding complexity and eliminating it from solutions to even complex problems.

In the meantime, keep it simple, and remember: just because your business or the problem you are trying to solve is complex, that does not mean the solution needs to be complicated!

K

PCI-DSS Virtualisation Guidance

In what was obviously a response to my recent blog post stating more detailed guidance would be helpful (yes I am that influential!) the ‘PCI Security Standards Council Virtualisation Special Interest Group’ have just released the ‘PCI DSS Virtualisation Guidelines’ Information Supplement.

This can be found here;

https://www.pcisecuritystandards.org/documents/Virtualization_InfoSupp_v2.pdf

This is a welcome addition to the PCI-DSS as it makes the requirements for handling card data in a virtual environment much clearer.  The recommendations in this document, along with the reference architecture linked to in my previous post, provide a solid basis for designing a PCI-DSS compliant virtual environment.

The document itself is in 3 main sections. These comprise;

– ‘Virtualisation Overview’ – outlines the various components of a virtual environment, such as hosts, hypervisor and guests, and under what circumstances they come into scope of the PCI-DSS.

– ‘Risks for Virtualised Environments’ – outlines the key risks associated with keeping data safe in a virtual environment, including the increased attack surface of having a hypervisor, multiple functions per system, in-memory data potentially being saved to disk and guests of different trust levels on the same host, along with procedural issues such as a potential lack of separation of duties.

– ‘Recommendations’ – this section is the meat of the document and will be of main interest to most of the audience, as it details the PCI’s recommended actions and best practices for meeting the DSS requirements.  It is split into 4 sub-sections;

– General – covering broad topics such as evaluating risk, understanding the standard, restricting physical access, defence in depth, hardening etc.  There is also a recommendation to review other guidance such as that from NIST (National Institute of Standards and Technology), SANS (SysAdmin Audit Network Security) etc. – generally good advice for any situation where a solid understanding of how to secure a system is required.

– Recommendations for Mixed Mode Environments –

This is a key section for most businesses, as the reality for most of us is that a mixed mode environment (where guests in scope of PCI-DSS and guests not hosting card data reside on the same hosts and virtual environment via acceptable logical separation) is the best option for gaining the maximum benefits from virtualisation.  The section is rather shorter than expected, with little detail beyond many warnings about how difficult true separation can be.  On a brighter note, it does clearly say that as long as separation of PCI-DSS guests and non-PCI-DSS guests can be configured, and I would imagine audited, then this mode of operating is permitted.  Thus by separating the virtual networks and segregating the guests into separate resource pools, along with the use of virtual IPS appliances and likely some sort of auditing (e.g. a netflow monitoring tool), it should be very possible to meet the DSS requirements in a mixed mode virtual environment.
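
To illustrate the sort of logical separation check an assessor (or an automated job) might perform in a mixed mode environment, here is a minimal sketch in Python; the guest inventory, network names and the rule itself are hypothetical examples, not anything mandated by the supplement;

# Hypothetical inventory: guest name -> (in PCI-DSS scope?, virtual network, resource pool)
guests = {
    "cardholder-db": (True,  "vlan-pci",  "pool-pci"),
    "payment-app":   (True,  "vlan-pci",  "pool-pci"),
    "intranet-wiki": (False, "vlan-corp", "pool-general"),
    "build-server":  (False, "vlan-pci",  "pool-general"),  # misplaced!
}

# Rule: no virtual network or resource pool may contain both in-scope and out-of-scope guests
def separation_violations(guests):
    violations = []
    for attr, idx in [("network", 1), ("resource pool", 2)]:
        zones = {}
        for name, info in guests.items():
            zones.setdefault(info[idx], set()).add(info[0])
        for zone, scopes in zones.items():
            if scopes == {True, False}:
                violations.append(f"mixed scope in {attr} '{zone}'")
    return violations

print(separation_violations(guests))  # -> ["mixed scope in network 'vlan-pci'"]

A real deployment would pull this inventory from the hypervisor’s management API and feed any violations into the audit trail, but the principle is the same: the separation the supplement asks for is something you can verify continuously rather than once a year.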

– Recommendations for Cloud Computing Environments –

This section outlines various cloud scenarios such as public / private / hybrid, along with the different service offerings such as IaaS (Infrastructure as a Service), PaaS (Platform as a Service) and SaaS (Software as a Service).  Overall it highlights that in many cloud scenarios it may not be possible to meet PCI-DSS requirements, due to the complexities around understanding where the data resides at all times, multi-tenancy etc.

– Guidance for Assessing Risks in Virtual Environments –

This is a brief section outlining areas to consider when performing a risk assessment; these are fairly standard and include defining the environment and identifying threats and vulnerabilities.

Overall this is a useful step forward for the PCI-DSS as it clearly shows that the PCI are moving with the times and understanding that the use of virtual environments can indeed be secure providing it is well managed, correctly configured and audited.

If you want to make use of virtualisation for the benefits of consolidation, resilience and management etc. and your environment handles card data this along with the aforementioned reference architecture should be high on your reading list.

K


PCI-DSS compliance in a virtual environment

Version 2 of the PCI-DSS (Payment Card Industry Data Security Standard), released in October of last year (2010), finally added some much needed, if limited, clarification around the use of virtualised environments.

This change / clarification is an addition to section 2.2.1 of the standard, adding the statements;

Note: Where virtualization technologies are in use, implement only one primary function per virtual system component.

And

2.2.1.b If virtualization technologies are used, verify that only one primary function is implemented per virtual system component or device

While this does not clarify how to set up a virtual environment that handles card data to meet PCI-DSS it does at least make it very clear that the use of virtual environments is acceptable and can meet the standard.

This removes the previous confusion around the acceptability of using virtualisation to host environments dealing with card data, which stemmed from the statement in version one of the standard that each server must have only a single function.  By definition, the physical hosts in a virtualised environment host multiple guests (the virtual servers) and thus have multiple functions.

Despite not having as much detail as many had hoped this is a great step forward given the ever increasing adoption of virtualisation to reduce costs and make better use of server hardware.

This has also opened the door to the possibility of using cloud based services to provide a PCI-DSS compliant architecture.  During some recent research into virtual architectures that meet the requirements of PCI-DSS 2, I came across this work from a group of companies to provide a reference architecture for PCI-DSS compliance in a cloud based scenario;

http://info.hytrust.com/pci_reference_architecture_x1.html

The above links to both a webinar providing an overview of the work undertaken, and a white paper detailing the actual reference architecture.

The architecture design was undertaken by Cisco, VMware, Savvis, Coalfire and HyTrust, and while the solution is understandably made up of the products and services offered by those companies, it clearly outlines a solution that you can adapt for your needs using similar solutions that fit with your company’s tech stack.  As such it is a highly recommended read for anyone involved in designing or auditing solutions that need to be PCI-DSS compliant.

K

Cheap IOPS, Expensive Gigabytes…

Recently we implemented a fast storage solution to meet the needs of a growing (and horribly non-relational and un-normalised) database.  While actually a very simple solution from an architectural standpoint, it is one of those products that performs exactly as advertised and has impressed us immensely with its performance, so I wanted to briefly write about it should anyone reading this need a fast server based storage solution.

The products in question are from a company called Fusion-IO, and are called ioDrives, the same drives are also available through HP who re-brand them as Accelerator IO cards.

The title of this post was actually taken from a conversation with one of the guys from Fusion-IO when we were evaluating the performance of their cards.  He highlighted that what they effectively do is ‘make IOPS cheap and Gigabytes expensive’.

IOPS = Input/Output Operations Per Second – basically how many read or write operations the device (hard disk / SSD / array) can perform per second.
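
It is worth remembering that IOPS is only half the story; throughput is simply IOPS multiplied by the transfer size, so the same IOPS figure looks very different depending on the block size used.  A quick back-of-the-envelope sketch in Python (the figures are examples only, not our test results);

# Throughput (MB/s) = IOPS * block size (KB) / 1024
def throughput_mb_s(iops, block_kb):
    return iops * block_kb / 1024

print(throughput_mb_s(90_000, 8))   # 8KB blocks  -> ~703 MB/s
print(throughput_mb_s(90_000, 64))  # 64KB blocks -> ~5625 MB/s

Real devices sustain fewer IOPS as the block size grows, which is why quoting IOPS without the block size and read / write mix is close to meaningless.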

The published performance statistics for the cards were very impressive, but obviously we wanted to prove them for ourselves, in our environment, with our hardware and a simulation of our typical workload (the mix of reads and writes we typically see from the application).

The cards used in our testing and subsequently our production environment are the 640GB MLC cards, details of which can be found here;

http://www.fusionio.com/products/iodriveduo/

We tested the performance in an HP DL580 G7 with 4 * 8 core Xeon CPUs @ 2.26GHz and 128GB RAM, using the SQLIO Disk Subsystem Benchmark Tool from Microsoft.

The results certainly met our expectations; we saw >90,000 IOPS from a mixed read / write configuration via the SQLIO tool in our real world scenario.  This is particularly impressive given the use of very ‘normal’ off the shelf hardware components.
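
For anyone wanting to run similar tests, a SQLIO invocation looks something like the following (the parameters here are illustrative, not our exact test configuration);

sqlio -kW -t8 -o8 -b8 -frandom -s120 -LS testfile.dat

Here -kW selects writes (-kR for reads), -t is the number of threads, -o the outstanding I/Os per thread, -b the block size in KB, -frandom requests random rather than sequential I/O, -s the test duration in seconds and -LS captures latency statistics.  As SQLIO performs one operation type per run, a read / write mix is approximated by running separate read and write passes and weighting the results.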

We were comfortable enough with the results that we recommended the use of these cards for a critical production system in a mirrored (RAID 1 across 2 cards) configuration, and as mentioned have since implemented this solution with great success.

It is worth noting that the cards do have a considerable amount of built in resilience and redundancy so not all implementations would require mirrored cards, in fact according to the vendor most implementations are not RAID 1 and they rarely if ever see any issues.

Before signing off this post I should mention my colleague Ben Cox, our local DBA extraordinaire, as he actually ran the tests and documented the outputs.

K

Architecture in turbulent times – part 2

This post will follow on from the previous ‘Architecture in turbulent times’ post covering some of the shifting demands on architects and further highlighting ways we can add value even during this tougher economic period.

The first thing to do is ensure you understand the shifting demands on the architect, and the business; doing this will ensure you remain at the centre of major IT decisions, either by making or advising on those decisions.

These changing demands may include areas such as;

– Reducing project costs and doing more with existing or even less resource (as per the previously mentioned increasing efficiencies), with many businesses having a strong focus on reducing or managing Opex (operating expenses).

– Maintaining and encouraging talent.  This may sound strange in the current environment, but keeping talented employees, and providing them with a career path and training, is key during tough times.  Use this as an opportunity to train, mentor and encourage others in your department.

– Outsourcing – in line with innovation and cost efficiency, are there repeatable processes that could be outsourced?  Do new technologies and services such as Platform as a Service enable the outsourcing of technology to manage / reduce costs?

Use and develop your skills in areas including;

– Negotiation and inspiration – these will enable you to gain buy-in for your vision / plans, and get people motivated to drive changes forward.

– Problem solving / issue assessment – the ability to quickly, and where required tactically, resolve problems is more important than ever, and the architect’s ability to do this while keeping a holistic view of the bigger picture is where we can add great value.

– Understanding the business and its processes is, as always, a key component of the architect’s role.  We love technology, but it is the understanding of the business requirements, and the ability to provide the simplest and most cost effective technical solutions to those requirements, rather than the technology itself, that is critical.

We need to think more tactically while still maintaining a holistic view of our business and the environment in which it trades (e.g. relevant regulatory considerations).  In this way the role of the architect remains key, ensuring that current technical solutions meet today’s business requirements around optimisation and simplification, while remaining flexible enough to allow the business to grow and capitalise on any improvements in the economy.

I am thinking of the new focus as being tactically strategic, or strategically tactical! 

K