TOGAF 9.1 course with Architecting the Enterprise

Last week I spent four days on the TOGAF 9.1 (The Open Group Architecture Framework) training course presented by ‘Architecting the Enterprise’, so I thought I’d provide a brief review here.

I have been thinking about becoming TOGAF certified for a while, as it seems to be becoming a bit of a de-facto standard and a requirement for many architecture roles.  Initially I tried reading the somewhat large and horrifically written TOGAF 9 book.  My advice: don’t..

So I approached the course with some trepidation, knowing how dry the material was.  However, I was pleasantly surprised!  The course was well presented, and made the material considerably more palatable than I expected based on how the book is written.

The course was split into the same basic sections as the book, covering some enterprise architecture history and overview material, the TOGAF process, the ADM (Architecture Development Method), ADM guidelines and techniques, the Architecture Content Framework, the Enterprise Continuum and tools (Star Trek fans, much?), TOGAF reference models, and the TOGAF capability framework.

Overall a lot of content was covered, which included everything you need to know in order to understand and utilise the TOGAF principles at work.  All the slides presented were provided on CD, along with a revision / crib book.  As far as I can tell this should be enough to pass the exam – I’ll let you know as soon as I get round to sitting it.

As with most courses of this type, one of the key side benefits is meeting a group of people from different businesses with various views of project and architectural processes.

Regarding TOGAF, the main value for me is the overview of the process and getting to grips with the terminology; using TOGAF as a point of reference ensures architects from various backgrounds and disciplines all share a frame of reference and a common language.

Regarding the course, I’d definitely recommend Architecting the Enterprise and this TOGAF course.  Even if becoming an enterprise architect is not your aim / ambition, you will find parts of TOGAF useful in most enterprises and to most architecture specialisations from business through data to technology.

I’ll provide an update on how the exam goes, likely in a few weeks..



2012 Update

I had meant to update on how my plans for the year were going around June / July, so this is a little late, but I have been pretty busy getting the upcoming Cloud Security Alliance (CSA) Security as a Service (SecaaS) guidance documents ready.  These are due for publication at the start of September – watch this space..  It has also taken longer than expected to finalise my Masters project choice, but I think I’ve got there with that one, finally!

In January I listed some goals for the year here;

Some 2012 projects / plans

So where am I with the year’s goals?

1. Choose a project and complete my Masters.  Project finally chosen and extended project proposal handed in.  My proposed project title is;

‘Increasing authentication factors to improve distributed systems security and privacy’

The plan is to cover the current state of distributed systems authentication and to assess how this could be improved by adding further ‘factors’ to the required authentication.  In this instance factors refer to things like ‘something you know’ such as passwords, ‘something you have’ such as a number generating token, and ‘something you are’ such as your fingerprint.  I have completed a project plan outlining how I’ll use the time between now and the hand-in date in January 2013, and I’ll keep you posted with progress.
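As a purely illustrative sketch (not part of the project itself), combining independent factors might look like the following Python, where authentication succeeds only when every required factor passes.  The hashing and token checks here are toy stand-ins for real password storage, RFC 6238 style tokens and biometric matchers:

```python
import hashlib
import hmac

def verify_knowledge(password: str, stored_hash: str) -> bool:
    """'Something you know' - compare a hash of the supplied password."""
    supplied = hashlib.sha256(password.encode()).hexdigest()
    return hmac.compare_digest(supplied, stored_hash)

def verify_possession(token_code: str, expected_code: str) -> bool:
    """'Something you have' - e.g. the code from a number generating token."""
    return hmac.compare_digest(token_code, expected_code)

def verify_inherence(match_score: float, threshold: float = 0.95) -> bool:
    """'Something you are' - e.g. a fingerprint match score."""
    return match_score >= threshold

def authenticate(password, stored_hash, token_code, expected_code, match_score):
    # Multi-factor: every independent factor must pass, so an attacker
    # has to compromise all of them rather than just one.
    return (verify_knowledge(password, stored_hash)
            and verify_possession(token_code, expected_code)
            and verify_inherence(match_score))

stored = hashlib.sha256(b"s3cret").hexdigest()
print(authenticate("s3cret", stored, "492831", "492831", 0.97))  # True
print(authenticate("s3cret", stored, "000000", "492831", 0.97))  # False
```

The point of the sketch is simply that each factor is checked separately, so adding a factor multiplies the work required to break the authentication.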

2. Lead / co-chair the CSA SecaaS working group.  While it has been challenging to find the time and keep everyone involved working in the same direction, we are almost ready to release the next piece of work from this research group.  The next publication will be in the form of 10 implementation guidance documents covering the 10 SecaaS categories we defined last year.  These will be released on the CSA web site around the end of August; I’ll post a link once they are available.  This has certainly been a learning experience regarding managing the output of a very diverse set of international volunteers!

3. Become more familiar with the Xen hypervisor.  I have had limited success with this one, increasing my familiarity with virtualisation and cloud generally, and reading up on Xen.  However I have not had a chance to set up a test environment running the open source Xen hypervisor to get properly acquainted with it.  I’ll be looking to rectify this during October, at which time I’ll provide a rundown of my thoughts on this hypervisor’s features and how easy it is to install and configure.

4. Brush up my scripting and secure coding.  Scripting opportunities have been limited this year, and I have not had the time to create side projects outside of the office due to CSA and Masters related work.  On secure coding, I have reviewed both some code and some development practices against OWASP recommendations and the Microsoft Security Development Lifecycle (SDL), so have made some progress in this area and will follow with an update in a future post.

Overall, not as much progress in some areas as I had hoped, but I am reasonably happy with the CSA SecaaS and Masters progress, while also holding my own in full time employment.

As mentioned, keep an eye out for the upcoming publication of the SecaaS implementation guidance!


Further Cloud planning and BYOD reading

I have recently read a few interesting and useful papers relating to some of my previous posts that may also be of interest to some of the readers of this blog.  Feel free to let me know your thoughts!  Incidentally, the first three papers below all originate from IBM; this is purely coincidental and I have no affiliation with IBM.

The first paper is titled ‘Defining a framework for cloud adoption’.  Please read previous posts if you need an overview of the benefits of cloud computing.  This paper introduces IBM’s cloud adoption framework, which is free for any organisation wishing to have a standardised reference to frame their discussions and planning around moving to the cloud.  This can be found here (free registration may be required);

The second paper worth reviewing is also about helping your company adopt cloud based services; this one is titled ‘A logical approach to cloud adoption in your company’.  This paper seeks to aid the discussions around when and how to consider moving to the cloud, and covers the fact that there isn’t actually ‘a cloud’, but multiple clouds and variations on the theme; these were covered in my previous post introducing the cloud.  This one can be found here (free registration may be required);

The third paper from IBM is titled ‘Building a successful roadmap to the cloud’.  This is a great companion to the above papers, as once you have the conversation started and people are on board with the benefits of utilising some cloud services the next step is to build the plan / roadmap for moving to and adopting these services.  This paper can be found here (free registration may be required);

All three of the above papers are definitely worth reading if your company is considering adopting cloud services, or if you want some ideas and terminology to get the conversation and planning started.

The final paper I’ll suggest you read is a balanced review of BYOD (Bring Your Own Device) that covers many of the pros and cons of this current trend.  I have briefly covered BYOD and what it is before; this paper will aid you in further understanding what BYOD is, what the potential pitfalls are, and whether BYOD may fit into your business at all.  This one is from PC Pro, not IBM, just for a bit of a change, and can be found here (free registration may be required);

Happy reading, I’ll be back soon with an update on my year’s progress so far.


What is your current desktop strategy? part 2 – VDI strategy

Following from my previous post I wanted to cover some of the areas / themes that should be included or at least considered when creating your virtual desktop (VDI or vDesktop) strategy.

There are currently a variety of drivers for virtual desktops, ensuring that this topic remains one of the key discussion points when ICT departments and C-levels talk about IT strategy.  These drivers range from data security and centralised management to the increasing prevalence of BYOD (Bring Your Own Device), and are aided by the increasing flexibility and maturity of the technical VDI solutions.  As such, even if you don’t yet plan to implement this technology you should be very aware of it and be formulating your strategy.  If you have already implemented, or are planning to implement, a VDI solution then you should already have a firm strategy, and vision, in place.  Either way I hope this proves to be a useful reference.

The below list is likely not exhaustive, and includes both very high level strategic considerations, along with some more technical concerns.

1. What are you trying to achieve?

–         Ensure the goals are clearly articulated, such as cost reduction, business enabler, improved security, and centralisation.

2. Clearly define use cases

–         Is VDI critical to achieve these or just one option?

–         Is this a tactical or overall strategic solution?

3. How does this align with other plans / strategies?

–         Plans to roll out or upgrade to Windows 7 and 8

–         Plans to enable remote / mobile working

–         Support of BYOD initiatives

4. What is the wider business case / benefit of the strategy?

–         User satisfaction

–         ROI (Return On Investment)

5. What is the endpoint strategy?

–         Thick clients

–         Thin Clients

–         Mobile Clients

–         BYOD

–         Do the proposed solutions have clients for all supported endpoints?  Can access be provided via a browser?

–         What are the plans for managing the endpoints?

6. Do the users require the ability to work offline?

7. How will images be managed?

–         Single or multiple images?

–         Maintaining ‘gold’ images?

8. How will profiles be managed?

–         Do users require individual and persistent profiles / workspaces?

–         Can static / mandatory profiles be used in some / all instances?

9. How do currently deployed technologies match up with those required to deploy and manage the VDI solution?

–         Propose transition plans

10. How do current skill sets match up to those required to support and manage the VDI solution?

–         Propose training plans

11. What are the impacts to;

–         Storage

–         Network – LAN / WAN

–         Do these impact cost and business case?

12. Are the vendors being considered suitable partners?

–         Do they design for and target businesses of your size and in your segment?

–         Are they healthy financially?

–         Do they have strategic, long term plans?

–         Is there a healthy ‘eco system’ of applications and other vendors around the solution?

13. How available and resilient will the solution be?

–         Resilient infrastructure?

–         Multi-site?

–         Backed up?

14. Scalability and flexibility

–         How does the solution scale?

–         What operating systems do you require it to support?

–         Are 64-bit as well as 32-bit operating systems supported?

15. What are the licensing implications of virtualising your current operating system and application estate?

16. What are the user and business expectations around areas such as;

–         Multi media

–         3D

–         Audio

–         Telecoms

–         Unified communications

–         Video conferencing

17. Will supporting technologies such as application virtualisation be part of the strategy?

18. How will compatibility issues, such as the requirement for local licensing dongles, be dealt with?

19. …

As a final note, it is a common issue in VDI plans and deployments for organisations to focus on the technology, features, and products in the market without first having a clear vision and defined strategy.

Remember – vision and strategy first for any large programs of work!


What is your current Desktop strategy? part 1 – VDI options compared

If you are currently evaluating or planning to evaluate VDI (Virtual Desktop Infrastructure) solutions for your business it can be hard to know where to start, with various vendors currently offering mature solutions that will all meet the majority of businesses’ VDI requirements.  These include;

– Citrix XenDesktop

– Citrix VDI-in-a-Box

– VMware View

– Microsoft VDI

– Quest vWorkspace

When tasked with looking for a VDI solution for your company, the first thing you should do (indeed the first thing you should do for most if not all projects) is understand the requirements of the solution.  For something like this, which may add quite a lot of new functionality and future options to the business, this is likely to incorporate some of the usual solid requirements such as;

–         Number of users

–         Performance and scalability

–         Ease of management

–         Interoperability with existing user and management applications

–         Integration with existing infrastructure

–         …

In addition to the ‘solid’ requirements there will likely be a lot of potential ‘requirements’ that are effectively potential benefits the solution could bring to the business such as;

–         Improved data security

–         Improved resilience of the workstation environment

–         Improved agility of the workstation environment

–         Enabling BYOD

–         Improved productivity

–         Enabling ‘work from anywhere’

–         …

The next thing to do is to assess the various VDI products on the market in order to choose the best one for your environment.  Given the variety of solutions available (some hypervisor independent, some dependent, some easier to manage and deploy, some with lower costs) it can be a daunting and, more importantly, resource intensive task to assess and test all of the viable options.

This is where the very helpful and impartial ‘VDI smackdown’ from the guys at PQR comes in.  This document is kept reasonably up to date with version 1.3 released earlier this year.  This can be found here;

Note – free registration may be required to download the PDF.

The white paper covers topics including;

–         Desktop virtualisation concepts

–         Pros and cons of VDI (virtual desktop infrastructure)

–         Comparison of the different VDI vendors solutions and their features.

Overall this document is well worth a read if you are planning to embark on a new or upgrade VDI project or indeed if you just wish to learn more about VDI and the features currently available.

An upcoming post will cover some of the areas I think need to be considered when creating your virtual desktop strategy.


Handling perimeter expansion and disintegration

One of the most common themes over the last few years in IT security discussions has been the de-perimeterisation of the corporate network.  The term was originally coined by the Jericho Forum and refers to the greying of the split between the internal trusted network and the wider world.

This is briefly described here;

Traditionally there has been strict demarcation, maintained by devices such as firewalls, between the untrusted outside world, the semi trusted DMZs (De-Militarised Zones), and the trusted internal network.  As more and more business functions require interactions between internal users and external customers, suppliers, remote users, home workers and other third parties, these strict zones of demarcation have become considerably more porous.

This has led some people to propose the removal of this network boundary concept, with the securing of data and systems instead achieved through encryption, host and network based IPS (Intrusion Prevention Systems), AV and the like.  The view is that data and systems can be kept secure while facilitating easier and more efficient business with customers, partners and other third parties.  Taken to its extreme, this is the paradigm of the ‘perimeterless’ network.

If you are faced with dealing with this ever more porous network perimeter while still maintaining the security of the systems you are responsible for, or you just want to read more about the issues raised by the muddying of internal and external network boundaries, Sophos have produced a simple and easy to read guide on their Naked Security blog titled;

Practical IT: handling perimeter expansion and disintegration

This can be found here;

Have a read, and let me know what you think.  If there is any interest I’ll write a more in depth post on the topic.


Consumerism of IT 2..

Following on from my previous post covering briefly what consumerism of IT and Bring Your Own Device (BYOD) are, I’ll now cover some of the things these trends mean for ICT departments.

For any IT business or IT department that thinks they do not need to consider the impacts of consumerism and BYOD – Think again!  Regardless of perceived business benefits such as cost savings or flexibility, or even the side benefits around the improved security and management of utilising VDI to centralise business owned user computing resources, as BYOD becomes more mainstream it will become an expected benefit / perk rather than the exception.

As an example of how this is already becoming more mainstream; several large companies such as IBM and Citrix are embracing this trend and have well established BYOD programs.

Ask yourself, do you want to attract the best talent? If the answer is yes then you need to ensure the working environment you offer is up there with the best of your competitors.  This includes offering things like BYOD programs across mobiles, tablets, laptops etc. and / or offering a wider variety of consumer type devices such as tablets and smartphones.

The challenge, as is often the case, will be to understand how these changes and trends can be harnessed to provide both business benefits and create an attractive working environment while still ensuring the security of your and your customers data and maintaining a stable and manageable ICT estate.

BYOD and consumerism of IT can and will make sweeping changes to how IT departments manage and provision user devices.  Whether this is due to supporting a wider variety of devices directly, or from relinquishing some control and embarking on a BYOD program, there will be changes.  What they are will depend on the route your company takes and how mature your company currently is regarding technology such as desktop virtualisation and offering functionality via web services.  If you currently have little or no VDI type solution and most of your application access is via thick or dedicated client software, the changes are likely to prove very challenging.  On the other hand, if you are at the other end of the scale, with a large and mature VDI (Virtual Desktop Infrastructure) deployment along with most applications and processes being accessed via a browser, then the transition to more consumer or BYOD focussed end user IT will likely be relatively straightforward from a technical standpoint.

Without sounding like a broken record (well, hopefully), the first thing you need to do before embarking on any sort of BYOD program is to get the right policies and procedures in place.  These should ensure company data remains safe and that there are clear and agreed rules for how any devices can be used, how they can access data, and how access, authentication and authorisation are managed, along with the company’s requirements around things like encryption and remote wipe capabilities.

NIST (National Institute of Standards and Technology) have recently released an updated draft policy around managing and securing mobile devices such as smartphones and tablets.  This covers both company owned (consumerism) and user owned (BYOD) devices, and can be used as a great starting point for the creation of your own policies.  It’s worth noting that NIST highlights BYOD as being more risky than company owned devices, even when the devices are the same.  The policy draft can be found here;

Once you have the policies in place you will need to assess the breadth of the program, this must include areas such as;

–         Will you allow BYOD, or only company supplied and owned equipment

–         Which devices are allowed

–         Which O/Ss and applications are permitted; this should include details of O/S minor versions and patch levels etc.

–         How will patching of devices and applications be managed and monitored

–         What levels of access will the users and devices be permitted

–         What architectural changes are required to the environment in order to manage and support the program

–         How will licenses be managed and accounted for

–         What are the impacts to everything from the network (LAN, WAN and internet access), to applications and storage, to desk space (will users have more or fewer devices on their desks), to the provision of power (will there be more devices and chargers etc. on the floors)
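To make the policy side of the assessment concrete, a hypothetical compliance check against some of the criteria above might be sketched like this in Python (the field names, platforms and version numbers are all made up for illustration, not taken from any real policy or product):

```python
# Hypothetical BYOD policy: which O/Ss are allowed, minimum versions,
# and company requirements such as encryption and remote wipe.
ALLOWED_PLATFORMS = {
    "ios":     {"min_version": (5, 1), "encryption_required": True},
    "android": {"min_version": (4, 0), "encryption_required": True},
}

def is_compliant(device: dict) -> bool:
    """Return True only if the device meets every policy requirement."""
    policy = ALLOWED_PLATFORMS.get(device["os"])
    if policy is None:  # device type / O/S not permitted at all
        return False
    if tuple(device["os_version"]) < policy["min_version"]:  # version / patch level too old
        return False
    if policy["encryption_required"] and not device["encrypted"]:
        return False
    return device["remote_wipe_enabled"]  # required before granting access

print(is_compliant({"os": "ios", "os_version": (5, 1),
                    "encrypted": True, "remote_wipe_enabled": True}))   # True
print(is_compliant({"os": "android", "os_version": (2, 3),
                    "encrypted": True, "remote_wipe_enabled": True}))   # False
```

In practice an MDM (Mobile Device Management) product would enforce this sort of policy for you, but writing it out makes the policy questions above very tangible.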

This is by NO means an exhaustive list; the point of these posts is to get you thinking about what is coming along, and whether your company will embrace BYOD and the consumerism of IT.  CIO recently ran an article titled ‘7 Tips for Establishing a Successful BYOD Policy’ that covers some similar points and is worth a read;

There are several useful links from the CIO article that are also worth following.

It would be great to hear your thoughts and experiences on the impacts of consumerism and BYOD.


In the cloud contracts are key..

I have mentioned in previous posts that when it comes to moving systems into the cloud, one of the key areas to get right is the contract.  As you move systems to the cloud model, you as a business or IT department become more and more abstracted from the underlying architecture, and rely on the CSP (Cloud Service Provider) to have the architecture covered.

In many ways this is great: the CSP will have considerably better infrastructure and a larger IT department than you as the customer, so not only do you need to worry less about the services that support your systems, they are also likely better set up and managed than if you tried to run them in house.

However, the downside of this is that you are very much more beholden to contracts and service level agreements.  As such, ensuring that you completely understand the terms of the contract you sign with the CSP, including the SLAs, where your data is, how it is handled, and what levels of performance, scale, DR etc. you are entitled to, is critical.

To help with this CloudPro have recently published a couple of articles on what should be in the contract (part 1) and what to look out for in the fine print (part 2).

These can be found at the below URLs;

Part 1 –

Part 2 –


Amazon cloud outage knocks out Netflix Pinterest and Instagram, or does it?

While the report here;

is undoubtedly true and factually correct, in that recent storms caused issues with Amazon’s data centre in Northern Virginia, and previously they have had issues when their data centre in Ireland was damaged by lightning, the question should be what could be done differently, rather than ‘cloud services are not robust / safe’.

I am a firm advocate for ensuring you understand your contract with your cloud provider, and that you pay great attention to things like SLAs and guaranteed uptime.  This is especially true if you are using SaaS or PaaS type services that may in turn rely on another vendor’s IaaS service – you need to understand the layers to ensure your provider is not offering SLAs that it cannot meet due to them being more stringent than those of the providers of the services on which it relies.
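To illustrate the layering point with some rough numbers (my own, purely for illustration): if the layers fail independently, the end-to-end availability a provider can honestly offer is at best the product of the availabilities of the layers it depends on:

```python
# If a SaaS offering runs on top of a third party IaaS platform, and the
# layers fail independently, end-to-end availability is at best the
# product of the per-layer availabilities.
def composite_availability(*layer_slas: float) -> float:
    result = 1.0
    for sla in layer_slas:
        result *= sla
    return result

iaas_sla = 0.9995  # availability of the underlying infrastructure provider
saas_sla = 0.999   # availability of the software layer itself

end_to_end = composite_availability(iaas_sla, saas_sla)
# 0.9995 * 0.999 is roughly 0.9985 - lower than either layer alone, so a
# SaaS SLA promising more uptime than its underlying IaaS SLA is suspect.
print(end_to_end)
```

So when reviewing a contract, check that the SLA on offer is consistent with the SLAs of the services beneath it.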

However, I question why this is considered an issue particular to ‘cloud’ based services.  These same issues could happen to any co-location / data centre hosting solution, and these, along with many more minor issues, are likely to cause disruption to anything you host locally in your server room, no matter how grand a name you give it.  Sorry, that’s one of my other pet hates: businesses with small server rooms that insist on calling them ‘data centres’ or other grandiose names, and talking about them as if they are as large and resilient as actual data centres.

Anyway, back on topic.  Obviously when a cloud service provider has an issue it is likely to affect many customers, so it will be newsworthy.  But before you worry too much or begin to dismiss the idea of moving some or all of your services to the cloud, ask yourself: is it likely to be more or less robust than hosting things yourself?

Take the necessary precautions;

-Understand the offering you are purchasing, the SLAs and guaranteed uptime in the contract,

-Build BC and DR into your service; ensure it is replicated to multiple servers and disks locally, and to another geographically disparate data centre, and you can host a hugely robust solution in the cloud.


Very simple introduction to cloud computing

I have just had to write the first in what may be a series of articles introducing cloud computing.  This is a very high level and simple overview so may be too simple for most readers of this blog, but I thought I would share it for anyone that is interested or who has to write something similar.  Also worth noting this had to be brief, with a maximum of around 1000 words so is by no means an exhaustive introduction.

Upcoming articles in the series will likely cover what has enabled cloud now, the benefits cloud can bring, and security and regulatory concerns of moving to the cloud.

Introduction to Cloud Computing

Cloud Computing is one of the current buzz terms that everyone, in IT circles at least, seems to be aware of and talking about.  In this brief introduction I’ll cover off;

–          Just what is cloud computing?

Follow on articles will, assuming there is interest, cover off areas such as;

–          How is cloud computing similar or different from other computing models?

–          What benefits can cloud computing bring?

–          What are the risks / issues?

–          Regulatory and security concerns along with mitigating factors

–          Further details on cloud models and how they could be used by businesses

Please note that as this is a single article of around 800 words, this is a high level and non-exhaustive overview.

Just what is Cloud computing?

Cloud computing is, in a nutshell, the delivery of compute / applications / storage as a service to consumers of the service.  The term cloud comes from the fact that a cloud-like symbol is often used to denote cloud services, as per the diagram above.  The users / consumers of the service at a high level only need to know how to connect to the service; they do not need to have any knowledge of what underpins or provides the service in order to use it.

While the term ‘cloud computing’ may be relatively new, we have all been using cloud based services for many years, the most common example of this being online email services such as Hotmail and Gmail.  Hotmail has been operating as an online email cloud service since the mid 1990s.

Cloud computing is usually split into one of three different ‘flavours’;

  1. Infrastructure as a Service (IaaS)
  2. Platform as a Service (PaaS)
  3. Software as a Service (SaaS)

IaaS is the provision of raw compute power to customers; you may even need to install your own operating system onto virtual machines in an IaaS model.  This is the closest model to the traditional co-location type model: the cloud provider basically looks after the servers / storage and all supporting hardware for you, and you are still responsible for O/S (Operating System) and application installation and management.  This is the most flexible and configurable cloud service option, but conversely requires the most effort to manage and support from a consumer perspective.  Amazon Web Services would be an example of IaaS.

PaaS is effectively the next level up; this is the provision of platform level services such as databases or application containers.  Here the provider manages and maintains all the underlying hardware, the operating system and the container / platform itself.  The customer purchases the required space / power and, in the database example, would create their own DBs, configure user permissions, table structure etc. and upload the data.  PaaS is less configurable than IaaS, but provides a fully managed platform such as the database engine, so requires less management and effort to set up / configure than IaaS.  An example of this would be the Microsoft Azure database and web site platforms.

SaaS is where software applications are provisioned to customers from a cloud based service.  In this instance the entire application is hosted in the ‘cloud’; the customers access it and only have to manage things like user permissions and some application configuration, while the hardware, O/S, platform and application hosting are all managed by the cloud provider.  Examples of this would be the Salesforce application, Microsoft Office 365, and pretty much any online email service.
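The split of responsibilities across the three models can be summarised as follows.  This is my own illustrative mapping rather than an official taxonomy, but it captures the descriptions above:

```python
# Who manages each layer under each service model described above:
# "provider" = the cloud service provider, "customer" = you.
LAYERS = ["hardware", "operating_system", "platform", "application", "data"]

RESPONSIBILITY = {
    "IaaS": {"hardware": "provider", "operating_system": "customer",
             "platform": "customer", "application": "customer", "data": "customer"},
    "PaaS": {"hardware": "provider", "operating_system": "provider",
             "platform": "provider", "application": "customer", "data": "customer"},
    "SaaS": {"hardware": "provider", "operating_system": "provider",
             "platform": "provider", "application": "provider", "data": "customer"},
}

def customer_managed(model: str) -> list:
    """Layers the customer still has to look after under a given model."""
    return [layer for layer in LAYERS if RESPONSIBILITY[model][layer] == "customer"]

print(customer_managed("IaaS"))  # ['operating_system', 'platform', 'application', 'data']
print(customer_managed("SaaS"))  # ['data']
```

Moving from IaaS through PaaS to SaaS, responsibility shifts steadily from the customer to the provider, at the cost of configurability.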

The different cloud ‘flavours’ in which cloud services can be consumed, as described above, are depicted in the below diagram;

In addition to the IaaS / PaaS / SaaS models you will also likely hear the terms Public, Private, Community and Hybrid in relation to cloud services, so what do these terms mean?

–          A Public Cloud is one that is entirely hosted by a vendor on shared hardware / systems.  This does not mean your systems or data are accessible to anyone other than your business, just that they are hosted on shared ‘cloud’ infrastructure.  This is probably the version of cloud computing that most people think of when they hear the cloud term.  Public clouds can be IaaS, PaaS or SaaS.

–          A private cloud can mean one of two things –

  • Usually it refers to a cloud hosted entirely internally by a company.  This is usually only done by relatively large businesses, and is when someone creates an internal cloud service that is accessible to the whole organisation, but is hosted and run internally and only accessible to that specific company.
  • Sometimes vendors may sell ‘private’ cloud services, where you can purchase cloud based solutions that are hosted on dedicated and segregated rather than shared systems.  This can end up much like a more traditional co-location model, and may negate some of the benefits you can realise from cloud computing (more on that in a later article).

–          A community cloud is effectively a public cloud that is aimed at a specific community.  In this sense community refers to a group of very similar companies that share a similar way of working.  An example is the community cloud shared by some legal firms.  A community cloud can be thought of as a ‘targeted’ public cloud.

–          A Hybrid cloud solution is one that makes use of both internal resources and external ones.  An example may be an internal system that has capacity for normal use, but has known periods of high usage, such as during month or quarter end processing.  Rather than purchase and host enough capacity for the brief periods of high use, the business only pays for and hosts enough capacity for normal use.  During periods of high use the system ‘bursts’ out into the cloud service and makes use of the vendor’s resources for extra processing power.  In this way you only pay for what you use, when you use it, while not having to migrate entirely to the cloud if the business is not yet ready for that step.
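The ‘bursting’ behaviour described above can be sketched as a simple routing decision.  This is a toy model (real bursting also involves provisioning time, data locality and licensing), but it shows the pay-for-what-you-use idea:

```python
# Toy model of hybrid cloud bursting: serve demand from internal capacity
# first, and only use (and pay for) cloud capacity for the overflow.
INTERNAL_CAPACITY = 100  # units, sized for normal day-to-day load

def route_load(demand: int) -> dict:
    internal = min(demand, INTERNAL_CAPACITY)
    burst = max(0, demand - INTERNAL_CAPACITY)
    return {"internal": internal, "cloud_burst": burst}

print(route_load(80))   # normal day: {'internal': 80, 'cloud_burst': 0}
print(route_load(250))  # quarter end: {'internal': 100, 'cloud_burst': 150}
```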

So to summarise: cloud computing can be infrastructure, platform or software provided as a service, and it is elastic in that you only pay for what you use, and the resource you use can go up and down as required.  Just what has enabled cloud computing now, how is it different from other existing computing models, and what are the potential benefits?  Watch this space..