Service Technology Symposium Day 2..

Today was the second day of the Service Technology Symposium.  As with yesterday, I’ll use this post to review the keynote speeches and provide an overview of the day.  Where relevant, further posts will follow, providing more detail on some of the day’s talks.

As with the first day, the day started well with three interesting keynote speeches.

The first keynote was from the US FAA (Federal Aviation Administration) and was titled ‘SOA, Cloud and Services in the FAA airspace system’.  The talk covered the program that is under way to simplify the very complex National Airspace System (NAS).  This is the ‘system of systems’ that manages all flights in the US and ensures the control and safety of all the planes and passengers.

The existing system is typical of many legacy systems.  It is complex, built on point-to-point connections, hard to maintain, and even minor changes require extensive regression testing.

Thus a simplification program has been created to deliver an SOA-based, web-centric, decoupled architecture.  To give an idea of the scale, the program is in two phases, with phase one already largely delivered, yet it is scheduled to run through 2025!

As mentioned, the program is split into two segments to deliver capabilities and get buy-in from the wider FAA.

–          Segment 1 – implemented a set of federated services, some messaging and SOA concepts, but no common infrastructure.

–          Segment 2 – common infrastructure – more agile, with the project effectively creating a message bus for the whole system.

The project team was aided by the creation of a wiki and a COTS (commercial off the shelf) software repository.

They have also been asked to assess the cloud – there is a presidential directive to ‘do’ cloud computing.  They are performing a benefits analysis covering everything from the operational to the strategic.

Key considerations are that the cloud must not compromise the NAS, and that security is paramount.

The cloud strategy is defined, and they are in the process of developing recommendations.  It is likely that the first systems to move to the cloud will be supporting and administrative systems, not key command and control systems.

The second keynote was about cloud interoperability and came from the Open Group.  Much of this was taken up with who the Open Group are and what they do.  Have a look at their website if you want to know more;

http://www.opengroup.org/

Outside of this, the main message of the talk was the need for improved interoperability between different cloud providers.  This would make it easier to host systems across vendors and improve the ability of customers to change providers.

As a result, improved interoperability would also aid wider cloud adoption – interoperability is one of the keys to the success of the cloud!

The third keynote was titled ‘The API economy is here: Facebook, Twitter, Netflix and YOUR IT enterprise’.

API refers to Application Programming Interface, and a good description of what this refers to can be found on Wikipedia here;

http://en.wikipedia.org/wiki/Application_programming_interface

The focus of this keynote was that by making their APIs public, and by making use of public APIs, businesses can help drive innovation.

Web 2.0 – lots of technical innovation led to Web 2.0, which in turn enabled human innovation via the game changer that is the open API: reusable components that can be used, accessed and built on by anyone.  Add to the mix the massive, always-on user base of smartphone users – with more power in their pockets than was needed to put Apollo on the moon – and the opportunity to capitalise on open APIs is huge.  As an example, there are currently over 1.1 million distinct apps across the various app stores!

Questions for you to consider;

1. How do you unlock human innovation in your business ecosystem?

–          Unlock the innovation of your employees – How can they innovate and be motivated?  How can they engage with the human API?

–          Unlock the potential of your business partner or channel sales community; e.g. Amazon web services – merchants produce, provide and fulfil goods orders, while Amazon provides the framework to enable this.

–          Unlock the potential of your customers; e.g. IFTTT (If This Then That), which has put workflow in front of many of the available APIs on the internet.

2. How to expand and enhance your business ecosystem?

–          Control syndication of brand – e.g. the Facebook ‘Like’ button – everyone knows what this is, and every user has to use the same standard button.

–          Expand breadth of system – e.g. Netflix used to be just website video on demand; it is now available on many platforms – consoles, mobile, tablet, smart TV, PC etc.

–          Standardise experience – e.g. Kindle or Netflix – you can watch or read on one device, stop, and pick up from the same place on another device.

–          Use APIs to create ‘gravity’ to attract customers to your service by integrating with services they already use – e.g. travel aggregation sites.

This one was a great talk with some useful thought points on how you can enhance your business through the use of open APIs.

On this day I fitted in six talks and one no-show.

These were;

Talk 1 – Cloud computing’s impact on future enterprise architectures.  Some interesting points, but a bit stuck in the past with a lot of focus on ‘your data could be anywhere’, when most vendors now provide consumers the ability to ensure their data remains in a specific geographical region.  I won’t be prioritising writing this one up, so it may or may not appear in a future post.

Talk 2 – Using the cloud in the Enterprise Architecture.  This one should have been titled ‘the Open Group and TOGAF’, with five minutes of cloud-related comment at the end.  Another one that likely does not warrant a full write up.

Talk 3 – SOA environments are a big data problem.  This was a brief talk but with some interesting points around managing log files, using Splunk and ‘big data’.  There will be a small write up on this one.

Talk 4 – Industry orientated cloud architecture (IOCA).  This talk covered the work Fulcrum have done with universities to standardise on their architectures and messaging systems to improve inter-university communication and collaboration.  This was mostly marketing for the Fulcrum work and there wasn’t a lot of detail, so this is unlikely to be written up further.

Talk 5  – Time for delivery: Developing successful business plans for cloud computing projects.  This was a great talk with a lot of useful content.  It was given by a Cap Gemini director so I expected it to be good.  There will definitely be a write up of this one.

Talk 6 – Big data and its impact on SOA.  This was another good, but fairly brief one, will get a short write up, possibly combined with Talk 3.

And there you have it – that is the overview of day two of the conference.  Looks like I have several posts to write covering the more interesting talks from the two days!

As a conclusion, would I recommend this conference?  It’s a definite maybe.  Some of the content was very good; some was either too thin or completely focussed on advertising a business or organisation.  The organisation was also poor, with three talks I planned to attend not happening and the audience left hanging rather than being informed that the speaker hadn’t arrived.

So a mixed bag, which is a shame as there were some very good parts, and I managed to get 2 free books as well!

Stay tuned for some more detailed write ups.

K

Service Technology Symposium Day 1..

So yesterday was day one of the Service Technology Symposium.  This is a two day event covering various topics relating to cloud adoption, cloud architecture, SOA (Service Orientated Architecture) and big data.  As mentioned in my last post my focus has mostly been on the cloud and architecture related talks.

I’ll use this post to provide a high level overview of the day and talks I attended, further posts will dive more deeply into some of the topics covered.

The day started well with three interesting keynotes.

The first was from Gartner, covering the impact of moving to the cloud and using SOA on architecture / design.  The main point of this talk was the need to move to a decoupled architecture to get the most from any move to the cloud.  This was illustrated via the Any to Any to Any architecture paradigm, where this is;

Any Device – Any Service – Any Data

Gartner identified a ‘nexus of forces’ driving this need to decouple system components;

–          Mobile – 24/7, personal, context aware, real time, consumer style

–          Social – Activity streams, Personal intelligence, group sourcing, group acting

–          Information – variety, velocity, volume, complexity

–          Cloud services

In order to achieve this, the following assumptions must be true: all components are independent and autonomous, they can live anywhere (on premise or in the cloud), and applications are decoupled from services and data.

They also highlighted the need for a deep understanding of the SOA principles.

The second keynote speech was from the European Space Agency on their journey from legacy applications and development practices to SOA.  The talk was titled ‘Vision to reality; SOA in space’.

They highlighted 4 drivers for their journey; Federation – Interoperability – Alignment to changing business needs / requirements (agility) – Reduce time and cost.

And identified realising these drivers using SOA, and standards as outlined below;

Federation – SOA, Standards

Interoperability – SOA, Standards

Alignment to business needs – SOA, Top Down and Bottom up

Reduce costs – Reuse; SOA, Incremental development

Overall this was an interesting talk and highlighted a real world success story for SOA in a very complex environment.

The third keynote was from NASA Earth Science Data Systems.  This provided an overview of their use of SOA, the cloud and semantic web technologies to aid their handling of ‘big data’ and complex calculations.  They have ended up with a globally diverse hybrid cloud solution.

As a result of their journey to their current architecture they found various things worthy of highlighting as considerations for anyone looking to move to the cloud;

–          Understand the long term costs of cloud storage (cloud more expensive for their needs and data volumes)

–          Computational performance needed for science – understand your computational needs and how they will be met

–          Data movement to and within the cloud – Data ingest, data distribution – how will your data get to and from the cloud and move within the cloud?

–          Process migration – moving processes geographically closer to the data

–          Consider hybrid cloud infrastructures, rather than pure cloud or pure on premises

–          Security –  always a consideration, they have worked with Amazon GovCloud to meet their requirements

To aid their move to SOA and the cloud, NASA created various working groups – such as – Data Stewardship, Interoperability, semantic technologies, standards, processes etc.

This has been successful for them so far, and currently NASA Earth Sciences make wide use of SOA, Semantic technologies and the cloud (esp. for big data).

The day then moved to seven separate tracks of talks, which turned out for me to be somewhat of a mixed bag.

Talk 1 was titled ‘Introducing the cloud computing design patterns catalogue’.  This is a relatively new project to create re-usable design patterns for moving applications and systems to the cloud.  The project can be found here;

www.cloudpatterns.org

Unfortunately the intended speaker did not arrive so the talk was just a high level run through the site.  The project does look interesting and I’d recommend you take a look if you are involved in creating cloud based architectures.

The second talk was supposed to be ‘A cloud on-boarding strategy’, however the speaker did not turn up, and the organisers had no idea whether he was coming or not, wasting a lot of people’s time.  While it’s outside the organisers’ control whether someone arrives, they should have been aware the speaker had not registered and let us know, rather than the 45 minutes of ‘is he, isn’t he, we just have no idea’ that ensued.

The third talk was supposed to be ‘developing successful business plans for cloud computing projects’.  This was again cancelled due to the speaker not arriving.

Talk 2 (talks numbered by my attendance) was a Gartner talk titled ‘Building Cloudy Services’.  This was an interesting talk that I’ll cover in more depth in a following post.

Talks three to five were also all interesting and will be covered in some more depth in their own posts.  They had the below titles;

Talk 3 was titled ‘HPC in the cloud’

Talk 4 was titled ‘Your security guy knows nothing’

Talk 5 was titled ‘Moving applications to the cloud’

The final talk of the day was titled ‘Integration, are you ready?’  This was however a somewhat misleading title.  The talk was from a cloud ESB vendor and was basically just an advertisement for their product and how great it was for integration, not about integration in general.  Not what you expect from a paid-for event.  I’ll not mention their name, other than to say they seem to have been inspired by a piece of peer-to-peer software.  Disappointing.

Overall, despite some organisational hiccups and a lack of vetting of at least one vendor’s presentation, day one was informative and interesting.  Look out for more detailed follow up posts over the next few days.

K

ISEB Enterprise and Solutions Architecture – update

Following on from my previous post, I can confirm that the exam was pretty easy, having achieved a reasonable passing mark after completing it in around 25 minutes.

I have yet to see many job specs that require this certification, so I don’t know how CV-enhancing it really is.  However, many job specs want knowledge of or familiarity with architecture frameworks such as TOGAF and Zachman; if you are not already fairly familiar with these, then this course does provide a good overview and comparison of some frameworks.

Overall my assessment of the course / exam is as before – I think it is well worthwhile from the point of view of getting an overview of various architecture frameworks and the terminologies used, as well as meeting people from a variety of business backgrounds.  This should assist with any requirement for knowledge of architecture frameworks / methodologies your current or future roles have.  The caveat in terms of career value is that the certification itself seems to be in very low demand.

K

Partitioning Architecture for Simplicity

One of the key learnings from the book ‘Simple Architectures for Complex Enterprises’ is around partitioning of functions in order to simplify the overall architecture.

Partitioning in this sense is a mathematical term for splitting up the area in question into sections (partitions) such that any item is in one and only one partition, and that every item in a given partition shares the feature or features that define membership of that partition.

An example would be dividing store stock by price, e.g. items at £5, items at £10 and items at £15.  All items in the ‘items at £5’ partition would share the property of costing £5 and be found only in that partition and no other.
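To make the definition concrete, here is a minimal Python sketch of the store example (my own illustration, not from the book – the item names are hypothetical).  It groups stock by price and then checks the two properties that make the grouping a true partition: every item lands in exactly one partition, and every item in a partition shares the defining feature:

```python
from collections import defaultdict

def partition_by(items, key):
    """Group items into partitions keyed by a shared feature."""
    partitions = defaultdict(list)
    for item in items:
        partitions[key(item)].append(item)
    return dict(partitions)

# Hypothetical stock list: (name, price in £)
stock = [("mug", 5), ("plate", 10), ("vase", 15), ("bowl", 10)]
by_price = partition_by(stock, key=lambda item: item[1])

# Property 1: every item appears in exactly one partition
assert sum(len(p) for p in by_price.values()) == len(stock)

# Property 2: all items in a partition share the defining feature
assert all(price == key for key, part in by_price.items()
           for _, price in part)

print(by_price[10])  # → [('plate', 10), ('bowl', 10)]
```

Any key function works here – the point of the book’s argument is that for architectures the choice of key matters far more than it does for stock taking.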

Partitions can be chosen arbitrarily as per the above example (which may be a valid way for a store to partition its goods for stock taking), but when talking about technical architectures for a business, more thought needs to go into how to partition the architecture – both to enable simplification and to create valid and usable partitions.

Within an architecture it transpires that the best way (or one of the best ways) to partition a system is to look at the functionality of components and then assess whether a component’s functionality is autonomous with regard to other components, or synergistic.  E.g. do the components in question have a mutual dependency on each other?  If so, they have a synergistic relationship; if they do not have this mutual dependency, they can be considered autonomous of each other.

By separating an enterprise architecture up in this way we end up with a group of partitions where all components in each partition have synergistic relationships to each other, but components in separate partitions are autonomous in relation to each other.

In addition to likely being the best way to partition an architecture for simplicity while maintaining a manageable number of partitions, this method also translates well to the use of Service Orientated Architectures (SOA).

K

ISSAP – Information Systems Security Architecture Professional

So, I recently received confirmation from the ISC2 (International Information Systems Security Certification Consortium) that I passed the ISSAP exam.   This is a secure architecture concentration in addition to the CISSP (Certified Information Systems Security Professional) certification.

I believe this should be a worthwhile addition to my CISSP and of course my CV, as well as helping progress my current role, but I felt I should write a post about my preparation for the exam.

As with the CISSP (Certified Information Systems Security Professional) the best way to be prepared is to have a solid grounding in the subject matter – e.g. IT security and technical / solutions architecture.  Indeed several years of industry experience is a prerequisite for obtaining these certifications.

Also as with the CISSP, I chose to cover off the bulk of the revision by using the ISC2 recommended course text.  For the CISSP I used the well-regarded Shon Harris ‘CISSP All in One Guide’, which was well written and very comprehensive.

For the ISSAP I used the ISC2 Official Study Guide to the CISSP-ISSAP.  Currently this is the only book specifically for the ISSAP exam that claims to cover all aspects of the exam.  Personally I found this book to be very badly written and hard to read.  The first chapter must have used the phrase ‘Confidentiality Integrity Availability’ in almost every sentence – yes, we all know that CIA is important and what we are aiming for, but there is no need to repeat it so often.

Other sections of the book only skimmed over areas that were quite heavily covered in the exam.

In short, if you did not already have a very solid grounding and experience in the areas covered by the exam, this official guide would not be anywhere near enough to pass.  Obviously the ISC2 may argue that you are supposed to have industry experience, but this does not necessarily include all the areas covered in the exam, such as specific components of the common body of knowledge or other specific standards.

If you are a CISSP involved in designing secure architectures then this certainly seems like a worthwhile certification to go for.  I would advise doing some supplementary reading covering the Common Body of Knowledge and something like ‘Enterprise Security Architecture’ along with of course a solid background in both security and architecture.

As an aside I am a firm believer that study and / or involvement in IT related work such as creating white papers, contributing to open source etc. is a great way to not only improve your skills and knowledge, but also essential to show current and future employers that you are genuinely passionate about what you do rather than it just being a job.

K

Simplicity

In preparation for doing the Enterprise and Solution Architecture course and exam from the Chartered Institute for IT I have started reading the book ‘Simple Architectures for Complex Enterprises’ by Roger Sessions from Microsoft press.

While this is primarily a book focused on solution and enterprise architecture, the main point it focuses on is the often overlooked one of simplicity.  The basic premise is that the simplest solution that meets the requirements is the best solution.

The vast majority of IT projects, even those that appear relatively straightforward, run over time or over budget, or both.  This is despite rigorous project processes (e.g. SDLC) and well understood architectural frameworks (e.g. Zachman, TOGAF).

The reason for this is that none of the project processes or architectural frameworks directly address complexity.  They do provide much needed rigour around how projects are run, and ensure that architects and the business can talk the same language via the use of agreed frameworks, both of which add great value, but neither of which prevents unnecessary complexity from creeping into solutions.

In addition to both increased cost and time to delivery, overly complex solutions are much harder to;

Maintain – which component is causing the issue when troubleshooting?  Will patching one part impact others?

Secure – Simple solutions are easy to test and make secure from the outset; complex solutions are likely insecure from the outset and near impossible to fully understand and secure further down the line.

A further post will follow covering some of the techniques outlined in the book around understanding complexity and eliminating it from solutions to even complex problems.

In the meantime, keep it simple, and remember: just because your business or the problem you are trying to solve is complex, that does not mean the solution needs to be complicated!

K

PCI-DSS Virtualisation Guidance

In what was obviously a response to my recent blog post stating more detailed guidance would be helpful (yes I am that influential!) the ‘PCI Security Standards Council Virtualisation Special Interest Group’ have just released the ‘PCI DSS Virtualisation Guidelines’ Information Supplement.

This can be found here;

https://www.pcisecuritystandards.org/documents/Virtualization_InfoSupp_v2.pdf

This is a welcome addition to the PCI-DSS as it makes the requirements for handling card data in a virtual environment much clearer.  The use of the recommendations in this document, along with the reference architecture linked to in my previous post, will provide a solid basis for designing a PCI-DSS compliant virtual environment.

The document itself is in 3 main sections. These comprise;

– ‘Virtualisation Overview’ – outlines the various components of a virtual environment, such as hosts, hypervisor, guests etc., and under what circumstances they fall within the scope of the PCI-DSS.

– ‘Risks for Virtualised Environments’ – outlines the key risks associated with keeping data safe in a virtual environment, including the increased attack surface of having a hypervisor, multiple functions per system, in-memory data potentially being saved to disk, and guests of different trust levels on the same host, along with procedural issues such as a potential lack of separation of duties.

– ‘Recommendations’ – this section is the meat of the document and will be of main interest to most of the audience, as it details the PCI’s recommended actions and best practices to meet the DSS requirements.  It is split into four sections;

– General – covering broad topics such as evaluating risk, understanding the standard, restricting physical access, defence in depth, hardening etc.  There is also a recommendation to review other guidance such as that from NIST (National Institute of Standards and Technology), SANS (SysAdmin Audit Network Security) etc. – generally good advice for any situation where a solid understanding of how to secure a system is required.

– Recommendations for Mixed Mode Environments –

This is a key section for most businesses, as the reality for most of us is that a mixed mode environment (where guests in scope of PCI-DSS and guests not hosting card data reside on the same hosts and virtual environment via acceptable logical separation) is the best option in order to gain the maximum benefits from virtualisation.  This section is rather shorter than expected, with little detail other than many warnings about how difficult true separation can be.  On a brighter note, it does clearly say that as long as separation of PCI-DSS guests and non-PCI-DSS guests can be configured (and, I would imagine, audited), then this mode of operating is permitted.  Thus by separating the virtual networks and segregating the guests into separate resource pools, along with the use of virtual IPS appliances and likely some sort of auditing (e.g. a netflow monitoring tool), it should be very possible to meet the DSS requirements in a mixed mode virtual environment.

– Recommendations for Cloud Computing Environments –

This section outlines various cloud scenarios such as Public / Private / Hybrid, along with the different service offerings such as IaaS (Infrastructure as a Service), PaaS (Platform as a Service) and SaaS (Software as a Service).  Overall it is highlighted that in many cloud scenarios it may not be possible to meet PCI-DSS requirements, due to the complexities around understanding where the data resides at all times, multi-tenancy etc.

– Guidance for Assessing Risks in Virtual Environments –

This is a brief section outlining areas to consider when performing a risk assessment; these are fairly standard and include defining the environment and identifying threats and vulnerabilities.

Overall this is a useful step forward for the PCI-DSS as it clearly shows that the PCI are moving with the times and understanding that the use of virtual environments can indeed be secure providing it is well managed, correctly configured and audited.

If you want to make use of virtualisation for the benefits of consolidation, resilience, management etc., and your environment handles card data, then this, along with the aforementioned reference architecture, should be high on your reading list.

K