Service Technology Symposium 2012 – Talks update 3

Cloud computing’s impact on future enterprise architectures 

This talk was fairly light and I didn’t take many notes, but there were a few points worth noting;

Definitions and boundaries are changing.  Instead of the defined boundaries we are used to around traditional architectures, whether hosted locally or at a data-centre, we are moving to much more fluid and interconnected architectures.  Consider personal cloud, private cloud, hybrid cloud, extended virtual data-centres, consumerism, BYOD etc.  The cloud creates different, co-existing architectural environments based on combinations of these models.

Consider why you should move to the cloud, and which characteristics are important for your organisation, such as;

–          Elastically scalable

–          Self service

–          Measured services

–          Multi-tenancy

–          Virtualised and dynamic

–          Reliability (SLAs, what happens when there are issues etc.)

–          Economic benefits (cost reduction – TCO, and / or better resiliency)

Do you understand the potential risks?

–          What are the security roles and responsibilities?

  • IaaS – you
  • BPaaS (business process as a service) – them
  • Sliding scale from IaaS – PaaS – SaaS – BPaaS

–          Where is your data?

  • Your business and regulatory requirements
  • Jurisdictional rules – who can access your data
    • Legal / jurisdictional issues amplified

For me some of this talk was outdated, with a lot of focus on ‘where is your data’.  While that is a key question, there was too much emphasis on the idea that your data could be anywhere in the world with global CSPs, when most big players now offer guarantees that your data will stay within defined regions if you want it to.

So, what does this mean for your ‘future’ cloud based enterprise architecture principles, concepts etc.?

–          Must standardise on ‘shared nothing’ concept

–          Standardise on loosely coupled services

–          Standardise on ‘separation of concerns’

–          No single points of failures

–          Multiple levels of protection / security

–          Ease of <secure> access to data

–          Security standards to protect data

–          Centralise security policy

–          Delegate or federate access controls

–          Security and wider design patterns that are easy to adopt and work with the cloud

Combining these different architectural styles is a huge challenge.

Summary – Dealing with multiple architectures, multiple dimensions and multiple risks is a key challenge to integrating cloud  into your environment / architecture!

The slides from this talk can be downloaded here;

http://www.servicetechsymposium.com/dl/presentations/cloud_computings_impact_on_future_enterprise_architectures.pdf

———————

SOA (Service Oriented Architecture) environments are a big data problem / Big data and its impact on SOA

Outside of some product marketing for Splunk, the premise of these two talks was basically the same: that large SOA environments are complex, need a lot of monitoring, and create a lot of data.

Splunk, incidentally, is a great product for log monitoring / data collection, aggregation and analysis / correlation.  Find out more about it here; http://www.splunk.com/

SOA – great for agility, but can be complex – BPEL, ebXML, WSDL, SOAP, ESB, XML, BPM, UDDI, composition, loose coupling, orchestration, data services, business processes, XML Schema, registry etc.  This can generate a huge amount of disparate data that needs to be analysed in order to understand the system.  Both machine generated and application generated data may need to be aggregated.
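As a toy illustration of the kind of log aggregation and correlation these talks discussed (the log format, service names and fields here are entirely invented for the example), correlating disparate service logs can start as simply as parsing and counting events per service:

```python
from collections import Counter

# Hypothetical log lines from two SOA services (format invented for this sketch)
log_lines = [
    "2012-09-25T10:00:01 order-service ERROR timeout calling payment-service",
    "2012-09-25T10:00:02 payment-service WARN slow response",
    "2012-09-25T10:00:03 order-service ERROR timeout calling payment-service",
]

def errors_by_service(lines):
    """Count ERROR-level events per service name (second field)."""
    counts = Counter()
    for line in lines:
        _, service, level, *_ = line.split(" ", 3)
        if level == "ERROR":
            counts[service] += 1
    return counts

print(errors_by_service(log_lines))  # Counter({'order-service': 2})
```

Tools like Splunk do this at scale, across formats and sources, with indexing and search on top.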

SOA based systems can themselves generate big data!

How do we define big data?

–          Volume – large

–          Velocity – high

–          Variety – complex (txt, files, media, machine data)

–          Value – variable signal to noise ratio

We all know large web based enterprises such as Google and Facebook have to deal with big data, but should you care?  Many enterprises now have to understand and deal with big data, for example;

  • Retail and web transaction data
  • Sensor data
    • GPS in phones
    • RFID
    • NFC
    • SmartMeters
    • Etc.
  • Log file monitoring and analysis
  • Security monitoring

The talks had the following conclusions;

–          Big data has reached the enterprise

–          SOA platforms are evolving to leverage big data

–          Service developers need to understand how to insert and access data in Hadoop

–          Time-critical conditions can be detected as data is inserted into Hadoop using event processing techniques – Fast Data

–          Expect big data and fast data to become ubiquitous in SOA environments – much like RDBMS are already.

So I’d suggest you become familiar with what big data is and the tools that can be used to handle and manage it, such as Hadoop, MapReduce and Pig (these are relatively big topics in themselves and may be covered at a later date).
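To make the MapReduce model concrete, here is a minimal single-process sketch of the map / shuffle / reduce phases using the canonical word count example – real Hadoop distributes each phase across many nodes, but the shape of the computation is the same:

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc):
    """Map: emit a (key, value) pair for each word."""
    return [(word, 1) for word in doc.split()]

def shuffle(pairs):
    """Shuffle: group all values by key, as Hadoop does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: combine each key's values into a final result."""
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data big soa", "soa data"]
pairs = chain.from_iterable(map_phase(d) for d in docs)
counts = reduce_phase(shuffle(pairs))
print(counts)  # {'big': 2, 'data': 2, 'soa': 2}
```

Pig and Hive provide higher level languages that compile down to chains of exactly these map and reduce steps.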

The slides from these talks can be downloaded from the below locations;

http://www.servicetechsymposium.com/dl/presentations/soa_environment_are_a_big_data_problem.pdf

http://www.servicetechsymposium.com/dl/presentations/big_data_and_its_impact_on_soa.pdf

—————-

Time for delivery; Developing successful business plans for cloud computing projects 

This talk covered some great points around areas to consider when planning cloud based projects.  I’ll capture as much as I managed to make notes on, as there was a lot of content for this one.  I’d definitely recommend checking out the slides!

Initial things to consider include;

–          Defining the link between your business ecosystem and the available types of cloud-enabled technologies

–          Identifying the right criteria for a ‘cloud fit’ in your organisation. (operating model and business model fit)

–          Strategies and techniques for developing a successful roadmap for the delivery of cloud related cost savings and growth.

Consider the outside-in approach ( http://en.wikipedia.org/wiki/Outside%E2%80%93in_software_development ) which is enabled by four of the current game changing capabilities / trends;

–          Mobility – any connection, any device, any service

–          Social Tools – any community, any media, any person

–          Cloud – computing resources, apps and services, on demand

–          Big Data – real time information and intelligence

In a nice link with the talk on HPC in the cloud, this one also highlighted the competitive step change that cloud potentially is; small companies can have big company levels of infrastructure, scalability, growth etc.  Anyone can access enterprise levels of computational power.

Cloud computing can be used to drive a cost cutting / management strategy and a growth / agility strategy.

Consider your portfolio and plans – what do you want to achieve in the next 6 months, next 12 months etc.

When looking at the cloud and moving to it, what are the benefit cases and success measures for your business?  These should be clearly defined and agreed in order for you to both plan correctly, and clearly understand if the project / migration has been a success.

What is your business model, and which cloud service business models will best fit with this?  What is the monetization strategy for your cloud migration project – operational, growth, channel etc.?  Initially cloud based projects are often driven by cost saving aspirations, however longer term benefits will likely be greater if the drivers are ‘better and faster’; cost benefits (or at least higher profits!) will follow.  To be successful, you must decide and be clear on your strategy!

As with all projects, consider your buy vs. build options.

Consider also;

Is IT a commodity or something you can instil with IP?  Depending on your business you will be at different places on the continuum.  Most businesses can and should derive competitive advantage by putting their skills and knowledge into their IT systems rather than using purely SaaS or COTS solutions without at least some customisation.  This of course may only be true for systems relating to your key business, not necessarily supporting and administrative systems.

Cloud computing touches many strategies – you need a complete life-cycle 360 approach.

–          Storage strategy

–          Compute strategy

–          Next gen network strategy

–          Data centre strategy

–          Collaboration strategy

–          Security strategy

–          Presence strategy

–          Application / development strategy

–          Etc.

Consider the maturity of your services and their roadmap to the cloud;

Service Management – Service integration – Service Aggregation – Service Orchestration

This talk highlights just how much there is to think about when planning to migrate to, or make use of, the cloud and cloud based services.

The talk also highlighted a couple of interesting things to consider;

Look up ‘The Eight Fallacies of Distributed Computing’ from 1993, and ‘Brewer’s Theorem’ from 2000 (published in 2002) to understand how much things have stayed the same just as much as how much they have changed!

https://blogs.oracle.com/jag/resource/Fallacies.html

http://en.wikipedia.org/wiki/CAP_theorem

Also consider your rate of innovation – How can you speed up your / your businesses rate of innovation?

The slides from this talk can be downloaded from here;

http://www.servicetechsymposium.com/dl/presentations/time_for_delivery_developing_successful_business_plans_for_cloud_computing_projects.pdf

K

Service Technology Symposium 2012 – Talks update 2

Your security guy knows nothing

This talk focused on the changes to security / security mindsets required by the move to cloud hosted or hybrid architectures.  The title was mainly as an attention grabber, but the talk overall was interesting and made some good points around what is changing, but also the many concerns that are still basically the same.

Security 1.0

–          Fat guy with keys; IT focused; “You can’t do that”; Does not understand software development.

Security 2.0

–          Processes and gates; Tools and people; Good for Building; Not as good for acquiring / mashing

Traditional security wants certainty –

–          Where is the data? – in transit, at rest, and in use.

–          Who is the user?

–          Where are our threats?

What happens to data on the hard drives of commodity nodes when a node crashes or the container is shipped back to the manufacturer from the CSP?  (data at rest etc.)  The new world is more about flexible controls and policies than some of the traditional, absolute certainties.

Security guys want to manage and understand change;

–          Change control process

–          Risk Management

–          Alerts when things change that affect the risk profile

Whole lifecycle – security considered from requirements onwards, not tacked onto the end of the process.  This for me is a key point for all security functions and all businesses.  If you want security to be ingrained in the business, effective, and seen as an enabler of doing things right rather than a blocker at the end, it must always be incorporated into the whole lifecycle.

Doing it right – Business – Development – Security – working together.

Business;

–          Render the Implicit Explicit

  • Assets
  • Entitlements
  • Goals
  • Controls
  • Assumptions

Development;

–          Include security in design

  • Even in acquisition
  • Even in mash ups

–          Include security in requirements / use cases

–          Identify technical risks

–          Map technical risks to business risks (quantify in money where possible)

–          Trace test cases

  • Not just to features
  • But also to risks (non functional requirements!)

Security;

–          Provide fodder (think differently, black hat / hacker thinking)

–          Provide alternative reasoning

–          Provide black hat mentality

–          Learn to say “yes”

–          Provide solutions, not limitations!

Goal – Risk management

Identify how the business is affected?

–          Reputation

–          Revenue

–          Compliance

–          Agility

What can techies bring to the table?

–          Estimates of technical impact

–          Plausible scenarios

–          Black Hat thinking

 Compliance – does not equal – Security!

–          Ticking boxes – does not equal – Security!

So the key take away points from this are that regardless of the changes to what is being deployed –

 – Work together

– Involve security early

– Security must get better at saying ‘yes, here’s how to do it securely’ rather than ‘no’

No PDF of this presentation is currently available.

————————

Moving applications to the cloud

This was another Gartner presentation that covered some thoughts and considerations when looking at moving existing applications / services to the cloud.

Questions;

–          What are our options?

–          Can we port as is, or do we have to tune for the cloud (how much work is involved)?

–          Which applications / functions do we move to the cloud?

Choices;

–          Which vendor?

–           IaaS, PaaS, SaaS…?

–          How – rehost – refactor – revise – rebuild – replace – which one?

  • Rehost or replace are the most common, quickest and likely cheapest / easiest

You need a structured approach to cloud migrations, likely incorporating the following three stages;

–          Identify candidate apps and data

  • Application and data-portfolio management
  • Apps and data rationalisation
  • Legacy modernisation

–          Assess suitability

  • Based on cloud strategy goals
  • Define an assessment framework
    • Risk, business case, constraints, principles

–          Select migration option

  • rehost – refactor – revise – rebuild – replace

This should all be in the context of;

–          What is the organisation’s cloud adoption strategy?

–          What is the application worth? What does it cost?

–          Do we need to modernise the application? How much are we willing to spend?

In order to make decisions around what to move to the cloud and how to move it you should define both your migration goals and priorities which should include areas such as;

–          Gain Agility

  • Rapid time to market
  • Deliver new capabilities
  • Support new channels (e.g. Mobile)

–          Manage costs

  • Preserve capital
  • Avoid operational expenses
  • Leverage existing investments

–          Manage resources

  • Free up data centre space
  • Support scalability
  • Gain operational efficiencies

Some examples of what we mean by rehost / refactor / revise / rebuild / replace;

Rehost – migrate the application as-is onto IaaS

Refactor – onto PaaS – make changes to work with the PaaS platform and leverage PaaS platform features

Revise – onto IaaS or PaaS – at least make more cloud aware for IaaS, make more cloud and platform aware for PaaS

Rebuild – Rebuild on PaaS – start from scratch to create new, optimised application.

Note – some of these (rebuild definitely, refactor sometimes) will require data to be migrated to a new format.

Replace – with SaaS – easy in terms of code; data migration is still needed, and business processes and applications will change (large resistance from users is possible).

The presentation ended with the following recommendations;

–          Define a cloud migration strategy

–          Establish goals and priorities

–          Identify candidates based on portfolio management

–          Develop assessment framework

–          Select migration options using a structured decision approach

–          Be cognizant of technical debt (time to market more important than quality / elegant code!)

  • Do organisations ever plan to pay back ‘technical debt’?  Technical debt refers to corner cutting / substandard development that is initially accepted to meet cost / time constraints.

A pdf of this presentation can be downloaded from here;

http://www.servicetechsymposium.com/dl/presentations/moving_applications_to_the_cloud-migration_options.pdf

Overall another good presentation with very sensible recommendations covering areas to consider when planning to migrate applications and services to the cloud.

K

Service Technology Symposium 2012 – Talks update 1

Building Cloudy Services;

The main premise of this talk was that you need to understand the cloud paradigm when designing services that you plan to run in the cloud.  Everything you do in the cloud costs money, so minimise unnecessary actions and transactions.

Why is the cloud an attractive solution? – Cloud computing characteristics..

–          Uses shared and dynamic infrastructure

–          Elastic and scalable  (horizontally NOT vertically!)

–          On demand as a service (self-service)

–          Meters consumption

–          Available across common networks

Features you should consider for any services that will be hosted in the cloud, where + indicates patterns / beneficial designs, and – indicates anti-patterns / designs that will be more challenging to run successfully in the cloud;

Failure aware;

–          Motivation – Hardware will fail, software will fail, people will make mistakes

–          + stateless services, redundancy, idempotent operations

–          – stateful services, single points of failure
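As a sketch of why idempotent operations matter in a failure aware design (the service and method names here are invented for illustration): if a caller times out and retries, an operation keyed by a client-supplied request id produces the same result rather than a duplicate:

```python
class ProvisioningService:
    """Toy idempotent 'create' operation: retries with the same
    request id return the original result instead of duplicating work."""

    def __init__(self):
        self._results = {}

    def create_vm(self, request_id, spec):
        if request_id in self._results:   # a retry - return the prior outcome
            return self._results[request_id]
        vm = {"id": "vm-%d" % (len(self._results) + 1), "spec": spec}
        self._results[request_id] = vm
        return vm

svc = ProvisioningService()
first = svc.create_vm("req-42", {"cpus": 2})
retry = svc.create_vm("req-42", {"cpus": 2})   # e.g. the client timed out and retried
assert first is retry                          # no duplicate VM created
```

In the cloud, where transient failures are expected, this retry-safety is what lets stateless clients simply try again.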

Event driven;

–          Motivation – No busy waiting, less synchronisation

–          + everything is a call-back, autonomous services

–          – chatty synchronous interactions, guaranteed latency
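A minimal sketch of the ‘everything is a call-back’ idea (names invented for the example): instead of busy-waiting on a condition, services register handlers and are invoked only when an event is published:

```python
class EventBus:
    """Tiny publish/subscribe bus - no polling, handlers run on publish."""

    def __init__(self):
        self._handlers = {}

    def subscribe(self, event, handler):
        self._handlers.setdefault(event, []).append(handler)

    def publish(self, event, payload):
        for handler in self._handlers.get(event, []):
            handler(payload)

bus = EventBus()
received = []
bus.subscribe("order.placed", lambda order: received.append(order))
bus.publish("order.placed", {"id": 1})
print(received)  # [{'id': 1}]
```

A real cloud design would put a message queue or broker behind this interface, but the consumption model for the services is the same: react, don’t poll.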

Parallelizable;

–          Motivation – horizontally scalable

–          + stateless, workload decomposition, REST/WOA (not SOAP)

–          – SPOB (single point of bottleneck), synchronous interactions

Automated;

–          Motivation – Easily recreated and installed (automated provisioning and scaling)

–          + re-startable, template driven

–          – complex dependencies, hardware affinity

Consumption awareness;

–          Motivation- Efficient resource usage

–          + fine grained modular design, multi-tenancy

–          – monolithic design, single occupancy

The talk concluded with the following recommendations;

–          Stop treating cloud like a generic shipping container – be cloud aware

–          Match your goals for cloud computing to essential characteristics

–          Promote patterns among the development team (parallelisation etc.)

–          Hunt down anti-patterns in code reviews

–          Evaluate IaaS and PaaS providers based on their support for cloud aware patterns

–          Balance the patterns

Keeping these points in mind should help ensure the services and designs you migrate to the cloud have a better chance of success.

PDF of the presentation can be found here;

http://www.servicetechsymposium.com/dl/presentations/building_cloudy_services.pdf

———————————-

High Performance Computing in the Cloud

This talk was one of my favourites, and something I find very interesting.  Traditionally High Performance Computing (HPC) has been the preserve of large corporations, research departments or governments, due to the size and complexity of the computing environments required to perform HPC.  With the advent of HPC in the cloud, access to this level of compute resource is becoming much more widespread.  Both the cost of entry and the expertise required to set up this type of environment are dropping dramatically.  Cloud service providers are setting up both traditional CPU based HPC offerings, and the newer, potentially vastly more powerful, GPU (graphics processor) based HPC offerings.

Onto the talk;

Cloud HPC can bring HPC levels of computational power to normal businesses for things like month / year level processing, and risk calculating etc.

In order to think about how you can use HPC, look to nature for inspiration – the longest chain – how small are the pieces a process can be broken down into in order to parallelise it?
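The ‘how small can the pieces be’ question is really one of workload decomposition. A minimal sketch (pure illustration – real HPC would use MPI or a Hadoop-style framework): split a job into independent chunks, farm them out to worker processes, then combine the partial results:

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """One independent chunk of work - no shared state with other workers."""
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=4):
    """Decompose [0, n) into chunks, compute in parallel, combine results."""
    step = n // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum(1000))  # 499500 == sum(range(1000))
```

The smaller and more independent the chunks, the more nodes (or GPU cores) the work can spread across – which is exactly why chatty, interdependent steps limit HPC scaling.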

–          Traditional HPC – message passing (MPI), head node and multiple compute nodes, backed by shared storage.  Scale issues – storage performance (use expensive bits)

–          Newer HPC, more ‘Hadoop’ type model, data stored on compute (worker) nodes – they then just send back their results to the master node(s).

  • Look at things like Hive and Pig that sit atop Hadoop.  More difficult to set up than MPI.

–          Newest HPC – GPU – simpler cores, but many of them.

  • CPU – ~10 cores maximum.  GPU – hundreds of cores (maybe thousands).
  • Some supercomputers are looking at 1000s of GPUs in a single computer.
  • 4.5 teraflop graphics card < $2000!!

Cloud scale vs. on premise –

–          On premise = measured by rack at a time. 

–          Cloud = lorry trailers added by simply plugging in network, cooling and power then turning them on; left until enough bits fail, then returned to the manufacturer..

Cloud = focused effort! – Cloud power is managed by the CSP while the researchers work.  No need for a huge amount of local infrastructure.

How to move to the cloud, largely as with other stuff –

–          Go all in – pure cloud. –

  • MPI cluster – just have images of the head and compute nodes and scale out.  A 10 node cluster hosted on Amazon made the Top 500 supercomputer list with minimal setup / config effort.
  • Platform as a service – e.g. Apache Hadoop based services for Windows Azure – just specify how big you want the cluster through the web interface – there is already an Excel interface, so Excel can directly use the cluster for complex calculations!

–          Go hybrid – add compute nodes from the cloud to existing HPC solution (consider – latency issues, and security issues (e.g. VPN to the cloud)).

You really don’t care about how the technology works, only how it helps you work!

Dan Rosanova, who gave the talk, has an excellent blog post with some metrics around HPC in the cloud here;  http://Danrosanova.wordpress.com/hpc-in-the-cloud

The slides from this talk can be downloaded here;

http://www.servicetechsymposium.com/dl/presentations/high_performance_computing_in_the_cloud.pdf

Final note – GPU development is currently mostly proprietary and platform specific.  Microsoft is pushing a proposed open standard that treats the CPU and GPU as ‘accelerators’ and does the abstraction at run time rather than compile time.  This would allow much greater standardisation of HPC development as it abstracts the code from the underlying processing architecture.

These are exciting times in the HPC world and I’d expect to see a lot more people / companies / research groups making use of this type of computing in the near future!

K

Service Technology Symposium Day 1..

So yesterday was day one of the Service Technology Symposium.  This is a two day event covering various topics relating to cloud adoption, cloud architecture, SOA (Service Oriented Architecture) and big data.  As mentioned in my last post, my focus has mostly been on the cloud and architecture related talks.

I’ll use this post to provide a high level overview of the day and talks I attended, further posts will dive more deeply into some of the topics covered.

The day started well with three interesting keynotes.

The first was from Gartner, covering the impact of moving to the cloud and using SOA on architecture / design.  The main point of this talk was the need to move to a decoupled architecture to get the most from any move to the cloud.  This was illustrated via the ‘any to any to any’ architecture paradigm, where this is;

Any Device – Any Service – Any Data

Gartner identified a ‘nexus of forces’ driving this need to decouple system components;

–          Mobile – 24/7, personal, context aware, real time, consumer style

–          Social – Activity streams, Personal intelligence, group sourcing, group acting

–          Information – variety, velocity, volume, complexity

–          Cloud services

In order to achieve this, the following assumptions must be true; All components independent and autonomous, they can live anywhere (on premise or in cloud), applications must be decoupled from services and data.

They also highlighted the need for a deep understanding of the SOA principles.

The second keynote speech was from the European Space Agency on their journey from legacy applications and development practices to SOA; it was titled ‘Vision to reality; SOA in space’.

They highlighted four drivers for their journey; Federation – Interoperability – Alignment to changing business needs / requirements (agility) – Reduced time and cost.

And identified how to realise these drivers using SOA and standards, as outlined below;

Federation – SOA, Standards

Interoperability – SOA, Standards

Alignment to business needs – SOA, Top Down and Bottom up

Reduce costs – Reuse; SOA, Incremental development

Overall this was an interesting talk and highlighted a real world success story for SOA in a very complex environment.

The third keynote was from NASA Earth Science Data Systems.  This provided an overview of their use of SOA, the cloud and semantic web technologies to aid their handling of ‘big data’ and complex calculations.  They have ended up with a globally diverse hybrid cloud solution.

As a result of their journey to their current architecture they found various things worthy of highlighting as considerations for anyone looking to move to the cloud;

–          Understand the long term costs of cloud storage (cloud more expensive for their needs and data volumes)

–          Computational performance needed for science – understand your computational needs and how they will be met

–          Data movement to and within the cloud – Data ingest, data distribution – how will your data get to and from the cloud and move within the cloud?

–          Process migration – moving processes geographically closer to the data

–          Consider hybrid cloud infrastructures, rather than pure cloud or pure on premises

–          Security –  always a consideration, they have worked with Amazon GovCloud to meet their requirements

To aid their move to SOA and the cloud, NASA created various working groups – such as – Data Stewardship, Interoperability, semantic technologies, standards, processes etc.

This has been successful for them so far, and currently NASA Earth Sciences make wide use of SOA, Semantic technologies and the cloud (esp. for big data).

The day then moved to seven separate tracks of talks, which turned out for me to be somewhat of a mixed bag.

Talk 1 was titled ‘Introducing the cloud computing design patterns catalogue’.  This is a relatively new project to create re-usable design patterns for moving applications and systems to the cloud.  The project can be found here;

www.cloudpatterns.org

Unfortunately the intended speaker did not arrive so the talk was just a high level run through the site.  The project does look interesting and I’d recommend you take a look if you are involved in creating cloud based architectures.

The second talk was supposed to be ‘A cloud on-boarding strategy’, however the speaker did not turn up, and the organisers had no idea if he was coming or not, which wasted a lot of people’s time.  While it’s outside the organisers’ control whether someone arrives, they should have been aware the speaker had not registered and let us know, rather than the 45 minutes of ‘is he, isn’t he, we just have no idea’ that ensued..

The third talk was supposed to be ‘developing successful business plans for cloud computing projects’.  This was again cancelled due to the speaker not arriving.

Talk 2 (talks numbered by my attendance) was a Gartner talk titled ‘Building Cloudy Services’.  This was an interesting talk that I’ll cover in more depth in a following post.

Talks three to five were also all interesting and will be covered in some more depth in their own posts.  They had the below titles;

Talk 3 was titled ‘HPC in the cloud’

Talk 4 was titled ‘Your security guy knows nothing’

Talk 5 was titled ‘Moving applications to the cloud’

The final talk of the day was titled ‘Integration, are you ready?’  This was however a somewhat misleading title.  The talk was from a cloud ESB vendor and was basically just an advertisement for their product and how great it was for integration, not about integration in general.  Not what you expect from a paid for event.  I’ll not mention their name, other than to say they seem to have been inspired by a piece of peer to peer software..  Disappointing.

Overall, despite some organisational hiccups and a lack of vetting of at least one vendor’s presentation, day one was informative and interesting.  Look out for more detailed follow up posts over the next few days.

K

TOGAF 9.1 course with Architecting-the-enterprise

Last week I spent four days on the TOGAF 9.1 (The Open Group Architecture Framework) training course presented by ‘Architecting the Enterprise’, so thought I’d provide a brief review here.

I have been thinking about becoming TOGAF certified for a while, as it seems to be becoming a bit of a de facto standard and requirement for many architecture roles.  Initially I tried reading the somewhat large and horrifically written TOGAF 9 book.  My advice, don’t..

So I approached the course with some trepidation knowing how dry the material was.  However I was pleasantly surprised!  The course was well presented, and made the material considerably more palatable than I expected based on how the book is written.

The course was split into the same basic sections as the book, covering some enterprise architectural history and overview material, the TOGAF process, the ADM (Architecture Development Method), ADM guidelines and techniques, the Architecture Content Framework, the Enterprise Continuum and tools (Star Trek fans much??), TOGAF reference models, and the TOGAF capability framework.

Overall a lot of content was covered, which included everything you need to know in order to understand and utilise the TOGAF principles at work.  All the slides presented were provided on CD, along with a revision / crib book.  As far as I can tell this should be enough to pass the exam – I’ll let you know as soon as I get round to sitting it.

As with most courses of this type, one of the key side benefits is meeting a group of people from different businesses with various views of project and architectural processes.

Regarding TOGAF, the main value for me is an overview of the process and getting to grips with the terminology; using TOGAF as a point of reference ensures architects from various backgrounds and disciplines all have a frame of reference and common language.

Regarding the course, I’d definitely recommend Architecting the Enterprise and this TOGAF course.  Even if becoming an enterprise architect is not your aim / ambition, you will find parts of TOGAF useful in most enterprises and to most architecture specialisations from business through data to technology.

I’ll provide an update on how the exam goes, likely in a few weeks..

K

 

2012 Update

I had meant to update on how my plans for the year were going around June / July, so this is a little late, but I have been pretty busy finishing the upcoming Cloud Security Alliance (CSA) – Security as a Service (SecaaS) guidance documents.  These are due for publication at the start of September – watch this space..  It has also taken longer than expected to finalise my Masters project choice, but I think I’ve got there with that one, finally!

In January I listed some goals for the year here;

Some 2012 projects / plans

So where am I with the years goals?

1. Choose a project and complete my Masters.  Project finally chosen and extended project proposal handed in.  My proposed project title is;

‘Increasing authentication factors to improve distributed systems security and privacy’

The plan is to cover the current state of distributed systems authentication and to assess how this could be improved by adding further ‘factors’ to the required authentication.  In this instance factors refer to things like ‘something you know’ such as passwords, ‘something you have’ such as a number generating token, and ‘something you are’ such as your fingerprint.  I have completed a project plan outlining how I’ll use the time between now and the hand-in date in January 2013, and I’ll keep you posted with progress.
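As a sketch of the ‘something you have’ factor, here is a minimal time-based one-time password (TOTP, RFC 6238) generator using only the Python standard library – essentially what a number generating token computes internally (a sketch only, not production code):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time step."""
    key = base64.b32decode(secret_b32)
    t = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(t) // step)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", time = 59s
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, for_time=59, digits=8))  # 94287082
```

Because the server shares the secret and the clock, it can recompute the same code – the token proves possession of the secret without ever transmitting it.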

2. Lead / co-chair the CSA SecaaS working group.  While it has been challenging to find the time and keep everyone involved working in the same direction, we are almost ready to release the next piece of work from this research group.  The next publication will be in the form of 10 implementation guidance documents covering the 10 SecaaS categories we defined last year.  These will be released on the CSA web site around the end of August; I’ll post a link once they are available.  This has certainly been a learning experience in managing the output of a very diverse set of international volunteers!

3. Become more familiar with the Xen hypervisor.  I have had limited success with this one, increasing my familiarity with virtualisation and cloud generally, and reading up on Xen.  However, I have not had a chance to set up a test environment running the open source Xen hypervisor to get properly acquainted with it.  I’ll be looking to rectify this during October, at which time I’ll provide a run-down of my thoughts on this hypervisor’s features and how easy it is to install and configure.
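For anyone planning a similar test environment, a Xen guest is typically defined in a small configuration file and started with the xl toolstack.  The fragment below is a minimal, hypothetical paravirtualised domU config; every name, path and size is a placeholder, not a recommendation.

```
# /etc/xen/testvm.cfg -- start with: xl create /etc/xen/testvm.cfg
name    = "testvm"
memory  = 1024                                   # RAM in MB
vcpus   = 2
kernel  = "/var/lib/xen/images/vmlinuz"          # PV guest kernel (placeholder path)
ramdisk = "/var/lib/xen/images/initrd.img"
disk    = ["file:/var/lib/xen/images/testvm.img,xvda,w"]
vif     = ["bridge=xenbr0"]                      # assumes a bridge named xenbr0 in dom0
```

Once created, `xl list` shows running domains and `xl console testvm` attaches to the guest console.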

4. Brush up my scripting and secure coding.  Scripting opportunities have been limited this year, and I have not had the time to create side projects outside of the office due to CSA and Masters related work.  On secure coding, I have reviewed both some code and some development practices against OWASP recommendations and the Microsoft Security Development Lifecycle (SDL), so have made some progress in this area and will follow up in a future post.

Overall, not as much progress in some areas as I had hoped, but I am reasonably happy with the CSA SecaaS and Masters progress, while also holding my own in full time employment.

As mentioned, keep an eye out for the upcoming publication of the SecaaS implementation guidance!

K

Consumerism of IT..

I have recently been asked a few times, by multiple companies, for my thoughts on the trend for consumerism of IT, and more importantly what it means for IT departments.  This is likely due to consumerism being up there among what seem to be the top three buzz terms at the moment;

– Cloud

– Consumerism of IT

– BYOD (Bring Your Own Device)

Putting cloud to one side for a moment as I like to cover that separately, consumerism of IT and BYOD are to me very linked so let’s discuss them both together.

First I’ll briefly cover what consumerism and BYOD are, then in a subsequent post I’ll give my thoughts on their current and future impacts on IT (or ICT as is now becoming the more common term) departments.

What is Consumerism of IT?

–         Consumerism of IT is concerned with the blurring of the lines between consumer and business IT devices.  Obvious examples include smartphones that can easily provide access to both personal and work emails from a single device, and tablet PCs such as the iPad that can be used for viewing and updating business presentations and emails along with consuming media and accessing the internet as a personal device.  The fact that devices like these have been driving change in the business world via their use as consumer devices is leading to the consumerism of IT.

What is BYOD?

–         BYOD refers to the moves of some businesses / IT departments to allow users to bring their own equipment, such as a laptop, rather than using company owned laptops.  As an example, this is often part of a programme where the company provides a budget for staff to purchase a laptop, with certain rules such as requiring 3 year extended support to be bought; staff can then use the laptop as both their own personal device and their business laptop.  This often also applies to other devices such as tablets and, most commonly, phones / smartphones.

While technically the two things can be taken in isolation, it is consumerism that enables BYOD in many circumstances: if smartphones couldn’t easily sync to business and personal email systems at the same time, there would be limited desire from users to make use of a BYOD phone policy.  This ability lets users carry a single phone rather than several, with obvious benefits to them, while also offering business benefits such as lower costs and reduced management overhead.

K