Service Technology Symposium 2012 – Talks update 3

Cloud computing’s impact on future enterprise architectures 

This talk was fairly light and I didn’t take many notes, but I thought there were a few points worth noting;

Definitions and boundaries are changing.  Instead of the defined boundaries we are used to around traditional architectures, whether they are hosted locally or at a data-centre, we are moving to much more fluid and interconnected architectures.  Consider personal cloud, private cloud, hybrid cloud, extended virtual data-centres, consumerisation, BYOD etc.  The cloud creates different, co-existing architectural environments based on combinations of these models.

Consider why you should move to the cloud and which of its characteristics are important for your organisation, such as;

–          Elastically scalable

–          Self service

–          Measured services

–          Multi-tenancy

–          Virtualised and dynamic

–          Reliability (SLAs, what happens when there are issues etc.)

–          Economic benefits (cost reduction – TCO, and / or better resiliency)

Do you understand the potential risks?

–          What are the security roles and responsibilities?

  • IaaS – you
  • BPaaS (Business Process as a Service) – them
  • Sliding scale from IaaS – PaaS – SaaS – BPaaS

–          Where is your data?

  • Your business and regulatory requirements
  • Jurisdictional rules – who can access your data
    • Legal / jurisdictional issues amplified

For me some of this talk was outdated, with a lot of focus on ‘where is your data?’.  While that is a key question, there was too much emphasis on the idea that your data could be anywhere in the world with global CSPs, when most big players now offer guarantees that your data will stay within defined regions if you want it to.

So, what does this mean for your ‘future’ cloud based enterprise architecture principles, concepts etc.?

–          Must standardise on ‘shared nothing’ concept

–          Standardise on loosely coupled services

–          Standardise on ‘separation of concerns’

–          No single points of failure

–          Multiple levels of protection / security

–          Ease of <secure> access to data

–          Security standards to protect data

–          Centralise security policy

–          Delegate or federate access controls

–          Security and wider design patterns that are easy to adopt and work with the cloud

Combining these different architectural styles is a huge challenge.

Summary – Dealing with multiple architectures, multiple dimensions and multiple risks is a key challenge to integrating cloud into your environment / architecture!

The slides from this talk can be downloaded here;

http://www.servicetechsymposium.com/dl/presentations/cloud_computings_impact_on_future_enterprise_architectures.pdf

———————

SOA (Service Oriented Architecture) environments are a big data problem / Big data and its impact on SOA

Outside of some product marketing for Splunk, the premise of these two talks was basically the same: that large SOA environments are complex, need a lot of monitoring and create a lot of data.

Splunk, incidentally, is a great product for log monitoring / data collection, aggregation and analysis / correlation.  Find out more about it here; http://www.splunk.com/

SOA – great for agility, but can be complex – BPEL, ebXML, WSDL, SOAP, ESB, XML, BPM, UDDI, composition, loose coupling, orchestration, data services, business processes, XML Schema, registry etc.  This can generate a huge amount of disparate data that needs to be analysed in order to understand the system.  Both machine data and application-generated data may need to be aggregated.

SOA based systems can themselves generate big data!

How do we define big data?

–          Volume – large

–          Velocity – high

–          Variety – complex (txt, files, media, machine data)

–          Value – variable signal to noise ratio

We all know large web based enterprises such as Google and Facebook etc. have to deal with big data, but should you care?  Many enterprises are now having to understand and deal with big data for example;

  • Retail and web transaction data
  • Sensor data
    • GPS in phones
    • RFID tags
    • NFC
    • SmartMeters
    • Etc.
  • Log file monitoring and analysis
  • Security monitoring

The talks had the following conclusions;

–          Big data has reached the enterprise

–          SOA platforms are evolving to leverage big data

–          Service developers need to understand how to insert and access data in Hadoop

–          Time-critical conditions can be detected as data is inserted into Hadoop, using event processing techniques – ‘Fast Data’

–          Expect big data and fast data to become ubiquitous in SOA environments – much like RDBMS are already.

So I’d suggest you become familiar with what big data is, and with the tools that can be used to handle and manage it, such as Hadoop, MapReduce and Pig (these are relatively big topics in themselves and may be covered at a later date).
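
To make the map / reduce idea a little more concrete, here is a minimal word-count sketch in Python written in the MapReduce style.  It is my own illustration rather than anything from the talks, and it runs locally; in a real Hadoop job the map and reduce steps would typically be separate scripts or classes submitted to the cluster (e.g. via Hadoop Streaming), and the sample log lines are invented.

```python
# Minimal word-count in the MapReduce style: a map step that emits
# (word, 1) pairs and a reduce step that sums the counts per word.
from collections import defaultdict

def map_phase(lines):
    """The 'map' step - emit a (word, 1) pair for every word seen."""
    for line in lines:
        for word in line.split():
            yield word, 1

def reduce_phase(pairs):
    """The 'reduce' step - sum the counts for each distinct word."""
    totals = defaultdict(int)
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

if __name__ == "__main__":
    sample_logs = [            # stand-in for the machine data discussed above
        "GET /index.html 200",
        "GET /index.html 404",
        "POST /login 200",
    ]
    print(reduce_phase(map_phase(sample_logs)))
```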

The slides from these talks can be downloaded from the below locations;

http://www.servicetechsymposium.com/dl/presentations/soa_environment_are_a_big_data_problem.pdf

http://www.servicetechsymposium.com/dl/presentations/big_data_and_its_impact_on_soa.pdf

—————-

Time for delivery; Developing successful business plans for cloud computing projects 

This talk covered some great points around areas to consider when planning cloud based projects.  I’ll capture as much as I managed to make notes on, as there was a lot of content for this one.  I’d definitely recommend checking out the slides!

Initial things to consider include;

–          Defining the link between your business ecosystem and the available types of cloud-enabled technologies

–          Identifying the right criteria for a ‘cloud fit’ in your organisation. (operating model and business model fit)

–          Strategies and techniques for developing a successful roadmap for the delivery of cloud related cost savings and growth.

Consider the outside-in approach ( http://en.wikipedia.org/wiki/Outside%E2%80%93in_software_development ) which is enabled by four of the current game changing capabilities / trends;

–          Mobility – any connection, any device, any service

–          Social Tools – any community, any media, any person

–          Cloud – computing resources, apps and services, on demand

–          Big Data – real time information and intelligence

In a nice link with the talk on HPC in the cloud, this one also highlighted the competitive step change that cloud potentially is; small companies can have big company levels of infrastructure, scalability, growth etc.  Anyone can access enterprise levels of computational power.

Cloud computing can be used to drive a cost cutting / management strategy and a growth / agility strategy.

Consider your portfolio and plans – what do you want to achieve in the next 6 months, next 12 months etc.

When looking at the cloud and moving to it, what are the benefit cases and success measures for your business?  These should be clearly defined and agreed in order for you to both plan correctly, and clearly understand if the project / migration has been a success.

What is your business model, and which cloud service business models will best fit with it?  What is the monetisation strategy for your cloud migration project – operational, growth, channel etc.?  Cloud-based projects are often initially driven by cost-saving aspirations; however, the longer-term benefits will likely be greater if the drivers are doing things better and faster – cost benefits (or at least higher profits!) will follow.  To be successful, you must decide and be clear on your strategy!

As with all projects, consider your buy vs. build options.

Consider also;

Is IT a commodity or something you can instil with IP?  Depending on your business you will be at different places on the continuum.  Most businesses can and should derive competitive advantage by putting their skills and knowledge into their IT systems rather than using purely SaaS or COTS solutions without at least some customisation.  This of course may only be true for systems relating to your key business, not necessarily supporting and administrative systems.

Cloud computing touches many strategies – you need a complete life-cycle 360 approach.

–          Storage strategy

–          Compute strategy

–          Next gen network strategy

–          Data centre strategy

–          Collaboration strategy

–          Security strategy

–          Presence strategy

–          Application / development strategy

–          Etc.

Consider the maturity of your services and their roadmap to the cloud;

Service Management – Service Integration – Service Aggregation – Service Orchestration

This talk highlights just how much there is to think about when planning to migrate to, or make use of, the cloud and cloud-based services.

The talk also highlighted a couple of interesting things to consider;

Look up ‘The Eight Fallacies of Distributed Computing’ from 1993, and ‘Brewer’s Theorem’ from 2000 (published in 2002), to see how much things have stayed the same as well as how much they have changed!

https://blogs.oracle.com/jag/resource/Fallacies.html

http://en.wikipedia.org/wiki/CAP_theorem

Also consider your rate of innovation – how can you speed up your / your business’s rate of innovation?

The slides from this talk can be downloaded from here;

http://www.servicetechsymposium.com/dl/presentations/time_for_delivery_developing_successful_business_plans_for_cloud_computing_projects.pdf

K

Service Technology Symposium 2012 – Talks update 1

Building Cloudy Services;

The main premise of this talk was that you need to understand the cloud paradigm when designing services that you plan to run in the cloud.  Everything you do in the cloud costs money, so minimise unnecessary actions and transactions.

Why is the cloud an attractive solution? – Cloud computing characteristics..

–          Uses shared and dynamic infrastructure

–          Elastic and scalable  (horizontally NOT vertically!)

–          On demand as a service (self-service)

–          Meters consumption

–          Available across common networks

Features you should consider for any services that will be hosted in the cloud, where + indicates patterns / beneficial designs and – indicates anti-patterns / designs that will be more challenging to run successfully in the cloud;

Failure aware;

–          Motivation – Hardware will fail, software will fail, people will make mistakes

–          + stateless services, redundancy, idempotent operations

–          – stateful services, single points of failure
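
As a minimal sketch of the stateless / idempotent idea above (my own illustration, not from the talk): the operation below keeps no per-call state outside a shared record of processed requests, and replaying the same request has no additional effect, so a failed call can simply be retried.

```python
# Idempotent, stateless operation sketch: the caller supplies a request id,
# and re-applying the same request produces the same result with no extra effect.
# The dict stands in for a shared / durable store; all names are illustrative.

processed = {}  # request_id -> result

def charge_account(request_id, account, amount):
    """Apply a charge exactly once, even if the request is retried."""
    if request_id in processed:           # already applied - return the same result
        return processed[request_id]
    result = {"account": account, "charged": amount, "status": "ok"}
    processed[request_id] = result        # record the outcome before acknowledging
    return result

print(charge_account("req-42", "acct-1", 10.0))
print(charge_account("req-42", "acct-1", 10.0))  # retry after a timeout - no double charge
```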

Event driven;

–          Motivation – No busy waiting, less synchronisation

–          + everything is a call-back, autonomous services

–          – anti-patterns – chatty synchronous interactions, assuming guaranteed latency
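
To make the ‘no busy waiting, everything is a call-back’ point concrete, here is a tiny event-driven sketch (my own illustration, assuming Python’s asyncio): several slow operations are started and the service reacts as each one completes, rather than polling in a loop.

```python
# Event-driven sketch with asyncio: no busy-waiting - the event loop runs our
# code only when each (simulated) slow remote call completes.
import asyncio

async def fetch(name, delay):
    await asyncio.sleep(delay)          # stands in for a slow remote call
    return f"{name} done after {delay}s"

async def main():
    # Start several operations, then handle each result as it arrives.
    tasks = [fetch("billing", 0.2), fetch("inventory", 0.1), fetch("audit", 0.3)]
    for completed in asyncio.as_completed(tasks):
        print(await completed)          # call-back style handling on completion

asyncio.run(main())
```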

Parallelizable;

–          Motivation – horizontally scalable

–          + stateless, workload decomposition, REST/WOA (not SOAP)

–          – SPOB (single point of bottleneck), synchronous interactions

Automated;

–          Motivation – Easily recreated and installed (automated provisioning and scaling)

–          + re-startable, template driven

–          – complex dependencies, hardware affinity

Consumption awareness;

–          Motivation – Efficient resource usage

–          + fine grained modular design, multi-tenancy

–          – monolithic design, single occupancy

The talk concluded with the following recommendations;

–          Stop treating cloud like a generic shipping container – be cloud aware

–          Match your goals for cloud computing to essential characteristics

–          Promote patterns among the development team (parallelisation etc.)

–          Hunt down anti-patterns in code reviews

–          Evaluate IaaS and PaaS providers based on their support for cloud aware patterns

–          Balance the patterns

Keeping these points in mind should help ensure the services and designs you migrate to the cloud have a better chance of success.

PDF of the presentation can be found here;

http://www.servicetechsymposium.com/dl/presentations/building_cloudy_services.pdf

———————————-

High Performance Computing in the Cloud

This talk was one of my favourites, and something I find very interesting.  Traditionally High Performance Computing (HPC) has been the preserve of large corporations, research departments or governments.  This is due to the size and complexity of the computing environments required in order to perform HPC.  With the advent of HPC in the cloud, access to this level of compute resource is becoming much more widespread.  Both the cost of entry and the expertise required to set up this type of environment are dropping dramatically.  Cloud service providers are setting up both traditional CPU based HPC offerings and the newer, potentially vastly more powerful, GPU (Graphics Processor) based HPC offerings.

Onto the talk;

Cloud HPC can bring HPC levels of computational power to normal businesses for things like month-end / year-end processing, risk calculations etc.

In order to think about how you can use HPC, look to nature for inspiration – the longest chain – how small are the pieces a process can be broken down into in order to parallelise it?

–          Traditional HPC – message passing (MPI), a head node and multiple compute nodes, backed by shared storage.  Scale issues – storage performance (use expensive bits).  (A minimal MPI sketch follows this list.)

–          Newer HPC, more ‘Hadoop’ type model, data stored on compute (worker) nodes – they then just send back their results to the master node(s).

  • Look at things like Hive and Pig that sit atop Hadoop.  More difficult to set up than MPI.

–          Newest HPC – GPU – simpler cores, but many of them.

  • CPU – ~10 cores maximum.  GPU – hundreds of cores (maybe thousands).
  • Some supercomputers are looking at 1000s of GPUs in a single computer.
  • 4.5 teraflop graphics card < $2000!!
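
As a hedged illustration of the traditional message-passing model mentioned at the start of this list (head node plus compute nodes), here is a minimal scatter / gather sketch using the mpi4py package – my own example rather than anything shown in the talk.

```python
# Head-node / compute-node sketch with mpi4py (traditional MPI model):
# rank 0 splits the work, every rank computes its chunk, rank 0 gathers results.
# Run with something like: mpiexec -n 4 python mpi_sketch.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    work = [list(range(i, 1000, size)) for i in range(size)]  # split the job
else:
    work = None

chunk = comm.scatter(work, root=0)       # each node receives its share of the work
partial = sum(x * x for x in chunk)      # the 'compute' step on each node
totals = comm.gather(partial, root=0)    # results sent back to the head node

if rank == 0:
    print("sum of squares below 1000:", sum(totals))
```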

Cloud scale vs. on premise –

–          On premise = measured a rack at a time.

–          Cloud = lorry trailers of servers added by simply plugging in network, cooling and power and then turning them on; left until enough bits fail, then returned to the manufacturer.

Cloud = focused effort!  Cloud power is managed by the CSP while the researchers work – no need for a huge amount of local infrastructure.

How to move to the cloud, largely as with other stuff –

–          Go all in – pure cloud.

  • MPI cluster – just have images of the head and compute nodes – scale out.  A 10 node cluster hosted on Amazon made the Top 500 supercomputer list with minimal effort in setup / config.
  • Platform as a Service – e.g. Apache Hadoop-based services for Windows Azure – just specify how big you want the cluster through the web interface.  It already has an Excel interface, so Excel can directly use the cluster for complex calculations!

–          Go hybrid – add compute nodes from the cloud to an existing HPC solution (consider latency issues and security issues, e.g. VPN to the cloud).

You really don’t care about how the technology works.  Only how it helps you work!

Dan Rosanova, who gave the talk, has an excellent blog post with some metrics around HPC in the cloud here;  http://Danrosanova.wordpress.com/hpc-in-the-cloud

The slides from this talk can be downloaded here;

http://www.servicetechsymposium.com/dl/presentations/high_performance_computing_in_the_cloud.pdf

Final note – GPU development is currently mostly proprietary and platform specific.  Microsoft is pushing a proposed open standard that treats CPUs and GPUs as ‘accelerators’; it does the abstraction at run time rather than compile time.  This would allow much greater standardisation of HPC development as it abstracts the code from the underlying processing architecture.

These are exciting times in the HPC world and I’d expect to see a lot more people / companies / research groups making use of this type of computing in the near future!

K