Service Technology Symposium 2012 – Talks update 1

Building Cloudy Services;

The main premise of this talk was that you need to understand the cloud paradigm when designing services that you plan to run in the cloud. Everything you do in the cloud costs money, so minimise unnecessary actions and transactions.

Why is the cloud an attractive solution? – Cloud computing characteristics;

–          Uses shared and dynamic infrastructure

–          Elastic and scalable  (horizontally NOT vertically!)

–          On demand as a service (self-service)

–          Meters consumption

–          Available across common networks

Features you should consider for any services that will be hosted in the cloud; + indicates patterns / beneficial designs, – indicates anti-patterns / designs that will be more challenging to run successfully in the cloud;

Failure aware;

–          Motivation – Hardware will fail, software will fail, people will make mistakes

–          + stateless services, redundancy, idempotent operations (sketch below)

–          – stateful services, single points of failure
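
To make the idempotent-operations point concrete, here is a minimal sketch in Python (the payment example and in-memory store are my own illustration, standing in for whatever durable store a real service would use). Because a retried request with the same request ID has no additional effect, the service tolerates the failures and retries that the cloud makes inevitable:

# Minimal idempotency sketch: repeated delivery of the same request
# (e.g. after a timeout and retry) must not apply the charge twice.
# The in-memory dict stands in for a durable store in a real service.

processed = {}  # request_id -> result

def charge_account(request_id, account, amount):
    """Apply a charge exactly once per request_id, even if retried."""
    if request_id in processed:          # duplicate delivery: return the
        return processed[request_id]     # original result, do nothing new
    result = {"account": account, "charged": amount, "status": "ok"}
    processed[request_id] = result       # record the outcome
    return result

# Retrying with the same request_id is safe:
print(charge_account("req-42", "acct-1", 10.0))
print(charge_account("req-42", "acct-1", 10.0))  # no double charge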

Event driven;

–          Motivation – No busy waiting, less synchronisation

–          + everything is a call-back, autonomous services (sketch below)

–          – chatty synchronous interactions, relying on guaranteed latency
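
To illustrate "everything is a call-back", a small sketch using Python's asyncio (my choice of framework – the talk didn't name one): instead of busy-waiting on a slow operation, the service registers a callback and lets the event loop invoke it when the work completes.

import asyncio

# Event-driven sketch: no busy waiting. The caller attaches a callback
# and the event loop invokes it when the task finishes.

async def fetch_report(job_id):
    await asyncio.sleep(0.1)        # stands in for a slow remote call
    return f"report for {job_id}"

def on_done(task):
    print("callback fired:", task.result())

async def main():
    task = asyncio.create_task(fetch_report("job-7"))
    task.add_done_callback(on_done)  # react to the event, don't poll
    await asyncio.sleep(0)           # yield; other work could run here
    await task                       # keep the loop alive until it completes

asyncio.run(main())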

Parallelizable;

–          Motivation – horizontally scalable

–          + stateless, workload decomposition, REST/WOA (not SOAP) – sketch below

–          – SPOB (single point of bottleneck), synchronous interactions
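
A minimal workload-decomposition sketch in Python (the chunking scheme and the sum-of-squares job are illustrative assumptions, not from the talk): each chunk is processed by a stateless function, so the work can be spread over as many workers or nodes as you can scale out to.

from multiprocessing import Pool

# Decomposition sketch: break a big job into independent, stateless
# chunks so it can scale horizontally across workers (or nodes).

def process_chunk(chunk):
    """Pure function of its input: no shared state, safe to run anywhere."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i:i + 100_000] for i in range(0, len(data), 100_000)]

    with Pool(processes=4) as pool:                 # 4 local workers stand in
        partials = pool.map(process_chunk, chunks)  # for horizontally scaled nodes

    print("total:", sum(partials))                  # combine the partial results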

Automated;

–          Motivation – Easily recreated and installed (automated provisioning and scaling)

–          + re-startable, template driven (sketch below)

–          – complex dependencies, hardware affinity
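
A rough sketch of "template driven" (the template fields here are made up purely for illustration): the node definition lives in a template, so any instance can be destroyed and recreated identically, and scaling out is just rendering more copies.

from string import Template

# Template-driven provisioning sketch: the node definition is data,
# so instances can be recreated automatically and identically.

NODE_TEMPLATE = Template(
    "hostname: $role-$index\n"
    "role: $role\n"
    "packages: [nginx, app-agent]\n"
    "autoscale_group: $group\n"
)

def render_node(role, index, group):
    return NODE_TEMPLATE.substitute(role=role, index=index, group=group)

# Scaling out is just rendering more copies of the same template:
for i in range(3):
    print(render_node("web", i, "frontend"))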

Consumption awareness;

–          Motivation – Efficient resource usage

–          + fine grained modular design, multi-tenancy (sketch below)

–          – monolithic design, single occupancy
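
And a small sketch of consumption awareness (the tenant IDs and in-memory counter are illustrative assumptions): meter usage per tenant, so shared multi-tenant resources can be accounted for by what each tenant actually consumes.

import time
from collections import defaultdict

# Consumption-metering sketch: record per-tenant usage so shared,
# multi-tenant resources can be accounted (and billed) accurately.

usage = defaultdict(lambda: {"calls": 0, "seconds": 0.0})

def metered(tenant_id, func, *args, **kwargs):
    start = time.perf_counter()
    try:
        return func(*args, **kwargs)
    finally:
        usage[tenant_id]["calls"] += 1
        usage[tenant_id]["seconds"] += time.perf_counter() - start

def generate_invoice(tenant_id):       # stand-in for real work
    return f"invoice for {tenant_id}"

metered("tenant-a", generate_invoice, "tenant-a")
metered("tenant-a", generate_invoice, "tenant-a")
metered("tenant-b", generate_invoice, "tenant-b")
print(dict(usage))                     # e.g. tenant-a: 2 calls, tenant-b: 1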

The talk concluded with the following recommendations;

–          Stop treating cloud like a generic shipping container – be cloud aware

–          Match your goals for cloud computing to essential characteristics

–          Promote patterns among the development team (parallelisation etc.)

–          Hunt down anti-patterns in code reviews

–          Evaluate IaaS and PaaS providers based on their support for cloud aware patterns

–          Balance the patterns

Keeping these points in mind should help ensure the services and designs you migrate to the cloud have a better chance of success.

PDF of the presentation can be found here;

http://www.servicetechsymposium.com/dl/presentations/building_cloudy_services.pdf

———————————-

High Performance Computing in the Cloud

This talk was one of my favourites, and something I find very interesting. Traditionally, High Performance Computing (HPC) has been the preserve of large corporations, research departments or governments, because of the size and complexity of the computing environments required to perform HPC. With the advent of HPC in the cloud, access to this level of compute resource is becoming much more widespread. Both the cost of entry and the expertise required to set up this type of environment are dropping dramatically. Cloud service providers are setting up both traditional CPU-based HPC offerings and the newer, potentially vastly more powerful, GPU (Graphics Processor) based HPC offerings.

Onto the talk;

Cloud HPC can bring HPC levels of computational power to normal businesses for things like month / year end processing, risk calculations, etc.

In order to think about how you can use HPC, look to nature for inspiration – the longest chain – how small are the pieces a process can be broken down into in order to parallelise it?
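
A common way to quantify this "longest chain" limit (my addition – the talk did not present the formula) is Amdahl's law: the fraction of work that must stay serial caps the speedup, no matter how many nodes you throw at it. A quick Python sketch:

# Amdahl's law sketch: if a fraction `serial_fraction` of the job cannot be
# parallelised, speedup on n workers is 1 / (serial + (1 - serial) / n).

def amdahl_speedup(serial_fraction, workers):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / workers)

for n in (10, 100, 1000):
    print(n, "workers:", round(amdahl_speedup(0.05, n), 1))
# With even 5% serial work the speedup tops out near 20x,
# which is why the 'longest chain' matters so much.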

–          Traditional HPC – message passing (MPI), a head node and multiple compute nodes, backed by shared storage.  Scale issues – storage performance (expensive kit needed).  See the sketch after this list.

–          Newer HPC – more of a 'Hadoop' type model: data is stored on the compute (worker) nodes – they then just send their results back to the master node(s).

  • Look at things like Hive and Pig that sit atop Hadoop. More difficult to set up than MPI.

–          Newest HPC – GPU – simpler cores, but many of them.

  • CPU – ~10 cores maximum.  GPU – hundreds of cores (maybe thousands).
  • Some supercomputers are looking at 1000s of GPUs in a single machine.
  • A 4.5 teraflop graphics card for < $2000!!
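
Going back to the traditional MPI model above, here is a minimal sketch using mpi4py (an assumed library choice – the talk only mentioned MPI generically): rank 0 plays the head node, scattering chunks of work to the compute ranks and gathering the partial results back.

from mpi4py import MPI  # assumes an MPI implementation + mpi4py are installed

# Head node / compute node sketch: rank 0 splits the work, the other
# ranks each process one chunk, and rank 0 gathers the partial results.
# Run with e.g.:  mpiexec -n 4 python mpi_sketch.py

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

if rank == 0:
    data = list(range(1_000))
    chunks = [data[i::size] for i in range(size)]  # one chunk per rank
else:
    chunks = None

chunk = comm.scatter(chunks, root=0)       # head node hands out the work
partial = sum(x * x for x in chunk)        # each compute node works locally
totals = comm.gather(partial, root=0)      # results flow back to the head

if rank == 0:
    print("total:", sum(totals))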

Cloud scale vs. on premise –

–          On premise = expansion measured a rack at a time.

–          Cloud = lorry trailers of servers added by simply plugging in network, cooling and power, then turning them on; left running until enough bits fail, then returned to the manufacturer.

Cloud = focused effort! – The compute power is managed by the cloud service provider (CSP), so researchers just work. No need for a huge amount of local infrastructure.

How to move to the cloud – largely as with other workloads –

–          Go all in – pure cloud.

  • MPI cluster – just have images of the head and compute nodes and scale out.  A 10 node cluster hosted on Amazon made the Top 500 supercomputer list with minimal effort in setup / config.
  • Platform as a Service – e.g. Apache Hadoop based services for Windows Azure – you just specify how big you want the cluster through the web interface.  It already has an Excel interface, so Excel can directly use this cluster for complex calculations!

–          Go hybrid – add compute nodes from the cloud to an existing HPC solution (consider latency issues and security issues, e.g. a VPN to the cloud).

You really don’t care about how the technology works.  Only how it helps you work!

Dan Rosanova, who gave the talk, has an excellent blog post with some metrics around HPC in the cloud here;  http://Danrosanova.wordpress.com/hpc-in-the-cloud

The slides from this talk can be downloaded here;

http://www.servicetechsymposium.com/dl/presentations/high_performance_computing_in_the_cloud.pdf

Final note – GPU development is currently mostly proprietary and platform specific. Microsoft is pushing its proposed open standard that treats CPUs and GPUs as 'accelerators'; it does the abstraction at run time rather than at compile time. This would allow much greater standardisation of HPC development, as it abstracts the code from the underlying processing architecture.

These are exciting times in the HPC world and I’d expect to see a lot more people / companies / research groups making use of this type of computing in the near future!

K