I was at a PETRAS IoT (Internet of Things) event recently and a question I was asked at lunchtime got me thinking.
The question was:
“Do you think cloud is secure?”
My response quite obviously was that the question needed a lot more context. Which cloud? In what sense? Secure enough for what? Etc. etc.
We are falling into the same trap of thinking of IoT as a ‘thing’. All IoT devices may share some traits, in the same way as there are certain traits a hosted service must have for it to be called a cloud service.
However all IoT devices clearly cannot and should not be lumped into one big category.
As my interest is in security I’ll use that as an example.
Consider the level of security required around a simple consumer device like a lightbulb. It may have a few capabilities like on / off / dim and potentially being able to purchase one replacement lightbulb to your address. You may also want some features in place to prevent actually logging onto it other than to perform on / off stuff, and to prevent it from enumerating your home network.
Now consider the security required around a medical device such as a pacemaker or an insulin pump for a diabetic. A while ago someone demonstrated they could hack a Bluetooth insulin device and make it release all of its insulin at once. Obviously this was done while the device was not connected to a person!
In the above examples, as long as there are some sensible rules in place, the threat vector from the lightbulb is very limited, and the value to criminals is effectively zero.
However, in the healthcare example, a security issue could lead to immediate risk to life – imagine the scenario of ‘pay xx bitcoins or I affect your insulin supply, or stop your pacemaker’ – thus demonstrating not only risk to life, but also a clear avenue to profit for the criminal.
We 100% need to work to improve the security and manageability of IoT devices across the board. However we need to start segmenting this into different sectors and levels of threat / risk / value.
This will allow sensible dialogue about what is appropriate for different circumstances. It is likely this will allow faster and appropriately secure progress.
For example, a framework for security and risk management of consumer devices such as lights, fridges, toasters etc. could likely be arrived at. This would allow progress to be made in this space to provide consumers with the wider benefits of IoT, without being mired in wider conversations about what is appropriate for healthcare or transport IoT etc.
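To illustrate the segmentation idea, here is a minimal sketch in Python – the sectors, scores and thresholds are hypothetical examples of mine, not from any published framework:

```python
# Hypothetical example: segmenting IoT devices by sector and scoring
# threat, risk to life and criminal value on a simple 0-3 scale.
# Sectors, scores and thresholds are illustrative only.

IOT_SECTORS = {
    "consumer":   {"threat": 1, "risk_to_life": 0, "criminal_value": 0},
    "healthcare": {"threat": 3, "risk_to_life": 3, "criminal_value": 3},
    "transport":  {"threat": 2, "risk_to_life": 2, "criminal_value": 2},
}

def required_assurance(sector):
    """Map a sector's combined score to a coarse assurance level."""
    total = sum(IOT_SECTORS[sector].values())
    if total >= 7:
        return "high"      # e.g. pacemakers, insulin pumps
    if total >= 3:
        return "medium"
    return "baseline"      # e.g. lightbulbs

print(required_assurance("consumer"))    # baseline
print(required_assurance("healthcare"))  # high
```

The point is not the exact numbers, but that each sector can be given its own proportionate security framework rather than one conversation covering everything from lightbulbs to pacemakers.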
So this post has two points:
When something is massive and wide ranging such as cloud or IoT, it is fine to use this as a concept but we need to stop talking about them as a single thing when we think about security etc. as there is not a single solution or set of requirements.
IoT – we need to define distinct, but not too narrow, use cases, e.g. healthcare, consumer, transport etc. Following this we can agree sensible and appropriate frameworks and requirements for things like security, management, payments..
I’ve been mulling over a high level concept for securing IoT payments and the consumer space, that I’ll flesh out and share in an upcoming post. It would be great to hear your thoughts on this and how we can best manage / secure the various types and use cases of IoT.
This was a panel discussion session so it flowed around quite a bit, and wasn’t always focussed. The below covers most of the main points that were discussed:
Focus no longer on China.
Focus more on what enterprises can do to protect data and work with their customers securely.
Snowden affair, and global information security / assurance – living in a globally surveyed world.
I’ve been following the Snowden debacle in the news;
Is this something we need to pay attention to?
Tell me three key actions we need to take.
The US has the ‘right’ to monitor all network traffic that goes via it or US companies from ‘foreigners’. That doesn’t sound too bad until you realise we are nearly all foreigners (around 97% of the global population isn’t American!). This has huge ramifications.
Snowden affair – nearly all the leaks from this have been of ‘Top Secret’ classification, this hardly ever happens, most leaks are of much lower classification.
However – Remember, just because we are looking at the NSA, China has not gone away. Remembering this is critical to your security posture.
Everything is stored forever! Whether NSA or Google, or other email / search service, all your emails etc are likely stored forever, and probably in several places.
On the opposite side, many industries are rightly moving to more openness, sharing data more widely and with the right people.
Other nations are likely better than the US at sharing the findings of their industrial espionage with national companies – the French and Japanese are apparently very good at sharing espionage data with companies based in those countries. NSA surveillance may be pervasive, but there are questions about how much it shares. Board members and CEOs need to be aware that this espionage is a reality.
Supply chain security is a key factor to consider.
Emerging economies have a huge security impact – what they are doing with us, and how we interact and integrate with them.
International treaties around how intelligence agencies monitor each other abroad are needed and are being worked on – in democratic countries at least; no comment on what is happening in dictatorships such as Russia and China.
Outsourcing data to third parties for processing etc. has been going on for years such as through the use of mainframes. Cloud services are not a new concept, however the accessibility of these services to many people and the accessibility of the data in them has been a dramatic change.
Encrypting data if you own the process end to end can ensure data is securely stored. Doesn’t really help with processing in the cloud.
Who reads the full terms and conditions of the services they use? How much security and privacy are we inadvertently giving up?
We must not confuse Security and Privacy – these are different things.
The internet is a global platform, do you think it will become more balkanised?
It was set up by the military, and now they want it back 😉
It is already there on many layers – who makes the kit it runs on? Which governments have access to the data or any controls over the data flows?
Governments ignored the internet for years, now they all want some control over it, and government agencies all want to monitor and spy on the data on the internet.
There is a ‘war’ around who controls the internet occurring right now.
The internet and technology are changing very fast, nations / governments are struggling to keep up.
Cloud – is it new or isn’t it?
Yes and no.
Concept of sharing compute resource and allowing users or companies access to compute resource they couldn’t otherwise afford is not new.
Concept of data being anywhere / everywhere, and access to cloud compute and storage is new and the game changer that cloud is advertised to be.
Creates many issues
Where is your data?
Who controls your data?
What about international interception / access laws and capabilities?
Cost and scale benefits driving use in many businesses
How do you best secure this use case?
How do you ensure only the right ‘stuff’ gets into the cloud?
Do you have the right policies in place?
Do you have the right knowledge and skill sets for secure cloud use?
Vet staff and people in key positions both in your business and the cloud provider.
Encrypt your data – this is true, but I have serious issues with this one depending on what sort of processing is required – can tokenisation or homomorphic encryption be leveraged? What other ways do you have to mitigate the risk of data being unencrypted for processing?
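As a minimal sketch of the tokenisation option mentioned above (a hypothetical in-memory vault, not any particular product): sensitive values are swapped for random tokens before leaving your control, so cloud-side processing only ever sees the tokens:

```python
import secrets

# Minimal tokenisation sketch with a hypothetical, in-memory "vault".
# Real deployments use a hardened token vault or format-preserving
# encryption; this just shows the shape of the idea.

_vault = {}  # token -> original value; stays on-premise

def tokenise(value):
    """Replace a sensitive value with a random, meaningless token."""
    token = secrets.token_hex(8)
    _vault[token] = value
    return token

def detokenise(token):
    """Recover the original value; only possible with vault access."""
    return _vault[token]

card = "4111-1111-1111-1111"
token = tokenise(card)
assert token != card             # the cloud sees no real card data
assert detokenise(token) == card
```

A breach of the cloud side then yields only meaningless tokens; the mapping back to real data never leaves your control.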
Cloud is an innovator – gives businesses more opportunities, and also gives us new area to learn to secure.
Be proactive – be ready for the cloud, go to the business rather than them coming to you.
I will be leading an upcoming webinar on Identity and Access Management (IAM) in the Cloud titled:
The Perfect Storm: Managing Identity & Access in the Cloud
In this webinar and panel discussion we will talk about the key issues surrounding the people, processes and systems used to manage applications and data in the cloud. Topics covered will include:
Trends complicating secure cloud use;
Risks of unauthorized access, identity theft and insider fraud;
Challenges to IAM in the cloud;
Unique IAM considerations in the cloud;
Cloud features and functionality to improve IAM;
New approaches, including effective policy enforcement and the benefits of single sign-on;
Q and A panel session after the initial presentations.
This webinar is to be hosted by Tom Field of the Information Security Media Group, and we will be joined by thought-leaders from security vendors Ping Identity, McAfee and Aveksa, who will weigh in on how new cloud security solutions can help organizations improve IAM, as well as compliance, provisioning and policy management.
This should be a great presentation and discussion so please do register to view and participate;
Chris Hoff, who is the author of the Rational Survivability blog, gave a great closing keynote covering the last few years via his previous presentation titles and content. I can recommend reading / viewing the mentioned presentations. This was followed by a brief overview of current issues and trends, and then coverage of upcoming / very new areas of focus we all need to be aware of.
2008 – Platforms dictate capabilities (security) and operations – Read ‘The four horsemen of the virtualisation security apocalypse’
– Monolithic security vendor virtual appliances are the virtualisation version of the UTM argument.
– Virtualised security can seriously impact performance, resiliency and scalability
– Replicating many highly-available security applications and network topologies in virtual switches doesn’t work
– Virtualising security will not save you money. It will cost you more.
2009 – Realities of hybrid cloud, interesting attacks, changing security models – Read – ‘The frogs who desired a king – A virtualisation and cloud computing fable set to interpretive dance’
– Cloud is actually something to be really happy about; people who would not ordinarily think about security are doing so
– While we’re scrambling to adapt, we’re turning over rocks and shining lights in dark crevices
– Sure bad things will happen, but really smart people are engaging in meaningful dialogue and starting to work on solutions
– You’ll find that much of what you have works.. Perhaps just differently; setting expectations is critical
2010 – Turtles all the way down – Read – ‘Cloudifornication – Indiscriminate information intercourse involving internet infrastructure’
– Security becomes a question of scale
– Attacks on and attacks using large-scale public cloud providers are coming and cloud services are already being used for $evil
– Hybrid security solutions (and more of them) are needed
– Service transparency, assurance and auditability is key
– Providers have the chance to make security better. Be transparent.
2010 – Public cloud platform dependencies will liberate or kill you – Read ‘Cloudinomicon – Idempotent infrastructure, survivable systems and the return of information centricity’
– Not all cloud offerings are created equal or for the same reasons
– Differentiation based upon PLATFORM: Networking security, Transparency/visibility and forensics
– Apps in clouds can most definitely be deployed as securely or even more securely than in an enterprise
– However this often requires profound architectural, operational, technology, security and compliance model changes
– What makes cloud platforms tick matters in the long term
2011 – Security Automation FTW – Read ‘Commode computing – from squat pots to cloud bots – better waste management through security automation’
– Don’t just sit there: it won’t automate itself
– Recognise, accept and move on: The DMZ design pattern is dead
– Make use of existing / new services: you don’t have to do it all yourself
– Demand and use programmatic interfaces from security solutions
– Encourage networks / security wonks to use tools / learn to program / use automation
– Squash audit inefficiency and maximise efficacy
– DevOps and security need to make nice
– AppSec and SDLC are huge
– Automate data protection
2012 – Keepin it real with respect to challenges and changing landscape – Read – ‘The 7 dirty words of Cloud Security’
2012 – DevOps, continual deployment, platforms – Read – ‘Sh*t my Cloud evangelist says …Just not to my CSO’
– [Missing] Instrumentation that is inclusive of security
– [Missing] Intelligence and context shared between infrastructure and application layers
– [Missing] Maturity of “Automation Mechanics” and frameworks
– [Missing] Standard interfaces, precise syntactical representation of elemental security constructs
– [Missing] An operational methodology that ensures a common understanding of outcomes and an ‘agile’ culture in general
– New application architecture and platforms (Azure, Cloud foundry, NoSQL, Cassandra, Hadoop etc.)
– APIs – everything connected by APIs
– DevOps – Need to understand how this works and who owns security
– Programmatic (virtualised) Networking and SDN (Software Defined Network)
– Advanced adversaries and tactics (APTs, organised crime, nation states, using cloud and virtualisation benefits to attack us etc.)
– Security analytics and intelligence – security data is becoming ‘big data’ – Volume. Velocity. Variety. Veracity.
– AppSec Reloaded – APIs. REST. PaaS. DevOps. – On top of all the existing AppSec issues – how long have the OWASP top threats remained largely unchanged??
– Security as a Service 2.0 – “Cloud.” SDN. Virtualised.
– Offensive security – Cyber. Cyber. Cyber. Cyber… Instead of being purely defensive, do things more proactively – not necessarily actually attacking attackers; this can mean deceiving them into honeypots / honeynets, fingerprinting the attack, tracing back the connections etc., all the way up to actually striking back.
– Public clouds are marching onward; platforms are maturing… Getting simpler to deploy and operate at the platform level, but with heavy impact on application architecture
– Private clouds are getting more complex (as expected) and the use case differences between the two are obvious; more exposed infrastructure-connected knobs and dials
– Hybrid clouds are emerging, hypervisors commoditised and orchestration / provisioning systems differentiate as ecosystem and corporate interests emerge
– Mobility (workload and consuming devices) and APIs are everywhere
– Network models are being abstracted even further (Physical > Virtual > Overlay) and that creates more ‘simplexity’
– Application and information ‘ETL sprawl’ is a force to be reckoned with
– Security is getting much more interesting!
This was a great wrap up highlighting the last few years’ issues, how many of these have we really fixed? Along with where we are now, and a nice wrap up of what’s coming up. Are you up to speed with all the current and outstanding issues you need to be aware of? How prepared are you and your organisation for what’s coming up? Don’t be like the 3 monkeys.. 😉
While the picture is complex and we have loads of work to do, Chris’s last point aptly sums up why I love security and working in the security field!
Keynote day 2 – panel discussion around ‘Critical Infrastructure, National Security and the Cloud’.
Discussions around the role of ISPs in protecting the US from attacks, e.g. by dropping / blocking IP addresses / blocks of IP addresses from which attacks such as DDoS originate.
Should they be looking more deeply into packets in order to prevent attacks? What does this mean for net neutrality and freedom?
How does this apply to Cloud service providers (CSPs)? What happens when the CSP is subpoenaed by the courts / government to hand over data? This is another reason why you should encrypt your data in the cloud and ensure you manage the keys. This means the court / government has to directly subpoena you as the data owner and give you the opportunity to argue your case if they want access to your data.
Should the cloud be defined as critical infrastructure? If so, which parts, which providers etc.? We will need to clearly define what counts as critical infrastructure when discussing the cloud.
The next discussion point was China. Continued economic growth means we are more and more involved in trade with China; however, China is also stealing huge amounts of proprietary data across multiple industries, in some cases copying manufacturing data wholesale to replicate what is made and how. According to some vendor reports, 95% of all internet-based theft of intellectual property comes from China – from both Chinese governmental bodies and Chinese corporations.
Look up the Internet Security Alliance documentation around securing, monitoring and understanding your global manufacturing supply chain. This document has been strongly resisted by both the Chinese Government and Chinese companies. There is a clear need to protect sensitive information and work to reduce global supply chain risk. The US Government is working on constant monitoring capabilities to help corporations monitor their global supply chains.
It was proposed that IP theft should be on the agenda for the G20 next year. It was also proposed that the US and other countries should have an industrial policy, if they don’t already, that allows the military and intelligence communities to defend corporations and systems that are deemed part of the critical infrastructure.
Counterfeiting is also moving into cyberspace, what do we do with counterfeit infrastructure or counterfeit clouds?
A practical, step by step approach to implementing a private cloud
Preliminary points – have you ever decommissioned a security product? How many components / agents does the “AV” software on your laptop now have?
Why is security not the default?
Why would you not just put everything in the public cloud? – Risk, Compliance – you cannot outsource responsibility!
This is where ‘private cloud’ options come into play. Could also consider ‘virtual private cloud’ – this is where VPN technology is used to create what is effectively a private cloud on public cloud infrastructure.
Many organisations have huge spare server capacity – typical results find 80% of servers only used at 20% capacity. You can create internal elasticity by making this spare capacity part of an internal, private cloud.
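A quick back-of-the-envelope illustration of that claim (the 80% / 20% figures are from the talk; the arithmetic is mine):

```python
# If 80% of servers run at only 20% capacity, the idle headroom on those
# machines alone is substantial. Figures per the talk; arithmetic illustrative.
servers = 100
underused = 0.8 * servers          # 80 servers
utilisation = 0.2                  # each running at 20% capacity
idle_capacity = underused * (1 - utilisation)
print(idle_capacity)               # 64.0 "server-equivalents" of headroom
```

In other words, an estate of 100 servers may be carrying the equivalent of roughly 64 idle servers – exactly the capacity an internal, private cloud can turn into elasticity.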
5 steps to a private cloud:
1. Identify a business need – what is your cloud driver? What will benefit from:
Increased speed to develop and release,
Elastic processes that vary greatly over time such as peak shopping days, or month end processing etc.
2. Assess your current infrastructure – is there excess capacity? Is the hardware virtualisation ready? Can your existing infrastructure scale? (Note that a cloud can be physical, not virtual if this is required). Is new cloud infrastructure needed? What are your storage requirements? What are your data recovery and portability requirements? How will you support a private cloud with your existing security tools and processes (e.g. where do you plug in your IPS?) – are your processes robust and scalable? – can you monitor at scale? Can you manage change at scale?
3. Define your delivery strategy – who are your consumers? Developers. Administrators. General employees. Other? Competency level of consumers defines the delivery means. (e.g. developers and admins may get CLI, General employees may get the ‘one click’ web portal). Delivery mechanism matters! Create a service catalogue. Ensure ‘Back end services’ are in place
4. Transformation – You cannot forklift into the cloud – legacy applications that do not scale horizontally will not work. More resources != greater performance. Need to design in scale and security. Modernise code and frameworks. Re-test – simulate cloud scale and failures. Re-think automation, scale.
5. Operationalize – Think about complete service life-cycle – deployment to destruction. Resilience. Where does security fit into this? – Everywhere! – whether applications or services. Secure design from the ground up – embed into architecture and design – then security no longer on the critical path to deployment!
Overall this was an entertainingly presented talk that was a little light on detail / content, but I think the 5 points are worth bearing in mind if you are thinking about or implementing a private cloud in your organisation.
Cloud security standards:
This talk gave an overview of some of the current standards relating to cloud security. Below is a list of some of the cloud security standards / controls / architectures / guidance that you should be aware of if you are working with, or planning to work with, any sort of public cloud solution.
– Cloud Security Reference Architecture
– Cloud security framework
– Guidelines for operational security
– Identity management of Cloud computing
– 27017 – guidelines on information security controls for the use of cloud computing services based on ISO/IEC 27002
– 27036-4 – Supply chain security: Cloud
– 27040 – Storage security
– 27018 – Code of practice for data protection controls for public cloud computing services
– SC7 – Cloud governance
– Controls for cloud computing security
– Additional controls for 27001 compliance in the cloud
– Implementation guidance for controls
– Data protection implementation guidance
– Supply chain guidance
– 800-125 – Guide to security for full virtualisation technologies
– 800-144 – Guidelines on security and privacy in public cloud computing
– NIST cloud reference architecture
– Identity in the Cloud
ODCA (Open Data Center Alliance) –
– Provider assurance usage model
– Security monitoring usage model
– RFP requirements
– Cloud Controls matrix
– Trusted cloud infrastructure
– Security as a Service
– Cloud trust protocol
– Guidance document
The CSA Cloud Controls Matrix maps many of these standards to cloud control areas with details of the specification and the standard components each specification meets / relates to.
While a pretty dry topic, this is a useful reference list if you are looking for more information on cloud / cloud security related standards and guidance.
This talk demonstrates some live tools and hacking demos, so starts with the standard disclaimer;
ALWAYS GET PERMISSION IN WRITING!
Performing scans, password cracking etc. against systems without permission is illegal.
Use any mentioned tools and URLs at your own peril!
CIA – Confidentiality, Integrity, Availability (plus Accountability / Auditability) – while still important, has gone out of the window as the core mantra for many security professionals and managers.
Evolution of the environment and hacking;
1st Age: Servers – FTP, Telnet, Mail, Web – the hack left a footprint
3rd Age: Virtual hacking – gaining someone’s password is the skeleton key to their life and your business. Accessing data from the virtual world can be simple – the simplest yet, and getting easier!
Virtual World – with virtual back doors. This is the same for cloud computing and local virtual environments. What do you do to prevent your virtual environment administrators copying VMs and even taking these copies home? You need to prove both ownership and control of your data.
The question is posed – how much have we really learnt over the last 15 years or so? We need to go back to basics and re-visit the CIA model. Think of the concept of a ‘secure breach’, if our important data is protected and secure, being breached will still not gain access to this.
Demo against VMware 4.1 update 1. Using a simple scan, you can find multiple VMware servers and consoles connected directly to the internet; remember though that these attacks can easily be launched from within your environment.
Outside of this talk, this raises the question – how segregated are your networks? Do you have separate management, server, database etc. networks with strong ACL policies between them? If not, I’d recommend re-visiting your network architecture. Now.
Once you find a vCenter server, the admin / password file is easily accessible and only hashed with MD5. This can be broken with rainbow tables very quickly. You can then easily gain access to the console and thus control of the whole environment.
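To illustrate why an unsalted MD5 hash falls so quickly, here is a toy dictionary attack in Python – a tiny wordlist standing in for the rainbow tables used in the demo, and the ‘stolen’ hash is fabricated for the example:

```python
import hashlib

# Toy illustration of why unsalted MD5 password hashes fall quickly.
# Rainbow tables precompute this lookup at massive scale; a tiny
# wordlist makes the same point. The hash below is fabricated.
wordlist = ["password", "letmein", "vmware", "admin123"]

stolen_hash = hashlib.md5(b"vmware").hexdigest()  # pretend: found in a config file

def crack(md5_hex, candidates):
    """Return the candidate whose MD5 matches, if any."""
    for word in candidates:
        if hashlib.md5(word.encode()).hexdigest() == md5_hex:
            return word
    return None

print(crack(stolen_hash, wordlist))  # vmware
```

With no salt and no key stretching, every guess costs one cheap hash, which is why precomputed rainbow tables break these files in seconds.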
To make things even easier, tools like Metasploit make this sort of attack as simple as a series of mouse clicks. I’d recommend checking out Metasploit, it’s a great tool.
Look at www.cvedetails.com for details on just how many vulnerabilities there are, this site also classifies the vulnerabilities in terms of criticality and whether they impact CIA. This is a great input into any risk assessment process.
Cain & Abel is described as a password recovery tool, but it can do so much more. A prime example of its abilities is ARP poisoning, such that you can see all the traffic on a given subnet / VLAN. I have personally used this to record (with approval of course!) VOIP calls in order to demonstrate the need to encrypt VOIP traffic. Cain even nicely reconstructs individual call conversations for you!
This is another personal favourite of mine – if your VOIP is not encrypted, why not? Does your board know it is trivially easy to record their calls, or those of finance and HR etc., on your network?
The talk went on to cover some further easy attacks, such as those using the power of Google search syntax to gain information from Dropbox, SkyDrive, Google Docs etc. An example was finding Cisco passwords in Google Docs files. This leads on to another question: are you aware of just how much data your organisation has exposed in the wild to people who merely know how to search intelligently and leverage the powerful searching capabilities of engines such as Google?
To make things even easier, Stach & Liu have a project called the ‘Google Hacking Diggity Project’ that provides a freely downloadable tool for creating complex Google / Bing searches with specific tasks in mind, such as hacking cloud storage.
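The kind of searches such tools generate can be sketched as simple operator strings. The operators below are standard Google search syntax, but the query patterns are illustrative examples of mine, not taken from the Diggity tool itself:

```python
# Illustrative "Google dork" query builder. site:, filetype: and intext:
# are standard Google search operators; the patterns are examples only.

def dork(site=None, filetype=None, intext=None):
    """Assemble a search-engine query from common operators."""
    parts = []
    if site:
        parts.append(f"site:{site}")
    if filetype:
        parts.append(f"filetype:{filetype}")
    if intext:
        parts.append(f'intext:"{intext}"')
    return " ".join(parts)

# Find spreadsheets on a cloud-storage domain mentioning passwords:
print(dork(site="docs.google.com", filetype="xls", intext="password"))
# → site:docs.google.com filetype:xls intext:"password"
```

Running a handful of queries like this against your own domains is a cheap way to get an initial view of what you have exposed.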
This and various other attack and defence tools can be downloaded here;
I’d recommend you work with your organisation to use these constructively in order to understand your exposure and then plan to remediate any unacceptable risks you discover. The live demonstration actually found files online with company usernames and passwords in, so this exposure is demonstrably real for many organisations.
Talk ended with a brief comment on social networking and how the data available here such as where you are from, which schools you went to etc. can give hackers easy access to the answers to all your ‘secret’ questions.
Remember the term ‘secure breach’ – our important data is all encrypted with strong, robust processes. We were hacked, but it doesn’t matter. The C and I parts of CIA are critical!
I loved this talk, some great demos and reminders of useful tools!
As mentioned at the start, please be sensible with the use of any of these tools and gain permission before using them against any systems.
Cloud computing’s impact on future enterprise architectures
This talk was fairly light and I didn’t make a huge amount of notes, but thought there were a few points worth noting;
Definitions and boundaries are changing. Instead of the defined boundaries we are used to around traditional architectures, whether hosted locally or at a data-centre, we are moving to much more fluid and interconnected architectures. Consider personal cloud, private cloud, hybrid cloud, extended virtual data-centres, consumerisation, BYOD etc. The cloud creates different, co-existing architectural environments based on combinations of these models.
Consider why you should move to the cloud, which characteristics are important for your organisation such as;
– Elastically scalable
– Self service
– Measured services
– Virtualised and dynamic
– Reliability (SLAs, what happens when there are issues etc.)
– Economic benefits (cost reduction – TCO, and / or better resiliency)
Do you understand any potential risks;
– What are the security roles and responsibilities? –
IaaS – you
BPaaS (business process as a service) – Them
A sliding scale from IaaS – PaaS – SaaS – BPaaS
– Where is your data?
Your business and regulatory requirements
Jurisdictional rules – who can access your data
Legal / jurisdictional issues amplified
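That sliding scale of responsibility can be sketched as a simple mapping – the split shown is a common rule of thumb, not any provider’s official responsibility matrix:

```python
# Rough shared-responsibility sketch across service models. The exact
# split varies by provider and contract; this is a rule of thumb only.

RESPONSIBILITY = {
    "IaaS":  {"physical": "provider", "os": "provider_and_you", "app": "you", "data": "you"},
    "PaaS":  {"physical": "provider", "os": "provider", "app": "you", "data": "you"},
    "SaaS":  {"physical": "provider", "os": "provider", "app": "provider", "data": "you"},
    "BPaaS": {"physical": "provider", "os": "provider", "app": "provider", "data": "provider"},
}

def yours(model):
    """Layers where you retain security responsibility under a model."""
    return [layer for layer, owner in RESPONSIBILITY[model].items()
            if "you" in owner]

print(yours("IaaS"))   # ['os', 'app', 'data']
print(yours("BPaaS"))  # []
```

Note that even at the BPaaS end you cannot outsource accountability – contracts move operational tasks, not your regulatory obligations.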
For me some of this talk was outdated, with a lot of focus on ‘where is your data’. While that is a key question, there was too much focus on the idea that your data could be anywhere in the world with global CSPs, when most big players now offer guarantees that your data will stay within defined regions if you want it to.
So, what does this mean for your ‘future’ cloud based enterprise architecture principles, concepts etc.?
– Must standardise on ‘shared nothing’ concept
– Standardise on loosely coupled services
– Standardise on ‘separation of concerns’
– No single points of failures
– Multiple levels of protection / security
– Ease of <secure> access to data
– Security standards to protect data
– Centralise security policy
– Delegate or federate access controls
– Security and wider design patterns that are easy to adopt and work with the cloud
Combining these different architectural styles is a huge challenge.
Summary – Dealing with multiple architectures, multiple dimensions and multiple risks is a key challenge to integrating cloud into your environment / architecture!
The slides from this talk can be downloaded here;
SOA (Service Orientated Architecture) environments are a big data problem / Big data and its impact on SOA
Outside of some product marketing for Splunk, the premise of these two talks was basically the same, that large SOA environments are complex, need a lot of monitoring and create a lot of data.
Splunk, incidentally, is a great product for log monitoring / data collection, aggregation and analysis / correlation. Find out more about it here: http://www.splunk.com/
SOA – great for agility, but can be complex – BPEL, ebXML, WSDL, SOAP, ESB, XML, BPM, UDDI, Composition, loose coupling, orchestration, data services, business processes, XML Schema, registry etc.. This can generate a huge amount of disparate data that needs to be analysed in order to understand the system. Both machine and generated data may need to be aggregated.
SOA based systems can themselves generate big data!
We all know large web based enterprises such as Google and Facebook etc. have to deal with big data, but should you care? Many enterprises are now having to understand and deal with big data for example;
Retail and web transaction data
GPS in phones
Log file monitoring and analysis
The talks had the following conclusions;
– Big data has reached the enterprise
– SOA platforms are evolving to leverage big data
– Service developers need to understand how to insert and access data in Hadoop
– Time-critical conditions can be detected as data is inserted into Hadoop using event processing techniques – ‘Fast Data’
– Expect big data and fast data to become ubiquitous in SOA environments – much like RDBMS are already.
So I’d suggest you become familiar with what big data is and the tools that can be used to handle and manage it, such as Hadoop, MapReduce and Pig (these are relatively big topics in themselves and may be covered at a later date).
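To give a flavour of the programming model behind Hadoop, here is a minimal MapReduce-style word count in plain Python – the classic Hadoop Streaming pattern simulated locally, no cluster required:

```python
from collections import Counter
from itertools import chain

# Minimal MapReduce-style word count, the "hello world" of Hadoop.
# In Hadoop Streaming, map_fn and reduce_fn would be separate scripts
# reading stdin/stdout; here the shuffle step is simulated locally.

def map_fn(line):
    """Emit (word, 1) pairs, as a streaming mapper would."""
    for word in line.lower().split():
        yield word, 1

def reduce_fn(pairs):
    """Sum counts per key, as the reducer sees them after the shuffle."""
    totals = Counter()
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

log_lines = ["error timeout", "ok", "error disk full"]
result = reduce_fn(chain.from_iterable(map_fn(l) for l in log_lines))
print(result["error"])  # 2
```

The same two-function shape – map, shuffle, reduce – is what a service developer writes when inserting and analysing data in a real Hadoop cluster, just distributed across many machines.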
The slides from these talks can be downloaded from the below locations;
Time for delivery; Developing successful business plans for cloud computing projects
This talk covered some great points around areas to consider when planning cloud based projects. I’ll capture as much as I managed to make notes on, as there was a lot of content for this one. I’d definitely recommend checking out the slides!
Initial things to consider include;
– Defining the link between your business ecosystem and the available types of cloud-enabled technologies
– Identifying the right criteria for a ‘cloud fit’ in your organisation. (operating model and business model fit)
– Strategies and techniques for developing a successful roadmap for the delivery of cloud related cost savings and growth.
– Mobility – any connection, any device, any service
– Social Tools – any community, any media, any person
– Cloud – computing resources, apps and services, on demand
– Big Data – real time information and intelligence
In a nice link with the talk on HPC in the cloud, this one also highlighted the competitive step change that cloud potentially is; small companies can have big company levels of infrastructure, scalability, growth etc. Anyone can access enterprise levels of computational power.
Cloud computing can be used to drive a cost cutting / management strategy and a growth / agility strategy.
Consider your portfolio and plans – what do you want to achieve in the next 6 months, next 12 months etc.
When looking at the cloud and moving to it, what are the benefit cases and success measures for your business? These should be clearly defined and agreed in order for you to both plan correctly, and clearly understand if the project / migration has been a success.
What is your business model, and which cloud service business models will best fit with this? What is the monetisation strategy for your cloud migration project; Operational, Growth, Channel etc.? Cloud based projects are often initially driven by cost saving aspirations; however, the longer term benefits will likely be greater if the drivers are ‘better and faster’ – cost benefits (or at least higher profits!) will follow. To be successful, you must decide and be clear on your strategy!
As with all projects, consider your buy vs. build options.
Is IT a commodity or something you can instil with IP? Depending on your business you will be at different places on the continuum. Most businesses can and should derive competitive advantage by putting their skills and knowledge into their IT systems rather than using purely SaaS or COTS solutions without at least some customisation. This of course may only be true for systems relating to your key business, not necessarily supporting and administrative systems.
Cloud computing touches many strategies – you need a complete life-cycle 360 approach.
– Storage strategy
– Compute strategy
– Next gen network strategy
– Data centre strategy
– Collaboration strategy
– Security strategy
– Presence strategy
– Application / development strategy
Consider the maturity of your services and their roadmap to the cloud;
Service Management – Service integration – Service Aggregation – Service Orchestration
This talk highlights just how much there is to think about when planning to migrate to, or make use of, the cloud and cloud based services.
The talk also highlighted a couple of interesting things to consider;
Look up ‘The Eight Fallacies of Distributed Computing’ from 1993, and ‘Brewer’s Theorem’ from 2000 (published in 2002) to understand how much things have stayed the same just as much as how much they have changed!
The main premise of this talk was that you need to understand the cloud paradigm when designing services that you plan to run in the cloud. Everything you do in the cloud costs money, so minimise unnecessary actions and transactions.
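One common way to minimise transactions is batching: issuing one larger request instead of many small ones. This is a toy sketch of that idea; the per-request charge and the batch size are made-up numbers purely for illustration, not figures from the talk.

```python
# Illustrative sketch: batch many small cloud requests into fewer, larger ones.
# The per-request charge is a hypothetical number for illustration only.
COST_PER_REQUEST = 0.0004  # assumed charge per billable transaction

def batch(items, batch_size=25):
    """Group items into lists of at most batch_size."""
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]

def transaction_cost(requests):
    """Each request, however large, counts as one billable transaction."""
    return len(requests) * COST_PER_REQUEST

items = list(range(1000))
unbatched = transaction_cost(items)       # 1000 separate transactions
batched = transaction_cost(batch(items))  # only 40 batched transactions
print(batched < unbatched)  # True
```

The same thinking applies to storage reads, queue operations and chatty service-to-service calls: fewer, coarser interactions generally mean a smaller bill.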
Why is the cloud an attractive solution? – Cloud computing characteristics..
– Uses shared and dynamic infrastructure
– Elastic and scalable (horizontally NOT vertically!)
– On demand as a service (self-service)
– Meters consumption
– Available across common networks
Features you should consider for any services that will be hosted in the cloud, where + indicates patterns / beneficial designs, and – indicates anti-patterns / designs that will be more challenging to run successfully in the cloud;
– Motivation – Hardware will fail, software will fail, people will make mistakes
This talk was one of my favourites, and something I find very interesting. Traditionally High Performance Computing (HPC) has been the preserve of large corporations, research departments or governments. This is due to the size and complexity of the computing environments required in order to perform HPC. With the advent of HPC in the cloud, access to this level of compute resource is becoming much more widespread. Both the cost of entry and the expertise required to set up this type of environment are dropping dramatically. Cloud service providers are setting up both traditional CPU based HPC offerings, and the newer, potentially vastly more powerful, GPU (Graphics Processor) based HPC offerings.
Onto the talk;
Cloud HPC can bring HPC levels of computational power to normal businesses for things like month / year end processing, risk calculations etc.
In order to think about how you can use HPC, look to nature for inspiration – longest chain – how small a piece can a process be broken down into in order to parallelise it?
– Traditional HPC – message passing (MPI), head node and multiple compute nodes, backed by shared storage. Scale issues – storage performance (use expensive bits)
– Newer HPC, more ‘Hadoop’ type model, data stored on compute (worker) nodes – they then just send back their results to the master node(s).
Look at things like Hive and Pig that sit atop Hadoop. More difficult to set up than MPI.
– Newest HPC – GPU – simpler cores, but many of them.
CPU – ~10 cores maximum. GPU – hundreds of cores (maybe thousands).
Some supercomputers are looking at 1000s of GPUs in a single computer.
4.5 teraflop graphics card < $2000!!
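The ‘newer HPC’ model above – data stored on the compute (worker) nodes, with only small results sent back to the master – can be sketched as follows. This is a local simulation under stated assumptions: a thread pool stands in for separate machines, and the sum-of-squares workload is an arbitrary example, not anything from the talk.

```python
# Illustrative sketch of the 'data on the compute nodes' model: each worker
# processes only its own local shard and sends back a small summary result,
# rather than shipping raw data across the network to a head node.
from concurrent.futures import ThreadPoolExecutor

def worker(shard):
    """Runs on a compute node: crunch the local data, return only the result."""
    return sum(x * x for x in shard)

def master(shards):
    """Master node: hand each worker its shard, then combine partial results."""
    with ThreadPoolExecutor() as pool:
        partial_results = list(pool.map(worker, shards))
    return sum(partial_results)

# Each shard stands in for data already resident on one worker node.
shards = [range(0, 1000), range(1000, 2000), range(2000, 3000)]
total = master(shards)
print(total == sum(x * x for x in range(3000)))  # True
```

The design point is that only the small partial results travel between nodes, which is why this model scales better than shipping all the data to shared storage.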
Cloud scale vs. on premise –
– On premise = measured by rack at a time.
– Cloud = lorry trailers added by simply plugging in network, cooling and power then turning on, left until enough bits fail, then returned to manufacturer..
Cloud = Focused effort! – Cloud power managed by CSP, researchers work.. No need for huge amount of local infrastructure.
How to move to the cloud, largely as with other stuff –
– Go all in – pure cloud.
MPI cluster – just have images of the head and compute nodes – scale out. A 10 node cluster hosted on Amazon made the top 500 supercomputer list with minimal effort in setup / config.
Platform as a service – e.g. Apache Hadoop based services for Windows Azure – you just specify how big you want the cluster through the web interface – it already has an Excel interface, so Excel can directly use the cluster for complex calculations!
– Go hybrid – add compute nodes from the cloud to existing HPC solution (consider – latency issues, and security issues (e.g. VPN to the cloud)).
You really don’t care about how the technology works. Only how it helps you work!
Final note – GPU development is currently mostly proprietary and platform specific. Microsoft is pushing their proposed open standard that treats CPU and GPU as ‘accelerators’; it does abstraction at run time rather than compile time. This would allow much greater standardisation of HPC development as it abstracts the code from the underlying processing architecture.
These are exciting times in the HPC world and I’d expect to see a lot more people / companies / research groups making use of this type of computing in the near future!
Today was the second day of the Service Technology Symposium. As with yesterday I’ll use this post to review the keynote speeches and provide an overview of the day. Where relevant further posts will follow, providing more details on some of the day’s talks.
As with the first day, the day started well with three interesting keynote speeches.
The first keynote was from the US FAA (Federal Aviation Administration) and was titled ‘SOA, Cloud and Services in the FAA airspace system’. The talk covered the program that is under-way to simplify the very complex National Airspace System (NAS). This is the ‘system of systems’ that manages all flights in the US and ensures the control and safety of all the planes and passengers.
The existing system is typical of many legacy systems. It is complex, all point to point connections, hard to maintain, and even minor changes require large regression testing.
Thus a simplification program has been created to deliver SOA, web centric decoupled architecture. To give an idea of the scale, this program is in two phases with phase one already largely delivered yet the program is scheduled to run through 2025!
As mentioned, the program is split into two segments to deliver capabilities and get buy-in from the wider FAA.
– Segment 1- implemented set of federated services, some messaging and SOA concepts, but no common infrastructure.
– Segment 2 – common infrastructure – more agile, project effectively creating a message bus for the whole system.
The project team was aided by the creation of a Wiki, and COTS (commercial off the shelf) software repository.
They have also been asked to assess the cloud – there is a presidential directive to ‘do’ cloud computing. They are performing a benefits analysis from operational to strategic.
Key considerations are that cloud must not compromise NAS, and that security is paramount.
The cloud strategy is defined, and they are in the process of developing recommendations. It is likely that the first systems to move to the cloud will be supporting and administrative systems, not key command and control systems.
The second keynote was about cloud interoperability and came from the Open Group. Much of this was taken up with who the Open Group are and what they do. Have a look at their website if you want to know more;
Outside of this, the main message of the talk was the need for improved interoperability between different cloud providers. This would make it easier to host systems across vendors and also the ability of customers to change providers.
As a result improved interoperability would also aid wider cloud adoption – Interoperability is one of the keys to the success of the cloud!
The third keynote was titled ‘The API economy is here: Facebook, Twitter, Netflix and YOUR IT enterprise’.
API refers to Application Programming Interface, and a good description of what this refers to can be found on Wikipedia here;
The focus of this keynote was that, by making their own APIs public and by making use of public APIs, businesses can help drive innovation.
Web 2.0 – lots of technical innovation led to web 2.0, this then led to and enabled human innovation, via the game changer that is OPEN API. Reusable components that can be used / accessed / built on by anyone. Then add the massive, always on user base of smartphone users into the mix with more power in your pocket than needed to put Apollo on the moon. The opportunity to capitalise on open APIs is huge. As an example, there are currently over 1.1 million distinct apps across the various app stores!
Questions for you to consider;
1. How do you unlock human innovation in your business ecosystem?
– Unlock the innovation of your employees – How can they innovate and be motivated? How can they engage with the human API?
– Unlock the potential of your business partner or channel sales community; e.g. Amazon web services – merchants produce, provide and fulfil goods orders, while Amazon provides the framework to enable this.
– Unlock the potential of your customers; e.g. IFTTT (If This Then That) who have put workflow in front of many of the available APIs on the internet.
2. How to expand and enhance your business ecosystem?
– Control syndication of brand – e.g. the Facebook ‘like’ button – everyone knows what this is, every user has to use the same standard like button.
– Expand breadth of system – e.g. Netflix used to just be website video on demand, now available on many platforms – consoles, mobile, tablet, smart TV, PC etc.
– Standardise experience – e.g. Kindle or Netflix – you can watch or read on one device, stop, and pick up from the same place on another device.
– Use APIs to create ‘gravity’ to attract customers to your service by integrating with services they already use – e.g. travel aggregation sites.
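The IFTTT example under question 1 illustrates the simple trigger / action pattern that sits behind much of this API composition. Here is a toy sketch of that pattern; the services, event shapes and URLs are entirely hypothetical and just for illustration.

```python
# Illustrative sketch of an IFTTT-style rule: IF a trigger event arrives from
# one service's API, THEN run an action against another service.
# The event format and the 'backup' service here are hypothetical stubs.

def new_photo_trigger(events):
    """Trigger ('this'): yield only the events matching the condition."""
    for event in events:
        if event["type"] == "new_photo":
            yield event

def save_to_backup_action(event, backup):
    """Action ('that'): react to the event, here by recording the URL."""
    backup.append(event["url"])

# A stream of events, as a public API's feed might deliver them.
events = [
    {"type": "new_photo", "url": "http://example.com/1.jpg"},
    {"type": "status_update", "url": None},
    {"type": "new_photo", "url": "http://example.com/2.jpg"},
]
backup = []
for event in new_photo_trigger(events):
    save_to_backup_action(event, backup)
print(len(backup))  # 2
```

Services like IFTTT essentially let end users wire together triggers and actions like these across many providers’ public APIs, without writing any code themselves.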
This one was a great talk with some useful thought points on how you can enhance your business through the use of open APIs.
On this day I fitted in 6 talks and one no-show.
Talk 1 – Cloud computing’s impact on future enterprise architectures. Some interesting points, but a bit stuck in the past with a lot of focus on ‘your data could be anywhere’ when most vendors now provide consumers the ability to ensure their data remains in a specific geographical region. I won’t be prioritising writing this one up so it may or may not appear in a future post.
Talk 2 – Using the cloud in the Enterprise Architecture. This one should have been titled the Open Group and TOGAF with 5 minutes of cloud related comment at the end. Another one that likely does not warrant a full write up.
Talk 3 – SOA environments are a big data problem. This was a brief talk but with some interesting points around managing log files, using Splunk and ‘big data’. There will be a small write up on this one.
Talk 4 – Industry orientated cloud architecture (IOCA). This talk covered the work Fulcrum have done with universities to standardise on their architectures and messaging systems to improve inter university communication and collaboration. This was mostly marketing for the Fulcrum work and there wasn’t a lot of detail, so this is unlikely to be written up further.
Talk 5 – Time for delivery: Developing successful business plans for cloud computing projects. This was a great talk with a lot of useful content. It was given by a Cap Gemini director so I expected it to be good. There will definitely be a write up of this one.
Talk 6 – Big data and its impact on SOA. This was another good, but fairly brief one, will get a short write up, possibly combined with Talk 3.
And there you have it: that is the overview of day two of the conference. It looks like I have several posts to write covering the more interesting talks from the two days!
As a conclusion, would I recommend this conference? It’s a definite maybe. Some of the content was very good; some was either too thin or completely focussed on advertising a business or organisation. The organisation was also terrible, with 3 talks I planned to attend not happening and the audience left totally hanging rather than being informed that the speaker hadn’t arrived.
So a mixed bag, which is a shame as there were some very good parts, and I managed to get 2 free books as well!
So yesterday was day one of the Service Technology Symposium. This is a two day event covering various topics relating to cloud adoption, cloud architecture, SOA (Service Orientated Architecture) and big data. As mentioned in my last post my focus has mostly been on the cloud and architecture related talks.
I’ll use this post to provide a high level overview of the day and talks I attended, further posts will dive more deeply into some of the topics covered.
The day started well with three interesting keynotes.
The first was from Gartner covering the impact of moving to the cloud and using SOA on architecture / design. The main points of this talk were understanding the need to move to a decoupled architecture to get the most from any move to the cloud. This was illustrated via the Any to Any to Any architecture paradigm where this is;
Any Device – Any Service – Any Data
Gartner identified a ‘nexus of forces’ driving this need to decouple system components;
– Mobile – 24/7, personal, context aware, real time, consumer style
– Social – Activity streams, Personal intelligence, group sourcing, group acting
– Information – variety, velocity, volume, complexity
– Cloud services
In order to achieve this, the following assumptions must be true; All components independent and autonomous, they can live anywhere (on premise or in cloud), applications must be decoupled from services and data.
They also highlighted the need for a deep understanding of the SOA principles.
The second keynote speech was from the European Space Agency on their journey from legacy applications and development practices to SOA. It was titled ‘Vision to reality; SOA in space’.
They highlighted 4 drivers for their journey; Federation – Interoperability – Alignment to changing business needs / requirements (agility) – Reduce time and cost.
And identified realising these drivers using SOA, and standards as outlined below;
Federation – SOA, Standards
Interoperability – SOA, Standards
Alignment to business needs – SOA, Top Down and Bottom up
Reduce costs – Reuse; SOA, Incremental development
Overall this was an interesting talk and highlighted a real world success story for SOA in a very complex environment.
The third keynote was from NASA Earth Science Data Systems. This provided an overview of their use of SOA, the cloud and semantic web technologies to aid their handling of ‘big data’ and complex calculations. They have ended up with a globally diverse hybrid cloud solution.
As a result of their journey to their current architecture they found various things worthy of highlighting as considerations for anyone looking to move to the cloud;
– Understand the long term costs of cloud storage (cloud more expensive for their needs and data volumes)
– Computational performance needed for science – understand your computational needs and how they will be met
– Data movement to and within the cloud – Data ingest, data distribution – how will your data get to and from the cloud and move within the cloud?
– Process migration – moving processes geographically closer to the data
– Consider hybrid cloud infrastructures, rather than pure cloud or pure on premises
– Security – always a consideration, they have worked with Amazon GovCloud to meet their requirements
To aid their move to SOA and the cloud, NASA created various working groups – such as – Data Stewardship, Interoperability, semantic technologies, standards, processes etc.
This has been successful for them so far, and currently NASA Earth Sciences make wide use of SOA, Semantic technologies and the cloud (esp. for big data).
The day then moved to 7 separate tracks of talks, which turned out for me to be somewhat of a mixed bag.
Talk 1 was titled ‘Introducing the cloud computing design patterns catalogue’. This is a relatively new project to create re-usable design patterns for moving applications and systems to the cloud. The project can be found here;
Unfortunately the intended speaker did not arrive so the talk was just a high level run through the site. The project does look interesting and I’d recommend you take a look if you are involved in creating cloud based architectures.
The second talk was supposed to be ‘A cloud on-boarding strategy’, however the speaker did not turn up, and the organisers had no idea whether he was coming or not, so wasted a lot of people’s time. While it’s outside of the organisers’ control whether someone arrives or not, they should have been aware the speaker had not registered and let us know, rather than the 45 minutes of ‘is he, isn’t he, we just have no idea’ that ensued..
The third talk was supposed to be ‘developing successful business plans for cloud computing projects’. This was again cancelled due to the speaker not arriving.
Talk 2 (talks numbered by my attendance) was a Gartner talk titled ‘Building Cloudy Services’. This was an interesting talk that I’ll cover in more depth in a following post.
Talks three to five were also all interesting and will be covered in some more depth in their own posts. They had the below titles;
Talk 3 was titled ‘HPC in the cloud’
Talk 4 was titled ‘Your security guy knows nothing’
Talk 5 was titled ‘Moving applications to the cloud’
The final talk of the day was titled ‘Integration, are you ready?’ This was however a somewhat misleading title. The talk was from a cloud ESB vendor and was basically just an advertisement for their product and how great it was for integration, not about integration generally. Not what you’d expect from a paid-for event. I’ll not mention their name other than to say they seem to have been inspired by a piece of peer to peer software.. Disappointing.
Overall, despite some organisational hiccups and a lack of vetting of at least one vendor’s presentation, day one was informative and interesting. Look out for more detailed follow up posts over the next few days.