Gartner Security and Risk Summit; Cool Vendors

 

Hi All,

I know I promised a post on the insider threat and how best to manage the risk. That is on its way; it's a big topic!

In the meantime, I attended the first day of the Gartner Security and Risk Management Summit earlier this week.

While not deeply technical or focussed on a specific risk topic, the presentation on their top 10 ‘cool vendors’ was quite interesting.  In a similar way to my recent ‘Innovative End User Technology Security’ post, this one will hopefully give you some new vendors to consider when solving issues for your business.

The Gartner definition of ‘Cool Vendors’ is that they are;

  • Technologies that help security leaders embrace;
    • New approaches to business enablement
    • New approaches to threat prevention
    • New responsibilities for IoT, OT and embedded systems
  • On the left (emerging) side of their own 'hype cycle'

They must however be real vendors with solutions that are available today, not vapourware or soon to be released.

The recommendation is that action, even if it is just investigation and understanding, is needed now.  This is to help ensure the security of your organisation today and tomorrow.

Things you should be asking when looking at your organisation's security architecture and defence in depth / diversity strategy;

What technology areas should information security invest in, to;

  • Protect digital assets from advanced and targeted threats?
  • More rapidly adapt to changing digital business requirements?
  • Support building a next-generation intelligent SOC capability?

Which interesting vendors and solutions should be investigated in order to achieve these goals?

The presentation split the ‘cool vendors’ into 10 categories across 3 groups;

  1. Threat Facing
  2. Enablement and Access Facing
  3. Intelligence-Driven SOC

 

  1. Threat Facing

These are technologies primarily aimed at detecting or preventing malware and attackers.

EDR – Endpoint Detection and Response

New solutions that aim to respond to advanced attacks that evade traditional endpoint protection.  If you accept that compromise is inevitable and are looking at ways to improve your endpoint protection, companies in this space should be considered.

Example players in this space include;

  • Tanium
  • CounterTack
  • Carbon Black
  • Cisco
  • FireEye
  • Cybereason
  • CrowdStrike
  • RSA
  • Ziften
  • Triumfant
  • Confer
  • Bromium
  • Invincea
  • Symantec
  • Intel
  • Trend Micro

Non Signature Approaches for Endpoint Prevention

Solutions that use technologies such as machine learning, exploit prevention and memory injection prevention.  The aim of these is to supplement or replace traditional signature-based / 'heuristic' anti-malware solutions.  Another possible application is where projects to implement timely patching and maintenance of systems have stalled and compensating controls are required (a toy sketch of the machine learning idea follows the vendor list below).

Example players in this space include;

  • Cylance
  • Palo Alto Networks
  • SentinelOne
  • Morphisec
  • Bromium
  • Deep Instinct
  • Invincea
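
To make the idea concrete, here is a minimal Python sketch of what a non-signature, machine-learning approach looks like conceptually: score a file from simple features rather than matching a known signature.  The features, data and model here are invented for illustration and bear no relation to how any of the vendors above actually work.

```python
# Toy illustration only: a tiny classifier over made-up file features
# (size, byte entropy, count of suspicious API imports). Real products use
# far richer features and models; this just shows the non-signature idea.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: [file_size_kb, byte_entropy, suspicious_imports]
X = [
    [120, 4.1, 0],   # benign samples
    [340, 5.0, 1],
    [80,  3.8, 0],
    [512, 7.6, 9],   # malicious samples (packed, many suspicious imports)
    [260, 7.9, 6],
    [700, 7.2, 11],
]
y = [0, 0, 0, 1, 1, 1]  # 0 = benign, 1 = malicious

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Score a new, never-before-seen file: no signature required.
new_file = [[450, 7.8, 7]]
print("malicious probability:", model.predict_proba(new_file)[0][1])
```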

Remote Browser

These are solutions that separate the browser function from the local desktop.  The premise is that a lot of attacks originate from malicious or compromised sites on the internet.  If you can isolate the browser in a secure environment and effectively just send a video and audio stream to the desktop, you can prevent these attacks.  This is the category that the Garrison solution I previously wrote about fits into.

Example players in this space include;

  • Spikes Security
  • Menlo Security
  • Light Point Security
  • Authentic8
  • Fireglass

Microsegmentation and Flow Visibility

These solutions provide visibility and control of east-west traffic flows across the enterprise.  The aim is to detect and prevent lateral movement by attackers or malicious users across the network (a toy sketch of the idea follows the vendor list below).

Example players in this space include;

  • VMware
  • Cisco
  • Illumio
  • vArmour
  • Trend Micro
  • Catbird
  • CloudPassage
  • GuardiCore
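
As a toy illustration of the concept (not any particular product), the sketch below checks observed east-west flows against a declared segmentation policy and flags anything unexpected; the tier names, ports and flows are made up.

```python
# Toy sketch: flag east-west flows that fall outside a declared segmentation
# policy. Tier names, ports and flows are invented for illustration.
ALLOWED_FLOWS = {
    ("web", "app", 8443),   # web tier may call the app tier API
    ("app", "db", 5432),    # app tier may query the database
}

observed_flows = [
    ("web", "app", 8443),
    ("web", "db", 5432),    # web talking directly to the database: suspicious
    ("app", "app", 22),     # SSH between app servers: possible lateral movement
]

for src, dst, port in observed_flows:
    if (src, dst, port) not in ALLOWED_FLOWS:
        print(f"ALERT: unexpected east-west flow {src} -> {dst} on port {port}")
```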

Deception

Technologies designed to deceive attackers into thinking closely monitored decoy systems are real business systems hosting data they would want to access.  These have been around for a long time and are often referred to as 'honeypots' or 'honeynets'.  Recently some technologies have become a lot more mature and realistically deployable, and businesses are increasingly understanding the need for more advanced security solutions (a minimal sketch of the honeypot idea follows the vendor list below).

Example players in this space include;

  • Attivo Networks
  • TrapX Security
  • Cymmetria
  • GuardiCore
  • illusive networks
  • Javelin Networks
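
For illustration only, here is a minimal Python sketch of the honeypot idea: a decoy listener on an unused port where any connection at all is treated as an alert.  Real deception platforms obviously add realistic content, breadcrumbs and central management.

```python
# Minimal decoy listener, for illustration only. Nothing legitimate should ever
# connect to this unused port, so any connection is a high-fidelity alert.
import socket
from datetime import datetime, timezone

def run_decoy(host="0.0.0.0", port=2222):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while True:
            conn, (src_ip, src_port) = srv.accept()
            with conn:
                ts = datetime.now(timezone.utc).isoformat()
                print(f"{ts} ALERT: {src_ip}:{src_port} touched decoy port {port}")

if __name__ == "__main__":
    run_decoy()
```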

 

2. Enablement and Access Facing

Cloud Access Security Brokers (CASB)

The aim of these solutions is to provide a single point of control for cloud use in the organisation.  These can detect, control and apply various security functions such as access control lists and encryption to cloud use.

Example players in this space include;

  • Skyhigh Networks
  • Netskope
  • CipherCloud
  • Microsoft (Adallom)
  • CloudLock
  • Blue Coat (Elastica, Perspecsys)
  • FireLayers
  • Palerra

User and Entity Behavioural Analytics

No presentation this year would be complete without a mention of behavioural analytics of some sort!

The aim of user and entity behavioural analytics is to analyse and correlate user behaviour across systems and networks for indications of malicious activity, in order to detect things like compromised accounts or malicious insiders (a toy sketch of the idea follows the vendor list below).

Example players in this space include;

  • Securonix
  • Gurucul
  • Fortscale
  • Splunk
  • Niara
  • Interset
  • E8 Security
  • LightCyber
  • Microsoft
  • Rapid7
  • Exabeam
  • Forcepoint
  • Bay Dynamics
  • Bottomline Technologies
  • Cynet
  • Dtex Systems
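
As a toy sketch of the 'learn normal, alert on deviation' idea (nothing like a real UEBA engine, which correlates many more signals), the example below baselines each user's typical login hours and flags a login far outside that baseline; the users and data are invented.

```python
# Toy behavioural baseline: flag logins far outside a user's normal login hour.
from statistics import mean, stdev

# Hypothetical history of login hours (24h clock) per user
history = {
    "alice": [8, 9, 9, 8, 10, 9, 8, 9],
    "bob":   [22, 23, 21, 22, 23, 22, 23, 21],
}

def is_anomalous(user, login_hour, threshold=3.0):
    hours = history[user]
    mu, sigma = mean(hours), stdev(hours)
    if sigma == 0:
        return login_hour != mu
    return abs(login_hour - mu) / sigma > threshold

print(is_anomalous("alice", 9))   # False: within normal working pattern
print(is_anomalous("alice", 3))   # True: a 3am login is well outside baseline
```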

Pervasive Trust Services

This is a particularly interesting area.  These are trust services that are designed to scale to cover billions of devices, including IoT devices that may have limited processing capability.

This requires a fundamental paradigm shift to a web of trust model with distributed consensus.  We must realise trust comes in shades of grey, not the traditional yes / no of authentication: if the trust in the requester is higher than the risk of the action, proceed (a toy sketch of this decision follows the vendor list below).

This is another area I'm likely to write up in more detail as it is an exciting space.  It is likely to become a lot more relevant as IoT grows, and as regulations like PSD2 and GDPR come into play requiring stronger identification and authentication around payments and personal data.

Example players in this space include;

  • Certes Networks
  • CSS
  • ForgeRock
  • ARM Holdings (Sansa Security)
  • Guardtime
  • Hyperledger Project
  • Tyfone
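
To illustrate the 'shades of grey' idea, here is a minimal sketch of a risk-adaptive decision: compute a trust score from a few signals and proceed only if it covers the risk of the requested action.  The signals, weights and thresholds are entirely invented for illustration.

```python
# Toy sketch of "proceed if trust exceeds risk". The signals, weights and risk
# levels are made up purely to illustrate shades-of-grey trust decisions.
def trust_score(signals):
    weights = {
        "device_attested": 0.35,   # e.g. hardware-backed device identity
        "strong_auth": 0.35,       # e.g. recent multi-factor authentication
        "behaviour_normal": 0.30,  # e.g. consistent with historical behaviour
    }
    return sum(weights[k] for k, present in signals.items() if present)

def decide(action_risk, signals):
    trust = trust_score(signals)
    return "proceed" if trust >= action_risk else "step-up or deny"

# A low-value IoT telemetry read vs. a high-value payment instruction
signals = {"device_attested": True, "strong_auth": False, "behaviour_normal": True}
print(decide(0.4, signals))   # proceed: trust 0.65 covers a low-risk action
print(decide(0.9, signals))   # step-up or deny: not enough trust for this risk
```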

Security Testing for DevOps

Tools and solutions that enable the integration of security testing into the automated DevOps workflow.  This enables the delivery of secure applications without adversely impacting delivery timelines (a minimal sketch of a pipeline gate follows the vendor list below).

Example players in this space include;

  • Hewlett Packard Enterprise (HPE)
  • IBM
  • Veracode
  • Amazon
  • Contrast Security
  • Synopsys (Quotium)
  • Immunio
  • SecuPi
  • Sonatype
  • Black Duck
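
As a minimal sketch of the idea (assuming a hypothetical requirements.txt and a hand-maintained vulnerability list, where a real pipeline would call a proper scanning tool), the script below could run as a CI stage and fail the build when a known-vulnerable dependency is declared.

```python
# Toy CI gate: fail the build if a declared dependency matches a known-bad
# version. File names and data are hypothetical; real pipelines would invoke
# a proper SCA/SAST/DAST tool here rather than a hand-maintained list.
import sys

KNOWN_VULNERABLE = {
    ("examplelib", "1.2.0"),   # hypothetical CVE-affected release
    ("otherlib", "0.9.1"),
}

def parse_requirements(path="requirements.txt"):
    deps = []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line and not line.startswith("#") and "==" in line:
                name, version = line.split("==", 1)
                deps.append((name.lower(), version))
    return deps

if __name__ == "__main__":
    findings = [d for d in parse_requirements() if d in KNOWN_VULNERABLE]
    for name, version in findings:
        print(f"FAIL: {name}=={version} has a known vulnerability")
    sys.exit(1 if findings else 0)   # non-zero exit breaks the pipeline stage
```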

3. Intelligence-Driven SOC

These are solutions that aim to provide greater intelligence and orchestration to the SOC (Security Operations Centre) in order that it can scale and spot the key security events.  These tools also enable greater use of threat intelligence feeds to support the SOC.

Example players in this space include;

  • CyberSponse
  • Hexadite
  • I.D. Systems
  • Phantom Cyber
  • Swimlane
  • IBM (Resilient Systems)
  • FireEye (Invotas)

 

I hope this has provided a useful overview of some key areas you should be thinking about in your security strategy.  The companies to look into are a mix of new players and more established companies trying to get into new areas, either via development or acquisition – as always, interesting times in the security space!

Many of these, especially areas like behavioural analytics and trust, are getting a lot of hype, so be prepared for questions from your more security-aware board members!

Feel free to ask any questions you have.

K

 

 

Threat intelligence services – Why, What and Who

This was another Gartner talk covering the threat intelligence landscape, what you can expect, and things to consider.

Where did that come from?!

Important concept: “Threat”; 

  • A threat exploits a vulnerability resulting in an incident
    • Threat – you can't control this; you can only be well informed and plan for its arrival
    • Vulnerability – you can control and understand these – secure coding, defence in depth, vulnerability databases etc.
    • Incident – you want to avoid this!!

The problem is getting the Visibility…

  • The bad guys follow the same lifecycle that we do..
    • They talk and research – planning – perhaps up to a year or more
    • They customise attacks – build
    • They attack – run

Without threat intelligence your view looks like;

  • Ignorance (they are researching)
  • Ignorance (they are planning)
  • Hacked (they are running their attack)

Understanding upcoming threats allows you to match defences and mitigations required to your strategic planning cycle.  To do this we need good information on what is coming up, and what the bad guys are discussing for the future.

 Important concept: “Intelligence”

  • Goes beyond the obvious, trivial, or self evident:
    • developed by correlating and analysing multiple data sources / points
  • Includes a range of information, for example:
    • Goals of the threat actor
    • Characteristics of the threat, and potential organisational outcomes if it is successfully executed
    • Indicators and defences
    • Life expectancy of the threat
    • Reliability of the information
  • Use it to:
    • Avoid the threat
    • Diagnose an incident
    • Support decisions on how to invest in security (strategic planning)

Reliability and planning horizon are key considerations;

  • Network traffic feeds – automated information feeds – very reliable, but not real intelligence – good for immediate issues, not for planning.  Inexpensive (a toy sketch of consuming such a feed follows this list)
  • Operational intelligence – a combination of automated and human analysis, e.g. malware analysis.  More intelligent than the above, good for immediate planning, reasonably reliable (for the short term).  Still relatively inexpensive
  • Strategic intelligence – can be very tailored to your organisation: a great deal of human interaction, custom research and human judgement.  Reasonably reliable, but as the planning horizon extends reliability obviously lowers as criminals can change plans.  Expensive, but great for strategic planning, especially if you are in a high risk industry or organisation
  • Snake oil – no one can predict 3-5 years out with certainty, so don't believe anyone who says they can..
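
As a toy sketch of that cheapest tier, the snippet below consumes a hypothetical machine-readable indicator feed (a CSV of known-bad IPs) and matches it against proxy log lines; the file names and formats are invented for illustration.

```python
# Toy sketch: consume a machine-readable indicator feed and match it against
# proxy logs. Invented file names/formats; real feeds use STIX/TAXII etc.
import csv

def load_bad_ips(feed_path="indicator_feed.csv"):
    with open(feed_path, newline="") as fh:
        return {row["ip"] for row in csv.DictReader(fh)}

def scan_proxy_log(log_path, bad_ips):
    hits = []
    with open(log_path) as fh:
        for line in fh:
            dest_ip = line.split()[-1]        # assume destination IP is the last field
            if dest_ip in bad_ips:
                hits.append(line.strip())
    return hits

if __name__ == "__main__":
    bad = load_bad_ips()
    for hit in scan_proxy_log("proxy.log", bad):
        print("Matched indicator:", hit)
```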

Recommendations;

  • Use dedicated services to plan for long term strategies, and ensure you are concerned about the right threats.
    • It can take up to two years to be ready for an emerging threat.
  • Plan – How will you use the service?  How will it be consumed? Who will consume it?
  • Consider whether you need just the threat intelligence, or adjacent services as well.
  • Before using, engage heavily with the vendor;
    • How flexible are they to your needs?
    • Will they go outside of the contract in an emergency or to assist you?
    • How well can you work with them – you need a good, trusted and close working relationship with them.

 

If you are considering a threat intelligence service, this talk raised some great points to consider.  For me, the key point is how well you can work with the provider.  For these services to be successful you need to work very collaboratively together, and they need to have a deep understanding of your specific business and concerns, not just your industry sector.  Another recommended talk.

K

Web Application Firewalls

This talk from Gartner covered WAFs, their functionality, whether they are required, and possible alternatives;

Software security is improving but hasn’t caught up with the threat landscape.

Attackers have motivation, time, expertise and many targets.

Software security can be improved by better education, QA, SDLC, Frameworks and tools.

  • This helps close the gap, but it still remains
  • Many legacy applications or components will exist for a long time

A defence in depth approach is required to protect applications;

  • Firewall – allows or blocks traffic based on IP and port – positive security model: deny all traffic unless explicitly allowed
  • NIPS (Network-based Intrusion Prevention System) – negative security model: signatures and protocol validation
  • WAF – identifies and blocks application layer attacks (a toy sketch of the two models follows this list)
    • Negative security model – fixed rules, blacklist known bad, expert deployment
    • Positive security model – automatic application behaviour learning, whitelist known good, straightforward deployment model
    • Passively block or actively modify traffic to prevent specific attacks
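
To make the two models concrete, here is a toy Python sketch of each applied to a single query parameter: a blacklist of known-bad patterns (negative model) and a learned whitelist of what the application expects (positive model).  The patterns are simplified examples, not production WAF rules.

```python
# Toy illustration of the two WAF models on a single query parameter.
# Real WAFs inspect full requests and learn per-URL profiles; these patterns
# are simplified examples only.
import re

# Negative model: block anything matching known-bad signatures
BLACKLIST = [
    re.compile(r"(?i)union\s+select"),     # crude SQL injection signature
    re.compile(r"(?i)<script\b"),          # crude XSS signature
]

# Positive model: only allow what the application is known to expect,
# e.g. a learned profile saying 'id' is 1-6 digits
LEARNED_PROFILE = {"id": re.compile(r"^\d{1,6}$")}

def negative_model_allows(value):
    return not any(sig.search(value) for sig in BLACKLIST)

def positive_model_allows(param, value):
    profile = LEARNED_PROFILE.get(param)
    return bool(profile and profile.fullmatch(value))

print(negative_model_allows("1 UNION SELECT password FROM users"))  # False
print(positive_model_allows("id", "42"))                            # True
print(positive_model_allows("id", "42 OR 1=1"))                     # False: not in the learned profile
```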

Additional functionality over other network security tools found in many WAFs;

  • Authentication and authorisation
  • ADC functionality
  • SSL termination
  • Anti-scraping
  • Threat intelligence
  • Content inspection, data masking, and DLP

 Differentiators;

  • All have basic signatures and filtering
  • Differ in;
    • Level of granularity
      • policies per application
      • policies per URL
      • fully scriptable rule engine vs. high level settings
    • Positive model capabilities
    • Additional functionality
    • Deployment methods

Interest in WAF from a business risk perspective is increasing: 

  • Protects against identified vulnerabilities: Buys time as a quick fix, and provides long-term mitigation for legacy Web applications.
  • Protects against generic classes of attacks, such as SQL injection and brute force.
  • Protects against attacks targeted at your application: Requires active response and granular policy settings.

Also, do not underestimate the benefits of the extras such as performance, caching, authentication..

What are the latest developments in WAF technology?

  • Evolution in data interchange and protocol standard support, such as JSON, XML, GWT, HTML5, SPDY, IPv6
  • User and device validation and integration with Web fraud prevention:
    • True source/real IP identification proxies
    • Geolocation and reputation services
    • Injection/Execution of code for user validation and rudimentary fraud detection
  • Increasing support for Web vulnerability scanners (DAST): “Virtual patching”
  • Support for virtualisation and SaaS Web applications, and cloud delivery options for WAF
  • Improved layer 7 DDoS protection

WAFs, are they viable for the future?

Yes..

  • They provide application layer functionality largely unavailable in many other network based defences.  They should be considered as part of your defence in depth profile for any web applications.
  • Cloud based solutions may become more viable
  • Detection quality will improve as they better understand your applications and also the browser's capabilities
  • Detection engine improvements will be required in order to keep up with evolving threats
    • But must not impact performance!
  • Must scale with the web applications.
    • Virtualisation support is critical

What alternatives are there?

  • Secure coding is the main alternative.  This sounds simple, however…
    • History shows that this fails
      • Bad scalability
      • Much insecure legacy code
      • No control over code – software from vendors, third party code etc.
  • Some functionality may be subsumed into other technology such as ADCs (Application Delivery Controllers) and CDNs (Content Delivery Networks) – so watch these spaces.
  • NGFW (Next Generation Firewall) and NGIPS (Next Generation Intrusion Prevention System) products are becoming more application aware, but do not and are unlikely to ever deliver full WAF functionality

Recommendations;

  • Determine use case;
    • Compliance – buy “anything”…
    • Security – Buy a leader with low false positives and simple management
    • Application security – buy as part of an application initiative, ensure advanced policies are supported
  • If you have ADCs – assess the capabilities of these
  • Track CDN WAF capabilities
  • Complement with comprehensive monitoring and alerting capabilities

This was a very interesting, vendor-neutral talk that provided a good intro to WAFs, and some useful thoughts on implementing them and possible future enhancements.  Recommended.

K

Gartner Security and Risk Management conference – Software Defined Networking

This was an introductory talk around Software Defined Networking (SDN) and some of its security implications.

What is it?

  • Decoupling the control plane from the data plane and centralising logical control
  • Communication between network devices and SDN controllers is with both open and proprietary protocols currently – no single standard..
  • The SDN controller supports open interfaces to allow external programmability of the environment

– The controller tells each node how to route traffic, vs. the current model where each node makes its own routing decisions.

 How do I enforce network security in an SDN environment?

  • Switch as the Policy enforcement point
    • The switch tells the controller it has seen traffic with certain flow characteristics; the flow controller tells it what to do with the flow, and this decision is cached in the local flow table for a specified time.  Another flow arrives and this one is not permitted, so the controller tells the switch to drop the packets – the switch effectively becomes a stateful firewall (a toy sketch of this interaction follows this list).
    • Existing controls such as DLP, Firewalls, Proxy servers etc. can all be used with SDN –
      • e.g. someone tries to connect to the internet – flow controller instructs switch to send traffic to the firewall / IPS / DLP server etc.
      • e.g. sending email – no matter where it’s going flow says first point is DLP, then firewall, then onto destination
      • This means devices no longer need to be inline – they can be anywhere on the network.  Flow controller just needs to know where to send certain traffic types!
    • Incoming flows can be treated in the same way
      • Something changes – such that it looks like DDoS – traffic can be routed to the DDoS protection device(s)
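
As a toy sketch of the switch/controller interaction above (in-process Python rather than a real protocol such as OpenFlow, and with an invented policy), the controller below decides what to do with an unknown flow and the switch caches that decision in its local flow table.

```python
# Toy sketch of the switch/controller interaction. The policy and actions are
# invented; real deployments use OpenFlow or similar, not function calls.
POLICY = [
    # (dst_port, action) evaluated in order; action is where to send the flow
    (25,   "redirect:dlp-then-firewall"),   # outbound email inspected first
    (80,   "redirect:proxy"),               # web traffic via the proxy/IPS
    (3389, "drop"),                         # RDP to the internet not permitted
]

def controller_decide(flow):
    """Controller returns an action for a flow the switch has not seen."""
    for port, action in POLICY:
        if flow["dst_port"] == port:
            return action
    return "forward"                        # default: normal forwarding

flow_table = {}   # the switch's local cache of controller decisions

def switch_handle_packet(flow, ttl=30):
    key = (flow["src"], flow["dst"], flow["dst_port"])
    if key not in flow_table:               # unknown flow: ask the controller
        flow_table[key] = {"action": controller_decide(flow), "ttl": ttl}
    return flow_table[key]["action"]

print(switch_handle_packet({"src": "10.0.0.5", "dst": "203.0.113.9", "dst_port": 25}))
print(switch_handle_packet({"src": "10.0.0.5", "dst": "203.0.113.9", "dst_port": 3389}))
```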

What risks does SDN introduce?

  • Risk is aggregated in the controller
    • Malicious or accidental changes could remove some or all of the security protections
  • Integrity of the flow tables must be maintained
    • Switches etc. must be managed from the controller, not locally
  • Input from applications must be managed and prioritised
    • Application APIs are non standard
    • Who gets precedence?
      • Load balancer vs. security tools when defining traffic flows?

SDN products do exist now.

  • Standards do exist
    • OpenFlow – maintained by Open Networking Foundation
  • Network devices (early days)
    • Open vSwitch
    • Some products from Brocade, Cisco, HP, IBM
  • Controllers (limited maturity)
    • Floodlight (open source)
    • Products from Big Switch Networks, Cisco, HP, NEC, NTT Data, VMware
  • Applications (often tied to specific controllers)
    • Radware and HP produce some security applications

Recommendations;

  • Do not overreact to SDN hype
  • Combine IT disciplines when implementing SDN
    • Don’t forget security!!
  • Determine how existing control requirements can be met with SDN
  • Examine how SDN impacts separation of duties
    • Some similar issues to virtualisation
  • Discuss SDN with your existing security vendors
  • Deploy SDN in a lab or test environment
    • PoC and understand fully before deploying

 

Overall this was an informative and fast paced talk.  As per the speaker's recommendations, SDN is a very interesting technology, although it is still in the emerging phase, with the majority of deployments currently in testing or academia.  I wouldn't yet recommend it for production data centre deployments, but I would recommend you become familiar with it, especially if you work in the networking or security fields.

 

K

Updates and what’s coming up..

As mentioned in my McAfee post, I'd meant to produce a quick update covering recent goings-on and what's coming up, as my blog updates have been a little erratic over the last few months.

Life has been pretty busy: on top of work, getting married and a couple of honeymoons, I now officially have my Masters in 'Distributed Systems and Networks'!  This has been a long while coming, as I have been working on my part-time MSc for the last 2.5-3 years outside of office hours.  Getting a 'commended' result was very pleasing as I expected just a standard 'pass'.  OK, so it's not a distinction, but still good.

This has also meant my work with the Cloud Security Alliance has slipped somewhat this year due to a lack of time.  I'm hoping to get more involved again now that things will hopefully be slowing down slightly – well, apart from the impending house move of course!

Regarding work, I'm still getting to work on some very interesting projects and great technologies, some of which I'll be writing about in upcoming posts.

Talking of upcoming posts, I am at the Gartner Security and Risk Management conference this week, and the Information Security Forum world annual congress in November, both of which should provide some interesting material to share.  I’ll likely try to follow a similar approach to previous conferences and mainly ‘live-blog’ from the talks as they happen.

K

Service Technology Symposium Day 2..

Today was the second day of the Service Technology Symposium.  As with yesterday I’ll use this post to review the keynote speeches and provide an overview of that day.  Where relevant further posts will follow, providing more details on some of the days talks.

As with the first day, the day started well with three interesting keynote speeches.

The first keynote was from the US FAA (Federal Aviation Administration) and was titled 'SOA, Cloud and Services in the FAA airspace system'.  The talk covered the program that is underway to simplify the very complex National Airspace System (NAS).  This is the 'system of systems' that manages all flights in the US and ensures the control and safety of all the planes and passengers.

The existing system is typical of many legacy systems.  It is complex, all point to point connections, hard to maintain, and even minor changes require large regression testing.

Thus a simplification program has been created to deliver a SOA-based, web-centric, decoupled architecture.  To give an idea of the scale, this program is in two phases, with phase one already largely delivered, yet the program is scheduled to run through 2025!

As mentioned, the program is split into two segments to deliver capabilities and get buy-in from the wider FAA.

– Segment 1 – implemented a set of federated services, some messaging and SOA concepts, but no common infrastructure.

– Segment 2 – common infrastructure – more agile; this segment is effectively creating a message bus for the whole system.

The project team was aided by the creation of a Wiki, and COTS (commercial off the shelf) software repository.

They have also been asked to assess the cloud – there is a presidential directive to ‘do’ cloud computing.  They are performing a benefits analysis from operational to strategic.

Key considerations are that cloud must not compromise NAS,  and that security is paramount.

The cloud strategy is defined, and they are in the process of developing recommendations.  It is likely that the first systems to move to the cloud will be supporting and administrative systems, not key command and control systems.

The second keynote was about cloud interoperability and came from the Open Group.  Much of this was taken up with who the Open Group are and what they do.  Have a look at their website if you want to know more;

http://www.opengroup.org/

Outside of this, the main message of the talk was the need for improved interoperability between different cloud providers.  This would make it easier to host systems across vendors, and easier for customers to change providers.

As a result improved interoperability would also aid wider cloud adoption – Interoperability is one of the keys to the success of the cloud!

The third keynote was titled ‘The API economy is here: Facebook, Twitter, Netflix and YOUR IT enterprise’.

API refers to Application Programming Interface, and a good description of what this refers to can be found on Wikipedia here;

http://en.wikipedia.org/wiki/Application_programming_interface

The focus of this keynote was that, by making their APIs public and by making use of public APIs, businesses can help drive innovation.

Web 2.0 – lots of technical innovation led to Web 2.0, which then led to and enabled human innovation via the game changer that is the open API: reusable components that can be used, accessed and built on by anyone.  Then add the massive, always-on user base of smartphone users into the mix, with more power in your pocket than was needed to put Apollo on the moon.  The opportunity to capitalise on open APIs is huge.  As an example, there are currently over 1.1 million distinct apps across the various app stores!

Questions for you to consider;

1. How do you unlock human innovation in your business ecosystem?

– Unlock the innovation of your employees – how can they innovate and be motivated?  How can they engage with the human API?

– Unlock the potential of your business partner or channel sales community; e.g. Amazon Web Services – merchants produce, provide and fulfil orders for goods, while Amazon provides the framework to enable this.

– Unlock the potential of your customers; e.g. IFTTT (If This Then That), who have put workflow in front of many of the APIs available on the internet.

2. How to expand and enhance your business ecosystem?

– Control syndication of the brand – e.g. the Facebook 'like' button – everyone knows what this is, and every user has to use the same standard like button.

– Expand the breadth of the system – e.g. Netflix used to be just website video on demand and is now available on many platforms – consoles, mobile, tablet, smart TV, PC etc.

– Standardise the experience – e.g. Kindle or Netflix – you can watch or read on one device, stop, and pick up from the same place on another device.

– Use APIs to create 'gravity' to attract customers to your service by integrating with services they already use – e.g. travel aggregation sites.

This one was a great talk with some useful thought points on how you can enhance your business through the use of open APIs.

On this day I fitted in six talks and had one no-show.

These were;

Talk 1 – Cloud computing's impact on future enterprise architectures.  Some interesting points, but a bit stuck in the past, with a lot of focus on 'your data could be anywhere' when most vendors now provide consumers the ability to ensure their data remains in a specific geographical region.  I won't be prioritising writing this one up, so it may or may not appear in a future post.

Talk 2 – Using the cloud in the Enterprise Architecture.  This one should have been titled 'the Open Group and TOGAF', with 5 minutes of cloud-related comment at the end.  Another one that likely does not warrant a full write up.

Talk 3 – SOA environments are a big data problem.  This was a brief talk but with some interesting points around managing log files, using Splunk, and 'big data'.  There will be a small write up on this one.

Talk 4 – Industry orientated cloud architecture (IOCA).  This talk covered the work Fulcrum have done with universities to standardise their architectures and messaging systems to improve inter-university communication and collaboration.  It was mostly marketing for the Fulcrum work and there wasn't a lot of detail, so it is unlikely to be written up further.

Talk 5  – Time for delivery: Developing successful business plans for cloud computing projects.  This was a great talk with a lot of useful content.  It was given by a Cap Gemini director so I expected it to be good.  There will definitely be a write up of this one.

Talk 6 – Big data and its impact on SOA.  This was another good, but fairly brief, talk; it will get a short write up, possibly combined with Talk 3.

And there you have it: that is the overview of day two of the conference.  Looks like I have several posts to write covering the more interesting talks from the two days!

As a conclusion, would I recommend this conference?  It's a definite maybe.  Some of the content was very good, some either too thin or completely focussed on advertising a business or organisation.  The organisation was also terrible, with three talks I planned to attend not happening and the audience left hanging rather than being informed the speaker hadn't arrived.

So a mixed bag, which is a shame as there were some very good parts, and I managed to get 2 free books as well!

Stay tuned for some more detailed write ups.

K

Service Technology Symposium Day 1..

So yesterday was day one of the Service Technology Symposium.  This is a two day event covering various topics relating to cloud adoption, cloud architecture, SOA (Service Orientated Architecture) and big data.  As mentioned in my last post my focus has mostly been on the cloud and architecture related talks.

I’ll use this post to provide a high level overview of the day and talks I attended, further posts will dive more deeply into some of the topics covered.

The day started well with three interesting keynotes.

The first was from Gartner, covering the impact of moving to the cloud and using SOA on architecture / design.  The main point of this talk was understanding the need to move to a decoupled architecture to get the most from any move to the cloud.  This was illustrated via the 'Any to Any to Any' architecture paradigm, where this is;

Any Device – Any Service – Any Data

Gartner identified a 'nexus of forces' driving this need to decouple system components;

– Mobile – 24/7, personal, context aware, real time, consumer style

– Social – Activity streams, Personal intelligence, group sourcing, group acting

– Information – variety, velocity, volume, complexity

– Cloud services

In order to achieve this, the following assumptions must hold: all components are independent and autonomous, they can live anywhere (on premises or in the cloud), and applications must be decoupled from services and data.

They also highlighted the need for a deep understanding of the SOA principles.

The second keynote speech was from the European Space Agency on their journey from legacy applications and development practices to SOA, and was titled 'Vision to reality: SOA in space'.

They highlighted 4 drivers for their journey; Federation – Interoperability – Alignment to changing business needs / requirements (agility) – Reduce time and cost.

They identified how these drivers would be realised using SOA and standards, as outlined below;

Federation – SOA, Standards

Interoperability – SOA, Standards

Alignment to business needs – SOA, Top Down and Bottom up

Reduce costs – Reuse; SOA, Incremental development

Overall this was an interesting talk and highlighted a real world success story for SOA in a very complex environment.

The third keynote was from NASA Earth Science Data Systems.  This provided an overview of their use of SOA, the cloud and semantic web technologies to aid their handling of ‘big data’ and complex calculations.  They have ended up with a globally diverse hybrid cloud solution.

As a result of their journey to their current architecture they found various things worthy of highlighting as considerations for anyone looking to move to the cloud;

– Understand the long term costs of cloud storage (cloud more expensive for their needs and data volumes)

– Computational performance needed for science – understand your computational needs and how they will be met

– Data movement to and within the cloud – data ingest, data distribution – how will your data get to and from the cloud and move within the cloud?

– Process migration – moving processes geographically closer to the data

– Consider hybrid cloud infrastructures, rather than pure cloud or pure on premises

– Security – always a consideration; they have worked with Amazon GovCloud to meet their requirements

To aid their move to SOA and the cloud, NASA created various working groups covering areas such as data stewardship, interoperability, semantic technologies, standards and processes.

This has been successful for them so far, and currently NASA Earth Sciences makes wide use of SOA, semantic technologies and the cloud (especially for big data).

The day then moved to 7 separate tracks of talks, which turned out for me to be somewhat of a mixed bag.

Talk 1 was titled 'Introducing the cloud computing design patterns catalogue'.  This is a relatively new project to create re-usable design patterns for moving applications and systems to the cloud.  The project can be found here;

www.cloudpatterns.org

Unfortunately the intended speaker did not arrive so the talk was just a high level run through the site.  The project does look interesting and I’d recommend you take a look if you are involved in creating cloud based architectures.

The second talk was supposed to be 'A cloud on-boarding strategy'; however, the speaker did not turn up, and the organisers had no idea whether he was coming or not, so a lot of people's time was wasted.  While it's outside of the organisers' control whether someone arrives, they should have been aware the speaker had not registered and let us know, rather than the 45 minutes of 'is he, isn't he, we just have no idea' that ensued.

The third talk was supposed to be ‘developing successful business plans for cloud computing projects’.  This was again cancelled due to the speaker not arriving.

Talk 2 (talks numbered by my attendance) was a Gartner talk titled ‘Building Cloudy Services’.  This was an interesting talk that I’ll cover in more depth in a following post.

Talks three to five were also all interesting and will be covered in some more depth in their own posts.  They had the below titles;

Talk 3 was titled ‘HPC in the cloud’

Talk 4 was titled ‘Your security guy knows nothing’

Talk 5 was titled ‘Moving applications to the cloud’

The final talk of the day was titled 'Integration, are you ready?'  This was however a somewhat misleading title.  The talk was from a cloud ESB vendor and was basically just an advertisement for their product and how great it was for integration, not about integration in general.  Not what you expect from a paid-for event.  I'll not mention their name, other than to say they seem to have been inspired by a piece of peer to peer software.. Disappointing.

Overall, despite some organisational hiccups and a lack of vetting of at least one vendor's presentation, day one was informative and interesting.  Look out for more detailed follow up posts over the next few days.

K