2016 Resolutions. The detail..

As promised, this follow up post will outline what I mean by each of the ‘resolutions’ I highlighted.

These were;

  1. Patch.  Everything.  On time.
  2. Protect your hosts.  Do application whitelisting.
  3. No admin rights for anyone who can access production data.
    1. No one with admin rights can access data.
  4. Role Based Access.
  5. Segregate your networks.
  6. If you create code, do solid code assurance.
  7. Test and Audit.

 

1. Patch.  Everything.  On time.

Sounds simple, right?  It should be, but it seems it isn’t in many companies.  From my experience there seem to be two main drivers for so many companies failing this most basic of maintenance tasks;

  • Systems that must have almost 100% uptime, with no, or ill-defined, patching windows and processes.  This goes hand in hand with these solutions being incorrectly designed: if a system must always be ‘up’, design it in such a way that components can be taken out of service to be patched and maintained (or indeed if they fail).
  • Incorrect ownership of, and drivers for, the patching process.  In many organisations it seems to be the security team who drive the need to apply ‘security’ patches.  This needs to be turned around: any system in production must be patched and maintained as part of BAU (business as usual).  Systems / solutions should never be handed over into production without clear ownership and agreed processes for maintaining them, and this must include patching.  Security then becomes an assurance function, with its scans / checks confirming that the process is being correctly followed, plus of course highlighting any gaps.

If you see these issues in your organisation, make 2016 the year you address them; don’t be the next business in the headlines that is hacked due to systems that have not been patched for months!
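To make the assurance role described above a little more concrete, here is a minimal sketch (in Python, with an invented inventory format) of the kind of check a security team might run to flag hosts that have fallen outside an agreed patching window.  Your CMDB or scanning tool will expose this data differently, so treat this purely as an illustration.

    # Hypothetical sketch: flag hosts whose last patch date is outside an agreed
    # window. Assumes an inventory export (CSV) with 'hostname' and 'last_patched'
    # columns in ISO date format - adjust to whatever your tooling actually provides.
    import csv
    from datetime import date, datetime, timedelta

    PATCH_WINDOW = timedelta(days=30)  # agreed maximum patch age; set per policy

    def overdue_hosts(inventory_csv, today=None):
        today = today or date.today()
        overdue = []
        with open(inventory_csv, newline="") as f:
            for row in csv.DictReader(f):
                last = datetime.strptime(row["last_patched"], "%Y-%m-%d").date()
                if today - last > PATCH_WINDOW:
                    overdue.append(row["hostname"])
        return overdue

    if __name__ == "__main__":
        for host in overdue_hosts("inventory.csv"):
            print("OVERDUE:", host)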

2. Protect your hosts. Do application whitelisting.

With the ever more porous nature of our networks and perimeters, coupled with the insider threat, phishing and the like, protecting our hosts is becoming ever more critical.

AV (Anti Virus / Malware) is not dead, but it clearly is not enough on its own.  Indeed, you will struggle to find any host protection product that only does AV these days.  Ensure all your hosts, both servers and user endpoints, are running a solid, up to date and centrally managed host protection solution (or solutions) providing anti malware, host IPS (Intrusion Prevention System), host fire-walling and ideally FIM (File Integrity Monitoring).

I’m gradually trying to change people’s language from AV / Anti Malware to Host Protection, as I think this covers both the requirement and many of the solutions far better.

In addition to this I would strongly recommend the use of an application whitelisting solution, as this can provide a key defence in preventing any unapproved (or malicious) software from running.  As well as preventing malware, these solutions have the added benefit of helping to maintain a known environment, running only known and approved software.
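To illustrate the principle (and only the principle, this is not how any particular product is built), the decision at the heart of application whitelisting is simply “is this binary’s hash on the approved list?”.  A rough Python sketch, with made-up hash values:

    # Illustrative only: real whitelisting products enforce this in the O/S or
    # kernel; this just shows the allow-by-hash decision logic.
    import hashlib

    APPROVED_SHA256 = {
        "0123456789abcdef...",  # hypothetical hashes of approved binaries
    }

    def sha256_of(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def may_execute(path):
        return sha256_of(path) in APPROVED_SHA256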

3. No admin rights for anyone who can access production data.  No one with admin rights can access data.

This is something I am currently championing as a great way to reduce the risk to your organisation’s data.

This may be harder for very small organisations, but for medium and larger ones, think about the different roles your teams have.

How many people who need to access key data, e.g. via production applications, need to have administrative rights on their end user systems, or on the production systems?

Conversely, how many of the system administrators who maintain systems and databases etc. need access to the actual production data in order to perform their duties?

One of the most common ways malware gets a hold is via users with administrative privileges.  So if we prevent any user with these elevated privileges from having access to data, then even if they or their systems are compromised, the risk of data loss or of damage to data integrity is massively reduced.

While it may seem a substantial challenge to prevent administrators from having access to data, there are at least a couple of obvious options.

Some host protection solutions claim to have separation of duties capabilities that control who can access data, rather than relying solely on O/S (Operating System) permissions.  I have not tested these though.

Various companies offer transparent encryption solutions that have their own set of ACLs, managed independently of the O/S permissions.  These can be managed by, for example, the security team to ensure only approved business users can access data, while still permitting administrators to perform their role.
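As a conceptual sketch of that separation (the file names and roles below are invented, and this is not how any specific vendor implements it): the encryption layer keeps its own ACL, so an administrator with full filesystem access still cannot decrypt the data.

    # Conceptual sketch: access to decrypted data is decided by a security-managed
    # ACL, independent of O/S filesystem permissions.
    DATA_ACL = {
        "payroll.db": {"payroll_officer", "finance_app"},  # approved business roles only
    }

    def can_decrypt(filename, role):
        # O/S permissions are deliberately not consulted here.
        return role in DATA_ACL.get(filename, set())

    print(can_decrypt("payroll.db", "payroll_officer"))  # True
    print(can_decrypt("payroll.db", "sysadmin"))         # False - admin runs the box, not the data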

4. Role Based Access.

This one should hopefully require minimal explanation.  Each type of user should have a defined role, with associated system permissions allowing them to access the data and carry out the tasks their role requires.

This ensures people are only able to access the data they are supposed to.  The principle of ‘least privilege’ must be adhered to when creating roles and applying permissions, so that everyone can perform their duties but cannot carry out tasks outside of those that are approved.
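A minimal sketch of the idea, with invented role and permission names: each role carries only the permissions needed for the job, and every action is checked against that mapping.

    # Role based access with least privilege, in miniature.
    ROLE_PERMISSIONS = {
        "claims_handler": {"read_claim", "update_claim"},
        "claims_auditor": {"read_claim"},
        "sysadmin":       {"restart_service", "apply_patch"},  # note: no data permissions
    }

    def is_allowed(role, permission):
        return permission in ROLE_PERMISSIONS.get(role, set())

    print(is_allowed("claims_auditor", "update_claim"))  # False - read only role
    print(is_allowed("sysadmin", "read_claim"))          # False - admins kept away from data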

This can be backed up by using some form of IAM (Identity and Access Management) solution, although be careful about over complicating this if your organisation is not large enough or complex enough to warrant a cumbersome IAM solution.

5. Segregate your networks.

In addition to external firewalls preventing access from outside your organisation, internal networks must be segregated as well.

When designing your networks, think carefully about which systems need to talk to each other, and on which ports.

For example, do your end user systems all need to access all of the production environments?  Or do some of your teams need access to some production systems and only on specific application ports?

This point can be linked with the host protection one above, as host firewalls can be used to further prevent unauthorised access to systems.  Most servers do not need to connect to all other servers in the same zone as them.  Host firewalls can be used to stop servers connecting to other servers they have no need to reach, without requiring an overly complex network design.
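One way to keep this manageable is to describe the permitted flows as data and default-deny everything else; the zones and ports below are purely illustrative.  The same table can then drive both host firewall rule generation and later audits.

    # Hypothetical allowed-flows table: anything not listed is denied.
    ALLOWED_FLOWS = {
        ("user_lan", "web_tier", 443),
        ("web_tier", "app_tier", 8443),
        ("app_tier", "db_tier",  5432),
    }

    def is_flow_allowed(src_zone, dst_zone, dst_port):
        return (src_zone, dst_zone, dst_port) in ALLOWED_FLOWS

    print(is_flow_allowed("user_lan", "web_tier", 443))   # True
    print(is_flow_allowed("user_lan", "db_tier", 5432))   # False - users never talk to the database directly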

Strong network and system segregation will help prevent the spread of any malware or malicious users within the organisation’s IT estate, and thus help ensure data is not removed or changed.

6. If you create code, do solid code assurance.

The OWASP Top 10 has changed little for several years (look it up if you are not familiar with it).  Applications consistently have known and well understood vulnerabilities, and these same vulnerabilities are consistently exploited by malicious people.
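As a small, purely illustrative example of the kind of flaw that list keeps highlighting (injection has sat at or near the top for years), here is the vulnerable pattern and its fix using Python’s built-in sqlite3 module; it is a sketch, not code from any real application.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    user_input = "alice' OR '1'='1"

    # Vulnerable: user input concatenated straight into the SQL statement.
    print(conn.execute(
        "SELECT role FROM users WHERE name = '" + user_input + "'").fetchall())  # leaks data

    # Fixed: input passed as a bound parameter, never interpreted as SQL.
    print(conn.execute(
        "SELECT role FROM users WHERE name = ?", (user_input,)).fetchall())      # returns nothing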

If you create applications, ensure the code goes through rigorous manual and automated code reviews.  Ensure the application is thoroughly tested against not just the business’s functional requirements, but also the non-functional requirements from the security team.

Finally, before the application or any substantial change goes live, ensure it is penetration / security tested by experts.

Performing all these checks does not guarantee your application cannot be hacked, but it will ensure that it is not an easy target.  Ideally these steps should be key and non-negotiable parts of your organisation’s SDLC (Software / System Development Life Cycle).

7. Test and Audit.

Once you have the basics in place, you need to ensure they are being successfully applied.  This is where the assurance part of the security team’s role comes into play.  Whether it is supporting the SDLC processes or scanning systems for outstanding patches, the security team can, and must, assure that the agreed tasks and processes are being adhered to.

This step is critical to the ongoing success of the previous items, and the effort and expertise required to complete it should not be underestimated.

 

Hopefully this has supplied some clarity and context to my previous post and made my intent clear.  Let me know.

In some following posts I’ll start talking about some of the really fun and intelligent things you can start doing once the basics are in place!

K

2016 Security Resolutions

It’s that time of year again, everyone will be writing their resolutions and predictions for the year.

Will we have more of the same?  More APTs?  More nation state sponsored breaches?  DDoS?  Increased application attacks?  More mobile malware?

Probably.

We all know there will be hackers, criminals, hacktivists, malicious insiders, nation state actors etc.  We also all know there will be application attacks, malware, APTs, DDoS etc.

Rather than write another predictions article, I thought I’d try a slightly different tack and cover the key things I think every organisation MUST do if they are not already.

  1. Patch.  Everything.  On time.
  2. Protect your hosts.  Do application whitelisting.
  3. No admin rights for anyone who can access production data.
    1. No one with admin rights can access data.
  4. Role Based Access.
  5. Segregate your networks.
  6. If you create code, do solid code assurance.
  7. Test and Audit.

Get the basics right!  There are of course many other things to focus on, but hopefully the general idea is clear.  Organisations need to be mindful of throwing too much time and money into the latest and greatest APT protection, behavioural analysis, and overcomplicated solutions to simple problems.  Getting the basics right must be the first priority.

Remember, it is extremely likely that attackers will go after the low hanging fruit.  Even if they are directly targeting your organisation, it is unpatched systems, people with admin rights, and unprotected hosts or applications that will be attacked first.  Only after these avenues have failed will they resort to more challenging and advanced attacks.

I’ll use a follow up post to cover the above points in more detail, but wanted to get these initial thoughts up.

What do you think?  How is your organisation doing with the basics?  Do you spend too much time on new, sexy security when you don’t have the basics covered?

Happy new year all!

K

 

 

The blessing and curse of PCI-DSS

This is a post I have been meaning to write for some while, as I have been pondering the benefits vs. the challenges of various standards / legislation.  I’m not thinking about the challenges of implementing PCI-DSS (Payment Card Industry Data Security Standard), more the challenges of working in environments where compliance trumps security.  As per the title, this post will focus on PCI-DSS, but I think it’s likely most of the issues will apply to various standards / regulations that are subject to compliance audits of some sort.

On the positive (blessing) side PCI-DSS is mostly a good standard, enforcing things like encryption in transit over public networks, separation of duties, minimising access to card data etc.  It has forced some level of security practice onto companies that may previously have had relatively lax controls in place.  The standard has also considerably raised the profile of security / meeting security requirements within many organisations.

On the negative (curse) side, PCI-DSS is seen by many organisations as the be all and end all of security, despite the fact that it is the bare minimum you have to achieve in order to be permitted to handle / process card data.  In addition, it focuses almost solely on card data, ignoring concerns around things like PII (Personally Identifiable Information).  This leads to a focus on ‘box-ticking’ compliance, rather than designing secure systems from the ground up, which would by definition be compliant with most (any?) sensible standards.

With the movement towards the more continuous monitoring style proposed for the latest release of PCI-DSS, the focus on obtaining compliance yearly may be something we are moving away from.  However, this will do little to address companies’ attitudes towards broader security, and the belief that obtaining and maintaining PCI-DSS compliance means systems are completely secure.

On balance I think standards / regulations like PCI-DSS are a good thing, as they force companies to at least achieve some minimal level of security.  The challenge for security professionals is to get project teams and the wider business to accept that these standards are the bare minimum.  Considerably more secure designs / solutions need to be implemented if we want to actually meet our duty of care to the customers whose data we hold and process.

What are your thoughts?

How successful have you been in making security ‘front and centre’, with compliance with regulations being a by-product, rather than the focus being on compliance at the expense of security?

K

 

 

ISF congress 2013 Post 1: Defence evasion modelling – Fault correlation and bypassing the cyber kill chain

Well I am at the ISF (Information Security Forum) annual congress for the next couple of days.  As usual I’ll blog notes and some comments from the talks I listen to, and where possible share them ‘live’ and as is.

Presentation by Stefan Frei and Francisco Artes from NSS Labs.

 

The risk is much larger than people thought.  It is more like the 800 pound ‘cyber gorilla’ than the chimpanzee.. And to make things worse, it is a whole field of these ‘cyber gorillas’.

 

It’s not just about digital data theft;

  • Destruction / alteration of digital assets
  • Interruption to applications, systems and customer resources
  • Single points of data
  • AV vendors only focus on defending mass market applications
  • Geo location – access from anywhere for users and hackers

 

Do we understand our defences?

  • Network – Firewall, IPS (Intrusion Prevention System), WAF (Web Application Firewall), NGFW (Next Generation Firewall), Anti APT (Advanced Persistent Threat) etc. etc.
  • Host – AV (Anti Virus), Host FW, Host IPS, Host zero day, application controls etc. etc.
  • Different vendors are often used at different layers, due to the perception that two vendors will catch more than one

 

What about indirect attacks, such as browser and application based?

 

How effective are your defences?

 

How do we maintain the balance between security and usability?

How do we assess the security of our solutions?

How do we report on this with metrics that are meaningful to the board?

 

Threat modelling can be a useful tool here.

 

Live modelling solutions (such as those offered by NSS Labs) can be used to model different tools from different vendors in an environment broadly similar to yours; (NSS example)

 

  • Pick your applications and operating systems
  • Pick your broad network design
  • Pick the security solutions and where they are placed.

 

Devices are each tested with >2000 exploits, so when you choose different devices you can see where the exploits would be caught or missed.  For example, you could layer brand X NGFW with brand Y IPS and brand Z AV.  The ‘live’ threat model would then map the exploits that each device missed, so you can see if any would pass all the layers in your security.
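The underlying logic is easy to sketch: for each device, record the set of test exploits it missed, and anything that every layer missed evades the whole stack.  The exploit IDs below are made up purely for illustration.

    # Which exploits get through a layered stack = the intersection of what each layer misses.
    missed_by = {
        "brand_x_ngfw": {"exploit_03", "exploit_17", "exploit_42"},
        "brand_y_ips":  {"exploit_17", "exploit_88"},
        "brand_z_av":   {"exploit_17", "exploit_42"},
    }

    evades_all_layers = set.intersection(*missed_by.values())
    print(evades_all_layers)  # {'exploit_17'} - the gap the layered defences still leave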

All tests were done with the devices tuned as per the manufacturers’ recommendations.

  • For IPS the vendors had experts tune them; this led to a 60-85% increase in IPS performance.  This point is very interesting beyond this talk – IPS devices MUST be tuned and maintained for them to deliver value and protection.  Do you regularly tune and maintain the IDS / IPS devices in your environment?

 

The report / live threat modelling also differentiates between automated attacks and hand crafted ones.  This highlights how many attacks could relatively easily be launched by anyone with basic skills in free tools such as Metasploit.  It also raises the question of why security tool vendors can’t at least download exploit tool kits and their updates to ensure their tools prevent the available pre-packaged attacks!

 

This is definitely a useful tool, and whether with NSS or similar, I can recommend you undertake some detailed threat modelling of your environment.  This type of service allows you to perform much more ‘real’ technical threat modelling, rather than just the theoretical attack scenarios which are as far as most threat modelling exercises seem to go.

 

What is the threat environment?

Many experts writing tools and exploits.

A huge number of people with limited skills utilising free and paid-for tools created by the experts – this increases the threat exponentially – anyone can try the free tools, and anyone with even limited funds can purchase the paid-for tools (often around $250).

 

The maturing threat landscape;

There is now a thriving market for underground hacking / attack tools.  This market has matured and now offers regularly patched software with patching cycles, new exploits regularly added, and even full support via email and sometimes phone based support desks.

The vendors of these hacking tools even offer guarantees around how long exploits will work for and evade security tools.

These are often referred to as Crimeware Kits.

 

In the tests by NSS labs, no device detected all exploits available in these tools, or in the free tools.

 

This is the continuing problem for businesses and the security industry – they are always playing catch up and creating tools / solutions to deal with known threats, rarely the unknown threats.

 

Another interesting finding was that in a recent test of NGFWs where combinations of two vendors were used in serial, no single pair prevented all the exploits tested.  Careful and planned pairing does improve security, but it needs to be tested and planned; choosing two vendors at random is the wrong way to do this.  How many businesses currently have separate FW or NGFW vendors at different layers of the network?  How many of these actually researched the exploits that get through them and chose the solutions for maximum protection, rather than simply choosing two different vendors without doing this research?

 

Security vendors will always be playing catch up, however threat modelling can help ensure you choose the best ones for your environment.

Threat modelling will also help choose the best investments to improve security.

As an example, a business that worked with NSS was about to invest >$300M on NGFWs across their environment.  The threat modelling highlighted that this wouldn’t add a huge amount of security due to a Java issue on all their sites and machines.  They instead invested more like £3M on migrating the app to HTML5 and removing Java from their environment.  This created a much more secure environment for a much smaller investment.

 

Threat modelling can also cover geo-location and which vendors work best in which locations, as well as the technologies themselves.

 

The final point was a reminder that, as no tools will prevent everything, we must assume we have been ‘owned’ (breached) and act accordingly.  This must not be an exception process; we must search for and respond to breaches as part of our security business as usual processes.

 

If you are not performing live threat modelling, I’d highly recommend you start, as it is a great way of assessing your current security posture, and also very useful for planning your next security investments to ensure they provide the greatest value and measurably improve your security posture.

Overall, this was a very informative talk that, while demonstrating their product / service, managed to stay fairly clear of too much vendor speak and promotion while still highlighting the clear benefits of ‘live’ threat modelling.

K

Gartner Security and Risk Management conference – Software Defined Networking

This was an introductory talk around Software Defined Networking (SDN) and some of its security implications.

What is it?

  • Decoupling the control plane from the data plane and centralising logical control
  • Communication between network devices and SDN controllers is with both open and proprietary protocols currently – no single standard..
  • The SDN controller supports an open interface to allow external programmability of the environment

– The controller tells each node how to route, vs. the current model where each node makes its own routing decisions.

 How do I enforce network security in an SDN environment?

  • Switch as the Policy enforcement point
    • The switch tells the controller it has seen traffic with certain flow characteristics, the flow controller tells it what to do with the flow, and this information is cached in the local flow table for a specified time.  Another flow arrives and this one is not permitted, so the controller tells the switch to just drop the packets – the switch effectively becomes a stateful firewall (see the sketch after this list).
    • Existing controls such as DLP, Firewalls, Proxy servers etc. can all be used with SDN –
      • e.g. someone tries to connect to the internet – flow controller instructs switch to send traffic to the firewall / IPS / DLP server etc.
      • e.g. sending email – no matter where it’s going flow says first point is DLP, then firewall, then onto destination
      • This means devices no longer need to be inline – they can be anywhere on the network.  Flow controller just needs to know where to send certain traffic types!
    • Incoming flows can be treated in the same way
      • Something changes – such that it looks like DDoS – traffic can be routed to the DDoS protection device(s)
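A very simplified sketch of the behaviour described in the list above (not modelled on any particular controller’s API; the policy entries, ports and timings are invented): the switch asks the controller about a new flow, the controller’s decision is cached in the local flow table for a while, and unknown traffic is dropped by default.

    # Toy flow-controller logic: policy lookup, default deny, cached decisions.
    POLICY = {
        443: "send_via_proxy",
        25:  "send_via_dlp_then_firewall",
    }
    CACHE_SECONDS = 60
    flow_table = {}  # the switch's local cache: flow -> (action, expiry time)

    def handle_new_flow(src, dst, dst_port, now):
        key = (src, dst, dst_port)
        if key in flow_table and flow_table[key][1] > now:
            return flow_table[key][0]           # already decided, use cached action
        action = POLICY.get(dst_port, "drop")   # default deny - stateful-firewall-like behaviour
        flow_table[key] = (action, now + CACHE_SECONDS)
        return action

    print(handle_new_flow("10.0.0.5", "203.0.113.7", 443, now=0))  # send_via_proxy
    print(handle_new_flow("10.0.0.5", "203.0.113.7", 23,  now=1))  # drop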

What risks does SDN introduce?

  • Risk is aggregated in the controller
    • Malicious or accidental changes could remove some or all of the security protections
  • Integrity of the Flow Tables must be maintained
    • Switches etc. must be managed from the controller, not locally
  • Input from applications must be managed and prioritised
    • Application APIs are non standard
    • Who gets precedence?
      • Load balancer vs. security tools when defining traffic flows?

SDN products do exist now.

  • Standards do exist
    • OpenFlow – maintained by Open Networking Foundation
  • Network devices (early days)
    • Open vSwitch
    • Some products from Brocade, Cisco, HP, IBM
  • Controllers (limited maturity)
    • Floodlight (open source)
    • Products from Big Switch Networks, Cisco, HP, NEC, NTT Data, VMware
  • Applications (often tied to specific controllers)
    • Radware and HP produce some security applications

Recommendations;

  • Do not overreact to SDN hype
  • Combine IT disciplines when implementing SDN
    • Don’t forget security!!
  • Determine how existing control requirements can be met with SDN
  • Examine how SDN impacts separation of duties
    • Some similar issues to virtualisation
  • Discuss SDN with your existing security vendors
  • Deploy SDN in a lab or test environment
    • PoC and understand fully before deploying

 

Overall this was an informative and fast paced talk.  As per the speaker’s recommendations, SDN is a very interesting technology, although it is still in the emerging phase, with the majority of deployments currently being in testing or academia.  I wouldn’t yet recommend it for production datacentre deployments, but I would recommend you become familiar with it, especially if you work in the networking or security fields.

 

K

Splunk Live!

I attended the Splunk Live! London event last Thursday.  I am currently in the process of assessing Splunk and its suitability as a security SIEM (Security Information and Event Management) tool, in addition to being a general data collection and correlation tool.  During the day I made various notes that I thought I would share; I’ll warn you up front that these are relatively unformatted as they were just taken during the talks on the day.

Before I cover off the day, I should highlight that I use the term SIEM to relate to the process of Security Information and Event Management, NOT SIEM ‘tools’.  Most traditional tools labelled as SIEM are inflexible, do not scale in this world of ‘big data’ and are only usable by the security team.  This for me is a huge issue and a waste of resources.  SIEM as a process is performed by security teams every day and will continue to be performed whatever big data tool of choice is used.

The background to my investigating Splunk is that I believe a business should have a single log and data collection and correlation system that gets literally everything, from applications to servers to networking equipment to security tools’ logs / events.  This then means that everyone from Ops to application support to the business to security can use the same tool and be assured of a view encompassing the entire environment.  Each set of users would have different access rights and custom dashboards in order for them to perform their roles.

From a security perspective this is the only way to ensure the complete view that is required to look for anomalies and detect intelligent APT (Advanced Persistent Threat) type attacks.

Having a single tool also has obvious efficiency, management and economies of scale benefits over trying to run multiple largely overlapping tools.

Onto the notes from the day;

Volume – Velocity – Variety – Variability = Big Data

Machine generated data is one of the fastest growing, most complex and most valuable segments of big data..

 

Real time business insights

Operational visibility

Proactive monitoring

Search and investigation

Enables move from ‘break fix’ to real time operations insight (including security operations). 

GUI to create dashboards – write queries and select how to have them displayed (list, graph, pie chart etc.); you can move things around on the dashboard with drag and drop.

Dev tools – REST API, SDKs in multiple languages.
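As a rough example of the SDK point above, this uses the splunklib Python SDK; the host, credentials and search string are placeholders, and you should check the current SDK documentation for exact call signatures before relying on this.

    # Hedged sketch: connect to Splunk via the Python SDK and run a one-shot search.
    import splunklib.client as client
    import splunklib.results as results

    service = client.connect(
        host="splunk.example.local", port=8089,
        username="api_user", password="changeme")

    stream = service.jobs.oneshot(
        "search index=main sourcetype=access_combined status=500 | head 10")
    for result in results.ResultsReader(stream):
        print(result)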

More data in = more value.

My key goal for the organisation – one log management / correlation solution – ALL data.  Ops (apps, infrastructure, networks etc.) and Security (inc. PCI) all use the same tool with different dashboards / screens and, where required, different underlying permissions.

Many screens and dashboards are available free (some, like PCI and Security, cost).  The dashboards’ look and feel helps users feel at home and get started quickly – e.g. the VM dashboards look and feel similar to the VMware interface.

Another example – the Windows dashboard – created by Windows admins, not Splunk – all the details they think you need.

Exchange dashboard – includes many Exchange details around message rates and volumes etc., and also includes things like outbound email reputation.

VMware – can go down to specific guests and resource use, as well as host details (file use, CPU use, memory use etc.).

Can pivot between data from VMware and email etc. to troubleshoot the cause of issues.

These are free – download from Splunkbase.

Can all be edited if not exactly what you need, but are at least a great start..

Developers – from tool to platform – can both support development environments and be used to help teach developers how to create more useful log file data.

Security and Compliance – threat levels growing exponentially – cloud, big data, mobile etc. – the unknown is what is dangerous – move from known threats to unknown threats..

Wired – the internet of things has arrived, and so have massive security threats

Security operations centre, Security analytics, security managers and execs

  • Enterprise Security App – security posture, incident review, access, endpoint, network, identity, audit, resources..

Look for anomalies – things someone / something has not done before

  • can do things like create tasks, take ownership of tasks, report progress etc.
  • When drilling down on issues there are contextual pivot points – e.g. right click on a host name for an asset search, Google search, drill down into more details etc.
  • Even though it costs, like all dashboards it is completely configurable.

Splunk App for PCI compliance – Continuous real time monitoring of PCI compliance posture, Support for all PCI requirements (12 areas), State of PCI compliance over time, Instant visibility on compliance status – traffic lights for each area – click to drill down to details.

  • Security prioritisation of in-scope assets
  • Removes much of the manual work from PCI audits / reporting

Application management dashboard

  • Splunk can do maths – what is the average stock price / how many users were on the web site in the last 15 minutes etc.
  • Real time reporting on impact of marketing emails / product launches and changes etc.
  • for WP – reporting on transaction times, points of latency etc. – enables focus on slow or resource intensive processes!
  • hours / days / weeks to create whole new dashboards, not months.

Links with Google Earth – can show all customer locations on a map – are we getting connections from locations we don’t support, where / what are our busiest connections / regions?

Industrial data and the internet of things; airlines, medical informatics (electronic health records – mobile, wireless, digital, available anywhere to the right people – were used to putting pads down, so didn’t get charged – Splunk identified this).

Small data, big data problem (e.g. not all big data is actually a massive data volume, but it may be complex, rapidly changing, and difficult to understand and correlate between multiple disparate systems).

Scale examples;

Barclays – 10TB security data year.

HPC – 10TB day

Trading 10TB day

VM – >10TB year

All via Splunk..

DataShift – Social networking ‘ETL’ with Splunk. ~10TB new data today

Afternoon sessions – Advanced(ish) Splunk..

– Can create lookup / conversion tables so log data can be turned into readable data (e.g. HTTP error codes read as ‘page not found’ etc. rather than a number).  This can either be automatic, or a reference table you pipe logs through when searching.  (A plain-Python illustration of this idea follows after these notes.)

– As well as GUI for editing dashboards, you can also directly edit the underlying XML

– Can have lots of saved searches, should organise them into headings or dashboards by use / application or similar for ease of use.

– Simple and advanced XML – simple has menus, drop downs, drag and drop etc.  Advanced requires you to write XML, but is more powerful.  Advice is to start in simple XML, get the layout, pictures etc. sorted, then convert to advanced XML if any more advanced features are required.

– Doughnut chart – like a pie chart with inside and outside layers – good if you have a high level grouping, and a lower level grouping – can have both on one chart.

– Can do a rolling, constantly updating dashboard – built in real time option to refresh / show figures for every xx minutes.
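As promised above, a plain-Python illustration of the lookup / conversion idea (Splunk’s own lookup tables do this inside the search pipeline; this just shows the concept with made-up events):

    # Turn raw status codes into readable text via a lookup table.
    HTTP_STATUS_LOOKUP = {200: "OK", 404: "page not found", 500: "internal server error"}

    raw_events = [{"status": 404, "uri": "/basket"}, {"status": 500, "uri": "/checkout"}]
    for event in raw_events:
        event["status_text"] = HTTP_STATUS_LOOKUP.get(event["status"], "unknown")
        print(event)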

High Availability

  • replicate indexes
    • gives HA, gives fidelity, may speed up searches

Advanced admin course;

http://www.splunk.com/view/SPCAAAGNF

Report acceleration

  • can accelerate a qualifying report – more efficiently run large reports covering wide date ranges
  • must be in smart or fast mode

Lots of free and up to date training is available via the Splunk website.

Splunk for security

Investigation / forensics – Correlation, fast to root cause, look for APTs, investigate and understand false positives

Splunk can have all original data – use as your SIEM – rather than just sending a subset of data to your SIEM

Unknown threats – APT / malicious insider

  • “normal” user and machine data – includes “unknown” threats
  • “security” data or alerts from security products etc.  “known” security issues..   Misses many issues

Add context  – increases value and chance of detecting threats.  Business understanding and context are key to increasing value.

Get both host and network based data to have best chance of detecting attacks

Identify threat activity

  • what is the modus operandi
  • who / what are most critical people and data assets
  • what patterns and correlations of ‘weak’ signals in normal IT activities would represent abnormal activity?
  • what in my environment is different / new / changed
  • what deviations are there from the norm
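A rough sketch of the ‘deviation from the norm’ point in the list above: compare today’s count of some activity against that entity’s own recent baseline and flag large departures.  The data and threshold are invented; a real implementation would run inside whatever analytics tool you use.

    # Toy baseline-deviation check using a simple z-score style test.
    from statistics import mean, stdev

    def is_anomalous(history, today, threshold=3.0):
        if len(history) < 5:
            return False                   # not enough baseline to judge
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            return today != mu
        return abs(today - mu) / sigma > threshold

    logins_per_day = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3]   # hypothetical 10 day baseline
    print(is_anomalous(logins_per_day, today=40))      # True - worth investigating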

Sample fingerprints of an Advanced Threat.

Remediate and Automate

  • Where else do I see the indicators of compromise
  • Remediate infected systems
  • Fix weaknesses, including employee education
  • Turn the Indicators of Compromise into real time search to detect future threats

– Splunk Enterprise Security (2.4 released next week – 20 something april)

– Predefined normalisation and correlation, extensible and customisable

– F5, Juniper, Cisco, Fireeye etc all partners and integrated well into Splunk.

Move away from talking about security events to all events – especially with advanced threats, any event can be a security event..

I have a further meeting with some of the Splunk security specialists tomorrow so will provide a further update later.

Overall, Splunk seems to tick a lot of boxes, and certainly taps into the explosion of data we must correlate and understand in order to maintain our environments and spot subtle, intelligent security threats.

K

 

Cloud Security Alliance Congress Orlando 2012 pt5 – closing keynote

Closing Keynote – State of the Union

Chris Hoff, who is the author of the Rational Survivability blog, gave a great closing keynote covering the last few years via his previous presentation titles and content.  I can recommend reading / viewing the mentioned presentations.  This was followed by a brief overview of current issues and trends, and then coverage of upcoming / very new areas of focus we all need to be aware of.

What’s happened?

2008 – Platforms dictate capabilities (security) and operations – Read ‘The four horsemen of the virtualisation security apocalypse’

–          Monolithic security vendor virtual appliances are the virtualisation version of the UTM argument.

–          Virtualised security can seriously impact performance, resiliency and scalability

–          Replicating many highly-available security applications and network topologies in virtual switches doesn’t work

–          Virtualising security will not save you money.  It will cost you more.

2009 – Realities of hybrid cloud, interesting attacks, changing security models – Read – ‘The frogs who desired a king – A virtualisation and cloud computing fable set to interpretive dance’

–          Cloud is actually something to be really happy about; people who would not ordinarily think about security are doing so

–          While we’re scrambling to adapt, we’re turning over rocks and shining lights in dark crevices

–          Sure bad things will happen, but really smart people are engaging in meaningful dialogue and starting to work on solutions

–          You’ll find that much of what you have works.. Perhaps just differently; setting expectations is critical

2010 – Turtles all the way down – Read – ‘Cloudifornication – Indiscriminate information intercourse involving internet infrastructure’

–          Security becomes a question of scale

–          Attacks on and attacks using large-scale public cloud providers are coming and cloud services are already being used for $evil

–          Hybrid security solutions (and more of them) are needed

–          Service transparency, assurance and auditability is key

–          Providers have the chance to make security better.  Be transparent.

2010 – Public cloud platform dependencies will liberate or kill you – Read ‘Cloudinomicon – Idempotent infrastructure, survivable systems and the return of information centricity’

–          Not all cloud offerings are created equal or for the same reasons

–          Differentiation based upon PLATFORM: Networking security, Transparency/visibility and forensics

–          Apps in clouds can most definitely be deployed as securely or even more securely than in an enterprise

–          However this often requires profound architectural, operational, technology, security and compliance model changes

–          What makes cloud platforms tick matters in the long term

 2011 – Security Automation FTW – Read ‘Commode computing – from squat pots to cloud bots – better waste management through security automation’

–          Don’t just sit there: it won’t automate itself

–          Recognise, accept and move on: The DMZ design pattern is dead

–          Make use of existing / new services: you don’t have to do it all yourself

–          Demand and use programmatic interfaces from security solutions

–          Encourage networks / security wonks to use tools / learn to program / use automation

–          Squash audit inefficiency and maximise efficacy

–          DevOps and security need to make nice

–          AppSec and SDLC are huge

–          Automate data protection

2012 – Keepin it real with respect to challenges and changing landscape – Read – ‘The 7 dirty words of Cloud Security’

–          Scalability

–          Portability

–          Fungibility

–          Compliance

–          Cost

–          Manageability

–          Trust

2012 – DevOps, continual deployment, platforms –  Read – ‘Sh*t my Cloud evangelist says …Just not to my CSO’

–          [Missing] Instrumentation that is inclusive of security

–          [Missing] Intelligence and context shared between infrastructure and application layers

–          [Missing] Maturity of “Automation Mechanics” and frameworks

–          [Missing] Standard interfaces, precise syntactical representation of elemental security constructs

–          [Missing] An operational methodology that ensures a common understanding of outcomes and an ‘agile’ culture in general

–          [Missing] Sanitary application security practices

What’s happening?

–          Mobility, Internet of Things, Consumerisation

–          New application architecture and platforms (Azure, Cloud foundry, NoSQL, Cassandra, Hadoop etc.)

–          APIs – everything connected by APIs

–          DevOps – Need to understand how this works and who owns security

–          Programmatic (virtualised) Networking and SDN (Software Defined Network)

–          Advanced adversaries and tactics (APTs, organised crime, nation states, using cloud and virtualisation benefits to attack us etc.)

What’s coming?

–          Security analytics and intelligence – security data is becoming ‘big data’ – Volume. Velocity. Variety. Veracity.

–          AppSec Reloaded – APIs. REST. PaaS. DevOps. – On top of all the existing AppSec issues – how long have the OWASP top threats remained largely unchanged??

–          Security as a Service 2.0 – “Cloud.” SDN. Virtualised.

–          Offensive security – Cyber. Cyber. Cyber. Cyber…  Instead of being purely defensive, do things more proactively – not necessarily actually attacking the attackers; it can mean deceiving them into honeypots / honeynets, fingerprinting the attack, tracing back the connections etc., all the way up to actually striking back.

Summary;

–          Public clouds are marching onward; platforms are maturing… getting simpler to deploy and operate at the platform level, but with a heavy impact on application architecture

–          Private clouds are getting more complex (as expected) and the use case differences between the two are obvious; more exposed infrastructure-connected knobs and dials

–          Hybrid clouds are emerging, hypervisors commoditised and orchestration / provisioning systems differentiate as ecosystem and corporate interests emerge

–          Mobility (workload and consuming devices) and APIs are everywhere

–          Network models are being abstracted even further (Physical > Virtual > Overlay) and that creates more ‘simplexity’

–          Application and information ‘ETL sprawl’ is a force to be reckoned with

–          Security is getting much more interesting!

This was a great wrap up, highlighting the last few years’ issues (how many of these have we really fixed?), where we are now, and what’s coming up.  Are you up to speed with all the current and outstanding issues you need to be aware of?  How prepared are you and your organisation for what’s coming?  Don’t be like the 3 monkeys.. 😉

While the picture is complex and we have loads of work to do, Chris’s last point aptly sums up why I love security and working in the security field!

Lastly, have a look at Chris’s blog; http://www.rationalsurvivability.com/blog/ which has loads of interesting content.

K