ISF congress 2013 Post 1: Defence evasion modelling – Fault correlation and bypassing the cyber kill chain

Well, I am at the ISF (Information Security Forum) annual congress for the next couple of days.  As usual I'll blog notes and some comments from the talks I listen to, and where possible share them 'live' and as-is.

Presentation by Stefan Frei and Francisco Artes from NSS Labs.

 

The risk is much larger than people thought.  It is more like an 800-pound 'cyber gorilla' than a chimpanzee.  And to make things worse, it is a whole field of these 'cyber gorillas'.

 

It’s not just about digital data theft;

  • Destruction / alteration of digital assets
  • Interruption to applications, systems and customer resources
  • Single points of data
  • AV vendors only focus on defending mass market applications
  • Geolocation – access from anywhere for users and hackers

 

Do we understand our defences?

  • Network – Firewall, IPS (Intrusion Prevention System), WAF (Web Application Firewall), NGFW (Next Generation Firewall), Anti APT (Advanced Persistent Threat) etc. etc.
  • Host – AV (Anti Virus), Host FW, Host IPS, Host zero day, application controls etc. etc.
  • Different vendors are often used at different layers due to the perception that two vendors provide better protection than one

 

What about indirect attacks, such as browser- and application-based ones?

 

How effective are your defences?

 

How do we maintain the balance between security and usability?

How do we assess the security of our solutions?

How do we report on this with metrics that are meaningful to the board?

 

Threat modelling can be a useful tool here.

 

Live modelling solutions (such as those offered by NSS Labs) can be used to model different tools from different vendors in an environment broadly similar to yours (NSS example);

 

  • Pick your applications and operating systems
  • Pick your broad network design
  • Pick the security solutions and where they are placed.

 

Each device is tested with more than 2,000 exploits, so when you choose different devices you can see where each exploit would be caught or missed.  For example, you could layer a brand X NGFW with a brand Y IPS and brand Z AV; the 'live' threat model then maps the exploits that each device missed, so you can see whether any would pass through every layer of your security.
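
To make the layering idea concrete, here is a minimal Python sketch (entirely made-up exploit IDs and device names, not NSS Labs' actual data or methodology): model each layer by the set of tested exploits it missed, then intersect those sets to see whether anything would slip past every layer.

```python
# Illustrative only: exploit IDs and per-device results below are invented.
missed_by = {
    "brand_x_ngfw": {101, 204, 305, 417},   # exploits missed by the NGFW
    "brand_y_ips":  {204, 305, 512},        # exploits missed by the IPS
    "brand_z_av":   {305, 417, 611},        # exploits missed by the AV
}

# An exploit only reaches the host if every layer misses it.
unblocked = set.intersection(*missed_by.values())
print(f"Exploits passing all layers: {sorted(unblocked)}")   # -> [305]

# Coverage per layer, out of a (hypothetical) 2,000-exploit test set.
TOTAL_EXPLOITS = 2000
for device, missed in missed_by.items():
    print(f"{device}: blocked {TOTAL_EXPLOITS - len(missed)}/{TOTAL_EXPLOITS}")
```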

All tests were done with the devices tuned as per the manufacturers' recommendations.

  • For IPS, the vendors had experts tune the devices, which led to a 60-85% increase in IPS performance.  This point is very interesting beyond this talk – IPS devices MUST be tuned and maintained for them to deliver value and protection.  Do you regularly tune and maintain the IDS / IPS devices in your environment?

 

The report / live threat modelling also differentiates between automated and hand-crafted attacks.  This highlights how many attacks could relatively easily be launched by anyone with basic skills using free tools such as Metasploit.  It raises the question of why security tool vendors can't at least download the exploit toolkits and their updates, to ensure their tools prevent the readily available pre-packaged attacks!

 

This is definitely a useful tool, and whether with NSS or a similar provider, I can recommend undertaking some detailed threat modelling of your environment.  This type of service allows you to perform much more 'real' technical threat modelling, rather than just working through theoretical attack scenarios, which is as far as most threat modelling exercises seem to go.

 

What is the threat environment?

Many experts writing tools and exploits.

A huge number of people with limited skills utilise free and paid-for tools created by the experts – this increases the threat exponentially: anyone can try the free tools, and anyone with even limited funds can purchase the paid-for tools (often around $250).

 

The maturing threat landscape;

There is now a thriving market for underground hacking / attack tools.  This market has matured and now offers regularly patched software with patch cycles, new exploits added regularly, and even full support via email and sometimes phone-based support desks.

The vendors of these hacking tools even offer guarantees around how long exploits will work for and evade security tools.

These are often referred to as Crimeware Kits.

 

In the tests by NSS Labs, no device detected all of the exploits available in these tools, or in the free tools.

 

This is the continuing problem for businesses and the security industry – they are always playing catch up and creating tools / solutions to deal with known threats, rarely the unknown threats.

 

Another interesting finding came from a recent test of NGFWs in which combinations of two vendors were used in series: no single pair prevented all the exploits tested.  Careful, planned pairing does improve security, but it needs to be tested and planned; choosing two vendors at random is the wrong way to do this.  How many businesses currently have separate FW or NGFW vendors at different layers of the network?  And how many of them actually researched the exploits that get through and chose the solutions for maximum protection, rather than simply picking two different vendors without doing this research?

 

Security vendors will always be playing catch up, however threat modelling can help ensure you choose the best ones for your environment.

Threat modelling will also help choose the best investments to improve security.

As an example, a business that worked with NSS was about to invest more than $300M in NGFWs across their environment.  The threat modelling highlighted that this wouldn't add a huge amount of security due to a Java issue on all their sites and machines.  Instead they could invest (and did) more like £3M in migrating the application to HTML5 and removing Java from their environment.  This created a much more secure environment for a much smaller investment.

 

Threat modelling can also include geolocation – which vendors work best in which locations – as well as just looking at the technologies.

 

The final point was a reminder that, as no tools will prevent everything, we must assume we have been 'owned' (breached) and act accordingly.  This must not be an exception process: we must search for and respond to breaches as part of our business-as-usual security processes.

 

If you are not performing live threat modelling, I'd highly recommend you start: it is a great way of assessing your current security posture, and also very useful for planning your next security investments to ensure they provide the greatest value and measurably improve your security posture.

Overall, this was a very informative talk that, while demonstrating their product / service, managed to stay fairly clear of too much vendor speak and promotion while still highlighting the clear benefits of 'live' threat modelling.

K

Web Application Firewalls

This talk from Gartner covered WAFs: their functionality, whether they are required, and possible alternatives.

Software security is improving but hasn’t caught up with the threat landscape.

Attackers have Motivation, Time, Expertise and many targets.

Software security can be improved by better education, QA, SDLC, Frameworks and tools.

  • This helps close the gap, but it still remains
  • Many legacy applications or components will exist for a long time

A defence-in-depth approach is required to protect applications;

  • Firewall – allows or blocks traffic based on IP and port – positive security model: deny all traffic unless explicitly allowed
  • NIPS (Network-based Intrusion Prevention System) – negative security model: signatures and protocol validation
  • WAF – identifies and blocks application layer attacks (the two models are sketched after this list)
    • Negative security model – fixed rules, blacklist known bad, expert deployment
    • Positive security model – automatic application behaviour learning, whitelist known good, straightforward deployment model
    • Passively block or actively modify traffic to prevent specific attacks
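
To make the two models a bit more concrete, here is a minimal Python sketch (hypothetical patterns and URLs, not any vendor's actual rule engine): the negative model blocks values matching known-bad signatures, while the positive model only allows values that match a whitelisted/learned format for each URL and parameter.

```python
import re

# Negative model: block anything matching known-bad signatures (blacklist).
BAD_SIGNATURES = [
    re.compile(r"union\s+select", re.I),   # crude SQL injection pattern
    re.compile(r"<script\b", re.I),        # crude XSS pattern
]

# Positive model: only allow parameter values matching the expected/learned
# format for each URL (whitelist); everything else is rejected.
ALLOWED = {
    "/account": {"id": re.compile(r"\d{1,10}")},
}

def negative_model_allows(value: str) -> bool:
    return not any(sig.search(value) for sig in BAD_SIGNATURES)

def positive_model_allows(path: str, param: str, value: str) -> bool:
    pattern = ALLOWED.get(path, {}).get(param)
    return bool(pattern and pattern.fullmatch(value))

print(negative_model_allows("1 UNION SELECT password FROM users"))  # False
print(positive_model_allows("/account", "id", "42"))                # True
print(positive_model_allows("/account", "id", "42 OR 1=1"))         # False
```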

Additional functionality over other network security tools found in many WAFs;

  • Authentication and authorisation
  • ADC functionality
  • SSL termination
  • Anti-scraping
  • Threat intelligence
  • Content inspection, data masking, and DLP

 Differentiators;

  • All have basic signatures and filtering
  • Differ in;
    • Level of granularity (see the sketch after this list)
      • policies per application
      • policies per url
      • fully scriptable rule engine vs. high-level settings
    • Positive model capabilities
    • Additional functionality
    • Deployment methods
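
As a purely hypothetical illustration of what 'per application' vs. 'per URL' granularity can mean in practice, the sketch below applies an application-wide default policy with stricter overrides for specific URLs; the policy names and values are invented, and real products expose this through their own consoles or rule languages.

```python
# Hypothetical policy granularity: app-wide default plus per-URL overrides.
POLICIES = {
    "/":           {"mode": "detect", "max_body_kb": 1024},   # application-wide default
    "/login":      {"mode": "block",  "max_body_kb": 8},      # stricter per-URL policy
    "/api/upload": {"mode": "block",  "max_body_kb": 10240},
}

def policy_for(path: str) -> dict:
    """Return the most specific policy whose prefix matches the request path."""
    best = "/"
    for prefix in POLICIES:
        if path.startswith(prefix) and len(prefix) > len(best):
            best = prefix
    return POLICIES[best]

print(policy_for("/login"))         # per-URL override: blocking, small bodies
print(policy_for("/search?q=waf"))  # falls back to the application-wide default
```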

Interest in WAF from a business risk perspective is increasing: 

  • Protects against identified vulnerabilities: Buys time as a quick fix, and provides long-term mitigation for legacy Web applications.
  • Protects against generic classes of attacks, such as SQL injection and brute force.
  • Protects against attacks targeted at your application: Requires active response and granular policy settings.

Also, do not underestimate the benefits of the extras, such as performance, caching and authentication.

What are the latest developments in WAF technology?

  • Evolution in data interchange and protocol standard support, such as JSON, XML, GWT, HTML5, SPDY, IPv6
  • User and device validation and integration with Web fraud prevention:
    • True source/real IP identification proxies
    • Geolocation and reputation services
    • Injection/Execution of code for user validation and rudimentary fraud detection
  • Increasing support for Web vulnerability scanners (DAST): "Virtual patching" (see the sketch after this list)
  • Support for virtualisation and SaaS Web applications, and cloud delivery options for WAF
  • Improved layer 7 DDoS protection
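
To illustrate the "virtual patching" idea, here is a hedged Python sketch (the Finding structure, URLs and patterns are assumptions for illustration, not a real scanner or WAF API): a DAST finding for a specific URL and parameter is turned into a temporary blocking rule, buying time until the application code itself is fixed.

```python
import re
from dataclasses import dataclass

@dataclass
class Finding:            # assumed shape of a scanner finding, not a real API
    url: str
    parameter: str
    vuln_class: str

PATTERNS = {
    "sqli": re.compile(r"(union\s+select|'\s*or\s+1=1)", re.I),
    "xss":  re.compile(r"<\s*script", re.I),
}

def virtual_patch(finding: Finding):
    """Build a rule that blocks bad values for the reported URL/parameter only."""
    pattern = PATTERNS[finding.vuln_class]
    def rule(path: str, params: dict) -> bool:   # True = block the request
        value = params.get(finding.parameter, "")
        return path == finding.url and bool(pattern.search(value))
    return rule

rule = virtual_patch(Finding("/search", "q", "sqli"))
print(rule("/search", {"q": "' OR 1=1 --"}))   # True: blocked by the virtual patch
print(rule("/search", {"q": "waf reviews"}))   # False: normal traffic passes
```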

WAFs, are they viable for the future?

Yes..

  • They provide application layer functionality largely unavailable in many other network based defences.  They should be considered as part of your defence in depth profile for any web applications.
  • Cloud based solutions may become more viable
  • Detection quality will improve as they better understand your applications and the browsers' capabilities
  • Detection engine improvements will be required in order to keep up with evolving threats
    • But must not impact performance!
  • Must scale with the web applications.
    • Virtualisation support is critical

What alternatives are there?

  • Secure coding is the main alternative.  This sounds simple, however…
    • History shows that this fails
      • Bad scalability
      • Much insecure legacy code
      • No control over code – software from vendors, third party code etc.
  • Some functionality may be subsumed into other technology such as ADC (Application Delivery Controller) and CDN (Content Delivery Network) – so watch these spaces.
  • NGFW (Next Generation Firewall) and NGIPS (Next Generation Intrusion Prevention System) are becoming more application-aware, but do not, and are unlikely to ever, deliver full WAF functionality

Recommendations;

  • Determine use case;
    • Compliance – buy “anything”…
    • Security – Buy a leader with low false positives and simple management
    • Application security – buy as part of an application initiative, ensure advanced policies are supported
  • If you have ADCs – assess the capabilities of these
  • Track CDN WAF capabilities
  • Complement with comprehensive monitoring and alerting capabilities

This was a very interesting, vendor-neutral talk that provided a good intro to WAFs, and some useful thoughts on implementing them and possible future enhancements.  Recommended.

K