Why does the OWASP top 10 never change?

Or very rarely?

Subtext: how should we work to create more secure applications?

The OWASP top 10 remains largely unchanged over long periods of time.  We still see high profile breaches, such as TalkTalk, caused by application attacks that are easy to protect against.  These facts imply many organisations are still failing to do ‘application security’ properly.

So why is application / development security so hard?  Or is it?

I think application security has historically had similar problems to other areas of security.  It is not seen as a business imperative, so business needs, new features, meeting client requests etc. all supersede security requirements in the priority list.

For many of us this is clearly not news.  However, given the number of incidents that still occur, it is clearly not a problem that has been solved.

For me, solving this falls into 3 key areas;

  • Board-level buy-in for security, and specifically security throughout the development lifecycle – the SDLC (Software / Systems Development Lifecycle).  This will ensure there is support for the costs and time associated with delivering a secure development programme.
  • Buy-in from the development teams.  Work with them to ensure they understand the reasons for, and benefits of secure development.
  • Making security part of the SDLC, by ensuring that the relevant tasks and processes are embedded as part of the standard lifecycle.  This will also ensure they are audited as part of that process.

I’ve spoken about board / exec buy-in for security before, and will do again.  There will also be another post in this series covering developer buy-in, as that is key to the process being successful.

The rest of this post will cover the areas that I think need to be addressed throughout the process in order to ensure the development process is as secure as possible.

Where these steps fall, how they are implemented, and how they are checked / monitored will vary from organisation to organisation.  They will also depend on the development processes in place such as Waterfall and Agile variants.

At a high level, these steps are;

  1. Requirements including security requirements
  2. Secure designs
  3. Threat modelling
  4. Static code analysis
  5. Peer code review – focus on quality and maintainability not just ‘pure’ security
  6. Dynamic application analysis / web application scanning
  7. Penetration testing
  8. Associated processes – found flaws are fixed at each stage of the process
  9. Clear risk assessment and ownership processes
  10. Developer Training

 

Requirements, including security requirements

Any project must have some requirements.  These are what enable us to answer the question ‘what problem am I solving?’.  This also applies to BAU (Business As Usual) type development that may not fall under the project processes.

These requirements need to include the non-functional areas as well, of which security is one.  Security requirements should be standardised as much as possible to minimise the effort, and tailored as part of any project or work initiation phase.  To support this, ensure your application security team is closely embedded with the project teams, business analysts and developers.
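
To make this a little more concrete, below is a rough sketch of what a standardised, reusable baseline of security requirements could look like, along with a simple way of tailoring it at project initiation.  The requirement IDs, wording and categories are purely illustrative assumptions, not taken from any particular standard.

```python
# Hypothetical sketch of a standardised security requirements baseline that a
# project copies and tailors at initiation. IDs, areas and wording are
# illustrative only.
BASELINE_SECURITY_REQUIREMENTS = [
    {"id": "SEC-001", "area": "Authentication",
     "requirement": "All interactive access requires multi-factor authentication."},
    {"id": "SEC-002", "area": "Data protection",
     "requirement": "Personal data is encrypted in transit and at rest."},
    {"id": "SEC-003", "area": "Logging",
     "requirement": "Security-relevant events are sent to central log management."},
    {"id": "SEC-004", "area": "Input handling",
     "requirement": "All external input is validated on the server side."},
]

def tailor_requirements(baseline, excluded_ids=(), extra_requirements=()):
    """Build a project-specific set: start from the baseline, drop anything
    agreed as out of scope, then append project-specific additions."""
    kept = [r for r in baseline if r["id"] not in set(excluded_ids)]
    return kept + list(extra_requirements)

# Example: a purely internal reporting tool that adds its own requirement.
project_requirements = tailor_requirements(
    BASELINE_SECURITY_REQUIREMENTS,
    extra_requirements=[{"id": "PRJ-001", "area": "Access control",
                         "requirement": "Reports are restricted to the finance group."}],
)
for req in project_requirements:
    print(req["id"], "-", req["requirement"])
```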

Secure Designs

While this is more of a statement than a process, I have included it to highlight that security teams need to be involved throughout the design process.  I would also recommend that security should be an approver of system and application designs.  This will ensure that designs meet security policies and requirements, as well as following secure design principles.

Threat Modelling

This is the process of reviewing a system and assessing any potential threats to it, along with how to counter those threats.  The earlier in the process this can be accomplished the better, as it allows the designers and developers to work with the potential threats in mind, although either an existing system or at least a high level design is required to make it worthwhile.

I am personally a huge fan of threat modelling as it has multiple benefits for relatively low cost / effort;

  • More secure designs and development as the teams working on the solution will have a better understanding of the threats, and how to counter them.
  • A better understanding of the system.  One of the artefacts from threat modelling is the DFD (Data Flow Diagram); how many systems in your organisation have detailed DFDs?  This fills that gap.
  • Under the radar security awareness training.  Without labelling it training, the security team gets to spend a decent amount of time with the design and development teams talking about threats to a system, the consequences if these threats are realised and how to protect against them.
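
To give a flavour of what the output of a threat modelling session can look like, here is a minimal sketch that enumerates the STRIDE categories against the elements of a simple DFD.  The element names and the mapping of which threat categories apply to which element type are illustrative assumptions, not a complete or definitive model.

```python
# Minimal sketch: enumerate STRIDE threat categories against DFD elements.
# Element names and the applicability mapping below are illustrative assumptions.
STRIDE = ["Spoofing", "Tampering", "Repudiation",
          "Information disclosure", "Denial of service", "Elevation of privilege"]

# Which categories are typically discussed per DFD element type (adjust to taste).
APPLICABLE = {
    "external_entity": {"Spoofing", "Repudiation"},
    "process": set(STRIDE),
    "data_store": {"Tampering", "Repudiation", "Information disclosure",
                   "Denial of service"},
    "data_flow": {"Tampering", "Information disclosure", "Denial of service"},
}

dfd_elements = [
    ("Browser", "external_entity"),
    ("Web application", "process"),
    ("Orders database", "data_store"),
    ("Browser -> Web application", "data_flow"),
]

def enumerate_threats(elements):
    """Yield (element, threat category) pairs to discuss, document and mitigate."""
    for name, element_type in elements:
        for threat in STRIDE:
            if threat in APPLICABLE[element_type]:
                yield name, threat

for element, threat in enumerate_threats(dfd_elements):
    print(f"{element}: consider {threat}")
```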

Static Code Analysis

This is the use of a code analysis tool that performs automated scanning of the code and / or binaries for potential security weaknesses.  Good tools also provide remediation guidance for the discovered issues, along with a workflow for assigning each issue and documenting how it will be remediated or mitigated, if you don’t already have a tool for this.

These tools will integrate with various development environments and source repositories, along with different development processes.  If you are not already using one I would recommend performing an analysis of the key players in the market to assess the best fit for your environment.
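
To make the kind of finding these tools report more concrete, here is a generic illustration of a classic flaw most static analysers will flag – SQL built by string concatenation – alongside the usual parameterised fix.  It uses Python and the standard library sqlite3 module purely as an example; it is not output from any particular product.

```python
import sqlite3

def find_user_insecure(conn, username):
    # Typical static analysis finding: user input concatenated into SQL,
    # leaving the query open to SQL injection.
    query = "SELECT id, email FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_parameterised(conn, username):
    # The usual remediation: a parameterised query, so the driver treats the
    # input strictly as data rather than as part of the SQL statement.
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")
    print(find_user_parameterised(conn, "alice"))
```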

Peer code review

This is the process of ensuring different developers assess each other’s work.  This should include cross-team assessments, with a strong focus on code quality and maintainability as well as security.

Encouraging this process to be as open as possible, enabling honest debate about the best way to code, will definitely be beneficial.

While code quality may not be strictly a security concern in the traditional sense, the better written, commented and maintainable code is, the easier it will be to maintain the application’s security over the longer term.  This is especially true as it is developed and worked on by multiple people and teams.

Dynamic Application Analysis / Web Application Scanning

Similar to the static scanning, this will be an automated process, carried out using a scanning tool.  This is the process of assessing a running application for security vulnerabilities.  These tools usually focus on areas of user interaction, such as web pages, so will not be suitable for all systems.

The need for a running system means these scans fit later in the process than the static scans, and usually fit well within the testing phase of any project / development work.

It should be noted that while these are more effort than static scanning, they should be considered, especially for larger or more critical systems, as they will find issues earlier and at a lower cost than pen testing, ensuring fewer issues are found later in the process.
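
As a very small taste of the dynamic side, the sketch below simply requests a page from a running application and reports which common security response headers are missing.  A real dynamic scanner does far more (crawling, fuzzing, authentication handling and so on); this is only meant to illustrate the ‘test the running application’ idea.  The target URL is a placeholder and the snippet assumes the third-party requests package is available.

```python
# Minimal dynamic check against a running application: request a page and
# report which common security headers are missing. A real DAST tool does far
# more than this. Requires the third-party 'requests' package.
import requests

EXPECTED_HEADERS = [
    "Content-Security-Policy",
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "X-Frame-Options",
]

def check_security_headers(url):
    response = requests.get(url, timeout=10)
    missing = [h for h in EXPECTED_HEADERS if h not in response.headers]
    return response.status_code, missing

if __name__ == "__main__":
    # Placeholder target; point this at a system you are authorised to test.
    status, missing = check_security_headers("https://example.com")
    print(f"HTTP {status}; missing headers: {missing or 'none'}")
```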

Penetration Testing

This is the process of dedicated professionals, usually from a specialist third party, attempting to ‘hack’ the solution.  It is usually the final check from a security perspective that a piece of development or an application is secure enough to go live.

Due to the costs and effort involved, it is likely that you will want to have a clear process in place for when and how often pen tests are required.  Depending on your industry, there may also be minimum regulatory requirements around how often systems are pen tested.

Associated processes

I have included this as a catch-all to highlight that all of the above processes need to have agreed and working feedback loops to ensure the items they identify are assessed and remediation agreed.

Risk Assessment and Ownership process

Following on from the above point, any discovered vulnerabilities that cannot be fixed or mitigated will lead to a risk to the organisation.  This must be formally assessed in terms of the level of risk, and then accepted for an agreed period by an appropriately senior business owner.  The risk process is an entire topic of its own!

Developer Training

I was not sure whether to put this at the start or end of the list, as it must be an ongoing process outside of any specific project or development work.  As well as ensuring that your developers have appropriate levels of secure coding training, this should be seen as a key part of your engagement with the developers and getting their buy-in for secure development.

Depending upon the size of your organisation, I would recommend a combination of online computer-based training (with a programme to increase the skill / knowledge covered by the courses over time, dependent on the experience of the developers), workshops, and presentations from external companies, including pen testers.  For larger organisations, secure development forums and ‘conferences’ could also be considered.

 

While nothing is completely secure, especially as complexity increases, if you follow these steps as part of the process you will end up with applications that are definitely not the ‘low hanging fruit’.  This will reduce your risk of any application-related breach.  In addition, it will mean that, should you have a breach, your organisation will honestly be able to say it had done what it could to ensure secure applications and the protection of the data you hold.

As a final thought for today’s post, do not take these steps in isolation; they only cover the application part of your security.  All the other key areas, from network security to JML (Joiners, Movers, Leavers) processes, must be in place to ensure an appropriately secure organisation.

K


Security Awareness Training – Worthwhile?

One of the topics that I sometimes think about is the value of security awareness training.

This tends to be a topic that many people in the security industry seem fairly passionate about, either for or against the value of it.
Vendors of software / programs such as Wombat, PhishMe, SANS etc. are all very pro user awareness training and regular programs to raise security awareness.
Conversely, companies who sell products and not training are likely to strongly advise that security budget is spent on tools rather than awareness training. To reinforce this point, at RSA Europe last year I actually asked a couple of senior RSA guys about the value of awareness training when they did a presentation around improving security and where to spend, and was told somewhat strongly that awareness training was basically a waste of time.

So the question is who is right, or do both sides have a fair point?

On the for side – how can users be expected to act securely, and know how to act securely, without some training? People need to learn and understand how to spot phishing emails, why it is bad to send anything non-public externally without it being encrypted, why strong and unique passwords should be used, how to spot social engineering etc. Security awareness training and campaigns can serve a dual purpose –
– Ensure users learn more about security for both their work and home IT / online lives
– Raise general awareness – a continual program of advice and varied messages keeps general security and secure methods of working on people’s minds – this should not be a once-a-year process.
Any increase in security awareness and reduction in the attack surface that is the human user must be a good thing, right?

On the against side – what is the most effective way to spend a limited security budget? Does spending budget on training offer the same improvement in overall security as, say, adding a further layer to the defence in depth strategy or hiring extra dedicated IT security personnel? Even with training, a significant number of users will still click the link in a phishing email or give out details they shouldn’t to a social engineer, so you still need all the other defences, both technical and personnel, even if an extensive security awareness program is undertaken.
– Users will always be a large security risk, so it’s best to treat them and their actions as untrusted and create a security posture accordingly.

So which side is right? I think to a large extent they both are. Depending on which report you read, something like 60-80% of all APT (Advanced Persistent Threat) attacks are initiated via social engineering – e.g. getting a user to do something for the attacker. So the most insidious attacks – those that are very difficult to detect and are currently being used by the security industry as the driver for selling new security tools – tend to start with the user. Surely then reducing the chances someone will succumb to social engineering must be a good thing? Yes, you’ll never get to 100%, but then no security device ever detects or prevents 100% of attacks either. So why do security tool vendors not like awareness training? Likely money and profits.

A balanced approach is key: understand the environment and threat landscape your company operates in and create a holistic security program encompassing the necessary tools, skilled security personnel and user awareness training.

So, how can awareness training be made as effective as possible? Along with mixed and continuous messages and taking the time to make security part of the culture, the key thing is to get the message to people and make them want to take it on board. I think there are two components to make this successful;
– Fear – not with lies or exaggeration, but by highlighting real stories, especially stories that people will relate to, so think PlayStation and bank / online shopping hacks.
– Make it relevant – link the secure ways of working to people’s home lives, so highlight how they can be secure online, not fall for scams, use social sites as safely as possible, shop safely etc.

To conclude, my opinion is that security awareness training does add real value and should be part of any security program. It does not, however, in any way replace the need for a strong defence in depth strategy aligned to your business and threat landscape. What do you think?

K

RSA’s First UK Data Security Summit – part 2: Verizon Data Breach Report 2013

The Verizon Data Breach Report 2013 was publicly released on Tuesday (23rd April).  We were given a world preview and an initial review of the critical findings for business, as one of the key talks during the RSA UK Data Security Summit.

The report can be downloaded from here;

http://www.verizonenterprise.com/DBIR/2013/

How as an organisation can we better understand our threat landscape?

Who gets attacked?  Everyone – no one is immune;

  • Finance companies account for 34% of attacks
  • Attacks occur across all verticals and all business sizes
  • We are subject to continuous, non stop attacks
  • 19% of all attacks investigated appear to be state sponsored espionage – this also impacts companies of all sizes!

Who are the attackers?

  • Activists – maximise disruption / cause embarrassment etc.
    • basic, opportunistic, sheer numbers
  • Criminals – financial gain – PII, card details, proprietary business data
    • More calculated and complex, but still often opportunistic, trade information for cash
  • Spies – get exactly what they want – will stick at it until they succeed, far more so than the first two.
    • most sophisticated tools (often), most targeted attacks, relentless

What to worry about (what are the trends)?

  • Same as last year’s
  • 75% breaches – financial motives
  • 95% of espionage used phishing!
  • Don’t ignore well established threats

What do they target (assets)?

  • Desktops 25%, file servers 22%, laptops 22%
  • Unapproved hardware accounts for 43% of misuse cases
  • BYOD / consumerisation has had little impact on the figures so far (maybe due to report being US centric?)

Many data breaches have an unintentional element – many attacks focus on perhaps less trained / savvy staff – 46% originated through call centre staff

69% of breaches were spotted by a third party (9% were customers)

  • Most breaches are still not spotted by the breached company, despite all the log data etc. held within the company.

Minimal time to attack – in 84% of cases the time from attack to compromised data was hours or less.  ~20% took minutes or less!

  • How quickly can you react, how quickly can you find the breach?
  • In 66% of cases, breaches took months or even years to be discovered!  How much data could be stolen in this time, what could the attackers find out, and what would the reputational damage be?

Most organisations are a target because of what they do;

  • What do you do, and who wants your data?
  • Investigate profiling threat actors.

Recommendations

  • Make security company wide
  • Create better, faster detection – people, process, technology
  • Don’t underestimate tenacity
  • Understand threat landscape

 Security awareness training is still key!

So overall, despite the evolving threat landscape, in many ways little has changed.  However, this report is definitely worth a read, and the inclusion of state actors in addition to criminals and activists / hacktivists keeps it relevant and in line with reality.

K

Using passwords for authentication

Recently, when researching for my Masters project, I came across some studies about users and password use.  I think we now know that passwords should be dead, replaced / augmented by something better such as two-factor authentication using tokens or biometrics.  However, many systems still rely on usernames and passwords.

In terms of business, in order to improve security many companies now add two-factor authentication when logging in remotely, so the user enters their username, some sort of PIN or password, and a value from a hardware or software token.  While this helps with the issues around passwords when remotely logging into systems, such as when working from home, it does nothing to improve the security of logging in with just a username / password in the office.

The traditional assumption has been that it is OK to use just a username / password when logging in from a more secure location such as the office, when you are already connected to the trusted network.  Assuming your business uses modern operating systems that employ salted hashes for any password storage or transmission, the issue is not with someone malicious managing to ‘sniff’ the password while it is in transit, or getting hold of the password store.  But what of the users who use the same password for multiple systems?  What if your users log into insecure web sites using the same or very similar passwords to those they use to log into the secure business systems?

Studies have shown that nearly all users re-use passwords.

In addition, users will tend to use the least complex, easiest-to-remember password possible – so while your business’s chosen level of complexity may have a password space of xxxxx passwords, the users’ passwords may actually tend to occupy a much smaller space, or be easy to guess despite meeting the password complexity requirements.
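
To put some rough numbers on that, here is a quick back-of-the-envelope calculation comparing the theoretical space of an 8 character policy with the much smaller space occupied by a common ‘dictionary word plus a digit’ pattern.  The figures are illustrative assumptions, not taken from any specific study.

```python
import math

# Theoretical keyspace of an 8-character password drawn from ~95 printable
# ASCII characters.
full_space = 95 ** 8
print(f"Theoretical 8-char space: {full_space:.2e} (~{math.log2(full_space):.0f} bits)")

# A common real-world pattern that still satisfies many complexity rules:
# a dictionary word (assume ~50,000 candidates), first letter capitalised,
# followed by a single digit. Rough, illustrative assumption.
pattern_space = 50_000 * 10
print(f"'Word + digit' space:     {pattern_space:.2e} (~{math.log2(pattern_space):.0f} bits)")
```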

People will also tend to write down passwords that are too difficult to remember easily.

So I’d strongly recommend moving away from just relying on passwords and utilising some form of multi-factor authentication, even within the office environment.  This is not as difficult as it may sound – most (all?) modern operating systems support multi-factor authentication out of the box.

If you cannot move away from just relying on passwords, then a user education programme is a must.  A good password is not just a complex one; it must combine complexity and being difficult to crack with being easy for the user to remember.  If users can understand both the password policy and the rationale for it, along with ways to come up with strong passwords that are easy for them to remember, this will lead to a more secure environment.

Interestingly, we again come around to user education and training being a key component of a defence in depth security strategy.

K

Phishing: what is phishing and how to protect against it.

Phishing continues to be one of the key attack vectors against both individuals and corporations.

At a personal level it’s one of the most successful ways malicious individuals and groups have for stealing credit card details and identities.

At a corporate level it is one of the most common – if not the most common – entry points into an organisation.  This is true even for the majority of the Advanced Persistent Threat type attacks that are discovered; while they may use many clever techniques to avoid detection once they are established, the usual entry point is via some form of social engineering, with Phishing being the most common social engineering attack.

It is due to this that I was recently asked to create a brief overview of Phishing covering what it is, why it is so prevalent, and what can be done to reduce the risk.  I’m sure most of you are aware of what Phishing is, but I thought I would share some of the content of my recent presentation.

I started with a brief overview of what Phishing is;

•Phishing is a fraudulent attempt, usually made through email, to steal your personal information. The best way to protect yourself from phishing is to learn how to recognize a phish.

•Phishing emails usually appear to come from a well-known organization and ask for your personal information — such as credit card number, social security number, account number or password. Often times phishing attempts appear to come from sites, services and companies with which you do not even have an account.

•In order for Internet criminals to successfully “phish” your personal information, they must get you to go from an email to a website. Phishing emails will almost always tell you to click a link that takes you to a site where your personal information is requested. Legitimate organizations would never request this information of you via email.

Wikipedia has a longer version providing an overview of Phishing;

http://en.wikipedia.org/wiki/Phishing

This is actually a pretty good article covering a brief history of Phishing, various Phishing techniques, and some prevention / anti-Phishing tools and techniques.

I then went on to cover some further terminology around different types or developments of Phishing that have dramatically improved its effectiveness;

Phishing began as very generic, spam like emails.  These have over time become much more realistic and targeted in order to improve the chances of success for the attacker.  Various terms have been coined to describe these more targeted attacks;

•Spear Phishing refers to attacks targeted at specific individuals or groups of individuals such as employees of a company.  Attackers will gather personal and / or company specific information in order to improve their chances of success.

•Clone Phishing is where a legitimate email that contains attachments or links is cloned / copied, but with malicious attachments or links.  This exploits the trust that may be inferred from the email coming from a seemingly legitimate source.

•Whaling is a term for phishing attacks specifically targeting only very senior company executives.

•A further term recently coined in a blog post by Bruce Schneier was ‘laser guided precision phishing’ when describing some recent advanced phishing attacks.  The clear message is that these are getting better and harder to spot all the time, and these attacks are seldom stopped by technical means;

–“Only amateurs attack machines; professionals target people”

Basically, Phishing continues to evolve, with attackers spending time to do reconnaissance on higher value targets to make the attacks look as realistic as possible in order to increase their success rate.

The final part of the presentation covered some of the methods that can be employed to reduce the risk from Phishing attacks;

•Security / Phishing awareness and training.

–Phishme (or similar service) – this has a great success rate with figures such as 60% of users clicking on Phishme email links reducing to <10% after a few cycles.

–Broader training – regular communications from our department around security awareness and things to look out for.

•Make emails from external sources more obvious, such as by changing the display name on internal emails.

–This helps improve vigilance; however, so many emails are received from external sources that the benefit is likely limited.

•Disable links and attachments in emails from external sources

–Likely impacts many business processes – is a white list of all ‘trusted’ email sources feasible or maintainable?

•Ensure any heuristic and zero day type protections are functioning as designed to provide maximum protection from bespoke and new attacks.

•Enforce ‘least privilege’ – no users log onto any machine with administrative or root privileges, always use ‘Run As’ or Sudo for any actions requiring elevated privileges

•Ensure any browsers in use are kept up to date, with any anti-phishing add-ins / toolbars installed and functioning

•Black / White listing of acceptable sending domains.  White listing is more cumbersome but more effective; black listing is easier (as with most security technologies) but less effective, as it can only block known bad sites / domains.  Neither of these techniques will stop spoofed emails or emails from compromised ‘good’ sites / domains.  (A rough sketch of this approach follows this list.)

•Become involved with organisations / forums such as the Anti Phishing Working Group; http://www.antiphishing.org/
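
As a rough sketch of the black / white listing point above, the snippet below classifies a sender’s domain against simple deny and allow lists.  The domains are placeholders, and as already noted this does nothing against spoofed senders or compromised ‘good’ domains.

```python
# Rough sketch of black / white listing a sender's domain. The domain lists are
# placeholders; this does nothing against spoofed senders or compromised
# 'good' domains.
BLACK_LIST = {"badsite.example", "phish.example"}                # known bad domains
WHITE_LIST = {"trusted-partner.example", "ourcompany.example"}   # explicitly trusted

def sender_domain(address):
    return address.rsplit("@", 1)[-1].lower()

def classify_sender(address, use_white_list=False):
    domain = sender_domain(address)
    if domain in BLACK_LIST:
        return "block"
    if use_white_list:
        # White listing: anything not explicitly trusted is held for review.
        return "allow" if domain in WHITE_LIST else "quarantine"
    # Black listing only: anything not known bad is allowed through.
    return "allow"

print(classify_sender("invoice@phish.example"))                       # block
print(classify_sender("news@unknown.example", use_white_list=True))   # quarantine
```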

In conclusion I would wholly recommend a solid defence in depth strategy for your organisation when it comes to security tools and strategy, but I would also say that user training is a key component of reducing the risk from Phishing; if not the most critical component.

A great way to learn more, and help improve anti-phishing techniques is to get involved with organisations such as the Anti Phishing Working Group (link above).  They also offer some useful anti-phishing training.

It would be great to hear your thoughts on Phishing, and the user training vs. technical controls debate.

K

RSA Conference Europe 2012 – They’re inside… Now what?

Eddie Schwartz – CISO, RSA and Uri Rivner – Head of cyber strategy, Biocatch

The talk started with some discussion around general Trojan attacks against companies, rather than long-term, high-tech APTs, with the tagline: ‘If these are random attacks.. we’re screwed!’

Worth checking the pitch, but there was a series of examples from the RSA lab in Israel of usernames and passwords and other data that Trojans had sent to C&C servers in Russia.  These included banks, space agencies, science agencies, nuclear material handling companies etc.

So what do the controllers of these Trojans do with the data?  Remember these are random attacks collecting whatever personal data they can get, not specific targeted attacks.  A common example is to sell the data; you can find the criminals on message boards etc. offering banking, government and military credentials for sale.

Moving on to examples of specifically targeted attacks and APTs..  Examples of targeted attacks include Ghostnet, Aurora, Night Dragon, Nitro and Shady RAT.  These have attacked everything from large private companies, to critical infrastructure, to the UN.  All of the given examples had one thing in common – social engineering.  Every one used Spear Phishing as its entry vector.

From this I think you need to consider – do you still think security awareness training shouldn’t be high on your organisation’s to-do list?

The talk went on to discuss Stuxnet and Duqu, along with their similarities and differences, largely what was captured in my last post.  The interesting observation here was their likely different places in the attack process.  Stuxnet was at the end, being the actual attack; Duqu was likely much earlier in the process, as it was primarily for information gathering.

A whole lot more targeted malware examples were given including Jimmy, Munch, Snack, Headache etc.  Feel free to look these up if you want to do some further research.

A very recent example of a targeted attack that was only discovered in July of this year is VOHO.  This campaign was heavily targeted at geopolitical and defence targets in Boston, Washington and New York.  It was a multistage campaign heavily reliant on JavaScript.  While focused on specific target types, the attack was very broad, hitting over 32,000 unique hosts and successfully infecting nearly 4,000.  This is actually a very good success rate, with the campaign no doubt considered a success by those instigating it.

In light of this evidence it is clear we need a new security doctrine.  You will get hacked despite your hard work; if it has not yet happened, it will.  Learn from the event – an honest evaluation of faults and gaps should result in improvements.

Things to consider as part of this new doctrine;

–          Resist – Threat resistant virtualisation, Zero day defences

–          Detect – Malware traces, Big data analytics, behavioural profiling

–          Investigate – Threat analysis, Forensics and reverse engineering

–          Cyber Intelligence – Threat and Adversary intelligence

Cyber Intelligence was covered in some more specific detail, around how we can improve this;

–          External visibility – Industry / sector working groups, Government, trusted friends and colleagues, vendor intelligence;

  • Can this information be quickly accessed?  For speed should be in machine readable format, but use whatever works!

–          Internal visibility – Do you have visibility in every place it is needed – HTTP, email, DNS, sensitive data etc.?

  • Do you have the tools in place to make use of and analyse all of these disparate data sources?

–          Can you identify when commands like NET.. and schedulers etc. are being used?  (A rough sketch of one such check follows this list.)

–          Do you have visibility of data exfiltration, scripts running, PowerShell, WMIC (Windows Management Instrumentation Command-line) etc?

–          Do you have the long term log management and correlation in place to put all the pieces of these attacks together?
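
As a rough sketch of the sort of internal visibility check mentioned above, the snippet below scans process-creation style log lines for commands commonly abused once an attacker is inside (net, schtasks, wmic, powershell and so on).  The log format and the watch list are illustrative assumptions; in practice this logic would live in your log correlation / SIEM tooling rather than a standalone script.

```python
import re

# Commands commonly abused for lateral movement, scheduling and exfiltration.
# The watch list and the assumed log line format are illustrative only.
WATCH_LIST = {"net", "net1", "schtasks", "at", "wmic", "powershell", "bitsadmin"}

# Assumed log format: "<timestamp> host=<host> user=<user> cmd=<command line>"
LOG_PATTERN = re.compile(r"host=(?P<host>\S+)\s+user=(?P<user>\S+)\s+cmd=(?P<cmd>.+)$")

def suspicious_events(log_lines):
    """Yield (host, user, command line) for process creations on the watch list."""
    for line in log_lines:
        match = LOG_PATTERN.search(line)
        if not match:
            continue
        executable = match.group("cmd").split()[0].lower()
        executable = executable.rsplit("\\", 1)[-1].removesuffix(".exe")
        if executable in WATCH_LIST:
            yield match.group("host"), match.group("user"), match.group("cmd")

sample = [
    "2012-10-10T09:14:02 host=ws042 user=jsmith cmd=C:\\Windows\\System32\\wmic.exe process call create calc.exe",
    "2012-10-10T09:15:11 host=ws042 user=jsmith cmd=outlook.exe",
]
for host, user, cmd in suspicious_events(sample):
    print(f"ALERT {host} {user}: {cmd}")
```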

Summary recommendations and call to action..

–          Assume you are breached on a daily basis and focus on adversaries, TTPs and their targets

–          Develop architecture and tools for internal and external intelligence for real-time and post-facto visibility into threats

–          Understand current state of malware, attack trends, scenarios, and communications

–          Adjust security team skills and incident management work flow

–          Learn from this and repeat the cycle..

Next steps (call to action!);

–          Evaluate your defence posture against APTs, and take the advice from the rest of this post

–          Evaluate your exposure to random intrusions (e.g. data stealing Trojans), and take the advice from the rest of this post

Useful presentation from a technical and security team standpoint, but completely missed the human and security awareness training aspect – despite highlighting that all the example APTs used spear phishing to get in the door.  I’d recommend following all the advice of this talk and then adding a solid security awareness program for all employees and really embedding this into the company philosophy / culture.

K