Turning Security into a Profit Centre

Security is still widely seen as a cost centre, or a necessary evil that is simply a ‘cost of doing business’.  This, combined with the historical challenge of gaining traction at board level, has slowed security’s progress towards being a key part of most businesses.

This is true even in industries where security is now seen as critical, and where the board has the time for and an appreciation of it.  In industries such as financial services, gambling, big pharma and even gaming, getting funding, resourcing and executive support for security programmes is less of a challenge.  Even with this support, however, the view is still one of security being a cost of doing business.

In order to progress further and make security genuinely a key part of the business, we need to move the conversation on again.

Over the last few years CISOs and security teams have worked diligently to understand their business and speak in the language of their business peers.  This has been a key factor in gaining board support for security.

The next step is to take this further, and work out how security can become a key part of the business offering for your organisation.

This should likely begin with how you differentiate yourself in the market.  Start thinking along the lines of;

  • How is your company’s security different from, or better than, others in the market?
  • How do you ensure your customers’ data is kept secure?
  • What reassurance can you offer your customers?
  • If your business involves medium to longer term partnerships, can you become your customers’ ‘trusted partner’?
  • Do you have an impeccable record e.g. never been breached, never lost customer data etc.?

The aim here is to think about ways you can make your strong security a part of how your organisation sells itself.  Security needs to become a part of ‘who the organisation is’.  By doing this you’ll move security to the ‘next level’ in the business, where it isn’t just a boardroom topic because it has to be, but is a boardroom topic as a key part of what you do.

By making security a key differentiator for your business, you’ll also make security much more part of the conversation across the business as it becomes part of how your organisation sells itself.

 

Now for the really big bet! Can we move security even further, to not just be a differentiator, but to become something you actually sell?

Whether this is possible will depend on your industry, company size, customer base etc.  However, if it is possible, think how powerful this could be!

Imagine not only the benefit to the standing of the security team if you are able to actually sell services and solutions to your customers, but also the benefit to the actual security / risk posture of your organisation!

Have a think;

  • Do you hold large volumes of data on your customers, or their customers?  Could this be used to provide valuable security analytics such as fraud or unusual behaviours?  Could it even be used to provide predictive analytics?
  • Do you run enterprise scale services that you could provide at a relatively low incremental cost to your customers, such as encryption, tokenisation, authentication, …?
  • Could you support your customers in achieving compliance with whatever regulatory environment you work in?
  • Is it possible to securely host your customers’ services in your own DCs?  This has the added benefit of ensuring communications from the customers’ systems to yours are secure.
  • Can you provide them other capabilities such as monitoring, vulnerability scanning / management, secure coding guidance …
  • Insert your ideas here!

Seriously, if you work in security, and especially if you have a leadership role, think about this.  It’s time for a step change to really make security front and centre of your organisation.  Let’s stop being one of the ‘costs of doing business’ and become a core part of what our organisation does!

K

Secure Mobile Applications, part 3 – Bringing it all together!

Hopefully it is fairly obvious from the last couple of posts how I think a mobile application can be made ‘secure enough’ to replace hardware security devices and enable many other capabilities from mobiles / tablets etc.  However, I thought it may be useful to provide an overview of how the components detailed previously will work together to provide this capability.

Many organisations such as banks have launched, or are launching, payment applications that enable you to make payments with your phone rather than needing your bank card, and of course there are Apple Pay and Samsung Pay etc.

So it’s clear people are becoming comfortable with mobile devices for some use cases, sometimes purely software, sometimes with hardware components involved such as Knox or TEE (Trusted Execution Environment).  This is likely helped by the rise of ‘contactless’ payments in many parts of the world.

While hardware components and secure operating system components can form part of a secure mobile application solution, they are by no means a silver bullet.  As you still need some part of the application to run in normal, untrusted space, you face the same problems as if there were no hardware solution in place.  What is to stop a malicious application attempting to man-in-the-middle the communications between the secure and insecure environments?  Indeed, what is to stop a malicious application from just impersonating the secure component to the insecure one?

Hardware based solutions also face challenges around support and different capabilities on different devices.

This is why I have focussed on a software only proposal.

If we get to the point where we can trust and monitor a software only solution, this opens up so many possibilities – as long as you are on a supported O/S version, you can run your secure application(s) on any device, anywhere.

While we have the above mentioned payment applications, there are much wider use cases once we get to the point that we really do trust the mobile application.  I mentioned some of these in my original post on this topic.

As a recap, these were;

  • Become your payment instrument.  Not like Apple Pay, which still uses your card in the background, but actually being your card(s).
    • This can also provide a much richer user experience, such as alerting the user every time there is a transaction on the ‘card’
  • Take payments in stores without the need for a physical card payment solution.
    • EMV (chip and pin) becomes EMV mobile devices and PIN / other
  • Replace your driver’s license / passport / age card etc. as a valid form of ID.
  • Enable secure signing of legal / contractual documents.
  • Combine with technology like RFID and GPS etc. to revolutionise the retail experience.
  • ‘Card not present’ becomes ‘card present’ (the end of ‘Card not present’ fraud!)
  • Secure mobile banking becomes actually secure and fully featured
  • Support (or deny) any disputed transactions by providing more detailed information about the device, location and users involved
  • Become your mobile medical record – no longer do doctors or hospitals have to look up your records (or not find them), you carry a copy with you, that syncs from the central repository when it is updated

I am sure you can think of many others!

So how do the previously detailed components all come together to provide a secure, monitored environment?

In ‘real time’ there are 5 main components;

  • The mobile app
  • Secure decision point
  • Real time risk engine
  • Authentication
  • Monitoring

 

The mobile application – this comprises all of the security components deployed to the mobile device, along with the actual application capabilities of course!  These components are the key to understanding the security status of the device.  They also provide details of behaviour, from things like location to the user’s activity, along with authentication information.  These components have the responsibility for securing and monitoring the device and user behaviour, plus ensuring this data and telemetry is securely provided to the secure decision point and monitoring services.
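As a sketch of that last point – securely providing telemetry to the back end – the app could integrity-protect each message with a MAC over the payload.  This is purely illustrative (key distribution and transport encryption such as TLS are assumed to be handled elsewhere);

```python
import hashlib
import hmac
import json

# Illustrative sketch only: integrity-protecting telemetry sent from the
# mobile app to the secure decision point.  Key management is out of scope.
def sign_telemetry(payload: dict, shared_key: bytes) -> dict:
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(shared_key, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "mac": tag}

def verify_telemetry(message: dict, shared_key: bytes) -> bool:
    body = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(shared_key, body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information via timing differences
    return hmac.compare_digest(expected, message["mac"])
```

Any tampering with the payload in transit changes the expected MAC, so the decision point can simply discard messages that fail verification.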

The secure decision point provides a central (resilient of course!) control point for all application traffic to pass through.  This enables relevant data to be passed to the correct components, such as the risk engine and monitoring solution(s).  In addition it provides an added layer of protection for the back end application services.  Any time the application or user behaviour is deemed unacceptable, the connection can be blocked before it even reaches the back end services.

The real time risk engine enables risk based decisions to be made from the information supplied by the other security components.  The secure decision point, authentication solution and ‘external’ sources like threat intel and the big data platform all feed the risk engine.  This can be applied to many activities including authentication, user behaviours, and transactions.
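To make this concrete, a toy sketch of such a risk engine follows.  The signal names, weights and thresholds are entirely invented for illustration; a real engine would use far richer models and data;

```python
# Hypothetical signals and weights - illustrative only, not from any product.
RISK_WEIGHTS = {
    "rooted_device": 40,
    "new_location": 15,
    "failed_auth_recently": 20,
    "threat_intel_match": 35,
    "unusual_transaction_amount": 25,
}

BLOCK_THRESHOLD = 60    # at or above this, block the connection outright
STEP_UP_THRESHOLD = 30  # at or above this, demand step-up authentication

def score_activity(signals: dict) -> int:
    """Combine boolean signals from the decision point, authentication
    solution and external feeds into a single risk score."""
    return sum(w for name, w in RISK_WEIGHTS.items() if signals.get(name))

def decide(signals: dict) -> str:
    """Return 'allow', 'step_up' (extra authentication) or 'block'."""
    score = score_activity(signals)
    if score >= BLOCK_THRESHOLD:
        return "block"
    if score >= STEP_UP_THRESHOLD:
        return "step_up"
    return "allow"
```

For example, a rooted device that also matches threat intel scores 75 and is blocked before reaching the back end, while a merely unfamiliar location triggers step-up authentication rather than a hard block.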

Authentication does what the name implies – it authenticates the user and, likely to at least some extent, the device.  The difference from ‘traditional’ authentication is that, as well as authenticating at logon and supporting multiple factors and types of authentication, it can authenticate constantly in real time.  Every time the application is used, information about the device, location, user behaviour etc. is passed to the authentication solution, enabling authentication decisions to be made for any application activity.  In addition to providing rich risk information for the risk engine, this also enables fully authenticated transactions.
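A minimal sketch of this constant, context based authentication might look like the following.  The baseline fields and the ‘challenge’ outcome are hypothetical simplifications of what a real behavioural model would do;

```python
from dataclasses import dataclass, field

# Toy per-user baseline - a real solution would model far richer behaviour.
@dataclass
class UserBaseline:
    usual_countries: set = field(default_factory=set)
    usual_device_ids: set = field(default_factory=set)

def authenticate_request(baseline: UserBaseline, context: dict) -> str:
    """Check every request, not just logon: return 'ok' when the context
    matches the baseline, otherwise 'challenge' to trigger step-up auth."""
    if context.get("device_id") not in baseline.usual_device_ids:
        return "challenge"
    if context.get("country") not in baseline.usual_countries:
        return "challenge"
    return "ok"
```

The key point is that this check runs on every application activity, so individual transactions can be fully authenticated rather than relying on a single logon event.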

Monitoring refers in this case to security monitoring of the system components and their data.  This provides expert analysis and alerting capabilities to augment the automated processes of the risk engine, authentication solution and secure decision point.  This may be internal staff, a dedicated SOC (Security Operations Centre), a dedicated mobile security monitoring centre, or a combination of these.

 

As you can see, all these components combine to provide an understood and secure environment on the mobile device, backed up by real time monitoring, risk based decisions and authenticated activities.

These ‘real time’ components are further backed up by external feeds from intelligence sources, and by analytics performed in the big data platform.  This enables learning from the behaviour of users and devices in the environment, so that the risk based rules and manual alerting can be refined based on previous and current activities and outcomes.

Depending on a combination of the security requirements for your application, and the resources available, you may not need or want to implement every component here.  Overall the detailed environment provides a software only solution that is capable of providing enough security to enable pretty much any activity.  I’d love to hear your thoughts, and any experiences of deploying and proving secure mobile applications!

 

K

Secure Mobile Applications

Subtext, can a mobile application be ‘secure enough’ to replace single purpose hardware devices?

An area I have been discussing for some time is whether we can make a mobile application secure enough that it can be trusted to replace physical devices / items.

If we can achieve this, there are many possibilities for your phone / tablet enabling it to;

  • Become your payment instrument.  Not like Apple Pay, which still uses your card in the background, but actually being your card(s).
    • This can also provide a much richer user experience such as alerting the user every time there is a transaction on the ‘card’
  • Take payments in stores without the need for a physical card payment solution.
    •   EMV (chip and pin) becomes EMV mobile devices and PIN / other
  • Replace your driver’s license / passport / age card etc. as a valid form of ID.
  • Enable secure signing of legal / contractual documents.
  • Combine with technology like RFID and GPS etc. to revolutionise the retail experience.
  • ‘Card not present’ becomes ‘card present’
  • Secure mobile banking becomes actually secure and fully featured
  • Support (or deny) any disputed transactions by providing more detailed information about the device, location and users involved
  • Become your mobile medical record – no longer do doctors or hospitals have to look up your records (or not find them), you carry a copy with you, that syncs from the central repository when it is updated

 

The question is can we?

My take on this is yes.  But with some caveats around how, and what we need to do to ensure the safety of the data used by the application.

The great news for me is that other people are finally starting to get on board with this idea, after a mere 18 months or so.  It seemed like an opportune time to write in some more detail about my thoughts!

Before we start this discussion we need to adjust the mind-set from

  • thinking about a supposedly secure device that we do little to monitor

to

  • thinking in terms of real time application and behaviour monitoring to provide assurance of the application and device security, along with the user identity and behaviour.

 

For me the ‘assumed secure hardware’ stance seems terrifically old fashioned when compared to a solution where we can monitor and understand the risk profile continuously.

Now we are thinking in these more current terms, just how do we go about making a mobile application as secure as a dedicated hardware device?  Indeed, when you consider the more intelligent monitoring and risk assessments we can perform in real time I would position this software solution as considerably better than the existing hardware options.

 

For me the ecosystem for a secure mobile application would comprise the following components;

[Diagram: mobile app security concept]

To avoid this becoming a mammoth post, I’ll cover some of the key capabilities of this system here, and provide details of each component in part 2.

Some of the key capabilities these components will provide include;

  • Real time monitoring
    • Data sent to and from app in real time
    • Automated blocking and alerting
    • 24*7 ‘eyes on glass’ monitoring
  • Behavioural monitoring
    • Device
    • User
    • Application
  • Application monitoring
    • Is it the correct application (e.g. checksum)
    • Is it behaving as expected
    • ‘trap’ code in the application that is only accessed or changed if there is an issue
  • Rooting / Jailbreak detection
    • Auto updates to detect new methods or ways of hiding
    • Can alert monitoring and user if detected
  • Malware detection / device interrogation
    • Device ID, software versions etc.
    • Automatically updating detection capabilities
  • User alerting
    • Alerts user if there are any issues detected
    • Alerts user of activity on their account
  • Behaviour blocking
    • Can block some or all in app activity based on the current risk profile
  • Secure communications
    • between app / mobile device and back end
    • frequently changed keys
    • key management and distribution
  • Encryption
    • White box
    • hardware
    • In field
    • In app
  • Bot vs. real user detection
    • detects bot like behaviour
    • detects remote control behaviour
    • build picture of user normal behaviour
  • Real time risk scoring of activity / transactions
    • collection of multiple data points
    • real time risk scoring, decisioning and blocking of transactions and behaviours
  • Multiple authentication methods and step up authentication
    • Policy based
    • Risk based
    • FIDO compatible
  • GEO location
    • Current location
    • Historical locations linked with behaviours
  • Fraud detection
    • Components can detect potentially fraudulent activities, such as the amount entered into a field not matching the amount sent to the back end
  • Trending and predictive analytics
    • Big data platform can provide analytics capabilities and long term trending
    • Machine learning and predictive analytics can guide security enhancements
    • May also become a saleable service for your business
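As one concrete illustration, the ‘is it the correct application (e.g. checksum)’ capability above could work along these lines on the back end.  The app identifier and registration flow here are invented for the sketch;

```python
import hashlib

# Hypothetical sketch of application integrity checking via checksums: the
# back end compares a hash of the app package against a known-good value
# recorded at release time.
KNOWN_GOOD = {}

def register_release(app_id, package_bytes):
    """Record the known-good SHA-256 digest for a released app version."""
    KNOWN_GOOD[app_id] = hashlib.sha256(package_bytes).hexdigest()

def app_integrity_ok(app_id, package_bytes):
    """True only if the package matches the recorded release exactly."""
    return KNOWN_GOOD.get(app_id) == hashlib.sha256(package_bytes).hexdigest()
```

A repackaged or tampered application produces a different digest and fails the check, which can then feed the risk engine or trigger blocking at the secure decision point.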

This is by no means an exhaustive list; my intention is to get people thinking about the possibilities for secure mobile applications.  Hopefully this post has got you thinking about how we can secure and monitor our applications on any device, anywhere.  This really will open up a whole new world of possible capabilities for mobile devices, especially in the worlds of business and consumers / businesses transacting.

Part 2 will follow in the next few days providing some more details around the building blocks in this ecosystem.

K

Developer engagement..

Following my recent posts covering application security and patching, another recent hot topic of conversation has been developer engagement.  Specifically, how to ensure developers are fully engaged in the secure development process.

Much like many current patching processes an issue with a lot of secure development programs is that they are still very ‘push’ focussed.

This approach can be successful, especially for organisations with less security maturity.  I have personally seen great uptake in the use of secure development tools and processes through my current team’s work.  However, while the ‘push’ approach can dramatically improve your application security processes, it does have limitations.

These limitations include;

  • It is very resource intensive for the Security Team – they constantly have to ‘push’ to get developers using the tools and processes and on-boarding new applications and developers etc.
  • It can lead to a culture of secure development being the responsibility of the security team rather than the development teams – a culture of taking rather than owning security
  • Things, including entire applications, can be missed.  The security team can struggle to know every application and development project in the environment if the onus is on them to push security to every application.  This is especially true in more complex, multinational organisations
  • This focus can also lead to dissatisfaction and higher churn in your security team as you will have skilled application security professionals spending large amounts of their time on-boarding developers and applications, and chasing development teams to understand what is being worked on

So, how can we fix this and ensure the secure development practices from training through threat modelling and code reviews are embedded into the development process?

If you read my post on patching, my key thought will not surprise you!  Developers and development teams need to have responsibility for secure development as one of their key objectives.

This is a fundamental shift in culture and responsibilities.  In order to be successful you’ll need to work to drive support from executives as well as the development teams and their managers.  However if you can enable this culture shift, you will have made a huge difference to the uptake and success of your application security programme.

This shift moves the responsibility for ensuring developers are enrolled on the training platform and added to the code analysis tooling into the development teams.  It also makes the use of the security tooling one of their key objectives.  There are two key benefits to this approach;

  • Coverage: the development teams and their management will be bought into the benefits of and requirements for security.  This will drive the use of security tools and processes for all in scope developers and applications.
  • Focus: enterprise security can focus on security.  The skilled application security specialists in your team will now be able to focus much more on working with the development teams to support secure design and coding.  The secondary benefit is that your team will be more engaged, doing the work they want to rather than chasing developers to on-board them and their applications!

The second thing I would recommend to support this approach is the creation of ‘security champions’ within the various development teams.  This would likely not be a distinct role, but rather existing developers who ideally have an abiding interest in security.

These roles will have responsibility for maintaining a strong understanding of the required security processes and tooling, along with fostering a close working relationship with the application security team in enterprise security.  They could also help with the development of secure coding guidelines, and with ensuring the application security team understands the way the developers work and the challenges they face.

To support the role and the working relationship, consider giving security champions a ‘dotted line’ reporting relationship to the application security director (or equivalent).

The final piece of the puzzle is to ensure your application security team understands the various development processes in use across your organisation.  While the same level of engagement and similar tooling and processes will be used regardless, how they are applied is likely to vary considerably across different development and support styles.

To integrate successfully, the application security team must consider how best to work with Waterfall, the various ‘flavours’ of Agile, DevOps etc.  While you do not want a million different processes, you’ll likely need a few to cope with the different development and project approaches.

If it is not yet in place for your organisation, a good starting point for the SDL (Secure Development Lifecycle) and the steps it should contain is the Microsoft SDL;

https://www.microsoft.com/en-us/sdl/

To help you consider variations of your process for different methodologies they also have an agile version;

https://www.microsoft.com/en-us/SDL/Discover/sdlagile.aspx

Let me know what you think of these ideas, and how you get on with implementing them in your organisation, if you are not doing so already!

K

2016 Resolutions. The detail..

As promised, this follow up post will outline what I mean by each of the ‘resolutions’ I highlighted.

These were;

  1. Patch.  Everything.  On time.
  2. Protect your hosts.  Do application whitelisting.
  3. No admin rights for anyone who can access production data.
    1. No one with admin rights can access data.
  4. Role Based Access.
  5. Segregate your networks.
  6. If you create code, do solid code assurance.
  7. Test and Audit.

 

1. Patch.  Everything.  On time.

Sounds simple, right?  It should be, but it seems it isn’t in many companies.  From my experience there seem to be two main drivers for so many companies failing this most basic of maintenance tasks;

  • Systems that must have almost 100% uptime, with no, or ill defined, patching windows and processes.  This goes hand in hand with these solutions being incorrectly designed: if a system must always be ‘up’, design it in such a way that components can be taken out of service to be patched and maintained (or indeed if they fail).
  • Incorrect ownership of, and drivers for, the patching process.  In many organisations it seems to be the security team who drive the need to apply ‘security’ patches.  This needs to be turned around.  Any system in production must be patched and maintained as part of BAU.  Systems / solutions should never be handed over into production without clear ownership and agreed processes for maintaining them, and this must include patching.  Security then becomes an assurance function, whose scans / checks confirm that the process is being correctly followed, plus of course highlight any gaps.

If you see these issues in your organisation, make 2016 the year you address them, don’t be the next business in the headlines that is hacked due to systems that have not been patched for months!

2. Protect your hosts. Do application whitelisting.

With the ever more porous nature of our networks and perimeters, coupled with the insider threat, phishing etc., protecting our hosts is becoming ever more critical.

AV (Anti Virus / Malware) is not dead, but it also clearly is not enough on its own.  Indeed, you will struggle to find any host protection product that only does AV these days.  Ensure all your hosts, both servers and user end points, are running a solid, up to date and centrally managed host protection solution (or solutions) providing anti malware, host IPS (Intrusion Prevention System), host firewalling and ideally FIM (File Integrity Monitoring).

I’m gradually trying to change people’s language from AV / Anti Malware to Host Protection, as I think this covers both the requirement and many of the solutions far better.

In addition to this I would strongly recommend the use of an application whitelisting solution, as this can provide a key defence in preventing any unapproved (or malicious) software from running.  As well as preventing malware, these solutions have the added benefit of helping to maintain a known environment, running only known and approved software.
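At its simplest, hash based whitelisting works along the following lines.  Real products also support publisher and path rules; this toy, hash-only version is purely to illustrate the deny-by-default idea;

```python
import hashlib

# Minimal sketch of hash based application whitelisting: only binaries whose
# digest appears on the approved list are allowed to execute.
APPROVED_HASHES = set()

def approve_binary(binary):
    """Add a known-good binary's digest to the whitelist."""
    APPROVED_HASHES.add(hashlib.sha256(binary).hexdigest())

def may_execute(binary):
    """Deny by default: anything unapproved (including malware) is blocked."""
    return hashlib.sha256(binary).hexdigest() in APPROVED_HASHES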

3. No admin rights for anyone who can access production data.  No one with admin rights can access data.

This is something I am currently championing as a great way to reduce the risk to your organisations data.

This may be harder for very small organisations, but for medium and larger ones, think about the different roles your teams have.

How many people who need to access key data, e.g. via production applications, need to have administrative rights on their end user systems, or on the production systems?

Conversely, how many of the system administrators who maintain systems and databases etc. need access to the actual production data in order to perform their duties?

One of the most common ways malware gets a hold is via users with administrative privileges.  So if we prevent any user with these elevated privileges from having access to data, if they or their systems are compromised, the risk of data loss or of damage to data integrity is massively reduced.

While it may seem a substantial challenge to prevent administrators from having access to data, there are at least a couple of obvious options.

Some host protection solutions claim to have separation of duties capabilities that control who can access data outside of just relying on O/S (Operating System) permissions.  I have not tested these though.

Various companies offer transparent encryption solutions that have their own set of ACLs managed independently from the O/S permissions.  These can be managed by for example the security team to ensure only approved business users can access data, while still permitting administrators to perform their role.

4. Role Based Access.

This one should hopefully require minimal explanation.  Each type of user should have a defined role.  This should have associated system permissions allowing them to access data and perform the tasks required to perform their role.

This ensures people should only be able to access data they are supposed to, and not data they should not.  The principle of ‘least privilege’ must be adhered to when creating roles and applying permissions to ensure everyone can perform their duties, but not carry out tasks outside of those that are approved.

This can be backed up by using some form of IAM (Identity and Access Management) solution.  Although be careful about over complicating this if your organisation is not large enough and complex enough to warrant a cumbersome IAM solution.

5. Segregate your networks.

In addition to external firewalls preventing access from outside your organisation, internal networks must be segregated as well.

When designing your networks, think carefully about which systems need to to talk to each other, and on which ports.

For example, do your end user systems all need to access all of the production environments?  Or do some of your teams need access to some production systems and only on specific application ports?

This point can be linked with the host protection one above as host firewalls can be used to further prevent unauthorised access to systems.  Most servers do not need to connect to all other servers in the same zone as them.  Host firewalls can be used to restrict servers from connecting to other servers they do not need to, without requiring an overly complex network design.

Strong network and system segregation will help prevent the spread of any malware or malicious users within the organisations’ IT estate, and thus help ensure data is not removed or changed.

6. If you create code, do solid code assurance.

The OWASP top 10 has changed little for several years (look it up if you are not familiar).  Applications consistently have known and well understood vulnerabilities.  These same vulnerabilities are consistently exploited by malicious people.

If you create applications ensure the code goes through rigorous manual and automated code reviews.  Ensure the application is thoroughly tested against not just the businesses functional requirements, but also the non functional requirements from the security team.

Finally, before the application or substantial change goes live ensure it is penetration / security tested by experts.

Performing all these checks does not guarantee your application cannot be hacked, but it will ensure that it is not an easy target.  Ideally these steps should be key and non negotiable parts of your organisations SDLC (Software / System Development Life Cycle).

7. Test and Audit.

Once you have the basics in place, you need to ensure they are being successfully applied.  This is where the assurance part of the security teams role comes into play.  Whether it is supporting the SDLC processes or scanning systems for outstanding patches, the security team can, and must, assure that the agreed tasks and processes are being adhered to.

This step is critical to the ongoing success of the previous items, the effort and expertise required to complete it should not be under estimated.

 

Hopefully this has supplied some clarity and context to my previous post and made my intent clear.  Let me know.

In some following posts I’ll start talking about some of the really fun and intelligent things you can start doing once the basics are in place!

K

2016 Security Resolutions

It’s that time of year again, everyone will be writing their resolutions and predictions for the year.

Will we have more of the same?  More APTs?  More nation state sponsored breaches?  DDoS?  Increased application attacks?  More mobile malware?

Probably.

We all know there will be hackers, criminals, hactivists, malicious insiders, nation state actors etc.  We also all know there will be application attacks, malware, APTs, DDoS etc.

Rather than write another predictions article I thought I’d try a slightly different tack an cover the key things I think every organisation MUST do if they are not already.

  1. Patch.  Everything.  On time.
  2. Protect your hosts.  Do application whitelisting.
  3. No admin rights for anyone who can access production data.
    1. No one with admin rights can access data.
  4. Role Based Access.
  5. Segregate your networks.
  6. If you create code, do solid code assurance.
  7. Test and Audit.

Get the basics right!  There are of course many other things to focus on, but hopefully the general idea is clear.  Organisations need to be mindful of throwing too much time and money into the latest and greatest APT protection, behavioural analysis, and overcomplicated solutions to simple problems.  Getting the basics right must be the first priority.

Remember, it is extremely likely that attackers will go after the low hanging fruit.  Even if they are  directly targeting your organisation, it is un-patched systems, people with admin rights and unprotected hosts or applications that will be attacked first.  Only after these avenues have failed will they resort to more challenging and advanced attacks.

I’ll use a follow up post to cover the above point in more detail, but wanted to get these initial thoughts up.

What do you think?  How is your organisation doing with the basics?  Do you spend too much time on new, sexy security when you don’t have the basics covered?

Happy new year all!

K

 

 

The blessing and curse of PCI-DSS

This is a post I have been meaning to write for a while, as I have been pondering the benefits vs. challenges of various standards / legislation.  I’m not thinking about the challenges of implementing PCI-DSS (Payment Card Industry Data Security Standard), more the challenges of working in environments where compliance trumps security.  As per the title, this post will focus on PCI-DSS, but I think it’s likely most of the issues will apply to various standards / regulations that are subject to compliance audits of some sort.

On the positive (blessing) side PCI-DSS is mostly a good standard, enforcing things like encryption in transit over public networks, separation of duties, minimising access to card data etc.  It has forced some level of security practice onto companies that may previously have had relatively lax controls in place.  The standard has also considerably raised the profile of security / meeting security requirements within many organisations.

On the negative (curse) side PCI-DSS is seen by many organisations as the be all and end all of security, despite the fact that it is the bare minimum you have to achieve in order to be permitted to handle / process card data.  In addition it focuses almost solely on card data, ignoring concerns around things like PII (Personally Identifiable Information).  This leads to a focus on ‘box-ticking’ compliance, rather than designing secure systems from the ground up, which would by definition be compliant with most (any?) sensible standards.

With the movement towards a more continuous monitoring style proposed for the latest release of PCI-DSS, the focus on obtaining compliance yearly may be something we are moving away from.  However this will do little to address companies’ attitudes towards broader security, and the belief that obtaining and maintaining PCI-DSS compliance means systems are completely secure.

On balance I think standards / regulations like PCI-DSS are a good thing as they force companies to at least achieve some minimal levels of security.  The challenge for security professionals is to get project teams and the wider business to accept that these standards are the bare minimums.  Considerably more secure designs / solutions need to be implemented if we want to actually meet our duty of care to our customers whose data we hold and process.

What are your thoughts?

How successful have you been in moving to security being ‘front and centre’, with compliance with regulations being a by-product of this, rather than compliance being the focus?

K

 

 

ISF congress 2013 Post 1: Defence evasion modelling – Fault correlation and bypassing the cyber kill chain

Well I am at the ISF (Information Security Forum) annual congress for the next couple of days.  As usual I’ll blog notes and some comments from the talks I listen to, and where possible share them ‘live’ and as is.

Presentation by Stefan Frei and Francisco Artes from NSS Labs.

 

The risk is much larger than people thought.  It is more like the 800 pound ‘cyber gorilla’ than the chimpanzee.  And to make things worse it is a whole field of these ‘cyber gorillas’.

 

It’s not just about digital data theft;

  • Destruction / alteration of digital assets
  • Interruption to applications, systems and customer resources
  • Single points of data
  • AV vendors only focus on defending mass market applications
  • Geo location – access from anywhere for users and hackers

 

Do we understand our defences?

  • Network – Firewall, IPS (Intrusion Prevention System), WAF (Web Application Firewall), NGFW (Next Generation Firewall), Anti APT (Advanced Persistent Threat) etc. etc.
  • Host – AV (Anti Virus), Host FW, Host IPS, Host zero day, application controls etc. etc.
  • Different vendors often used due to the perception that 2 vendors provide better protection than one

 

What about indirect attacks, such as browser and application based?

 

How effective are your defences?

 

How do we maintain the balance between security and usability?

How do we assess the security of our solutions?

How do we report on this with metrics that are meaningful to the board?

 

Threat modelling can be a useful tool here.

 

Live modelling solutions (such as those done by NSS Labs) can be used to model different tools from different vendors in an environment broadly similar to yours; (NSS example)

 

  • Pick your applications and operating systems
  • Pick your broad network design
  • Pick the security solutions and where they are placed.

 

Devices are each tested with >2000 exploits, so when you choose different devices you can see where the exploits would be caught or missed.  For example you could layer brand X NGFW with brand Y IPS and brand Z AV.  The ‘live’ threat model would then map the exploits that each device missed, so you can see if any would pass all the layers in your security.
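The layering logic here reduces to a simple set computation: an exploit only gets through the whole stack if every layer misses it.  A minimal sketch, with made-up exploit IDs and per-device miss lists:

```python
# Sketch: exploits that pass ALL defensive layers are those missed by
# every device, i.e. the intersection of each layer's miss set.
# Exploit IDs and miss lists below are invented for illustration.

def exploits_passing_all_layers(missed_per_device):
    """Intersect the per-layer miss sets to find fully uncaught exploits."""
    missed_sets = [set(m) for m in missed_per_device]
    result = missed_sets[0]
    for m in missed_sets[1:]:
        result &= m
    return result

ngfw_misses = {"CVE-A", "CVE-B", "CVE-C"}
ips_misses  = {"CVE-B", "CVE-C", "CVE-D"}
av_misses   = {"CVE-C", "CVE-E"}

print(exploits_passing_all_layers([ngfw_misses, ips_misses, av_misses]))
# → {'CVE-C'}
```

This is also why the pairing point later in the talk matters: two vendors whose miss sets overlap heavily add little, while two with disjoint miss sets can approach full coverage.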

All tests were done with the devices tuned as per manufacturers recommendations.

  • For IPS the vendors had experts tune them; this led to a 60-85% increase in IPS performance.  This point is very interesting outside of this talk – IPS devices MUST be tuned and maintained for them to deliver value and protection.  Do you regularly tune and maintain IDS / IPS devices in your environment?

 

Report / live threat modelling also differentiates between automated attacks vs. hand-crafted ones.  This highlights how many attacks could relatively easily be launched by anyone with basic skills in free tools such as Metasploit.  This raises the question of why security tool vendors don’t simply download the exploit tool kits and their updates, to ensure their tools can at least prevent the available pre-packaged attacks!

 

This is definitely a useful tool, and whether with NSS or similar, I can recommend you undertake some detailed threat modelling of your environment.  This type of service allows you to perform much more ‘real’ technical threat modelling, rather than just the theoretical attack scenarios which are as far as most threat modelling exercises seem to go.

 

What is the threat environment?

Many experts writing tools and exploits.

A huge number of people with limited skills utilise free and paid-for tools created by the experts – this increases the threat exponentially – anyone can try the free tools, and anyone with even limited funds can purchase the paid-for tools (often around $250).

 

The maturing threat landscape;

There is now a thriving market for underground hacking / attack tools.  This has matured and now offers regularly patched software with patching cycles, new exploits regularly added, and even full support with email and sometimes phone-based support desks.

The vendors of these hacking tools even offer guarantees around how long exploits will work for and evade security tools.

These are often referred to as Crimeware Kits.

 

In the tests by NSS labs, no device detected all exploits available in these tools, or in the free tools.

 

This is the continuing problem for businesses and the security industry – they are always playing catch up and creating tools / solutions to deal with known threats, rarely the unknown threats.

 

Another interesting finding was in a recent test of NGFWs where combinations of two vendors were used in serial: no one pair prevented all exploits tested.  However, careful and planned pairing does improve security.  This needs to be tested and planned – choosing two vendors at random is the wrong way to do this.  How many businesses currently have separate FW or NGFW vendors at different layers of the network?  How many of these actually researched the exploits that get through and chose the solutions for maximum protection, vs. simply choosing two different vendors without doing this research?

 

Security vendors will always be playing catch up, however threat modelling can help ensure you choose the best ones for your environment.

Threat modelling will also help choose the best investments to improve security.

As an example, a business that worked with NSS was about to invest >$300M on NGFWs across their environment.  The threat modelling highlighted that this wouldn’t add a huge amount of security due to a Java issue on all their sites and machines.  Instead they could invest (and did) more like £3M on migrating the app to HTML5 and removing Java from their environment.  This created a much more secure environment for a much smaller investment.

 

Threat modelling can also include geo-location and which vendors work best in which locations, as well as just looking at the technologies.

 

The final point was a reminder that as no tools will prevent everything, we must assume we have been ‘owned’ (breached) and act accordingly.  This must not be an exception process; we must search for and respond to breaches as part of our security business-as-usual process.

 

If you are not performing live threat modelling, I’d highly recommend you start, as this is a great way of assessing your current security posture, and also very useful for planning your next security investments to ensure they provide the greatest value and also measurably improve your security posture.

Overall, this was a very informative talk that, while demonstrating their product / service, managed to stay fairly clear of too much vendor speak and promotion while still highlighting the clear benefits of ‘live threat modelling’.

K

Gartner Security and Risk Management conference – Software Defined Networking

This was an introductory talk around Software Defined Networking (SDN) and some of its security implications.

What is it?

  • Decoupling the control plane from the data plane and centralising logical control
  • Communication between network devices and SDN controllers currently uses both open and proprietary protocols – no single standard
  • SDN controller supports an open interface to allow external programmability of the environment

– The controller tells each node how to route, vs. the current model where each node makes its own routing decisions.

 How do I enforce network security in an SDN environment?

  • Switch as the Policy enforcement point
    • The switch tells the controller it has seen traffic with certain flow characteristics, the flow controller tells it what to do with the flow, and this information is cached in the local flow table for a specified time.  Another flow arrives and this one is not permitted, so the controller tells the switch to just drop the packets – the switch effectively becomes a stateful firewall.
    • Existing controls such as DLP, Firewalls, Proxy servers etc. can all be used with SDN –
      • e.g. someone tries to connect to the internet – flow controller instructs switch to send traffic to the firewall / IPS / DLP server etc.
      • e.g. sending email – no matter where it’s going flow says first point is DLP, then firewall, then onto destination
      • This means devices no longer need to be inline – they can be anywhere on the network.  Flow controller just needs to know where to send certain traffic types!
    • Incoming flows can be treated in the same way
      • Something changes – such that it looks like DDoS – traffic can be routed to the DDoS protection device(s)
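The switch/controller interaction above can be sketched as a toy model.  Everything here – the policy, the addresses and the cache TTL – is invented for illustration, not from any real SDN product:

```python
import time

# Toy model of the enforcement flow described above: the switch asks
# the controller for a verdict on an unknown flow, caches the verdict
# in its local flow table for a fixed time, and drops disallowed traffic.

POLICY = {("10.0.0.5", 443): "forward", ("10.0.0.9", 23): "drop"}  # controller policy
CACHE_TTL = 60  # seconds a verdict stays valid in the local flow table

flow_table = {}  # (dst_ip, dst_port) -> (verdict, expiry_timestamp)

def handle_packet(dst_ip, dst_port, now=None):
    now = time.time() if now is None else now
    key = (dst_ip, dst_port)
    cached = flow_table.get(key)
    if cached and cached[1] > now:          # valid cache hit: no controller round-trip
        return cached[0]
    verdict = POLICY.get(key, "drop")       # "ask the controller"; default-deny
    flow_table[key] = (verdict, now + CACHE_TTL)
    return verdict

print(handle_packet("10.0.0.5", 443))  # → forward
print(handle_packet("10.0.0.9", 23))   # → drop
```

The same pattern extends to the service-chaining examples: instead of a forward/drop verdict, the controller could return a next hop (DLP, firewall, DDoS scrubber) for matching flows.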

What risks does SDN introduce?

  • Risk is aggregated in the controller
    • Malicious or accidental changes could remove some or all of the security protections
  • Integrity of the Flow Tables must be maintained
    • Switches etc. must be managed from the controller, not locally
  • Input from applications must be managed and prioritised
    • Application APIs are non standard
    • Who gets precedence?
      • Load balancer vs. security tools when defining traffic flows?

SDN products do exist now.

  • Standards do exist
    • OpenFlow – maintained by Open Networking Foundation
  • Network devices (early days)
    • Open vSwitch
    • Some products from Brocade, Cisco, HP, IBM
  • Controllers (limited maturity)
    • Floodlight (open source)
    • Products from Big Switch Networks, Cisco, HP, NEC, NTT Data, VMware
  • Applications (often tied to specific controllers)
    • Radware and HP produce some security applications

Recommendations;

  • Do not overreact to SDN hype
  • Combine IT disciplines when implementing SDN
    • Don’t forget security!!
  • Determine how existing control requirements can be met with SDN
  • Examine how SDN impacts separation of duties
    • Some similar issues to virtualisation
  • Discuss SDN with your existing security vendors
  • Deploy SDN in a lab or test environment
    • PoC and understand fully before deploying

 

Overall this was an informative and fast paced talk.  As per the speaker’s recommendations, SDN is a very interesting technology, although it is still in the emerging phase, with the majority of deployments currently being in testing or academia.  I wouldn’t yet recommend it for production datacentre deployments, but I would recommend you become familiar with it, especially if you work in the networking or security fields.

 

K

Splunk Live!

I attended the Splunk Live! London event last Thursday.  I am currently in the process of assessing Splunk and its suitability as a security SIEM (Security Information and Event Management) tool, in addition to a general data collection and correlation tool.  During the day I made various notes that I thought I would share; I’ll warn you up front that these are relatively unformatted, as they were taken during the talks on the day.

Before I cover off the day, I should highlight that I use the term SIEM to refer to the process of Security Information and Event Management, NOT SIEM ‘tools’.  Most traditional tools labelled as SIEM are inflexible, do not scale in this world of ‘big data’ and are only usable by the security team.  This for me is a huge issue and waste of resources.  SIEM as a process is performed by security teams every day and will continue to be performed whatever big data tool is chosen.

The background to my investigating Splunk is that I believe a business should have a single log and data collection and correlation system that gets literally everything, from applications to servers to networking equipment to security tools’ logs / events.  This means that everyone from Ops to application support to the business to security can use the same tool and be assured of a view encompassing the entire environment.  Each set of users would have different access rights and custom dashboards in order to perform their roles.

From a security perspective this is the only way to ensure the complete view that is required to look for anomalies and detect intelligent APT (Advanced Persistent Threat) type attacks.

Having a single tool also has obvious efficiency, management and economies of scale benefits over trying to run multiple largely overlapping tools.

Onto the notes from the day;

Volume – Velocity – Variety – Variability = Big Data

Machine generated data is one of the fastest growing, most complex and most valuable segments of big data.

 

Real time business insights

Operational visibility

Proactive monitoring

Search and investigation

Enables a move from ‘break fix’ to real time operations insight (including security operations).

GUI to create dashboards – write queries and select how to have them displayed (list, graph, pie chart etc.); can move things around on the dashboard with drag and drop.

Dev tools – REST API, SDKs in multiple languages.

More data in = more value.

My key goal for the organisation – One log management / correlation solution – ALL data.  Ops (apps, inf, networks etc.) and Security (inc PCI) all use same tool with different dashboards / screens and where required different underlying permissions.

Many screens and dashboards are available free (some, like PCI and Security, cost).  The dashboards’ look and feel helps users feel at home and get started quickly – e.g. VM dashboards look and feel similar to the VMware interface.

Another example – the Windows dashboard – created by Windows admins, not Splunk – all the details they think you need.

Exchange dashboard – includes many exchange details around message rates and volumes etc, also includes things like outbound email reputation

VMware – can go down to specific guests and resource use, as well as host details (file use, CPU use, memory use etc.).

Can pivot between data from VMware and email etc. to troubleshoot the cause of issues.

These are free – download from Splunkbase.

Can all be edited if not exactly what you need, but are at least a great start..

Developers – from tool to platform – can both support development environments and be used to help teach developers how to create more useful log file data.

Security and Compliance – threat levels growing exponentially – cloud, big data, mobile etc. – the unknown is what is dangerous – move from known threats to unknown threats..

Wired – the internet of things has arrived, and so have massive security threats

Security operations centre, Security analytics, security managers and execs

  • Enterprise Security App – security posture, incident review, access, endpoint, network, identity, audit, resources..

Look for anomalies – things someone / something has not done before

  • can do things like create tasks, take ownership of tasks, report progress etc.
  • When drilling down on issues there are contextual pivot points – e.g. right-click on a host name for asset search, Google search, drill-down into more details etc.
  • Even though it costs, like all dashboards it is completely configurable.

Splunk App for PCI compliance – Continuous real time monitoring of PCI compliance posture, Support for all PCI requirements (12 areas), State of PCI compliance over time, Instant visibility on compliance status – traffic lights for each area – click to drill down to details.

  • Security prioritisation of in-scope assets
  • Removes much of the manual work from PCI audits / reporting

Application management dashboard

  • Splunk can do math – what is the average stock price / how many users on the web site in the last 15 minutes etc.
  • Real time reporting on impact of marketing emails / product launches and changes etc.
  • for WP – reporting on transaction times, points of latency etc. – enables focus on slow or resource-intensive processes!
  • hours / days / weeks to create whole new dashboards, not months.

Links with Google Earth – can show all customer locations on a map – are we getting connections from locations we don’t support, and where / what are our busiest connections / regions?

Industrial data and the internet of things; airlines, medical informatics (electronic health records – mobile, wireless, digital, available anywhere to the right people – were used to putting pads down, so didn’t get charged – Splunk identified this).

Small data, big data problem (e.g. not all big data is actually a massive data volume, but it may be complex, rapidly changing, and difficult to understand and correlate between multiple disparate systems).

Scale examples;

Barclays – 10TB security data per year.

HPC – 10TB per day

Trading – 10TB per day

VM – >10TB per year

All via Splunk.

DataShift – Social networking ‘ETL’ with Splunk. ~10TB of new data per day

Afternoon sessions – Advanced(ish) Splunk.

– Can create lookup / conversion tables so log data can be turned into readable data (e.g. HTTP error codes read as ‘page not found’ etc. rather than a number).  This can either be automatic, or via a reference table you pipe logs through when searching.
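The lookup-table idea is easy to picture outside any particular tool.  A minimal sketch (status codes and event fields are illustrative, not Splunk’s actual lookup syntax):

```python
# Sketch: enrich raw log events by mapping HTTP status codes to
# human-readable labels, as a lookup table would at search time.

HTTP_LOOKUP = {
    200: "OK",
    301: "Moved Permanently",
    404: "Page Not Found",
    500: "Server Error",
}

def enrich(events):
    """Add a readable status_text field to each event dict."""
    return [
        {**e, "status_text": HTTP_LOOKUP.get(e["status"], "Unknown")}
        for e in events
    ]

events = [{"uri": "/home", "status": 200}, {"uri": "/old", "status": 404}]
for e in enrich(events):
    print(e["uri"], "→", e["status_text"])
```

In Splunk itself this would be a CSV lookup applied either automatically or piped in at search time, but the effect is the same: numbers become something an operator can read at a glance.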

– As well as GUI for editing dashboards, you can also directly edit the underlying XML

– Can have lots of saved searches, should organise them into headings or dashboards by use / application or similar for ease of use.

– Simple and advanced XML – simple has menus, drop downs, drag and drop etc.  Advanced requires you to write XML, but is more powerful.  Advice is to start in simple XML, get layout, pictures etc. sorted, then convert to advanced XML if any more advanced features are required.

– Doughnut chart – like a pie chart with inside and outside layers – good if you have a high level grouping, and a lower level grouping – can have both on one chart.

– Can do a rolling, constantly updating dashboard – built in real time option to refresh / show figures for every xx minutes.

High Availability

  • replicate indexes
    • gives HA, gives fidelity, may speed up searches

Advanced admin course;

http://www.splunk.com/view/SPCAAAGNF

Report acceleration

  • can accelerate a qualifying report – more efficiently run large reports covering wide date ranges
  • must be in smart or fast mode

Lots of free and up to date training is available via the Splunk website.

Splunk for security

Investigation / forensics – Correlation, fast to root cause, look for APTs, investigate and understand false positives

Splunk can have all original data – use as your SIEM – rather than just sending a subset of data to your SIEM

Unknown threats – APT / malicious insider

  • “normal” user and machine data – includes “unknown” threats
  • “security” data or alerts from security products etc. cover only “known” security issues – and miss many issues

Add context  – increases value and chance of detecting threats.  Business understanding and context are key to increasing value.

Get both host and network based data to have best chance of detecting attacks

Identify threat activity

  • what is the modus operandi
  • who / what are most critical people and data assets
  • what patterns and correlations of ‘weak’ signals in normal IT activities would represent abnormal activity?
  • what in my environment is different / new / changed
  • what deviations are there from the norm
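The last two bullets – spotting what is different, new or deviating from the norm – can start as something as simple as a z-score check on per-host activity counts.  A sketch with invented data and an arbitrary threshold:

```python
import statistics

# Sketch: flag a "deviation from the norm" when today's count for a
# host sits far outside its historical distribution. The threshold
# and the sample data are illustrative only.

def is_anomalous(history, today, z_threshold=3.0):
    """True if today's value is more than z_threshold stdevs from the mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(today - mean) > z_threshold * stdev

daily_logins = [10, 12, 11, 9, 10, 11, 10]  # a week of normal activity

print(is_anomalous(daily_logins, 11))   # → False (within normal range)
print(is_anomalous(daily_logins, 250))  # → True  (worth investigating)
```

Real detections would correlate many such weak signals across hosts and users, but the principle is the same: baseline first, then alert on deviation.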

Sample fingerprints of an Advanced Threat.

Remediate and Automate

  • Where else do I see the indicators of compromise
  • Remediate infected systems
  • Fix weaknesses, including employee education
  • Turn the Indicators of Compromise into real time search to detect future threats
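Turning indicators of compromise into an ongoing search is, at its simplest, a membership test over event fields.  A minimal sketch, assuming the IoCs are plain strings (all values here are invented):

```python
# Sketch: match incoming event field values against a set of known
# indicators of compromise (IPs, domains, hashes). All values invented.

IOCS = {"198.51.100.7", "evil-domain.example", "a3f5c9d1"}

def match_iocs(event_fields):
    """Return any indicators from the IoC set found among an event's values."""
    return IOCS.intersection(event_fields)

event = {"src_ip": "198.51.100.7", "host": "web01", "file_hash": "0badf00d"}
print(match_iocs(event.values()))  # → {'198.51.100.7'}
```

Running this continuously over the full event stream – not just ‘security’ events – is exactly the point made below: any event can be a security event.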

– Splunk Enterprise Security (2.4 released next week – 20-something April)

– Predefined normalisation and correlation, extensible and customisable

– F5, Juniper, Cisco, Fireeye etc all partners and integrated well into Splunk.

Move away from talking about security events to all events – especially with advanced threats, any event can be a security event..

I have a further meeting with some of the Splunk security specialists tomorrow so will provide a further update later.

Overall, Splunk seems to tick a lot of boxes and certainly taps into the explosion of data we must correlate and understand in order to maintain our environment and spot subtle, intelligent security threats.

K