Convenience and security.. when you lose a factor..

I noticed recently that the application I use for securely accessing email and other office related things had been updated to accept my fingerprint instead of a PIN.

While this is indeed convenient, and saves me the 2 seconds of inputting my 6 digit PIN, it does mean that an authentication ‘factor’ has effectively been lost.  Previously I used my fingerprint to unlock my phone, then a unique PIN to access the email application.

Stepping back briefly, for those not familiar with authentication, when we refer to factors we are basically talking about different ways of authenticating yourself.  These are usually split into;

  • Something you KNOW – e.g. a password / passphrase / PIN
  • Something you ARE – e.g. biometric things like fingerprints, voice recognition etc.
  • Something you HAVE – e.g. your bank card / credit card, a ‘token’ that generates pseudo random numbers or a device like a phone (but do you really trust your phone to be secure as much as a credit card?..)

Things like risk based authentication based on combinations of your and your devices behaviour may also be considered, although standards like the upcoming PSD2 (Payment Services Directive 2) don’t yet formally consider these a factor.  I personally believe that they should be, but this is really a decision for your organisation around how much they trust different forms of authentication.

As a side note, authentication is, and should be considered, more of a ‘shades of grey’ than a ‘black and white’ exercise.  What I mean by this is – for any given action, do we trust enough that Dave is Dave in order for him to proceed?  If not, we should request further authentication, such as another factor or further information, before allowing the next action.

So back to the factors, and why they are important.  Many sites have a variety of ways of authenticating you: passwords, PINs, challenge / response questions such as ‘what was your first school?’, ‘identify the image you chose previously’…  The issue here, as you have no doubt worked out, is that these are all ‘something you know’.  So no matter how many questions a site asks you, it is still single factor authentication.

The reason why we prefer multi-factor authentication vs. single factor is it is much less likely that a malicious actor will have access to multiple factors.  For example they may find your password, but not have your fingerprint, or they may have a copy of your fingerprint from your device, but not know your PIN etc.
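To make the ‘categories, not questions’ point concrete, here is a minimal sketch (purely illustrative) of why any number of security questions still only counts as one factor;

```python
from enum import Enum

class Factor(Enum):
    KNOW = "something you know"   # password, PIN, challenge questions
    ARE = "something you are"     # fingerprint, voice
    HAVE = "something you have"   # card, token, phone

def is_multi_factor(challenges: list) -> bool:
    # Multi-factor means challenges spanning at least two DISTINCT categories
    return len(set(challenges)) >= 2

# Three 'something you know' questions are still single factor..
assert not is_multi_factor([Factor.KNOW, Factor.KNOW, Factor.KNOW])
# ..while a PIN plus a fingerprint is genuinely two factors.
assert is_multi_factor([Factor.KNOW, Factor.ARE])
```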

This is why I question the reliance by many applications, including financial ones, on just asking for your finger / thumb print again.  While I recognise that the device (the phone in this case) may be considered a factor in terms of being something you have, I question how much we can trust such an untrusted device as an authentication factor.  Is this a tiny extra bit of convenience at the expense of security?

Where other capabilities such as device and behaviour analysis are not in play, then yes, I believe it is a loss of security.

The caveat here brings us back to my earlier point: regardless of the other factors in play, we should and must make use of the rich data we have about user and device behaviour.  By this I mean the status of the device / browser accessing our systems.  Can we detect malware?  Where have we seen the device before?  When?  What else do we know about it – OS version, software etc.?  Similarly for the individual: what are their usual behaviours on our systems?  How much do they purchase?  What devices do they usually access them from?
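As a sketch of what I mean by making use of this data – the signal names and weights below are entirely invented for illustration, not any real product’s schema;

```python
def session_trust(signals: dict) -> int:
    """Toy 0-100 trust score built from device / behaviour signals."""
    score = 50                                         # neutral starting point
    if signals.get("known_device"):      score += 20   # seen this device before
    if signals.get("usual_location"):    score += 10
    if signals.get("typical_behaviour"): score += 10   # purchases, access patterns
    if signals.get("malware_detected"):  score -= 50
    return max(0, min(100, score))

assert session_trust({"known_device": True, "usual_location": True}) == 80
assert session_trust({"malware_detected": True}) == 0
```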

What is the point of this discussion?

  • Consider convenience vs. security
  • Pay close attention to the components and factors in your authentication flow; if you consider a device ‘something the user HAS’, do you really trust it enough?
  • Make use of risk based and step up authentication; when the user is performing low risk activities, allow a seamless path.  When they want to perform something higher risk, step up the authentication and ask for more information such as another factor.
  • Possibly most importantly, make use of the risk data you can gather about the user and their device(s) in order to make the most informed decisions that balance risk vs. convenience and ‘the happy flow’ through your systems
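Pulling the step up idea together, a risk based policy can be sketched roughly as follows – the thresholds are invented for illustration, and a real decision engine would be far richer;

```python
def extra_factors_required(action_risk: float, session_trust: float) -> int:
    """How many additional factors to request before allowing an action.
    Both inputs are on a 0.0 - 1.0 scale; thresholds are illustrative."""
    gap = action_risk - session_trust
    if gap <= 0:
        return 0    # happy flow: existing trust covers the risk
    elif gap < 0.3:
        return 1    # step up: ask for one more factor
    else:
        return 2    # high risk: full re-authentication

assert extra_factors_required(action_risk=0.1, session_trust=0.7) == 0   # browsing
assert extra_factors_required(action_risk=0.8, session_trust=0.6) == 1   # payment
assert extra_factors_required(action_risk=0.95, session_trust=0.2) == 2  # high risk
```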

 

It would be great to hear your thoughts on this topic!

K

 

 

CrestCon and IISP congress – Dr Ian Levy presentation

Today I attended the CrestCon and IISP congress.  One of the keynote presentations was by Dr Ian Levy the technical director of the NCSC (National Cyber Security Centre).  This was titled ‘NCSC – WTF’.  It was a very interesting and refreshingly forthright talk, so I thought I would share it!  He covered a lot of the work and plans of the NCSC along with some of his personal thoughts.

My notes from the presentation are below, I have included various links for ease of reference, and definitely recommend reading the materials they lead to.

National Cyber Security Strategy 2016-2021

    • Basically – information sharing is not enough, get off your arse and do something about it! (his words 😉 )

NCSC;

  • Should be the single government / legal point of contact you go to for anything cyber.
  • A different sort of agency
    • Collaborate with the NCSC – Secondments for Cyber experts to work with and help the country’s cyber security

 

APTs are in the press a lot, however let’s be honest;

    • Anatomy of most unprecedented, sophisticated cyber attacks;
      • Attacker does a bit of research
      • Attacker sends a spear phishing email to an admin
      • Admin opens email using admin account, exploiting unpatched stuff
      • Attacker does nefarious stuff as admin
      • Monitoring does not work
      • Attacker takes data or changes data
      • Profit

Most APT is not APT at all.  Is the focus correct?  APT – less Advanced Persistent Threat, more likely Adequate Pernicious Toe-rag.. (I have heard this before, but I’m not sure who first coined the term..)

 

XKCD – Security tips cartoon!  Highlighting that some security advice is not always the best..

Some general thoughts;

    • Admins must not browse the web or use email with admin account – if you still allow this, you should get a new job..
    • Have a different, complex password for each system you use – stupid advice!
    • (not)Awesome advice – Don’t open an attachment unless you trust the email.. – How do people ‘trust an email’???
      • If you own an email domain and don’t use DMARC you should be ashamed..
      • NCSC have open sourced their DMARC management solution – could we use this rather than paying for something?  They even have a dashboard that will be open source soon.
      • https://www.ncsc.gov.uk/blog-post/open-sourcing-mailcheck
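For reference, publishing a basic DMARC policy is just a single DNS TXT record.  The domain and report mailbox below are placeholders, and a real rollout would typically start with p=none and tighten over time;

```
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```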

 

The NCSC is trying to reduce harm by asking nicely – automatically asking ISPs and hosting providers to take down malicious sites

 

Recursive DNS is my friend.

    • Hosting their own DNS – moving all public sector organisations to using the NCSC DNS – it will automatically not provide details for known bad sites / services, so unless you connect by IP you just won’t get to them
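The mechanism is simple to sketch – a protective resolver just refuses to answer for names on its blocklist.  The hostnames and address below are placeholders;

```python
from typing import Optional

KNOWN_BAD = {"malware.example", "phish.example"}   # fed by threat intelligence
ZONE = {"gov.example": "192.0.2.10"}               # stand-in for real upstream DNS

def protective_resolve(hostname: str) -> Optional[str]:
    if hostname in KNOWN_BAD:
        return None    # no answer returned, so the client never connects
    return ZONE.get(hostname)

assert protective_resolve("malware.example") is None
assert protective_resolve("gov.example") == "192.0.2.10"
```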

 

NCSC – Active Cyber Defence programme.  This provides a great overview of many of their initiatives and how they hang together;

https://www.ncsc.gov.uk/blog-post/active-cyber-defence-tackling-cyber-attacks-uk

 

Read Understanding Uncertainty – ‘Medicine, poison, poison, poison’.

 

Goal for NCSC – From fear to published evidence and analysis

    • So you can target your security strategy and spending appropriately!

 

Keep security advice basic, brief and relevant

E.g. 5 tips for email, 5 tips for phones etc.  Something like: encrypt, keep up to date, use a PIN, don’t jailbreak, only install apps from Google Play / the Apple App Store.

 

‘Hacking back’ / Offensive security – his opinion

    • Should be reserved for government, potentially not legal for private firms.
    • Must be very organised, concerted effort.  Attribution is very hard..
    • Any private company doing this is mad, due to potential repercussions.

If you have any questions I’ll try to answer them, but I hope you have found this and the links interesting!

K

Morals and economic issues of ‘seamless’ payments; some thoughts.

Slight departure from the usual security fare this post, but hopefully you’ll find it interesting!

This week I attended the ‘Cards and Payments’ summit.  This was pretty interesting, and it was certainly good for me to attend a conference not purely focussed on security to see what the wider payments industry is talking about at the moment.

I’ll provide a brief overview in another post, but I wanted to write my entirely non security and non technical thoughts on a particular topic that was discussed numerous times over the two days;  How to make payments as seamless, transparent and friction free as possible.

On the face of it this seems like a great idea.  Who wouldn’t want to be able to securely pay for goods and services without any friction or interruption to what they are doing?  Indeed I’m even involved in some work around how we can use things like device ID, location, behaviour etc. to improve security while lowering friction.

However the other side of this coin is the fact that people have been proven to spend far more as they get further from handing actual cash to someone else.  Since the inception of credit and debit cards, properly controlled studies have shown that people spend more, value the same goods higher, and tip more when paying by card vs. cash.

This trend continues as you move online: the more transparent payments are, and the less involvement the consumer has in the payment process, the more likely they are to spend.

When you consider this fact in the overall picture of many countries where people have a clear propensity to overspend and carry more debt than they can manage, is this trend a good thing?

From a moral perspective should we really be creating ways that have been proven to psychologically increase spending when many people are already in a lot of financial difficulty?

You could of course argue that people need to be responsible for themselves, which is an opinion I often tend towards.  However I think industries do need to be held to some level of responsibility for their customers, especially when there are clear and impartial studies highlighting the risk and psychological triggers that are being used to change behaviours.

On a macro level I would also argue that the economy as a whole would be better off in the long term if consumers managed their money better, as they would always have money to spend.  The reality of ongoing overspending is longer term economic trouble.

One of the presenters, who was promoting the benefits of completely seamless payments with seemingly no controls on how much you spend, was from Sky Betting and Gaming.  He unsurprisingly disagreed with me and spoke of making the process as seamless and excellent as possible.  This seems particularly dangerous, as they are clearly combining potential habit and addiction issues with technology designed and proven to make people overspend..

To be fair to him he did mention having other things they offer to help with gambling problems, but he was very clear these should be separate from the actual gambling and payments process – which does kind of miss the point in my opinion.

What do you all think?

Is some affirmative friction a good thing in payments?

Should businesses have some obligation to look out for their customers rather than just doing everything to make them spend?

Regardless of the moral question, should businesses have some view to the longer term health of the economy?

If not business, is regulation the only answer to drive good behaviours from them?

 

It would be great to hear your thoughts!

We’ll be back to security stuff for my next post..

K

Securing Connected Cars..

A relatively short post.. hopefully some car manufacturers are reading..

IoT security and car / vehicle security are hot topics at the moment.  From the Jeep hack to Tesla, there have already been examples where cars can be ‘hacked’.  These have demonstrated that control can be taken, not just of relatively benign functions like climate or the stereo, but of actual ‘car’ controls like steering and brakes.

In addition to this, if you use, or read reviews of, most cars’ entertainment / infotainment systems, they are pretty poor in terms of UI and capabilities.  Hands up if the maps on your phone beat your car’s GPS / navigation system.

This seems to be a clear symptom of manufacturers wanting to have their cake and eat it.  What I mean is: minimal changes to the architecture, implementation and security of the software and hardware (computer) that runs the car, while simultaneously wanting to connect it to the internet for clever features and updates.

In the world of mobile phones, and indeed traditional computing there is a concept of trusted or secure execution environments.  These vary in implementation, but the premise is a hardware protected trusted environment for executing sensitive activities while less sensitive activities run on the normal operating system and less secure / more open environment.

If you follow this blog you’ll have seen that I have actually argued we can make a software only solution more than secure enough for payments.  This however differs from cars in two very important ways;

  1. I propose we monitor the software at all times it is in use to ensure the payment is legitimate and secure.  I am not sure any car manufacturer wants to monitor the software in all its cars, all the time, in real time.
  2. People are unlikely to die.  This is not being overly dramatic; a failed payment or fraudulent payment likely involves a call to your bank and minor inconvenience.  If the ‘driving’ parts of your car can be hacked there is a very real risk of serious injury or the loss of life.

How do we solve this, and still provide convenience?

I propose that the car computer effectively be split into two discrete components.

The first being secure, and dealing with anything to do with controlling the car such as the engine, brakes, steering etc.  This should be in a secure environment that ideally can only be updated at a garage using a physical connection and certificates etc.  It could potentially be remotely updated, but that should be weighed against the risks.

The second being the ‘fun’ part.  This would include the whole infotainment system, music, climate, navigation* etc.  These components can then be updated remotely, ideally still with reasonable security such as encrypted communication and certificates etc.

This split would allow manufacturers to update the UI, navigation etc much more frequently with relatively low risk.
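As a sketch of what ‘reasonable security’ for those remote updates could look like – note this uses a shared secret purely to keep the example short; a real scheme would use asymmetric signatures, with the manufacturer signing updates and the car holding only a public key;

```python
import hashlib
import hmac

VEHICLE_KEY = b"provisioned-at-the-factory"   # placeholder secret

def accept_infotainment_update(blob: bytes, signature: bytes) -> bool:
    """Only install update blobs whose signature verifies."""
    expected = hmac.new(VEHICLE_KEY, blob, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

update = b"nav-maps-update"
good = hmac.new(VEHICLE_KEY, update, hashlib.sha256).digest()
assert accept_infotainment_update(update, good)
assert not accept_infotainment_update(update, b"\x00" * 32)  # tampered / forged
```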

I’m hoping that car manufacturers will move in this or a similarly secure direction soon.  If they do not, I fear something bad will happen.  This will not only be bad for those involved, but will lead to strong regulation and prove (again) that companies must be regulated to do the right thing.

It’s time to stop hiding behind the supply chain or whatever the excuse is, and to protect your customers and the general public.  Either that or stop making connected cars!

Concepts similar to this likely apply to a wide range of IoT ‘things’.

K

 

*You could make an argument for not having navigation here, as it is possible to direct people the wrong way which could be dangerous, but I’d suggest less imminently dangerous, and I’m definitely not proposing no security for the ‘infotainment’ stuff!

 

Mitigating the Insider Threat / Insider Risk / People Risk – part 3

In this third part on the Insider threat / Insider risk / People risk series we move onto how we can manage this and prevent the risk from being realised.

Despite my concerns around the ‘insider threat’ terminology, I have kept it for the title as this is currently the most common term.

As I started writing this series my initial thoughts were that some of the ‘people / process’ areas would be the most important.  However as I have researched this area I’ve come to realise that some of the ‘people’ things may be less valuable than expected.  Some people areas like JML and IDAM (acronyms will be covered later) are indeed key, but only in conjunction with equally key technology capabilities.

While all related, for ease of reference I’ll split the ways we can work to prevent / mitigate the insider threat into ‘People Stuff’, ‘Process Stuff’, and ‘Tech Stuff’.  While there will be some overlap, I think these categories cover the main areas.  This aligns with the standard security ethos of covering People, Process, Technology in order to secure an organisation.

The below is hardly an exhaustive list, but will hopefully get you thinking about the areas you need to focus on to secure your organisations systems and data.

 

People Stuff;

Develop a ‘secure culture’ with strong security awareness.  In line with wider security training, ensure everyone knows security is their responsibility.  This training should include helping people to know the signs to look for that may contribute to the risk someone could be an insider threat.  How to report these, and an awareness that it’s just as likely someone needs support and assistance rather than being malicious are important points to remember here.

Promote an open culture throughout the organisation.  It is OK to discuss concerns about yourself or others.  The organisation will always look to take positive, not negative steps to resolve potential issues.  It is expected to challenge someone if they don’t have a valid pass on display, no matter who they are..  Even the CEO..

 

Process Stuff;

The most important process area in order to mitigate the insider threat is JML (Joiners, Movers, Leavers) and ensuring all users have only the correct permissions to perform their current role.  Organisations often have reasonable ‘joiners’ and ‘leavers’ parts of the process, but many struggle with ‘movers’.  This is often highlighted when you look at the permissions of staff who have been with the organisation for some time and through several roles.  It is not uncommon to find they have an accumulation of the permissions of all the roles they have performed, rather than just those required for their current position.

As a recommendation, RBAC (Role Based Access Control), where each identified role in the organisation has an approved permissions template, is a better method than trying to copy another user’s permissions in the hope they are correct.
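In code terms the difference is that a mover’s permissions are rebuilt from the approved template for their current role, rather than accumulated over time – the role names and permissions here are invented for illustration;

```python
ROLE_TEMPLATES = {
    "finance_analyst": {"read_ledger", "run_reports"},
    "finance_manager": {"read_ledger", "run_reports", "approve_payment"},
}

def permissions_for(current_roles: list) -> set:
    """Grant exactly the union of approved templates for CURRENT roles;
    nothing from previous roles survives a move."""
    granted = set()
    for role in current_roles:
        granted |= ROLE_TEMPLATES[role]
    return granted

# A manager who moves to an analyst role keeps only the analyst template:
assert permissions_for(["finance_analyst"]) == {"read_ledger", "run_reports"}
```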

There may be an argument for having periodic background checks on key staff in addition to the checks performed at the start of employment.  This is another area where many companies perform reasonable due diligence on employees prior to the start of employment, but the checks are never performed again.  Personally I am not 100% convinced on this one, as most checks are in reality not that in depth and would only flag a concern at best – do we actually know how many insider threats are realised by someone who has, for example, more debt than before?  By all means do these, but realise they are at best an indicator that risk may be increased, and will likely miss many people more likely to realise the risk.

Ensure key processes, especially those with material impact like moving money around are 4 or even 6 eyes processes.  This means that no one person can authorise certain transactions or processes, someone would initiate it and at least one other person would review and confirm.  These different people should not be in the same team to reduce chances of collusion.
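A 4 eyes check is straightforward to express – this sketch (names and teams invented) also encodes the ‘different team’ rule to reduce the chance of collusion;

```python
def four_eyes_approved(initiator: str, approver: str, team_of: dict) -> bool:
    """Allow the transaction only if a second, independent person approves."""
    return approver != initiator and team_of[approver] != team_of[initiator]

teams = {"alice": "payments", "bob": "payments", "carol": "risk"}
assert not four_eyes_approved("alice", "alice", teams)  # self approval blocked
assert not four_eyes_approved("alice", "bob", teams)    # same team blocked
assert four_eyes_approved("alice", "carol", teams)      # independent approver
```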

Implement job rotation where it makes sense / is feasible as this reduces the chance of someone planning and committing fraudulent activity over a long period.  Some organisations also implement enforced periods of holiday, e.g. at least one 2 week block must be taken each year where there is no contact with business systems.  While not infallible this does ensure a period where someone else performs the role making it more likely discrepancies would be spotted.

 

Tech Stuff;

A first area to think about here would be how you can implement technology to support the above mentioned process improvements.  Examples would be Identity and Access management solutions to support the creation and use of business roles, and a solution to interrogate systems and report on existing permissions and group membership etc.

The next thing to realise is that ‘standard’ monitoring and controls likely do not cut it when you are trying to protect your systems and data from users and accounts that are legitimately permitted access.  It may be possible to pick up on some simple behaviours like an account attempting to access a lot of directories it is not permitted to, or port scans, or data exfiltration so large it impacts services.  However these would not be the most likely behaviours unless the insider / compromised account really was not trying to hide their tracks at all – in fact with actions like these they would almost be trying to get spotted!

In order to detect more subtle malicious behaviour, some form of UBA (User Behaviour Analytics) capability must be employed.  It should be noted that this is a relatively new area in the security space that is currently fairly high in the ‘hype cycle’.  As such, considerable due diligence is recommended in terms of both clearly defining your requirements, and understanding the detailed capabilities of the solutions you assess.  Many companies are badging existing and new solutions as having UBA capabilities in order to capitalise on the current hype in this space.

To understand if an account is behaving in a potentially malicious manner, it is critical to not only understand its actions in detail, but also to have some understanding of what is normal.  The best way to do this is to ensure there is an understanding of roles and teams within the organisation, so that the solution can compare behaviours across groups that you would expect to perform similar actions.  Another key point here is that a lot of behaviour that could be malicious – from viewing extra records to changing data – may all happen within an application, so consider solutions that are able to integrate with your applications, or at least have a detailed understanding of your applications’ logging.
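At its simplest, the peer group comparison a UBA tool performs can be sketched as a z-score over a single behaviour – real products do this across many features at once, and the numbers here are invented;

```python
from statistics import mean, stdev

def anomaly_score(user_value: float, peer_values: list) -> float:
    """How many standard deviations a user sits from their peer group."""
    mu, sigma = mean(peer_values), stdev(peer_values)
    return abs(user_value - mu) / sigma if sigma else 0.0

# Records viewed per day by others performing the same role:
peers = [40, 55, 50, 45, 60]
assert anomaly_score(50, peers) < 1     # in line with the team
assert anomaly_score(400, peers) > 3    # well outside normal - investigate
```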

Other more ‘standard’ capabilities such as DLP, web proxies and email gateways can also play a role in both reducing the risk of insider threat, and also detecting it by ensuring their log files that detail user and system behaviour, web sites visited and emails sent are incorporated into the broad behaviour analysis capability.

On a final tech point consider some sort of secure browsing capability.  If you can prevent any malware from the web from even getting to your end points, and simultaneously prevent uploads to the web you will have dramatically reduced the risk from malicious users, phishing and account compromise.

 

I hope the above is useful guidance and thought provoking.  It would be great to hear your ideas and things you think are critical in minimising the risk from insiders and compromised accounts.

 

K

 

2017 Security Predictions and Themes

More of the same..

Simple attacks due to un-patched systems, mis-configurations, ‘standard’ app issues like SQL injection and Cross Site Scripting, phishing links etc. will continue to be the cause of the vast majority of breaches.

Advanced attacks will still make the headlines, even when just in terms of ‘it could have been xx nation using advanced methods’..  Advanced attacks will still be heavily promoted by vendors to sell products and services.

DDoS will continue to get bigger due to the increasing proliferation of insecure connected devices (cue first IoT reference!).

Big data and analytics will continue to be big.  Security use cases such as behaviour analysis across all the log data will continue to mature and start to show the value of “big data” from a security monitoring perspective.  Will need to work on moving from just behaviour monitoring in logs and alerting, to proactive blocking.  ‘Big data’ should start to become the ‘big brain’ that instructs the enforcement tools like IPS and end point agents (they will obviously continue to do their normal job as well).

IoT. I am waiting (note I don’t want there to be one!) for a serious incident in this space.  Not just the DDoS stuff, but actual direct harm to people from the hacking of cars or medical equipment.  This will shortly be followed by a LOT of knee jerk regulation.  No idea if this will happen in 2017 or later.  Unless something fundamental changes in how the devices covered in the wide IoT umbrella are developed, deployed and managed it will.

  • As a side note, we should stop just referring to IoT and start prefixing it with what we are actually referring to, in the same way as you have SaaS, IaaS, GovCloud etc. for cloud ‘things’.  IoT is far too broad, and also has far too many different applications that will have vastly different security implications and requirements.

Blockchain.  Like IoT, no predictions list would be complete without something blockchain in it.  We are already seeing blockchain use cases expanding from currency to DRM and music management etc.  This will continue; it’s very much in the ‘hype cycle’ at the moment, with everyone rushing to be at the front with use cases and ‘thought leadership’.  It would be great to see some really beneficial use cases – could a blockchain be used to track and guarantee that charity finances, or food, or medical supplies went to the right people?

Automation.  Combine environments that are becoming more complex and more dynamic (think DevOps, agile, containers, cloud etc.), increasing numbers of attacks, along with the much reported skills shortage and you have a perfect storm!  Automation will be key for organisations to stay secure.  Automating more of the basic security tasks will also enable better careers for the SecOps guys – they will have more time to focus on more advanced security issues and hunting for threats etc.

Simplification.  In a similar vein to the above, simplification must be a key strategy.  I’m talking from a security perspective, but this generally makes sense as well!  How many security conversations have started or ended talking about implementing a tool / solution?  We should be having more conversations about how we can rationalise the tooling we use – how we can meet the security requirements of our organisation with the minimum set of tools and processes, and thus with the maximum simplicity.

Likely millions of things will happen, that we can’t predict, but these are the current themes I am thinking about.

It would be great to hear your thoughts on the key security themes for 2017!

K

Mitigating the Insider Threat / Insider Risk / People Risk – part 2

This brief post is part 2 in the series on insider risk.  Here we will cover some of the reasons the ‘insider threat’ / ‘people risk’ can be realised.  This is critical to not only understanding how to monitor for and prevent incidents, but also to ensure the response is appropriate.

The aim of this post is to highlight the numerous different types of ‘insider threat’.  This will hopefully not only get you thinking about the ways this could manifest in your environment, but also why in the majority of cases a term other than ‘insider threat’ is likely more appropriate.

What different actions and causes can lead to the risk being realised?  To my mind there are several concerns in this space, all of which can lead to data loss, data corruption or system downtime.

Some examples of these are;

  • Accident – e.g. emailing the wrong person, incorrect data entry
  • Good intention; Unaware of the policies and rules – e.g. emailing work to personal email in order to complete on the train
  • Good intention; aware of the policy and rules – as above, but with known intent to break the rules.  This is still likely someone who does not want to cause harm, they are just prepared to knowingly break the rules in order to get things done
  • Compromised individual – e.g. being coerced or blackmailed
  • Bad intent – e.g. sending data out to sell, or changing data in the favour of a friend’s business.  This is the classic malicious insider, and the main example where the term ‘insider threat’ is most accurate.
  • Compromised account – e.g. social engineering, shared credentials etc.  While technically not actions performed by an insider, these will appear to be an insider as they will be acting on systems in the context of the compromised user account.

 

While the tools / capabilities / processes that mitigate these risks may be similar, understanding the intent and the outcome is critical to know how to remediate.

For example, where colleagues are circumventing the rules in order to deliver results, the best course of action would likely be to understand their needs and provide a secure way of meeting them.

The most serious breaches will likely be related to compromised individuals, compromised accounts or malicious individuals.  However by far the most frequent issues will be related to users either making mistakes or trying to be efficient and work in the best way for themselves.

The next posts will cover some of the key ways we can mitigate this risk.  Despite my keen interest in technology, we’ll find that some of the most important and effective controls are related to people and processes such as user awareness training, JML processes, 4 eyes processes etc.  Strong technical controls around access, DLP and behavioural monitoring are also critical.

K