It’s human nature

We spend a lot of time in the security industry complaining about stuff. Two of the top complaints that I see and hear are;

  • Why do people keep investing in shiny new ‘silver bullets’ when they have not yet achieved the above…

It’s easy to blame ‘the industry’ and ‘security teams’ for this. However, while this may be true, we also need to recognise that it is human nature at play.

As is often the case – the security industry isn’t as special or unique as we like to think. It’s likely to become a theme, but I see so many parallels between the health and fitness industry and the security industry.

On the one side you have human nature and the seemingly ever-increasing number of people looking for the quick fix.

This is fed by an industry that thrives on selling the magic pills and powders to get you in shape with little or no effort, or the 20 minute ab workouts etc.

The outcome for consumers is millions of people not happy with themselves and not getting healthy despite spending their hard earned cash and following the advice from magazines and influencers.

The outcome from an industry standpoint is huge profits selling supplements, workouts and health advice that is mostly bullsh*t, while raking in the profits and keeping customers on the merry-go-round.

This is made worse by huge amounts of ‘peer pressure’ from social media and advertising etc making people believe they are not good enough if they don’t achieve the carefully curated and sometimes outright fake imagery in adverts and on social media.

In reality the solution is simple, but not easy. If you want to be fit, healthy and resilient, eat well, do the training… Consistently. Every week, every year.

Contrast this with the security industry.

On the consumer side we want immediate and easy results so are tempted by the latest shiny advertising stating how solution X will solve our problems. Our jobs are hard so it’s understandable people can be tempted by promises of an easy solution to securing our organisations.

On the industry side we have so many companies trying to sell the dream. Whether this be with products that frankly don’t work properly, or by completely mis-selling things that may be good once you have a high level of maturity, but are next to useless if deployed before all your fundamental security is in order.

We also have to realise that for most companies selling ‘security’, as with the fitness industry, they never want us to be ‘done’ or ‘satisfied’ as they need us to keep jumping on the next product bandwagon or the next Gartner magic quadrant.

This is again made worse by ‘peer pressure’, either from satisfied customers telling us how the solution fixed their issues, or from our own exec teams asking why we have not deployed solution X after they saw an advert in the FT or The Economist. That’s right, security firms are not averse to advertising their solutions to non security folk in the hope that this will apply more pressure to security teams.

Again, as with your own health and fitness, the solution is relatively simple, but not at all easy.

Work with your teams and wider organisation to build foundational security. Do this consistently, with rigour, day after day, year after year. This will get your organisation into a great place. Then, if you want to and your risk appetite requires it, you can assess the shiny ‘advanced’ security magic to layer on top of your solid foundations.

So next time you’re tempted by a shiny new amazing security solution, don’t beat yourself up. But try to stay the course – assess what your organisation needs and ruthlessly focus on the fundamental / foundational security you need.


Foundations or Fundamentals. NOT Basics

A short one, but important.

We often talk about not doing the basics.

Organisations being breached due to failing to implement the basics.

We ask why we have still not got the basics sorted.

They are not basic.

The critical security things we need to always get right from patching to managing user rights should be considered the foundations of good security or the fundamental controls.

Without a strong foundation no security programme will deliver what is required.

While foundational, they should not be considered easy.

Take patching as an example. Ensuring a fully patched environment across thousands of servers, network devices, office devices, end points etc., without impacting availability, and likely while liaising with many different teams, is not as easy as it sounds when you just say ‘patch your environment’.

So yes as an industry, and as organisations we must do better, but we must also recognise the size of the challenge and the focus required to achieve the goal. We must achieve foundational security across our organisations not just at a point in time, but consistently, efficiently and on an ongoing basis.

To achieve this we need to help our boards and leadership teams understand the scale of the issue, and the reasons why it is important. We must engage across our organisations to ensure secure processes are embedded across our teams.

Calling this basic doesn’t help people understand it is anything but.


Convenience and security.. when you lose a factor..

I noticed recently that the application I use for securely accessing email and other office related things had been updated to accept my fingerprint instead of a PIN.

While this is indeed convenient, and saves me the 2 seconds of inputting my 6 digit PIN, it does mean that an authentication ‘factor’ has effectively been lost.  Previously I used my fingerprint to unlock my phone, then a unique PIN to access the email application.

Stepping back briefly, for those not familiar with authentication, when we refer to factors we are basically talking about different ways of authenticating yourself.  These are usually split into;

  • Something you KNOW – e.g. a password / passphrase / PIN
  • Something you ARE – e.g. biometric things like finger prints, voice recognition etc.
  • Something you HAVE – e.g. your bank card / credit card, a ‘token’ that generates pseudo random numbers or a device like a phone (but do you really trust your phone to be secure as much as a credit card?..)

Things like risk based authentication based on combinations of your and your devices behaviour may also be considered, although standards like the upcoming PSD2 (Payment Services Directive 2) don’t yet formally consider these a factor.  I personally believe that they should be, but this is really a decision for your organisation around how much they trust different forms of authentication.

As a side note, authentication is, and should be considered, more of a ‘shades of grey’ than a ‘black and white’ exercise.  What I mean by this is – for any given action, do we trust enough that Dave is Dave in order for him to proceed?  If not then we should request further authentication, such as another factor or further information, in order to allow the next action.

So back to the factors, and why they are important.  Many sites have a variety of ways of authenticating you – passwords, PINs, challenge / response questions such as what was your first school, identify the image you chose previously…  The issue here, as you have no doubt worked out, is that these are all ‘something you know’.  So no matter how many questions a site asks you, it is still single factor authentication.
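To make the point concrete, here is a minimal sketch (the credential names and category mapping are purely illustrative) of counting how many distinct factor categories a set of presented credentials actually covers:

```python
# Hypothetical mapping of credentials to the classic factor categories.
FACTOR_CATEGORY = {
    "password": "know",
    "pin": "know",
    "security_question": "know",
    "fingerprint": "are",
    "voice": "are",
    "hardware_token": "have",
    "registered_phone": "have",
}

def factor_count(presented):
    """Count the distinct factor categories among the presented credentials."""
    return len({FACTOR_CATEGORY[c] for c in presented})

# Three 'something you know' challenges are still single factor...
assert factor_count(["password", "security_question", "security_question"]) == 1
# ...whereas a PIN plus a fingerprint is genuinely two-factor.
assert factor_count(["pin", "fingerprint"]) == 2
```

However many challenges a site stacks up, it is the number of distinct categories, not the number of questions, that makes authentication multi-factor.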

The reason why we prefer multi-factor authentication vs. single factor is it is much less likely that a malicious actor will have access to multiple factors.  For example they may find your password, but not have your fingerprint, or they may have a copy of your fingerprint from your device, but not know your PIN etc.

This is why I question the reliance by many applications, including financial ones, on just asking for your finger / thumb print again.  While I recognise that the device (phone in this case) may be considered a factor in terms of being something you have, I question how much we can trust such an untrusted device as an authentication factor.  Is this a tiny extra bit of convenience at the expense of security?

Where other capabilities such as device and behaviour analysis are not in play, then yes, I believe it is a loss of security.

The caveat here brings us back to my earlier point: regardless of the other factors in play, we should and must make use of the rich data we have about user and device behaviour.  By this I mean the status of the device / browser accessing our systems.  Can we detect malware?  Where have we seen the device before? When? What else do we know about it? – O/S versions, software etc.  Similarly for the individual, what are their usual behaviours on our systems?  How much do they purchase? What devices do they usually access them from? etc.
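As an illustrative sketch only (the signal names, weights and thresholds here are all invented), these device and behaviour signals can feed a simple risk score that drives the ‘shades of grey’ decision: allow, step up, or block.

```python
def risk_score(signals):
    """Accumulate risk from device and behaviour signals (higher = riskier).
    Signal names and weights are invented for this sketch."""
    score = 0
    if not signals.get("device_seen_before"):
        score += 30
    if signals.get("malware_detected"):
        score += 50
    if signals.get("unusual_location"):
        score += 20
    if signals.get("purchase_above_usual"):
        score += 25
    return score

def next_step(signals, action_risk_threshold=40):
    """Decide whether to allow seamlessly, step up authentication, or block."""
    score = risk_score(signals)
    if score < action_risk_threshold:
        return "allow"
    if score < 80:
        return "step_up"  # e.g. ask for another factor
    return "block"

# A known device behaving normally gets the seamless 'happy flow'.
assert next_step({"device_seen_before": True}) == "allow"
# A new device in an unusual location triggers step-up authentication.
assert next_step({"device_seen_before": False, "unusual_location": True}) == "step_up"
```

The design point is that the threshold can vary per action: a low-risk action tolerates a higher score before stepping up, a money movement a much lower one.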

What is the point of this discussion?

  • Consider convenience vs. security
  • Play close attention to the components and factors in your authentication flow; If you consider a device ‘something the user HAS’ do you really trust it enough?
  • Make use of risk based and step up authentication; When the user is performing low risk activities, allow a seamless path.  When they want to perform something higher risk, step up the authentication and ask for more information such as another factor.
  • Possibly most importantly, make use of the risk data you can gather about the user and their device(s) in order to make the most informed decisions that balance risk vs. convenience and ‘the happy flow’ through your systems


It would be great to hear your thoughts on this topic!




Mitigating the Insider Threat / Insider Risk / People Risk – part 3

In this third part on the Insider threat / Insider risk / People risk series we move onto how we can manage this and prevent the risk from being realised.

Despite my concerns around the ‘insider threat’ terminology, I have kept it for the title as this is currently the most common term.

As I started writing this series my initial thoughts were that some of the ‘people / process’ areas would be the most important.  However as I have researched this area I’ve come to realise that some of the ‘people’ things may deliver less value than expected.  Some people areas like JML and IDAM (acronyms will be covered later) are indeed key, but only in conjunction with equally key technology capabilities.

While all related, for ease of reference I’ll split the ways we can work to prevent / mitigate the insider threat into ‘People Stuff’, ‘Process Stuff’, and ‘Tech Stuff’ .  While there will be some overlap, these categories I think cover the main areas.  This aligns with the standard security ethos of covering People, Process, Technology in order to secure an organisation.

The below is hardly an exhaustive list, but will hopefully get you thinking about the areas you need to focus on to secure your organisation’s systems and data.


People Stuff;

Develop a ‘secure culture’ with strong security awareness.  In line with wider security training, ensure everyone knows security is their responsibility.  This training should include helping people to know the signs to look for that may contribute to the risk someone could be an insider threat.  How to report these, and an awareness that it’s just as likely someone needs support and assistance rather than being malicious are important points to remember here.

Promote an open culture throughout the organisation.  It is OK to discuss concerns about yourself or others.  The organisation will always look to take positive, not negative steps to resolve potential issues.  It is expected to challenge someone if they don’t have a valid pass on display, no matter who they are… even the CEO.


Process Stuff;

The most important process area in order to mitigate the insider threat is JML (Joiners, Movers, Leavers) and ensuring all users have only the correct permissions to perform their current role.  Organisations often have reasonable ‘joiners’ and ‘leavers’ parts of the process, but many struggle with ‘movers’.  This is often highlighted when you look at the permissions of staff who have been with the organisation for some time and through several roles.  It is not uncommon to find they have an accumulation of the permissions of all the roles they have performed, rather than just those required for their current position.

As a recommendation, RBAC (Role Based Access Control), where each identified role in the organisation has an approved permissions template, is a better method than trying to copy another user’s permissions in the hope they are correct.
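As a hedged sketch of the RBAC idea (the role names and permission strings below are invented), provisioning from an approved template also makes the ‘movers’ review straightforward: anything beyond the current role’s template is flagged for removal.

```python
# Hypothetical role templates: each role maps to an approved permission set.
ROLE_TEMPLATES = {
    "accounts_clerk": {"ledger.read", "invoices.create"},
    "team_lead":      {"ledger.read", "invoices.create", "invoices.approve"},
}

def provision(current_role):
    """A mover gets exactly the template for their new role -
    nothing is carried over from previous positions."""
    return set(ROLE_TEMPLATES[current_role])

def excess_permissions(user_permissions, current_role):
    """Flag accumulated rights that a movers review should remove."""
    return user_permissions - ROLE_TEMPLATES[current_role]

# A long-serving employee who moved from team lead back to clerk:
accumulated = {"ledger.read", "invoices.create", "invoices.approve"}
assert excess_permissions(accumulated, "accounts_clerk") == {"invoices.approve"}
```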

There may be an argument for having periodic background checks on key staff in addition to the checks performed at the start of employment.  This is another area where many companies perform reasonable due diligence on employees prior to the start of employment, but then the checks are never performed again.  Personally I am not 100% convinced on this one as most checks are in reality not that in depth, and would only flag a concern at best – do we actually know how many insider threats are realised by someone who has more debt than before for example?  By all means do these, but ensure it is realised they are at best an indicator that risk may be increased, and will likely miss many people more likely to realise the risk.

Ensure key processes, especially those with material impact like moving money around are 4 or even 6 eyes processes.  This means that no one person can authorise certain transactions or processes, someone would initiate it and at least one other person would review and confirm.  These different people should not be in the same team to reduce chances of collusion.
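A minimal illustration of the 4 eyes rule (the names and teams are made up): the transaction only proceeds once an independent approver, from a different team, has confirmed it.

```python
def authorise_payment(initiator, approver):
    """Each party is a (name, team) pair; names and teams are invented.
    Enforces: a second person must approve, they cannot be the initiator,
    and they must sit in a different team to reduce chances of collusion."""
    if approver is None:
        raise PermissionError("a second pair of eyes is required")
    if approver[0] == initiator[0]:
        raise PermissionError("initiator cannot approve their own transaction")
    if approver[1] == initiator[1]:
        raise PermissionError("approver must come from a different team")
    return "payment released"

assert authorise_payment(("alice", "payments"), ("bob", "treasury")) == "payment released"
```

A 6 eyes variant simply adds a second independent approver check before release.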

Implement job rotation where it makes sense / is feasible as this reduces the chance of someone planning and committing fraudulent activity over a long period.  Some organisations also implement enforced periods of holiday, e.g. at least one 2 week block must be taken each year where there is no contact with business systems.  While not infallible this does ensure a period where someone else performs the role making it more likely discrepancies would be spotted.


Tech Stuff;

A first area to think about here would be how you can implement technology to support the above mentioned process improvements.  Examples would be Identity and Access management solutions to support the creation and use of business roles, and a solution to interrogate systems and report on existing permissions and group membership etc.

The next thing to realise is that ‘standard’ monitoring and controls likely do not cut it when you are trying to protect your systems and data from users and accounts that are legitimately permitted access.  It may be possible to pick up on some simple behaviours like an account attempting to access a lot of directories it is not permitted to, or port scans, or data exfiltration so large it impacts services.  However these would not be the most likely behaviours unless the insider / compromised account really was not trying to hide their tracks at all, in fact they would almost be trying to get spotted with actions like these!

In order to detect more subtle malicious behaviour, some form of UBA – User Behaviour Analytics – capability must be employed.  It should be noted that this is a relatively new area in the security space that is currently fairly high in the ‘hype cycle’.  As such considerable due diligence is recommended in terms of both clearly defining your requirements, and understanding the detailed capabilities of the solutions you assess.  Many companies are badging existing and new solutions as having UBA capabilities in order to capitalise on the current hype in this space.

To understand if an account is behaving in a potentially malicious manner, it is critical to not only understand its actions in detail, but also to have some understanding of what is normal.  The best way to do this is to ensure there is an understanding of roles and teams within the organisation so that the solution can compare behaviours across groups that you would expect to perform similar actions.  Another key point here is that a lot of behaviour that could be malicious, from viewing extra records to changing data, may all happen within an application, so consider solutions that are able to integrate with your applications, or at least have a detailed understanding of your applications’ logging.
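One simple way such a peer-group comparison might look (the data and threshold are invented, and real UBA products are far more sophisticated): flag a user whose daily record views sit far above the baseline for colleagues in the same role.

```python
from statistics import mean, stdev

def is_anomalous(user_count, peer_counts, sigma=3.0):
    """True if the user's activity is more than `sigma` standard
    deviations above the mean for peers performing a similar role."""
    mu, sd = mean(peer_counts), stdev(peer_counts)
    return user_count > mu + sigma * sd

# Daily record views for a team we expect to behave similarly (invented data).
team_daily_views = [110, 95, 120, 105, 100, 115]
assert not is_anomalous(130, team_daily_views)  # within normal variation
assert is_anomalous(400, team_daily_views)      # worth investigating
```

The key design point from the paragraph above holds here: the comparison is only meaningful if the peer group really does perform similar actions, which is why understanding roles and teams matters so much.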

Other more ‘standard’ capabilities such as DLP, web proxies and email gateways can also play a role in both reducing the risk of insider threat, and also detecting it by ensuring their log files that detail user and system behaviour, web sites visited and emails sent are incorporated into the broad behaviour analysis capability.

On a final tech point consider some sort of secure browsing capability.  If you can prevent any malware from the web from even getting to your end points, and simultaneously prevent uploads to the web you will have dramatically reduced the risk from malicious users, phishing and account compromise.


I hope the above is useful guidance and thought provoking.  It would be great to hear your ideas and things you think are critical in minimising the risk from insiders and compromised accounts.




Mitigating the Insider Threat / Insider Risk / People Risk – part 2

This brief post is part 2 in the series on insider risk.  Here we will cover some of the reasons the ‘insider threat’ / ‘people risk’ can be realised.  This is critical to not only understanding how to monitor for and prevent incidents, but also to ensure the response is appropriate.

The aim of this post is to highlight the numerous different types of ‘insider threat’.  This will hopefully not only get you thinking about the ways this could manifest in your environment, but also why in the majority of cases a term other than ‘insider threat’ is likely more appropriate.

What different actions and causes can lead to the risk being realised?  To my mind there are several concerns in this space, all of which can lead to data loss, data corruption or system downtime.

Some examples of these are;

  • Accident – e.g. emailing the wrong person, incorrect data entry
  • Good intention; Unaware of the policies and rules – e.g. emailing work to personal email in order to complete on the train
  • Good intention; Aware of the policy and rules, as above but with known intent to break the rules.  This is still likely someone who does not want to cause harm, they are just prepared to knowingly break the rules in order to get things done
  • Compromised individual – e.g. being coerced or blackmailed
  • Bad intent – e.g. sending data out to sell, or changing data in the favour of a friends business.  This is the classic malicious insider, and the main example where the term ‘insider threat’ is most accurate.
  • Compromised account – e.g. social engineering, shared credentials etc.  While technically not actions performed by an insider, these will appear to be an insider as they will be acting on systems in the context of the compromised user account.


While the tools / capabilities / processes that mitigate these risks may be similar, understanding the intent and the outcome is critical to know how to remediate.

For example where colleagues are circumventing the rules in order to deliver results, the best course of action would likely be to understand their needs and provide a secure way of meeting them.

The most serious breaches will likely be related to compromised individuals, compromised accounts or malicious individuals.  However by far the most frequent issues will be related to users either making mistakes or trying to be efficient and work in the best way for themselves.

The next posts will cover some of the key ways we can mitigate this risk.  Despite my keen interest in technology, we’ll find that some of the most important and effective controls are related to people and processes such as user awareness training, JML processes, 4 eyes processes etc.  Strong technical controls around access, DLP and behavioural monitoring are also critical.


Mitigating the Insider Threat / Insider Risk / People Risk – part 1

Managing the risk from insiders, commonly referred to as the insider threat, is in many ways more challenging than dealing with the more frequently discussed threat from external hackers.  This is because we are dealing with users / user contexts that already have authorised access to systems and data.

For clarity when I talk about the ‘insider threat’ I am not just referring to malicious insiders.  This also covers coerced / blackmailed insiders, and compromised accounts e.g. via social engineering where an attacker is able to act as a legitimate user on the network.

Technically a compromised insider account is not necessarily an ‘insider threat’.  However as the appearance will be the same, and the majority of the tools and processes to detect and prevent it will be the same, it makes sense to cover this in the same work.  Indeed without the correct capabilities in place, a compromised user account could well cast suspicion on a completely innocent colleague.

The above is the reason for the slightly long winded title of this post.  I’m not a fan of the term insider threat as it is pretty emotive and can lead to a sense of distrust.  We need a better name that conveys the fact we want to protect our colleagues as much as protecting our data.


When discussing this I often refer to ‘user context’ as, in the predominantly logical world many of us live in, it will be the user’s account that is misused in order to steal or change data.  Whether the account is being used by a malicious insider, or whether it has been compromised in some way, it is the misuse of the logical account that will lead to the data loss or corruption.


To the last point from the previous paragraph, when looking at the insider threat don’t forget it is about more than just data theft.  Consider all areas of insider misuse or compromise;

  • Could they affect availability?
  • Could they affect data integrity?


What makes this such an interesting as well as challenging area of security is that you really have to bring together all aspects of security in order to manage the risk.  This includes physical, logical, HR policy and even broader topics such as corporate culture.


Just how do we deal with this complex issue?  One thing is for certain, despite my love for technology and innovation, this is not something that can be solved just with technical solutions!  You should not even start with these, without first covering considerable non technical work.


One of the first things to do is decide how to describe ‘insider risk’, and how to communicate this meaning to the wider organisation.  I would recommend using one of the many publicly available descriptions as a basis, a good example being the US-CERT definition;

An insider threat is generally defined as a current or former employee, contractor, or other business partner who has or had authorized access to an organization’s network, system, or data and intentionally misused that access to negatively affect the confidentiality, integrity, or availability of the organization’s information or information systems


As with all security programmes, once you have defined the programme at a high level, understanding the assets and their value will define the types of controls and how much effort / budget should be expended on protecting them.  Identifying your assets – likely systems and data / information, will not only ensure you understand the value, but help ensure you are protecting the right things!

It is then important to understand both what you do and do not want to do, along with what you are allowed to do.  Both corporate culture and the legal environments you operate in will impact how intrusive any insider threat programme can and should be.  Remember, if you are a global company this may mean you have to have some different policies in different regions; Germany, for example, is much stronger on individual rights and privacy than the US.


We then need to consider the various ways that we can manage the risk.  Many organisations have created programmes to manage this, from the Gartner 5 step basics;

  1. Build a Team, Identify a Champion.
  2. Identify Threats, War Game and Establish Goals.
  3. Achieve Stakeholder Buy-In.
  4. Establish Policies, Governance and Processes (Tech. Agnostic):
    1. Education and Deterrence Programs and Policies.
  5. Select and Implement Technology.

Or EY’s (quite American in focus, but still a relevant guide) 8 steps;

  1. Gain senior leadership endorsement, develop policies that have buy-in from key stakeholders and take into account organizational culture
  2. Develop repeatable processes to achieve consistency in how insider threats are monitored and mitigated
  3. Leverage information security and corporate security programs, coupled with information governance, to identify and understand critical assets
  4. Use analytics to strengthen the program backbone, but remember implementing an analytical platform does not create an insider threat detection program in and of itself
  5. Coordinate with legal counsel early and often to address privacy, data protection and cross-border data transfer concerns
  6. Screen employees and vendors regularly, especially personnel who hold high-risk positions or have access to critical assets
  7. Implement clearly defined consequence management processes so that all incidents are handled following uniform standards, involving the right stakeholders
  8. Create training curriculum to generate awareness about insider threats and their related risks

To US CERT and the Software Engineering Institute who have 18 and 19 step processes respectively!

I’d recommend reviewing various documents on this topic and tailoring the list to that which is most appropriate to your organisation.


In addition to the different ways we can mitigate the risk it can be useful to apply the ‘kill chain’ approach.  Much like the well understood cyber kill chain, there are similar ‘insider threat kill chains’.  By using these it is possible to demonstrate how the different steps can be applied to prevent the risk at different stages of the planning and implementation.

I’ll follow this post with some more detailed ones covering the various steps that can be taken to implement and run a comprehensive insider threat programme.  One final, and critical, thought: for this to be successful, and to ensure a positive corporate culture, the messaging and intent is critical;


‘we want to enable you to work securely’

‘we want to protect you should your account be compromised or misused’

Rather than;

‘we want to monitor you in case you steal data’!


As always it would be great to hear your thoughts.





Securing IoT payments

There is a lot of discussion around IoT security, much of it focussed on patching, maintaining / updating etc.

Given the volume of discussion in this space I’ll not write something likely replicating other conversations.


What I am interested in is whether we can enable secure and trusted automated payments from IoT devices.  If we can solve this we can trust a lot of non payment behaviours as well.

Assuming we can improve those basics enough to make wider use of IoT devices safe (enough), payments will surely follow.  We may well see a growth in IoT driven payments before we are happy the IoT is safe enough – we are already seeing hackable cars and their associated mobile applications.  A lack of safety and security is clearly not holding back the IoT tide!


One of the benefits of consumer IoT devices is that they will be able to automatically order things.  Examples could be replacing themselves or components as they wear out, or restocking consumables as they run low – think of a coffee machine buying coffee or a fridge restocking itself etc.

Is it possible to simply and effectively secure (automated) payments from IoT devices? Or for that matter any device..

There are multiple potential issues including;

  • Did you authorise the payment?
  • Is the ‘thing’ really yours and acting on your behalf?
  • Where is the ‘thing’ located, and where should the goods be sent to?
  • Do you want / need what ever is being purchased?
  • How could malicious people;
    • Make money (cash out) from this?
    • Cause harm, and to what level? – from slight nuisance to real harm..


How can we mitigate the risk from these issues to enable secure IoT payments?


I’d propose that it is possible to do this, using a combination of three things;

  • Some rules and metadata about the device and what it is allowed to do
  • Certificates that link the device to you and an address
  • Something to make this data and all transactions immutable, such as a blockchain implementation


How would these work together?

For most consumer devices it will be relatively easy to set rules about the device in terms of what it is, and what it is allowed to do.  For a simple example, a light bulb can only order a single lightbulb to the address it is registered to.  For a slightly more complex example, a fridge could have rules around only being able to order items you have previously ordered and set as ‘replace me’, only to the registered address at agreed times, and only if there was space in the fridge for them.
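A sketch of how such per-device rules might be evaluated (the device IDs, items and addresses are placeholders, and a real scheme would pull the rules from the immutable store rather than an in-memory dict):

```python
# Hypothetical per-device rules: a lightbulb may only order one
# replacement bulb, shipped to its registered address.
DEVICE_RULES = {
    "lightbulb-42": {
        "allowed_items": {"lightbulb"},
        "max_quantity": 1,
        "registered_address": "1 Example Street",
    },
}

def order_permitted(device_id, item, quantity, ship_to):
    """Refuse anything outside the owner-agreed rules; deny by default."""
    rules = DEVICE_RULES.get(device_id)
    if rules is None:
        return False  # unknown device: refuse
    return (item in rules["allowed_items"]
            and quantity <= rules["max_quantity"]
            and ship_to == rules["registered_address"])

assert order_permitted("lightbulb-42", "lightbulb", 1, "1 Example Street")
# A compromised bulb cannot cash out by ordering a television elsewhere.
assert not order_permitted("lightbulb-42", "television", 1, "Attacker HQ")
```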

As long as these rules are immutable, e.g. by being held in a blockchain, the chances of a criminal cashing out are extremely limited.  The ability to cause harm is also limited: at worst an attacker could make a lightbulb order one lightbulb, or make the fridge order something you wanted replaced that would fit into the fridge.

Using an extremely scalable certificate management system would allow identity and location to be stored with each device.  Consider something like a root cert and child certs model.  You are your own root cert, then all your devices get a child cert that links to you and has added information like address.  These could be managed, replaced and revoked as you would expect.  Securely managed certificates, potentially stored as part of the blockchain, would enable the device (‘thing’) to be linked to the owner, the location and, by inference, the owner’s payment instrument and permission to replace / order items.  The permissions associated with the device, around what the owner has allowed it to do, would also be stored in the blockchain.
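A greatly simplified sketch of the root / child binding follows (a real deployment would use proper PKI such as X.509 certificates and asymmetric signatures; the keyed-hash ‘signature’ here is just a stand-in to show the idea):

```python
import hashlib

def sign(root_secret, payload):
    """Stand-in for a real signature: a keyed hash over the payload."""
    return hashlib.sha256((root_secret + "|" + payload).encode()).hexdigest()

def issue_device_cert(root_secret, device_id, owner, address):
    """The owner's 'root' signs a child cert binding device, owner, address."""
    payload = f"{device_id}:{owner}:{address}"
    return {"payload": payload, "signature": sign(root_secret, payload)}

def verify_device_cert(root_secret, cert):
    return cert["signature"] == sign(root_secret, cert["payload"])

cert = issue_device_cert("owner-root-secret", "fridge-7", "dave", "1 Example Street")
assert verify_device_cert("owner-root-secret", cert)

cert["payload"] = "fridge-7:attacker:Attacker HQ"  # tampering is detectable
assert not verify_device_cert("owner-root-secret", cert)
```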


By utilising relatively simple rules for each device, that the owner can set and agree, we are able to ensure it only performs sensible actions.

By using the existing certificate model, just in a massively scalable architecture we are able to link the devices to owners, locations and payment instruments.

Finally, by utilising blockchain and its properties, we are able to immutably store these things, with clear permissions and a full audit trail for any changes and transactions.
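The immutability property can be illustrated with a toy hash chain (a deliberately minimal stand-in for a real blockchain, with no consensus or distribution): each entry commits to the previous entry’s hash, so rewriting any historical record invalidates every later link.

```python
import hashlib, json

def append_entry(chain, record):
    """Append a record that commits to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    body = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    chain.append({"record": record, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def chain_valid(chain):
    """Recompute every hash; any tampering breaks the chain."""
    prev_hash = "genesis"
    for entry in chain:
        body = json.dumps({"record": entry["record"], "prev": prev_hash},
                          sort_keys=True)
        if entry["prev"] != prev_hash or \
           entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

chain = []
append_entry(chain, "fridge-7 permitted to reorder milk")
append_entry(chain, "fridge-7 ordered 2 pints of milk")
assert chain_valid(chain)

chain[0]["record"] = "fridge-7 permitted to order anything"  # tamper
assert not chain_valid(chain)
```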


I’ve obviously simplified this for the purposes of this blog post, but hopefully the idea is clear.  It would definitely be great to hear your thoughts on this.  I may write a longer more detailed overview and incorporating a wider range of inputs would definitely add value!



IoT does not equal IoT

I was at a PETRAS IoT (Internet of Things) event recently and a question I was asked at lunchtime got me thinking.

The question was;

“Do you think cloud is secure?”

My response quite obviously was that the question needed a lot more context. Which cloud?  In what sense? Secure enough for what? Etc. etc.


We are falling into the same trap of thinking of IoT as a ‘thing’.  All IoT devices may share some traits, in the same way as there are certain traits a hosted service must have for it to be called a cloud service.

However all IoT devices clearly cannot and should not be lumped into one big category.


As my interest is in security I’ll use that as an example.

Consider the level of security required around a simple consumer device like a lightbulb.  It may have a few capabilities like on / off / dim, and potentially the ability to purchase a replacement lightbulb delivered to your address.  You may also want some features in place to prevent actually logging onto it other than to perform those basic actions, and to prevent it from enumerating your home network.

Now consider the security required around a medical device such as a pacemaker or an insulin pump for a diabetic.  A while ago someone demonstrated they could hack a Bluetooth insulin device and make it release all of its insulin at once.  Obviously this was done while the device was not connected to a person!

In the above examples, as long as some sensible rules are in place, the threat vector from the lightbulb is very limited, and its value to criminals is effectively zero.

However, in the healthcare example, a security issue could lead to immediate risk to life – imagine the scenario of ‘pay xx bitcoins or I affect your insulin supply, or stop your pacemaker’.  This demonstrates not only risk to life, but also a clear avenue of profit for the criminal.


We 100% need to work to improve the security and manageability of IoT devices across the board.  However we need to start segmenting this into different sectors and levels of threat / risk / value.


This will allow sensible dialogue about what is appropriate for different circumstances.  It is likely this will allow faster and appropriately secure progress.

For example, a framework for the security and risk management of consumer devices such as lights, fridges, toasters etc. could likely be arrived at.  This would allow progress to be made in this space to provide consumers the wider benefits of IoT, without being mired in wider conversations about what is appropriate for healthcare or transport IoT etc.


So this post has two points;

  • When something is massive and wide-ranging, such as cloud or IoT, it is fine to use it as a concept, but we need to stop talking about it as a single thing when we think about security etc., as there is not a single solution or set of requirements.
  • IoT – we need to define distinct, but not too narrow, use cases, e.g. healthcare, consumer, transport etc.  Following this we can agree sensible and appropriate frameworks and requirements for things like security, management and payments.
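The second point – distinct sectors with their own frameworks – could be sketched as a simple lookup.  The sectors come from the post; the baseline requirement labels are my own assumptions:

```python
# Illustrative mapping of IoT sectors to baseline security requirements.
# The labels ("best-effort", "mandatory", etc.) are assumptions for the sketch.
SECTOR_BASELINES = {
    "consumer":   {"risk": "low",      "patching": "best-effort", "auth": "basic"},
    "transport":  {"risk": "high",     "patching": "mandatory",   "auth": "strong"},
    "healthcare": {"risk": "critical", "patching": "mandatory",   "auth": "strong"},
}

def requirements_for(sector):
    """Look up the baseline requirements for a device's sector."""
    return SECTOR_BASELINES[sector]
```

The point of the sketch is that a lightbulb and a pacemaker resolve to very different baselines, so conversations about one need not block progress on the other.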


I’ve been mulling over a high level concept for securing IoT payments and the consumer space, that I’ll flesh out and share in an upcoming post.  It would be great to hear your thoughts on this and how we can best manage / secure the various types and use cases of IoT.


Bruce Schneier keynote from the ISF conference

I recently attended, and presented at the ISF annual congress in Berlin.  One of the highlights of the conference was the keynote talk from Bruce Schneier.

The talk focussed on some of the current developments in IT, the internet, machine learning, IoT (Internet of Things), and what these may mean for IT security and basically everyone’s safety and security.

My notes from the talk are below.  They are relatively rough, but I thought them worth sharing as there are some great points and things to think about!

Internet now Senses, Sees and Acts – definition of a Robot?

Does this mean we are building a world size robot?

It’s a distributed robot…

Combination of;

Mobile, cloud, persistent computing, big data, IoT

And Autonomy..


This means – Computer security becomes Everything security…!

That means that all the things we understand from patching and vulnerabilities to security vs. complexity to network effects become relevant to everyone / everything.

As computers become more integrated with real life – medical devices, cars etc. – we likely move from confidentiality being the most important part of the security ‘triad’ to safety.

How do we deal with things like;

Algorithms that choose where police go or who gets parole?

How can we allow police to safely stop a car, vs. criminals being able to stop any car?


Tech / security arms races;

  • Spam
  • Click jacking
  • Ad blocking
  • Credit card fraud
  • ATM fraud


5 trends affect this security arms race (currently, may change in the longer term);

  1. Attack is easier than defence
    1. For a bunch of reasons, like complexity
  2. New vulnerabilities in the interconnections
    1. The more you connect things, the more vulnerabilities in one thing can affect another
    2. E.g. recent massive DDoS – was from cameras etc. – so vulnerabilities in these led to massive impacts elsewhere
  3. More critical systems mean more power to attackers
    1. Internet allows criminals to scale
    2. Allows attacks from anywhere / everywhere – e.g. I live in the UK, so don’t care about burglars living in Germany.  But with connected systems I can be attacked from anywhere.
    3. You don’t have to worry about the average attacker; you always have to worry about the best, as the best attacker will be the one writing the tools.
  4. The economics of computer security don’t trickle down to the Internet of Things
    1. E.g. how do we secure and patch the billions of very low value devices
    2. Computers and phones – updated all the time, staff at MS etc employed just to patch
    3. Low cost embedded systems – written somewhere, then the dev / company moves on.  Some can’t even be patched, so the only way to ‘patch’ is to throw away and replace.  Is this a viable patch strategy?
    4. We also regularly replace things like phones and computers – this provides improved security and ensures updates.
    5. IoT stuff isn’t like this.  How often do you replace your DVR or your home thermostat?  5 years, 10 years?  Never?
    6. Owners and producers of these devices don’t care about the issues.
  5. Copyright laws make it very hard to do security research on these devices
    1. It can be illegal to circumvent the security of these devices, even for research.
    2. Criminals don’t care, obviously.
    3. Criminals will do the ‘research’ and will hack the devices.
    4. Researchers likely will not do the work if they will be threatened and unable to publish the research.
    5. How will we ever improve?

How to fix this;

  • Do it right in the first place
  • Agile security- rapid prototyping, fix failures fast


Doesn’t work – Chrysler recalled >1M cars to update software

Does work – Tesla – remotely updated software of all cars


Technology and Law must work together or both will fail

Example – Snowden papers showed that technology could circumvent the law, as well as the other way round

Need clear government policies on this

Do we need a new regulator for this stuff?

What regulations do we need?

Does this need to be international, not national?

Governments will get involved, can we lead this to help drive sensible and usable regulations?


Main points

  • IoT changes everything – computers impacting the world in a physical manner
    • Fewer off switches
    • Not designed, just growing
  • Threats getting worse in several dimensions
  • This is all coming, fast.  Government involvement is coming
  • We need to get ahead of this – we need to start making serious choices.  We need relevant, workable laws.  We have moral and ethical choices to make.
    • We need to change how we code.
      • When software didn’t matter, we let developers code how they wanted and how they saw the world.  Bugs just got fixed later.
      • Now, with lives more and more at stake, we need society to decide what is OK, and hold developers to account.
  • We need to bring together policy makers and technologists!


Government response will be fast and likely unplanned – e.g. ransomware against cars means millions of people can’t get into their cars, or a power plant goes offline.

This will lead to very fast and possibly badly thought out action, and regulations

Hence the need for us to get ahead of this!

We won’t get to choose – once lives are at stake you don’t get to decide if you’re regulated.  Airlines, drug companies etc. don’t get to say ‘hey, don’t regulate us’.  Once the internet / IoT is as important as drug companies, it will have no choice but to be regulated.


Do we really need to connect everything together?

E.g. could some systems (SCADA for example) connect to a SCADA only network?  Not a new internet, just secure / controlled networks for some systems?

He does believe we will solve this, though it is challenging 🙂  He is actually optimistic about this!


I’m sure you will agree, some great thinking points.  We live in very interesting times, IT security is going to become increasingly critical as more and more systems that genuinely and immediately affect life become connected to the same internet as everything else.

What are your thoughts?  Can we safely and securely enable all of these interconnected systems?




Low friction, secure online payments

Online payments whether made from a traditional PC or any mobile device must be secure, strongly resistant to fraud, and convenient.

Currently online payments suffer from a couple of key issues relating to ease of use and security;

  • Extra security features such as 3DS (3D Secure) provide a frustrating consumer experience.  This leads to consumers abandoning shopping carts and merchants disabling the feature where they are given the option to do so.

  • False rejections of payments by the issuers; again, this provides a terrible user experience and causes shopping cart abandonment.


Both of the above issues lead to frustrating situations, for example when people forget their 3DS credentials, or when you call your bank only to be told the rejection was caused by the merchant, and then the merchant says it was the bank!


In addition, the upcoming EU rules on electronic payments authentication – how we verify that the person who is paying is the right person – are likely to add to this complexity.


These regulations are the Revised Payment Services Directive (PSD2).  They have three objectives: harmonization, innovation and security.

On security, PSD2 requires ‘strong customer authentication’ to be applied for all electronic payments in Europe.  Strong authentication in this case refers to using at least two of these three factors;

  • something you know, such as a password
  • something you have, such as a card
  • something you are, for example a biometric
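The two-of-three requirement can be expressed as a small check.  The factor-to-category mapping and helper below are illustrative, not part of PSD2 itself:

```python
# Map individual factors to PSD2's three categories.
# The specific factor names here are examples, not an official list.
FACTOR_CATEGORIES = {
    "password": "knowledge",  "pin": "knowledge",
    "card": "possession",     "phone": "possession",
    "fingerprint": "inherence", "face": "inherence",
}

def is_strong_authentication(factors_used):
    """True if the factors span at least two of the three categories."""
    categories = {FACTOR_CATEGORIES[f] for f in factors_used}
    return len(categories) >= 2
```

Note that a password plus a PIN would fail this check: two factors from the same category do not count as strong authentication.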


The EBA (European Banking Authority) is responsible for the regulatory technical standards to deliver strong customer authentication.


The above issues and potentially increasing complexity leads to a poor experience and shopping baskets being abandoned.  This is due to either friction in the process or false rejections of payments by the issuers.


So how can this situation be improved upon?  We need a solution that meets the needs of consumers, merchants and issuers, as well as the intent of the proposed PSD2 regulations.

Breaking these down;


Consumers want a safe, seamless and reliable payments ecosystem.

Merchants want a safe, seamless and reliable payments ecosystem that maximises consumer spending and minimises fraud.

Issuers want a safe, seamless and reliable payments ecosystem that maximises consumer spending and minimises fraud.

The EU and EBA want a safe, seamless and reliable payments ecosystem that maximises consumer spending and minimises fraud.  Additionally they specify through PSD2 that we must verify that the payer is the correct person using ‘strong authentication’.


As you can see, the needs of the majority of participants in the payments ecosystem are basically the same: safe, seamless and reliable payments!


Can we solve this and provide a solution that will minimise fraud and improve acceptance rates while maintaining or improving the customer experience?  The short answer is YES.


By combining advanced authentication solutions with card details it is possible to provide strong assurance that a user and card are correctly linked and that a payment is genuine.


Utilising relatively simple code and an authentication solution fast enough to sit in the online transaction flow enables us to reliably link a card to a device.  Note that when I say device, I include laptops and desktops as well as phones and tablets.


By doing this we can immediately identify multiple attributes about the card, device and behaviour such as;

  • Have we seen this device and card combination successfully used before?
  • Have we seen the same name on a different card from this device before?
  • Does this behaviour align with previous successful payments from this combination such as volume, velocity, amounts etc?
  • Where were these payments made from?
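To illustrate, the signals above could feed a simple risk score.  The weights here are arbitrary assumptions for the sketch, not a real fraud model:

```python
# Toy risk score for a card + device combination: start at maximum risk
# and subtract for each reassuring signal.  Weights are illustrative.
def risk_score(seen_combo_before, same_name_other_card, behaviour_consistent):
    score = 100
    if seen_combo_before:
        score -= 50   # known device and card pairing is the strongest signal
    if same_name_other_card:
        score -= 20   # same cardholder name seen on this device before
    if behaviour_consistent:
        score -= 25   # volume / velocity / amounts match past behaviour
    return max(score, 0)
```

A processor could then approve low scores silently and reserve extra checks for high ones, on top of the traditional card-only analytics.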


This is in addition to all the traditional fraud analytics applied to the card behaviour alone.


3DS can still be incorporated if required, even with all this additional information.  However its use can be minimised by asking questions such as; 

  • Have we seen successful 3DS from this device and card combination within a predefined period? 
  • Have we seen the same name on a different card from this device successfully authenticate with 3DS?

If so, then trust this as if it were a 3DS payment.  This would provide the assurance of 3DS while minimising its adverse impact.
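The ‘trust recent 3DS’ idea could look something like this sketch; the 90-day trust window and helper name are my assumptions, not part of the 3DS protocol:

```python
from datetime import datetime, timedelta

def requires_3ds_challenge(last_success, trusted_days=90, now=None):
    """Return True if a fresh 3DS challenge is needed for this device/card combo.

    last_success: datetime of the last successful 3DS from this combination,
                  or None if never seen.
    """
    now = now or datetime.utcnow()
    if last_success is None:
        return True  # no history: fall back to a full 3DS challenge
    # Trust a recent successful 3DS as if it were a fresh one.
    return now - last_success > timedelta(days=trusted_days)
```

A returning customer on a known device would sail through, while a brand-new device and card pairing would still face the full challenge.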


This requires some innovation and for the issuers, schemes and processors to work together, along with the EBA recognising that this meets the intent of their proposed regulations.

What are the next steps?

Schemes and issuers: work with the processors to enable these benefits.  Accept greater assurances and risk-based decisions from processors.  A higher payment acceptance rate and lower fraud, all with minimal effort, clearly benefits everyone.

To the EU, EBA and those writing PSD2: engage in the discussion and realise there are ways to meet your intent without adversely affecting the payments ecosystem.  Intelligence and innovation can provide ‘strong authentication’ without the need for any extra complexity in the payments process.  We can in fact reduce the friction while improving the security.


Everyone involved in the payments ecosystem wants pretty much the same things, let’s be innovative and achieve these in ways that improve the experience for merchants and consumers.  This ultimately improves things for everyone!


Feel free to contact me via this blog, or find me on LinkedIn to discuss further and if you’d like to know some more details around how this really can work in practice.