ISF Congress Post 5: Expect the worst; Operational risk quantification process

Expect the worst case;

Approach for quantifying operational risks – special focus on cyber security risks

Presentation by Hanno Lenz for the ERGO Insurance group

ERGO splits risk management into three categories / lines of defence;

  • Risk Taker (Owner) – Business line
  • Risk Controller – Risk Management
  • Independent Assurance – Internal Audit

 

This is then further split into lines of business and risk categories (strategic risks, market and credit risks, operational risks, liquidity risks, reputational risks).

This presentation had some excellent graphics highlighting their risk process: how they move from threats to risks, how to assess the probability and impacts, and then the actual risk.  This process is outlined below.

 

They created this Security and Continuity Risk management model;

[Image: Security and Continuity Risk Management model]

This model for working through from threats to the actual risks;

[Image: model for working through from threats to the actual risks]

The process they follow from threat to actual business risk and impact is outlined in the diagrams below.

Assessing the Probability of the threat occurring;

[Image: assessing the probability of the threat occurring]

Assessing the Impact should the risk occur;

[Image: assessing the impact should the risk occur]

And finally, working out the actual risk by combining the probability with the impact;

[Image: combining the probability with the impact to determine the actual risk]
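The combination step is essentially a probability × impact matrix. As a minimal sketch (my own illustrative bands and thresholds, not ERGO's actual scales):

```python
# Illustrative only: generic 5x5 probability x impact bands, not ERGO's model.
PROBABILITY = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost certain": 5}
IMPACT = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "severe": 5}

def risk_rating(probability: str, impact: str) -> str:
    """Combine a probability band and an impact band into an overall risk rating."""
    score = PROBABILITY[probability] * IMPACT[impact]
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

print(risk_rating("likely", "major"))  # -> high
```

A real quantification programme replaces the hard-coded bands with modelled frequencies and loss distributions, which is where the mathematical modelling expertise the post mentions comes in.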

 

I think this provides a very good, easy to understand overview of a relatively simple and workable risk assessment process.

Remember, in order to make any risk assessment process a success, and for the results to be worthwhile, you need to ensure the input data is as accurate as possible, and also that the analysis is performed by people with the relevant expertise.

For the inputs, ensure you consult with the business streams and have an in-depth understanding of the organisation, its IT structure, where the data and applications are, the number of employees, office locations etc.  Also ensure you have engaged with the BCM teams to understand recovery requirements and plans, recovery costs, degree of outsourcing etc.

For the outputs, as well as the IT security and BCM teams, ensure you have the right experts for creating realistic examples, creating actual security situations, estimating the costs of the risk should it occur, and also experts in mathematical modelling so that the results are modelled correctly and not just estimates.

K

ISF congress post 4: Keynote session: The view from the advisory board

This was a panel discussion session so flowed around quite a bit, and wasn’t always focussed.  The below covers most of the main points that were discussed;

Focus no longer on China.

Focus more on what enterprises can do to protect data and work with their customers securely.

Snowden affair, and global information security / assurance – living in a globally surveyed world.

 

I’ve been following the Snowden debacle in the news;

  • Is this something we need to pay attention to?
  • Tell me three key actions we need to take.

 

  • The US has the ‘right’ to monitor all network traffic that goes via it or US companies from ‘foreigners’.  Doesn’t sound too bad until you realise we are nearly all foreigners (around 97% of the global population isn’t American!).  This has huge ramifications.
  • Snowden affair – nearly all the leaks from this have been of ‘Top Secret’ classification, this hardly ever happens, most leaks are of much lower classification.
  • However – Remember, just because we are looking at the NSA, China has not gone away.  Remembering this is critical to your security posture.
  • Everything is stored forever!  Whether NSA or Google, or other email / search service, all your emails etc are likely stored forever, and probably in several places.
  • On the opposite side, many industries are rightly moving towards more openness, sharing data with more people and with the right people.
  • Other nations are likely better than the US at sharing the findings of their industrial espionage with national companies – the French and Japanese are apparently very good at sharing espionage data with companies based in those countries.  NSA surveillance may be pervasive, but there are questions about how much it shares.  Board members and CEOs need to be aware that this espionage is a reality.
  • Supply chain security is a key factor to consider.
  • Emerging economies have a huge security impact – what they are doing with us, and how we interact and integrate with them.
  • International treaties around how intelligence agencies operate abroad and monitor each other are needed, and are being worked on.  In democratic countries at least – no comment on what is happening in dictatorships such as Russia and China.
  • Outsourcing data to third parties for processing etc. has been going on for years such as through the use of mainframes.  Cloud services are not a new concept, however the accessibility of these services to many people and the accessibility of the data in them has been a dramatic change.
  • Encrypting data if you own the process end to end can ensure data is securely stored.  Doesn’t really help with processing in the cloud.
  • Who reads the full terms and conditions of the services they use?  How much security and privacy are we inadvertently giving up?
  • We must not confuse Security and Privacy – these are different things.

 

 

The internet is a global platform, do you think it will become more balkanised?

  • It was set up by the military, and now they want it back 😉
  • It is already there on many layers – who makes the kit it runs on? Which governments have access to the data or any controls over the data flows?
  • Governments ignored the internet for years, now they all want some control over it, and government agencies all want to monitor and spy on the data on the internet.
    • There is a ‘war’ around who controls the internet occurring right now.
  • The internet and technology are changing very fast, nations / governments are struggling to keep up.

 

 

Cloud – is it new or isn’t it?

  • Yes and no.
    • Concept of sharing compute resource and allowing users or companies access to compute resource they couldn’t otherwise afford is not new.
    • Concept of data being anywhere / everywhere, and access to cloud compute and storage is new and the game changer that cloud is advertised to be.
      • Creates many issues
        • Where is your data?
        • Who controls your data?
        • What about international interception / access laws and capabilities?
    • Cost and scale benefits driving use in many businesses
      • How do you best secure this use case?
      • How do you ensure only the right ‘stuff’ gets into the cloud?
      • Do you have the right policies in place?
      • Do you have the right knowledge and skill sets for secure cloud use?
      • Vet staff and people in key positions both in your business and the cloud provider.
      • Encrypt your data – this is true, but I have serious issues around this one based on what sort of processing is required – can Tokenisation or Homomorphic encryption be leveraged?  What other ways do you have to mitigate the risk of data being unencrypted for processing?
    • Cloud is an innovator – gives businesses more opportunities, and also gives us new area to learn to secure.
    • Be proactive – be ready for the cloud, go to the business rather than them coming to you.

K

ISF congress post 3: The state of Quantum computing…

The state of Quantum computing…

… And the future of InfoSec

Presentation by Konstantinos Karagiannis from NT and Juniper Networks

 

Enough Quantum Mechanics to get by;

  • Richard Feynman “I think I can safely say that no one understands quantum physics”
  • Unlike macro objects, quantum ones exhibit weird behaviours that make amazing things possible
  • Max Planck proposed electromagnetic energy only emitted in discrete bundles or “quanta”: E=hf
  • Planck’s constant (h) and derivatives (Planck units) may prove important in future information theory (one ‘bit’ of information = one Planck unit)
  • Light – Thomas Young showed it behaves as waves; Einstein showed it is made of photons, not waves; Geoffrey Ingram Taylor observed wave interference patterns even with one photon at a time – particle–wave duality!
  • Superposition – if you observe the light, the superposition is destroyed and it behaves as you would classically expect.
  • Avoiding this decoherence is key to QC.
  • Entanglement – the key “mystery” of QM, and important for QC.
    • Created by a quantum event, entangled particles share a quality in superposition such as spin up or down.
    • If you observe the spin of one particle, the spin of the other is immediately known even if it is the other side of the galaxy.
    • No, this doesn’t break the cosmic speed limit, as no usable information is transmitted – the outcome is effectively random.
    • This does have real applications in QC and quantum cryptography
  • QCs must maintain coherence / superposition in hundreds of particles, e.g. via
    • Quantum optics
    • Single-atom silicon
    • Large artificial qubits
    • NMR
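Superposition can be given a toy numeric form. A sketch (my own illustration, not from the talk): a qubit's state is a pair of complex amplitudes whose squared magnitudes give the measurement probabilities:

```python
import math

# A qubit state a|0> + b|1> is described by two complex amplitudes
# normalised so that |a|^2 + |b|^2 = 1.  Observation (measurement)
# destroys the superposition, yielding 0 with probability |a|^2
# and 1 with probability |b|^2.
a = complex(1 / math.sqrt(2), 0)
b = complex(0, 1 / math.sqrt(2))  # amplitudes may be complex

p0, p1 = abs(a) ** 2, abs(b) ** 2
assert math.isclose(p0 + p1, 1.0)  # probabilities sum to one

print(p0, p1)  # ~0.5 each: an equal superposition
```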

 

Qubits and how a quantum computer (QC) will impact some areas;

  • Qubit
    • can be zero, one, or a superposition of both (with probabilities of each)
    • To over-simplify: qubits can perform certain functions with a fraction of the effort of a classical computer
  • Public Key crypto, e.g. RSA;
    • Relies on classical computer’s difficulty in cracking certain mathematical functions
    • QC – Shor’s Algorithm – a QC can efficiently find the prime factors of large numbers.
      • Shor’s algorithm puts qubits through mathematical paces where likely answers interfere constructively, unlikely ones destructively.
      • Classical computers can’t do this in a timely manner.
    • Imagine the impact of being the first country with PKI-slicing capabilities!!
  • Grover’s Algorithm;
    • For searching databases / data;
    • Traditional DB – N/2 searches on average for N entries
    • QC – √N searches for N entries
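The N/2 vs √N difference is easy to put numbers to. A back-of-envelope sketch (Grover's algorithm actually needs about (π/4)·√N oracle queries):

```python
import math

def classical_avg_lookups(n: int) -> float:
    """Average lookups to find one item in an unsorted set of n entries."""
    return n / 2

def grover_queries(n: int) -> float:
    """Approximate oracle queries for Grover's algorithm: (pi/4) * sqrt(n)."""
    return (math.pi / 4) * math.sqrt(n)

for n in (10**6, 10**12):
    print(f"N={n}: classical ~{classical_avg_lookups(n):,.0f}, Grover ~{grover_queries(n):,.0f}")
```

For a trillion entries that is five hundred billion lookups classically versus under a million quantum queries.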

Scanning with Quantum AI

  • Vulnerability scanners need to run and compare results quickly – Grover’s algorithm
  • Quantum algorithms may advance artificial intelligence – more useful for scanning web apps than networks
  • Traditional top-down AI approach fails – bottom-up may be easier to do with Quantum parallelism

Quantum networking

  • Routing quantum data is tricky – when you observe the qubit, you destroy the data
    • create photon pair – one to observe, one to route

Quantum Teleportation

  • Entanglement allows for teleportation of quantum state – look up the ‘Alice and Bob’ quantum entanglement example.
  • Teleport state of algorithms for distributed computing

 

Where are we now?

D-Wave claim to have a 512-qubit QC (with 439 operational qubits) – there is currently some scepticism around this claim.

  • Google and NASA have teamed up on acquiring a D-Wave second generation machine (512-qubit)
  • Created the Quantum Artificial Intelligence Lab
  • University of Waterloo has an advanced QC department
  • Lockheed Martin also using and developing a D-Wave QC

 

Moore’s Law;

  • QCs are not better than classical computers at everything
  • QCs still inevitable – we are getting to the single-particle level on transistors
  • No more miniaturisation possible to keep Moore’s Law going

 

Staying relevant – Encryption;

  • Shor’s algorithm is only proven to work on public-key crypto; Grover’s may help with brute-forcing symmetric keys (roughly halving effective key length)
  • Toshiba is developing a quantum network using polarised photons; these provide encrypted, tamper-evident links.
  • We must stay relevant, new world of research and development coming – everything from the basics to security tool programming
  • Threat modelling
    • If AI improves scanning, hackers will have much better ways of finding application flaws

Closing thought;

  • Feynman’s first proposed QC was a universal quantum simulator
  • Seth Lloyd showed a QC can perfectly simulate any quantum system in the universe
  • Turns out the universe is a giant, 13.7-billion-year-old quantum computer
  • What will we be hacking one day?

This was a very thought-provoking and fast-paced talk.  The above notes are very high level, but cover the main points of the talk and can be used to aid searches for more in-depth reading.  This presentation really highlighted to me that I need to read up more on this stuff.

We are not there yet, but Quantum Computers are coming and they will have huge ramifications for pretty much all areas of computing.  From a security standpoint, we will likely need a full overhaul of cryptography and threat modelling, along with application and system vulnerability scanning.  Of course not forgetting a whole new class of computers and networks to understand and secure!

Interesting times ahead, and I highly recommend further reading on this topic.

K

 

ISF congress post 2: Communicating information security value to the business

Communicating information security value to the business using words and pictures.

Presentation by Steve Jump from Telkom SA SOC ltd.

I have high hopes for the usefulness of this talk as we all seem great at explaining and discussing security issues with other security and technical people, but fairly terrible at getting the board and other business people to understand the issues and importance of remediating them!

 

Highlighted at the start that this is a work in progress, but already proving useful.

If you are trying to obtain budget for upcoming initiatives, you need to get the board on board and ensure they understand the risks from a business standpoint.

  • Why business gets turned off by security
    • Too much shouting about risks, creating policies and standards, more talking about risks – who is looking at your data (criminals, governments, hacktivists), where is your data, more standards and policies
  • What the business actually wants (and needs) to talk about
    • What do these threats mean to my business?
    • Why should I worry?
    • How does this affect the bottom line?
    • What happens if I ignore you? (e.g. is the cost of doing nothing lower than the cost of fixing the issue?)
    • Can you put a value on that?
    • If I do ignore you, will anyone notice?
  • It’s all in the words we use;
    • Business Impact Taxonomy!

 

Regulatory

  • Non-compliance with legislation, risk of fines, prosecution etc.

Fraud

  • Illegal access to information leading to fraud, identity theft, misrepresentation, corrupt practices, banking and card fraud etc.

Theft

  • Theft of information or revenue, direct theft of assets

Service Availability

  • Service denial or interference

Business Agility

  • Prevention of business growth and reduced opportunity for profit due to reduced agility of systems and an increased need to deliver custom protection for solutions.

Reputation

  • Loss of business reputation resulting from information loss or service interruption, resulting in loss of credibility with customers and investors.

 

So that’s all the jargon sorted out?

Think of creating threat cubes – they have a LOT more words than this and are technical.

So how do we bridge the gap between the jargon and output from threat analysis etc. and a simple taxonomy the business can understand, relate to and use in budget and planning discussions?

 

Add pictures!

One for each of the six words in the simple taxonomy;

 

Warning triangle – Regulatory

Credit card – Fraud (may need to be different for you if you work in a PCI environment as this may get confused with the regulatory one)

Money Bag – Theft

Road block sign – Service availability (things with this could impact our ability to do business)

Rocket ship – Business agility – faster, innovative

Happy / sad masks – Reputation

 

So the taxonomy now has words and images for each item.

So when you create a threat cube or other form of threat analysis you can then relate each item on the list back to one or more of the taxonomy words and images – images can be added to aid understanding.  For reporting, each should be mapped to the main area it impacts.

 

How this works in practice;

  • Formal Information Security Risk assessment process
    • Assess solutions, changes, products or services against technical business threat models
    • Identify key threats, recommend mitigations and evaluate impact of residual threats
  • Summarise business impact in business terms
    • Use six key business impact areas to describe and prioritise impact areas
    • Use business impact icons in formal / technical risk assessment (in body text and headings) to ensure continuity
  • Technical risk assessment and Business risk owners still work in different areas
    • Icons bridge experience and jargon barriers
    • Technical designers and security specialists understand business drivers
    • Business owners understand where technical short cuts will affect overall risk model

 

 

The chosen icons work on Mac and Windows via standard keyboard shortcuts, so should work across most businesses using Word / PDFs / spreadsheets etc.

For larger threats, use more icons – one, two, or three icons depending on low, medium or high issue size.

For reference, the symbols used to represent the 6 areas;

Fraud 1F4B3 <Alt-X>

Regulatory 26A0

Theft 1F4B0

Service Availability 1F6A7

Business Agility 1F680

Business Reputation 1F3AD

If the Unicode character is used (Win 7/8 – type the code, then press Alt-X) it will display automatically if the font is Segoe UI Symbol on Windows (Word/Excel/PowerPoint/Outlook), or as an emoji font on OS X, iOS and Android.
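The code points above can be checked, and the one/two/three-icon severity convention automated, with a short script (the low/medium/high bands follow the convention described earlier; the function name is my own):

```python
# The six business-impact icons from the talk, keyed by Unicode code point.
ICONS = {
    "Fraud": 0x1F4B3,                 # credit card
    "Regulatory": 0x26A0,             # warning triangle
    "Theft": 0x1F4B0,                 # money bag
    "Service Availability": 0x1F6A7,  # road block / construction sign
    "Business Agility": 0x1F680,      # rocket ship
    "Reputation": 0x1F3AD,            # happy / sad masks
}

def impact_marker(area: str, severity: str) -> str:
    """Repeat the area's icon one, two or three times for low/medium/high issues."""
    repeats = {"low": 1, "medium": 2, "high": 3}[severity]
    return chr(ICONS[area]) * repeats

print(impact_marker("Theft", "high"))  # -> 💰💰💰
```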

 

It will be interesting to test this method out at work to see if it helps get engagement from the board and wider business.  This definitely seems like a good idea, and anything that will help engage and lead to greater understanding of security issues has to be worth a try!

It would be great to hear from anyone who is trying this method, or a similar one, in their business.

K

ISF congress 2013 Post 1: Defence evasion modelling – Fault correlation and bypassing the cyber kill chain

Well I am at the ISF (Information Security Forum) annual congress for the next couple of days.  As usual I’ll blog notes and some comments from the talks I listen to, and where possible share them ‘live’ and as is.

Presentation by Stefan Frei and Francisco Artes from NSS Labs.

 

The risk is much larger than people thought.  It is more like the 800-pound ‘cyber gorilla’ than the chimpanzee.. And to make things worse, it is a whole field of these ‘cyber gorillas’.

 

It’s not just about digital data theft;

  • Destruction / alteration of digital assets
  • Interruption to applications, systems and customer resources
  • Single points of data
  • AV vendors only focus on defending mass market applications
  • Geo location – access from anywhere for users and hackers

 

Do we understand our defences?

  • Network – Firewall, IPS (Intrusion Prevention System), WAF (Web Application Firewall), NGFW (Next Generation Firewall), Anti APT (Advanced Persistent Threat) etc. etc.
  • Host – AV (Anti Virus), Host FW, Host IPS, Host zero day, application controls etc. etc.
  • Different vendors are often used at different layers due to the perception that two vendors provide better protection (defence in depth)

 

What about indirect attacks, such as browser and application based?

 

How effective are your defences?

 

How do we maintain the balance between security and usability?

How do we assess the security of our solutions?

How do we report on this with metrics that are meaningful to the board?

 

Threat modelling can be a useful tool here.

 

Live modelling solutions (such as those offered by NSS Labs) can be used to model different tools from different vendors in an environment broadly similar to yours; (NSS example)

 

  • Pick your applications and operating systems
  • Pick your broad network design
  • Pick the security solutions and where they are placed.

 

Each device is tested with >2000 exploits, so when you choose different devices you can see where the exploits would be caught or missed.  For example, you could layer brand X NGFW with brand Y IPS and brand Z AV; the ‘live’ threat model would then map the exploits that each device missed, so you can see if any would pass all the layers in your security.
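This layered mapping can be modelled with simple set arithmetic. A hypothetical sketch (tiny made-up catch sets standing in for NSS's per-device results over the >2000 exploits):

```python
# Each device is modelled by the set of exploit IDs it catches; the exploits
# that would pass your whole stack are those missed by every layer.
ngfw_x_catches = {1, 2, 3, 5, 8}
ips_y_catches = {2, 3, 4, 6, 8}
av_z_catches = {1, 4, 5, 7}

all_exploits = set(range(1, 11))  # stand-in for the full exploit corpus

missed_by_all = set(all_exploits)
for caught in (ngfw_x_catches, ips_y_catches, av_z_catches):
    missed_by_all -= caught

print(sorted(missed_by_all))  # exploits that would evade every layer
```

Run against real per-device results, an empty set here is exactly what careful vendor pairing is aiming for.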

All tests were done with the devices tuned as per the manufacturers’ recommendations.

  • For IPS, the vendors had experts tune them; this led to a 60-85% increase in IPS performance.  This point is very interesting outside of this talk – IPS devices MUST be tuned and maintained for them to deliver value and protection.  Do you regularly tune and maintain the IDS / IPS devices in your environment?

 

Report / live threat modelling also differentiates between automated attacks vs. hand crafted ones.  This highlights how many attacks could relatively easily be launched by anyone with basic skills in free tools such as Metasploit.  This raises the question why security tool vendors can’t at least download exploit tool kits and their updates to ensure their tools can at least prevent the available pre-packaged attacks!

 

This is definitely a useful tool, and whether with NSS or a similar provider, I can recommend you undertake some detailed threat modelling of your environment.  This type of service allows you to perform much more ‘real’ technical threat modelling rather than just theoretical attack scenarios, which is as far as most threat modelling exercises seem to go.

 

What is the threat environment?

Many experts are writing tools and exploits.

A huge number of people with limited skills utilise free and paid-for tools created by the experts – this increases the threat exponentially – anyone can try the free tools, and anyone with even limited funds can purchase the paid-for tools (often around $250).

 

The maturing threat landscape;

There is now a thriving market for underground hacking / attack tools.  This has matured and now offers regularly patched software with patching cycles, new exploits regularly added, and even full support with email and sometimes phone-based support desks.

The vendors of these hacking tools even offer guarantees around how long exploits will work for and evade security tools.

These are often referred to as Crimeware Kits.

 

In the tests by NSS labs, no device detected all exploits available in these tools, or in the free tools.

 

This is the continuing problem for businesses and the security industry – they are always playing catch up and creating tools / solutions to deal with known threats, rarely the unknown threats.

 

Another interesting finding came from a recent test of NGFWs where combinations of two vendors were used in series: no single pair prevented all the exploits tested.  However, careful and planned pairing does improve security.  This needs to be tested and planned – choosing two vendors at random is the wrong way to do it.  How many businesses currently have separate FW or NGFW vendors at different layers of the network?  And how many of these actually researched the exploits that get through and chose the solutions for maximum protection, vs. simply choosing two different vendors without doing this research?

 

Security vendors will always be playing catch up, however threat modelling can help ensure you choose the best ones for your environment.

Threat modelling will also help choose the best investments to improve security.

As an example, a business that worked with NSS was about to invest >$300M on NGFWs across their environment.  The threat modelling highlighted that this wouldn’t add a huge amount of security due to a Java issue on all their sites and machines.  Instead they could invest (and did) more like $3M on migrating the app to HTML5 and removing Java from their environment.  This created a much more secure environment for a much smaller investment.

 

Threat modelling can also take in geo-location and which vendors work best in which locations, as well as just looking at the technologies.

 

The final point was a reminder that as no tools will prevent everything, we must assume we have been ‘owned’ (breached) and act accordingly.  This must not be an exception process; we must search for and respond to breaches as part of our security business-as-usual process.

 

If you are not performing live threat modelling, I’d highly recommend you start, as this is a great way of assessing your current security posture, and also very useful for planning your next security investments to ensure they provide the greatest value and measurably improve your security posture.

Overall, this was a very informative talk that, while demonstrating their product / service, managed to stay fairly clear of too much vendor speak and promotion while still highlighting the clear benefits of ‘live’ threat modelling.

K

The four slide risk presentation to the board

A recent Gartner survey of security / risk professionals showed that;

45% think risk management data influences decisions at the board level

However

31% think that risk management data does not influence decisions at board level

15% think the board do not understand risk management data

6% said it wasn’t even reported at a board level

and 4% didn’t know..

Personally I would have liked to delve into more depth on these questions.

For example;

  • for those who think it influences board decisions – how, why, and does it have enough influence?
  • for those who think it doesn’t – why not, and what could be done to improve things?

 

What are the roles of the board and the CISO in enterprise risk management?

  • Board – balance Risk Indicators with Risk Appetite
    • ensure the executives understand what the risks are and are comfortable they fit into the overall risk appetite (e.g. how risk averse they are)
  • CISO – moving from the traditional focus on asset performance to business performance

When reporting to the Board, how can you relate risks to business objectives that most concern the board?

Brief four slide presentation;

  • Slide 1:  List the half dozen most important enterprise strategies and objectives
  • Slide 2: Name the IT risks that have a potentially significant impact on the most important enterprise strategies and initiatives
  • Slide 3: Describe risk management initiatives
  • Slide 4: Wrap it up!

 Details / examples;

Slide 1: Enterprise Strategy Objectives

  • Acquisitions in emerging markets, new product development, customer retention, migration projects.
  • Guiding principle – Business objectives are IT objectives.  
    • Highlight that your security objectives are aligned with the business strategy and goals.

Slide 2: IT Risks

  • Acquisition Strategy
    • Acquired entities’ BC/DR strategy
    • Acquired entities’ controls vs. our regulatory environment
    • Replacing / merging acquired systems with corporate systems
  • New product development
    • Application development security – SDLC – compliance
    • Infrastructure to support products in emerging markets
  • Customer retention
    • Customer experience with focus on acquired entities
    • Privacy
    • Social Media
    • Reputation

Slide 3: Risk Management Initiatives

  • Acquisition Strategy
    • Systems and controls analysis as part of M&A due diligence
    • Responsive, rapid IT on-boarding
    • Vendor consolidation
  • Product Development
    • QA program for application development including Six Sigma, ISO 9000 and ISO/IEC 27001
    • IT product development role specifically working to minimise risks in emerging markets, including product localisation – reduces time to market
  • Customer retention
    • CRM and SFA upgrades at acquired entities
    • Privacy management
    • Advanced analytics
    • Guiding principle – IT risks are business risks

Slide 4: Wrap it up

  • With current and proposed risk management initiatives there are no material or significant risks anticipated
  • IT is leading initiatives to manage risks to business objectives and other legal and regulatory risks – coordinating with departments across the business
  • Next steps include budget approval for the major initiatives
  • Details on risk and control assessments are in the board package
  • Thank you for your support

Recommendations;

When communicating directly with the board, focus on:

  • What enterprise objectives and strategies matter most?
  • What’s the potential impact of IT risk on those things?
  • What are the current and proposed approaches to managing these risks?
  • What are the next steps?

In short, keep it simple and relevant to the concerns of the board.  Avoid technical jargon and focus on business goals and outcomes 🙂

K

Threat intelligence services – Why, What and Who

This was another Gartner talk covering the threat intelligence landscape, what you can expect, and things to consider.

Where did that come from?!

Important concept: “Threat”; 

  • A threat exploits a vulnerability resulting in an incident
    • Threat – you can’t control this; you can only be well informed and plan for its arrival
    • Vulnerability – you can control and understand these – secure coding, defence in depth, vulnerability databases etc.
    • Incident – you want to avoid this!!

The problem is getting the Visibility…

  • The bad guys follow the same lifecycle that we do..
    • They talk and research – planning – perhaps up to a year or more
    • They customise attacks – build
    • They attack – run

Without threat intelligence your view looks like;

  • Ignorance (they are researching)
  • Ignorance (they are planning)
  • Hacked (they are running their attack)

Understanding upcoming threats allows you to match defences and mitigations required to your strategic planning cycle.  To do this we need good information on what is coming up, and what the bad guys are discussing for the future.

 Important concept: “Intelligence”

  • Goes beyond the obvious, trivial, or self evident:
    • developed by correlating and analysing multiple data sources / points
  • Includes a range of information, for example:
    • Goals of the threat actor
    • Characteristics of the threat, and potential organisational outcomes if it is successfully executed
    • Indicators and defences
    • Life expectancy of the threat
    • Reliability of the information
    • Use it to:
      • Avoid the threat
      • Diagnose an incident
      • Support decisions on how to invest in security (strategic planning)

Reliability and planning horizon are key considerations;

  • Network traffic feeds – automated information feeds – very reliable, but not real intelligence – good for immediate issues, not for planning.  Inexpensive
  • Operational intelligence – a combination of automated and human analysis, e.g. malware analysis; more intelligent than the above, good for immediate planning, reasonably reliable (for the short term).  Still relatively inexpensive.
  • Strategic intelligence – Can be very tailored to your organisation, great deal of human interaction, custom made research, some human judgement.  Reasonably reliable, but as planning goes further out obviously reliability lowers as criminals can change plans.  Expensive, but great for strategic planning especially if you are in a high risk industry or organisation.
  • Snake oil – no one can predict 3-5 years out with certainty, so don’t believe anyone who says they can..
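The three tiers (plus the snake-oil warning) map naturally onto planning horizons. A hypothetical sketch, with my own rough day counts standing in for the talk's informal horizons:

```python
# Hypothetical mapping of the talk's service tiers to planning horizons.
# The day thresholds are illustrative assumptions, not from the talk.
TIERS = [
    # (max horizon in days, tier name)
    (7, "network traffic feeds"),       # immediate issues
    (90, "operational intelligence"),   # immediate planning
    (730, "strategic intelligence"),    # up to ~2 years out
]

def pick_tier(horizon_days: int) -> str:
    """Match a planning horizon to the cheapest tier that can cover it."""
    for max_days, tier in TIERS:
        if horizon_days <= max_days:
            return tier
    return "snake oil"  # no one can reliably predict 3-5 years out

print(pick_tier(30))    # -> operational intelligence
print(pick_tier(2000))  # -> snake oil
```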

Recommendations;

  • Use dedicated services to plan for long term strategies, and ensure you are concerned about the right threats.
    • It can take up to two years to be ready for an emerging threat.
  • Plan – How will you use the service?  How will it be consumed? Who will consume it?
  • Consider whether you need just the threat intelligence, or adjacent services as well.
  • Before using, engage heavily with the vendor;
    • How flexible are they to your needs?
    • Will they go outside of the contract in an emergency or to assist you?
    • How well can you work with them – need a good, trusted and close working relationship with them.

 

If you are considering a threat intelligence service, this talk raises some great points to consider.  For me, the key point is how well you can work with the provider.  For these services to be successful you need to work very collaboratively together, and they need a deep understanding of your specific business and concerns, not just your industry sector.  Another recommended talk.

K