ISF congress post 3: The state of Quantum computing…

The state of Quantum computing…

… And the future of InfoSec

Presentation by Konstantinos Karagiannis from BT and Juniper Networks

 

Enough Quantum Mechanics to get by;

  • Richard Feynman: “I think I can safely say that nobody understands quantum mechanics”
  • Unlike macro objects, quantum ones exhibit weird behaviours that make amazing things possible
  • Max Planck proposed that electromagnetic energy is only emitted in discrete bundles or “quanta”: E = hf (see the worked example after this list)
  • Planck’s constant (h) and its derivatives (the Planck unit) may prove important in future information theory (one ‘bit’ of information = one Planck unit)
  • Light – made of waves (Thomas Young); made of photons, not waves (Einstein); Geoffrey Ingram Taylor showed wave interference patterns even with one photon at a time – particle-wave duality!
  • Superposition – if you observe the light, the superposition is destroyed and it appears to work as you would expect.
  • This concept of decoherence is key to QC.
  • Entanglement – the key “mystery” of QM, and important for QC.
    • Created by a quantum event, entangled particles share a quality in superposition such as spin up or down.
    • If you observe the spin of one particle, the spin of the other is immediately known even if it is the other side of the galaxy.
    • No, this doesn’t break the cosmic speed limit (the speed of light), as it is effectively just random information.
    • This does have real applications in QC and quantum cryptography
  • QCs must maintain coherence / superposition in hundreds of particles e.g. via
    • Quantum optics
    • single atom silicon
    • Large artificial qubits
    • NMR
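
To make the E = hf relation concrete, here is a minimal worked example (my own illustration, not from the talk) computing the energy of a single photon of visible light in Python:

```python
# Minimal sketch: Planck's relation E = h*f for a single photon of green light.
# The values for h, c and the wavelength are standard physical figures, not from the talk.
h = 6.626e-34          # Planck's constant, J*s
c = 2.998e8            # speed of light, m/s
wavelength = 550e-9    # ~green light, metres

f = c / wavelength     # frequency of the photon, Hz
E = h * f              # energy of one quantum, joules

print(f"f = {f:.3e} Hz, E = {E:.3e} J")  # roughly 5.5e14 Hz and 3.6e-19 J
```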

 

Qubits and how a quantum computer (QC) will impact some areas;

  • Qubit
    • can be zero, one, or a superposition of both (with probabilities of each)
    • To oversimplify: qubits can perform certain functions with a fraction of the effort a classical computer would need
  • Public Key crypto, e.g. RSA;
    • Relies on classical computer’s difficulty in cracking certain mathematical functions
    • QC – Shor’s Algorithm – a QC can efficiently find the prime factors of large numbers.
      • Shor’s algorithm puts qubits through mathematical paces where likely answers interfere constructively, unlikely ones destructively.
      • Classical computers can’t do this in a timely manner.
    • Imagine the impact of being the first country with PKI-slicing capabilities!!
  • Grover’s Algorithm;
    • For searching databases / data;
    • Traditional DB – on average N/2 searches for N entries
    • QC – roughly √N searches for N entries (see the sketch after this list)
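
As a rough illustration of that scaling difference (my own sketch, not from the talk – it only compares the query-count formulas, it is not a quantum simulation):

```python
# Classical vs. Grover-style query counts for an unstructured search over N entries.
import math

for N in (1_000, 1_000_000, 1_000_000_000):
    classical_avg = N / 2              # expected lookups for a classical linear search
    grover = math.ceil(math.sqrt(N))   # order-sqrt(N) queries for Grover's algorithm
    print(f"N={N:>13,}  classical ~{classical_avg:>14,.0f}  Grover ~{grover:>7,}")
```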

Scanning with Quantum AI

  • Vulnerability scanners need to run and compare results quickly – Grover’s algorithm
  • Quantum algorithms may advance artificial intelligence – more useful for scanning web apps than networks
  • Traditional top-down AI approach fails – bottom-up may be easier to do with Quantum parallelism

Quantum networking

  • Routing quantum data is tricky – when you observe the qubit, you destroy the data
    • create photon pair – one to observe, one to route

Quantum Teleportation

  • Entanglement allows for teleportation of quantum state – look up the ‘Alice and Bob’ quantum entanglement example.
  • Teleporting the state of algorithms could support distributed computing

 

Where are we now?

D-Wave claim to have a 512-qubit QC (with 439 operational qubits) – there is currently some scepticism around this.

  • Google and NASA have teamed up on acquiring a D-Wave second generation machine (512-qubit)
  • Created the Quantum Artificial Intelligence Lab
  • The University of Waterloo has an advanced QC department
  • Lockheed Martin also using and developing a D-Wave QC

 

Moore’s Law;

  • QCs are not better than classical computers at everything
  • QCs still inevitable – we are getting to the single-particle level on transistors
  • No more miniaturisation possible to keep Moore’s Law going

 

Staying relevant – Encryption;

  • Shor’s algorithm is only proven to work against public key crypto; Grover’s may help with attacks on symmetric encryption
  • Toshiba is developing a quantum network with polarised photons; these provide encrypted, tamper-evident networks.
  • We must stay relevant – a new world of research and development is coming, everything from the basics to security tool programming
  • Threat modelling
    • If AI improves scanning, hackers will have much better ways of finding application flaws

Closing thought;

  • Feynman’s first proposed QC was a universal quantum simulator
  • Seth Lloyd showed a QC can perfectly simulate any quantum system in the universe
  • It turns out the universe is a giant, 13.7-billion-year-old quantum computer
  • What will we be hacking one day?

This was a very thought-provoking and fast-paced talk.  The above notes are very high level, but cover the main points of the talk and can be used to aid searches for more in-depth reading.  This presentation really highlighted to me that I need to read up more on this stuff.

We are not there yet, but Quantum Computers are coming and they will have huge ramifications for pretty much all areas of computing.  From a security standpoint, we will likely need a full overhaul of cryptography and threat modelling, along with application and system vulnerability scanning.  Of course not forgetting a whole new class of computers and networks to understand and secure!

Interesting times ahead, and I highly recommend further reading on this topic.

K

 

ISF congress post 2: Communicating information security value to the business

Communicating information security value to the business using words and pictures.

Presentation by Steve Jump from Telkom SA SOC Ltd.

I have high hopes for the usefulness of this talk as we all seem great at explaining and discussing security issues with other security and technical people, but fairly terrible at getting the board and other business people to understand the issues and importance of remediating them!

 

Highlighted at the start that this is a work in progress, but already proving useful.

If you are trying to obtain budget for upcoming initiatives, you need to get the board on board and ensure they understand the risks from a business standpoint.

  • Why business gets turned off by security
    • Too much shouting about risks, creating policies and standards, more talking about risks – who is looking at your data (criminals, governments, hacktivists), where is your data, more standards and policies
  • What the business actually wants (and needs) to talk about
    • What do these threats mean to my business?
    • Why should I worry?
    • How does this affect the bottom line?
    • What happens if I ignore you? (e.g. is the cost of doing nothing lower than the cost of fixing the issue?)
    • Can you put a value on that?
    • If I do ignore you, will anyone notice?
  • It’s all in the words we use;
    • Business Impact Taxonomy!

 

Regulatory

  • Non-compliance with legislation, risk of fines, prosecution etc.

Fraud

  • Illegal access to information leading to fraud, identity theft, misrepresentation, corrupt practices, banking and card fraud etc.

Theft

  • Theft of information or revenue, direct theft of assets

Service Availability

  • Service denial or interference

Business Agility

  • Prevention of business growth and reduced opportunity for profit, due to reduced agility of systems and an increased need to deliver custom protection for solutions.

Reputation

  • Loss of business reputation resulting from information loss or device interruption, leading to loss of credibility with customers and investors.

 

So that’s all the jargon sorted out?

Think of creating threat cubes – they have a LOT more words than this and are technical.

So how do we bridge the gap between the jargon and output from threat analysis etc. and a simple taxonomy the business can understand, relate to and use in budget and planning discussions?

 

Add pictures!

One for each of the six words in the simple taxonomy;

 

Warning triangle – Regulatory

Credit card – Fraud (may need to be different for you if you work in a PCI environment as this may get confused with the regulatory one)

Money Bag – Theft

Road block sign – Service availability (items marked with this could impact our ability to do business)

Rocket ship – Business agility – faster, innovative

Happy / sad masks – Reputation

 

So the taxonomy now has words and images for each item.

So when you create a threat cube or other form of threat analysis, you can relate each item on the list back to one or more of the taxonomy words and images – images can be added to aid understanding.  For reporting, each item should be mapped to the main area it impacts.

 

How this works in practice;

  • Formal Information Security Risk assessment process
    • Assess the solution, change, product or service against technical business threat models
    • Identify key threats, recommend mitigations and evaluate impact of residual threats
  • Summarise business impact in business terms
    • Use six key business impact areas to describe and prioritise impact areas
    • Use business impact icons in formal / technical risk assessment (in body text and headings) to ensure continuity
  • Technical risk assessment and Business risk owners still work in different areas
    • Icons bridge experience and jargon barriers
    • Technical designers and security specialists understand business drivers
    • Business owners understand where technical short cuts will affect overall risk model

 

 

The chosen icons work on Mac and Windows via standard keyboard shortcuts, so should work across most businesses using Word / PDFs / spreadsheets etc.

For larger threats, use more icons – one, two or three icons depending on whether the issue is low, medium or high severity.

For reference, the symbols used to represent the 6 areas;

Fraud 1F4B3 <Alt-X>

Regulatory 26A0

Theft 1F4B0

Service Availability 1F6A7

Business Agility 1F680

Business Reputation 1F3AD

If the Unicode character is used (Win7/8 – type the code, then press Alt-X), it will display automatically if the font is Segoe UI Symbol on Windows (Word/Excel/PowerPoint/Outlook), or as an emoji font on OS X, iOS and Android.
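
As a quick sketch (mine, not from the talk) of how the six code points render, and how icons could be repeated to show severity as suggested above – a few lines of Python using the standard chr() function:

```python
# The six taxonomy icons rendered from the Unicode code points listed above.
icons = {
    "Regulatory":           0x26A0,   # warning triangle
    "Fraud":                0x1F4B3,  # credit card
    "Theft":                0x1F4B0,  # money bag
    "Service Availability": 0x1F6A7,  # road block / construction sign
    "Business Agility":     0x1F680,  # rocket ship
    "Reputation":           0x1F3AD,  # happy / sad masks
}

for area, codepoint in icons.items():
    # Repeat the icon 1-3 times to signal low/medium/high severity, as suggested above.
    for severity, count in (("low", 1), ("medium", 2), ("high", 3)):
        print(f"{area:<22} {severity:<7} {chr(codepoint) * count}")
```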

 

It will be interesting to test this method out at work to see if it helps get engagement from the board and wider business.  This definitely seems like a good idea, and anything that helps engage and leads to greater understanding of security issues has to be worth a try!

It would be great to hear from anyone who is trying this method, or a similar one, in their business.

K

ISF congress 2013 Post 1: Defence evasion modelling – Fault correlation and bypassing the cyber kill chain

Well I am at the ISF (Information Security Forum) annual congress for the next couple of days.  As usual I’ll blog notes and some comments from the talks I listen to, and where possible share them ‘live’ and as is.

Presentation by Stefan Frei and Francisco Artes from NSS Labs.

 

The risk is much larger than people thought.  It is more like the 800-pound ‘cyber gorilla’ than the chimpanzee. And to make things worse, it is a whole field of these ‘cyber gorillas’.

 

It’s not just about digital data theft;

  • Destruction / alteration of digital assets
  • Interruption to applications, systems and customer resources
  • Single points of data
  • AV vendors only focus on defending mass market applications
  • Geo location – access from anywhere for users and hackers

 

Do we understand our defences?

  • Network – Firewall, IPS (Intrusion Prevention System), WAF (Web Application Firewall), NGFW (Next Generation Firewall), Anti APT (Advanced Persistent Threat) etc. etc.
  • Host – AV (Anti Virus), Host FW, Host IPS, Host zero day, application controls etc. etc.
  • Different vendors are often used due to the perception that two vendors provide better protection than one

 

What about indirect attacks, such as browser and application based?

 

How effective are your defences?

 

How do we maintain the balance between security and usability?

How do we assess the security of our solutions?

How do we report on this with metrics that are meaningful to the board?

 

Threat modelling can be a useful tool here.

 

Live modelling solutions (such as those done by NSS Labs) can be used to model different tools from different vendors in an environment broadly similar to yours; (NSS example)

 

  • Pick your applications and operating systems
  • Pick your broad network design
  • Pick the security solutions and where they are placed.

 

Each device is tested with >2,000 exploits, so when you choose different devices you can see where the exploits would be caught or missed.  For example, you could layer brand X NGFW with brand Y IPS and brand Z AV; the ‘live’ threat model then maps the exploits that each device missed, so you can see whether any would pass all the layers in your security (a rough sketch of this idea follows below).
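
A rough sketch of the idea (my own illustration, not NSS Labs’ methodology – the exploit names and results are made up): if you know which exploits each layered device misses, the ones that evade the whole stack are simply the intersection of the “missed” sets.

```python
# Hypothetical per-device results: which exploits each layer failed to catch.
missed_by_ngfw = {"exploit-03", "exploit-17", "exploit-42"}
missed_by_ips  = {"exploit-17", "exploit-42", "exploit-99"}
missed_by_av   = {"exploit-05", "exploit-42"}

# Anything missed by every layer would pass straight through the stack.
evades_all_layers = missed_by_ngfw & missed_by_ips & missed_by_av
print(evades_all_layers)   # {'exploit-42'} – this is the gap the threat model exposes
```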

All tests were done with the devices tuned as per the manufacturers’ recommendations.

  • For IPS, the vendors had experts tune them; this led to a 60-85% increase in IPS performance.  This point is very interesting outside of this talk – IPS devices MUST be tuned and maintained for them to deliver value and protection.  Do you regularly tune and maintain the IDS / IPS devices in your environment?

 

The report / live threat modelling also differentiates between automated attacks and hand-crafted ones.  This highlights how many attacks could relatively easily be launched by anyone with basic skills in free tools such as Metasploit.  It raises the question of why security tool vendors can’t at least download exploit tool kits and their updates to ensure their tools can prevent the available pre-packaged attacks!

 

This is definitely a useful tool, and whether with NSS or similar, I can recommend you undertake some detailed threat modelling of your environment.  This type of service allows you to perform much more ‘real’ technical threat modelling rather than just theoretical attack scenarios, which is as far as most threat modelling exercises seem to go.

 

What is the threat environment?

Many experts writing tools and exploits.

A huge number of people with limited skills utilise free and paid-for tools created by the experts – this increases the threat exponentially – anyone can try the free tools, and anyone with even limited funds can purchase the paid-for tools (often around $250).

 

The maturing threat landscape;

There is now a thriving market for underground hacking / attack tools.  This has matured and now offers regularly patched software with patching cycles, new exploits regularly added, and even full support with email and sometimes phone-based support desks.

The vendors of these hacking tools even offer guarantees around how long exploits will work for and evade security tools.

These are often referred to as Crimeware Kits.

 

In the tests by NSS Labs, no device detected all exploits available in these tools, or in the free tools.

 

This is the continuing problem for businesses and the security industry – they are always playing catch up and creating tools / solutions to deal with known threats, rarely the unknown threats.

 

Another interesting finding was in a recent test of NGFWs where combinations of two vendors were used in serial: no one pair prevented all exploits tested.  Careful and planned pairing does improve security, but this needs to be tested and planned – choosing two vendors at random is the wrong way to do it.  How many businesses currently have separate FW or NGFW vendors at different layers of the network?  And how many of these actually researched the exploits that get through and chose the solutions for maximum protection, vs. simply choosing two different vendors without doing this research?

 

Security vendors will always be playing catch up, however threat modelling can help ensure you choose the best ones for your environment.

Threat modelling will also help choose the best investments to improve security.

As an example, a business who worked with NSS was about to invest >$300M on NGFWs across their environment.  The threat modelling highlighted that this wouldn’t add a huge amount of security due to a Java issue on all their sites and machines.  They could instead invest (and did) more like £3M on migrating the app to HTML5 and removing Java from their environment.  This created a much more secure environment for a much smaller investment.

 

Threat modelling can also include geo-location and which vendors work best in which locations, as well as just looking at the technologies.

 

The final point was a reminder that as no tools will prevent everything, we must assume we have been ‘owned’ (breached) and act accordingly.  This must not be an exception process; we must search for and respond to breaches as part of our security business-as-usual process.

 

If you are not performing live threat modelling, I’d highly recommend you start, as this is a great way of assessing your current security posture, and also very useful for planning your next security investments to ensure they provide the greatest value and measurably improve your security posture.

Overall, this was a very informative talk that, while demonstrating their product / service, managed to stay fairly clear of too much vendor speak and promotion while still highlighting the clear benefits of ‘live’ threat modelling.

K

The four slide risk presentation to the board

Recent Gartner survey of security / risk professionals showed that;

45% think risk management data influences decisions at the board level

However

31% think that risk management data does not influence decisions at board level

15% think the board does not understand risk management data

6% said it wasn’t even reported at a board level

and 4% didn’t know..

Personally, I would have liked to delve into more depth on these questions.

For example;

  • For those who think it influences board decisions – how, why, and does it have enough influence?
  • For those who think it doesn’t – why not, and what could be done to improve things?

 

What are the roles of the board and the CISO in enterprise risk management?

  • Board – balance Risk Indicators with Risk Appetite
    • ensure the executives understand what the risks are and are comfortable they fit into the overall risk appetite (i.e. how risk averse they are)
  • CISO – moving from the traditional focus on asset performance to business performance

When reporting to the Board, how can you relate risks to business objectives that most concern the board?

Brief four slide presentation;

  • Slide 1:  List the half dozen most important enterprise strategies and objectives
  • Slide 2: Name the IT risks that have a potentially significant impact on the most important enterprise strategies and initiatives
  • Slide 3: Describe risk management initiatives
  • Slide 4: Wrap it up!

 Details / examples;

Slide 1: Enterprise Strategy Objectives

  • Acquisitions in emerging markets, new product development, customer retention, migration projects.
  • Guiding principle – Business objectives are IT objectives.  
    • Highlight that your security objectives are aligned with the business strategy and goals.

Slide 2: IT Risks

  • Acquisition Strategy
    • Acquired entities’ BC/DR strategy
    • Acquired entities’ controls vs. our regulatory environment
    • Replacing / merging acquired systems with corporate systems
  • New product development
    • Application development security – SDLC – compliance
    • Infrastructure to support products in emerging markets
  • Customer retention
    • Customer experience with focus on acquired entities
    • Privacy
    • Social Media
    • Reputation

Slide 3: Risk Management Initiatives

  • Acquisition Strategy
    • Systems and controls analysis as part of M&A due diligence
    • Responsive, rapid IT on-boarding
    • Vendor consolidation
  • Product Development
    • QA program for application development including Six Sigma, ISO 9000 and ISO/IEC 27001
    • IT product development role specifically working to minimise risks in emerging markets, including product localisation – reduces time to market
  • Customer retention
    • CRM and SFA upgrades at acquired entities
    • Privacy management
    • Advanced analytics
    • Guiding principle – IT risks are business risks

Slide 4: Wrap it up

  • With current and proposed risk management initiatives there are no material or significant risks anticipated
  • IT is leading initiatives to manage risks to business objectives and other legal and regulatory risks – coordinating with departments across the business
  • Next steps include budget approval for the major initiatives
  • Details on risk and control assessments are in the board package
  • Thank you for your  support

Recommendations;

When communicating directly with the board, focus on:

  • What enterprise objectives and strategies matter most?
  • What’s the potential impact of IT risk on those things?
  • What are the current and proposed approaches to managing these risks?
  • What are the next steps?

In short, keep it simple and relevant to the concerns of the board.  Avoid technical jargon and focus on business goals and outcomes 🙂

K

Threat intelligence services – Why, What and Who

This was another Gartner talk covering the threat intelligence landscape, what you can expect, and things to consider.

Where did that come from?!

Important concept: “Threat”; 

  • A threat exploits a vulnerability resulting in an incident
    • Threat – you can’t control this; you can only be well informed and plan for its arrival
    • Vulnerability – you can control and understand these – secure coding, defence in depth, vulnerability databases etc.
    • Incident – you want to avoid this!!

The problem is getting the Visibility…

  • The bad guys follow the same lifecycle that we do..
    • They talk and research – planning – perhaps up to a year or more
    • They customise attacks – build
    • They attack – run

Without threat intelligence your view looks like;

  • Ignorance (they are researching)
  • Ignorance (they are planning)
  • Hacked (they are running their attack)

Understanding upcoming threats allows you to match defences and mitigations required to your strategic planning cycle.  To do this we need good information on what is coming up, and what the bad guys are discussing for the future.

 Important concept: “Intelligence”

  • Goes beyond the obvious, trivial, or self evident:
    • developed by correlating and analysing multiple data sources / points
  • Includes a range of information, for example:
    • Goals of the threat actor
    • Characteristics of the threat, and potential organisational outcomes if it is successfully executed
    • Indicators and defences
    • Life expectancy of the threat
    • Reliability of the information
    • Use it to:
      • Avoid the threat
      • Diagnose an incident
      • Support decisions on how to invest in security (strategic planning)

Reliability and planning horizon are key considerations;

  • Network traffic feeds – automated information feeds – very reliable, but not real intelligence – good for immediate issues, not for planning.  Inexpensive
  • Operational intelligence – a combination of automated and human analysis, e.g. malware analysis; more intelligent than the above, good for immediate planning, reasonably reliable (for the short term).  Still relatively inexpensive.
  • Strategic intelligence – Can be very tailored to your organisation, great deal of human interaction, custom made research, some human judgement.  Reasonably reliable, but as planning goes further out obviously reliability lowers as criminals can change plans.  Expensive, but great for strategic planning especially if you are in a high risk industry or organisation.
  • Snake oil – no one can predict 3-5 years out with certainty, so don’t believe anyone who says they can..

Recommendations;

  • Use dedicated services to plan for long term strategies, and ensure you are concerned about the right threats.
    • It can take up to two years to be ready for an emerging threat.
  • Plan – How will you use the service?  How will it be consumed? Who will consume it?
  • Consider whether you need just the threat intelligence, or adjacent services as well.
  • Before using, engage heavily with the vendor;
    • How flexible are they to your needs?
    • Will they go outside of the contract in an emergency or to assist you?
    • How well can you work with them – need a good, trusted and close working relationship with them.

 

If you are considering a threat intelligence service, this talk raised some great points to consider.  For me, the key point is how well you can work with the vendor.  For these services to be successful you need to work very collaboratively together, and they need to have a deep understanding of your specific business and concerns, not just the industry sector.  Another recommended talk.

K

Web Application Firewalls

This talk from Gartner covered WAFs, their functionality, whether they are required, and possible alternatives;

Software security is improving but hasn’t caught up with the threat landscape.

Attackers have Motivation, Times, Expertise and many targets.

Software security can be improved by better education, QA, SDLC, Frameworks and tools.

  • This helps close the gap, but it still remains
  • Many legacy applications or components will exist for a long time

A defence in depth approach is required to protect applications;

  • Firewall – allows or blocks traffic based on IP and port – positive security model; Deny all traffic unless explicitly allowed
  • NIPS (Network based Intrusion Prevention System) – Negative security model: Signatures and protocol validation
  • WAF – Identifies and blocks application layer attacks
    • Negative security model – fixed rules, blacklist known bad, expert deployment
    • Positive security model – automatic application behaviour learning, whitelist known good, straightforward deployment model
    • Passively block or actively modify traffic to prevent specific attacks (see the simplified sketch after this list)
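
As a simplified sketch of the two models (my own illustration, not a real WAF engine – the patterns and the parameter format are made up):

```python
# Negative model: block anything matching known-bad signatures.
# Positive model: allow only values matching a learned/whitelisted format.
import re

BLACKLIST_PATTERNS = [r"(?i)union\s+select", r"(?i)<script"]   # negative model
WHITELISTED_PARAM = re.compile(r"^[A-Za-z0-9_-]{1,32}$")        # positive model

def negative_model_allows(value: str) -> bool:
    """Allow the request unless a known-bad signature matches."""
    return not any(re.search(p, value) for p in BLACKLIST_PATTERNS)

def positive_model_allows(value: str) -> bool:
    """Allow the request only if it matches the whitelisted format."""
    return bool(WHITELISTED_PARAM.match(value))

for value in ("report_2013", "1 UNION SELECT password FROM users"):
    print(value, negative_model_allows(value), positive_model_allows(value))
```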

Additional functionality over other network security tools found in many WAFs;

  • Authentication and authorisation
  • ADC functionality
  • SSL termination
  • Anti-scraping
  • Threat intelligence
  • Content inspection, data masking, and DLP

 Differentiators;

  • All have basic signatures and filtering
  • Differ in;
    • Level of granularity
      • policies per application
      • policies per url
      • fully scriptable rule engine vs. high level settings
    • Positive model capabilities
    • Additional functionality
    • Deployment methods

Interest in WAF from a business risk perspective is increasing: 

  • Protects against identified vulnerabilities: Buys time as a quick fix, and provides long-term mitigation for legacy Web applications.
  • Protects against generic classes of attacks, such as SQL injection and brute force.
  • Protects against attacks targeted at your application: Requires active response and granular policy settings.

Also, do not underestimate the benefits of the extras such as performance, caching, authentication..

What are the latest developments in WAF technology?

  • Evolution in data interchange and protocol standard support, such as JSON, XML, GWT, HTML5, SPDY, IPv6
  • User and device validation and integration with Web fraud prevention:
    • True source/real IP identification proxies
    • Geolocation and reputation services
    • Injection/Execution of code for user validation and rudimentary fraud detection
  • Increasing support for Web vulnerability scanners (DAST): “Virtual patching”
  • Support for virtualisation and SaaS Web applications, and cloud delivery options for WAF
  • Improved layer 7 DDoS protection

WAFs, are they viable for the future?

Yes..

  • They provide application layer functionality largely unavailable in many other network based defences.  They should be considered as part of your defence in depth profile for any web applications.
  • Cloud based solutions may become more viable
  • Detection quality will improve as they better understand your applications and also the browsers capabilities
  • Detection engine improvements will be required in order to keep up with evolving threats
    • But must not impact performance!
  • Must scale with the web applications.
    • Virtualisation support is critical

What alternatives are there?

  • Secure coding is the main alternative.  This sounds simple, however…
    • History shows that this fails
      • Bad scalability
      • Much insecure legacy code
      • No control over code – software from vendors, third party code etc.
    • Some functionality may be subsumed into other technology such as ADC (Application Delivery Controller) and CDN (Content Delivery Network) – so watch these spaces.
    • NGFW (Next Generation Firewall) and NGIPS (Next Generation Intrusion Prevention System) are becoming more application aware, but do not and are unlikely to ever deliver full WAF functionality

Recommendations;

  • Determine use case;
    • Compliance – buy “anything”…
    • Security – Buy a leader with low false positives and simple management
    • Application security – buy as part of an application initiative, ensure advanced policies are supported
  • If you have ADCs – assess their capabilities
  • Track CDN WAF capabilities
  • Complement with comprehensive monitoring and alerting capabilities

This was a very interesting, vendor-neutral talk that provided a good intro to WAFs, and some useful thoughts on implementing them and possible future enhancements.  Recommended.

K

Gartner Security and Risk Management conference – Software Defined Networking

This was an introductory talk around Software Defined Networking (SDN) and some of its security implications.

What is it?

  • Decoupling the control plane from the data plane and centralising logical controls
  • Communication between network devices and SDN controllers currently uses both open and proprietary protocols – no single standard..
  • The SDN controller supports an open interface to allow external programmability of the environment

– The controller tells each node how to route, vs. the current model where each node makes its own routing decisions.

 How do I enforce network security in an SDN environment?

  • Switch as the Policy enforcement point
    • The switch tells the controller it has seen traffic with certain flow characteristics, the flow controller tells it what to do with the flow, and this information is cached in the local flow table for a specified time.  Another flow arrives and this one is not permitted, so the controller tells the switch to just drop the packets – the switch effectively becomes a stateful firewall (see the sketch after this list).
    • Existing controls such as DLP, Firewalls, Proxy servers etc. can all be used with SDN –
      • e.g. someone tries to connect to the internet – flow controller instructs switch to send traffic to the firewall / IPS / DLP server etc.
      • e.g. sending email – no matter where it’s going, the flow says the first point is DLP, then the firewall, then on to the destination
      • This means devices no longer need to be inline – they can be anywhere on the network.  Flow controller just needs to know where to send certain traffic types!
    • Incoming flows can be treated in the same way
      • Something changes – such that it looks like DDoS – traffic can be routed to the DDoS protection device(s)
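
A minimal sketch of the flow-table idea (my own illustration of the concept, not OpenFlow or any real controller API – the ports and policies are made up):

```python
# The switch asks the controller what to do with an unknown flow,
# then caches the decision in its local flow table.
POLICY = {
    443: ["forward"],                      # web traffic goes straight to its destination
    25:  ["dlp", "firewall", "forward"],   # email is steered via DLP, then the firewall
}

flow_table = {}   # the switch's local cache of controller decisions

def controller_decide(dst_port):
    # Unknown / unpermitted flows are dropped, so the switch behaves like a stateful firewall
    return POLICY.get(dst_port, ["drop"])

def switch_handle(dst_port):
    if dst_port not in flow_table:          # cache miss: ask the flow controller
        flow_table[dst_port] = controller_decide(dst_port)
    return flow_table[dst_port]             # subsequent packets use the cached entry

print(switch_handle(25))     # ['dlp', 'firewall', 'forward']
print(switch_handle(4444))   # ['drop']
```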

What risks does SDN introduce?

  • Risk is aggregated in the controller
    • Malicious or accidental changes could remove some or all of the security protections
  • The integrity of the Flow Tables must be maintained
    • Switches etc. must be managed from the controller, not locally
  • Input from applications must be managed and prioritised
    • Application APIs are non standard
    • Who gets precedence?
      • Load balancer vs. security tools when defining traffic flows?

SDN products do exist now.

  • Standards do exist
    • OpenFlow – maintained by Open Networking Foundation
  • Network devices (early days)
    • Open vSwitch
    • Some products from Brocade, Cisco, HP, IBM
  • Controllers (limited maturity)
    • Floodlight (open source)
    • Products from Big Switch Networks, Cisco, HP, NEC, NTT Data, VMware
  • Applications (often tied to specific controllers)
    • Radware and HP produce some security applications

Recommendations;

  • Do not overreact to SDN hype
  • Combine IT disciplines when implementing SDN
    • Don’t forget security!!
  • Determine how existing control requirements can be met with SDN
  • Examine how SDN impacts separation of duties
    • Some similar issues to virtualisation
  • Discuss SDN with your existing security vendors
  • Deploy SDN in a lab or test environment
    • PoC and understand fully before deploying

 

Overall this was an informative and fast-paced talk.  As per the speaker’s recommendations, SDN is a very interesting technology, although it is still in the emerging phase, with the majority of deployments currently in testing or academia.  I wouldn’t yet recommend it for production datacentre deployments, but I would recommend you become familiar with it, especially if you work in the networking or security fields.

 

K