Simple attacks due to un-patched systems, mis-configurations, ‘standard’ app issues like SQL injection and Cross Site Scripting, phishing links etc. will continue to be the cause of the vast majority of breaches.
Advanced attacks will still make the headlines, even if only in terms of ‘it could have been xx nation using advanced methods’. They will also still be heavily promoted by vendors to sell products and services.
DDoS will continue to get bigger due to the increasing proliferation of insecure connected devices (cue first IoT reference!).
Big data and analytics will continue to be big. Security use cases such as behaviour analysis across all the log data will continue to mature and start to show the value of “big data” from a security monitoring perspective. We will need to work on moving from just behaviour monitoring in logs and alerting, to proactive blocking. ‘Big data’ should start to become the ‘big brain’ that instructs the enforcement tools like IPS and end point agents (they will obviously continue to do their normal job as well).
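To make the idea of behaviour analysis driving proactive blocking concrete, here is a minimal sketch. It assumes a per-user baseline (mean and standard deviation of some event rate) has been learned offline from log data; the threshold and the "block" / "allow" decision are illustrative assumptions, not a description of any real product.

```python
class BehaviourMonitor:
    """Tracks per-user activity baselines and flags strong deviations.

    The baseline values and the z-score threshold are illustrative;
    a real deployment would learn these from historical log data.
    """

    def __init__(self, threshold=3.0):
        self.baseline = {}          # user -> (mean, stddev), learned offline
        self.threshold = threshold  # deviations beyond this are anomalous

    def set_baseline(self, user, mean, stddev):
        self.baseline[user] = (mean, stddev)

    def assess(self, user, observed_rate):
        """Return 'block' if the observed event rate deviates strongly
        from the user's baseline, else 'allow'."""
        if user not in self.baseline:
            return "allow"          # no history yet - monitor only
        mean, stddev = self.baseline[user]
        if stddev == 0:
            return "allow"
        z = abs(observed_rate - mean) / stddev
        return "block" if z > self.threshold else "allow"


monitor = BehaviourMonitor()
monitor.set_baseline("alice", mean=20.0, stddev=5.0)
print(monitor.assess("alice", 22.0))   # within normal behaviour
print(monitor.assess("alice", 95.0))   # far outside the baseline
```

In the ‘big brain’ model described above, the "block" decision would be pushed to enforcement points such as an IPS or an endpoint agent rather than just raising an alert.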
IoT. I am waiting (note I don’t want there to be one!) for a serious incident in this space. Not just the DDoS stuff, but actual direct harm to people from the hacking of cars or medical equipment. This will shortly be followed by a LOT of knee jerk regulation. No idea if this will happen in 2017 or later, but unless something fundamental changes in how the devices under the wide IoT umbrella are developed, deployed and managed, it will happen.
As a side note, we should stop just referring to IoT and start prefixing it with what we are actually referring to, in the same way as you have SaaS, IaaS, GovCloud etc. etc. for cloud ‘things’. IoT is far too broad, and also has far too many different applications that will have vastly different security implications and requirements.
Blockchain. Like IoT, no predictions list would be complete without something blockchain in it. We are already seeing blockchain use cases expanding from currency to DRM and music management etc. This will continue; it’s very much in the ‘hype cycle’ at the moment with everyone rushing to be at the front with use cases and ‘thought leadership’. It would be great to see some really beneficial use cases – could a blockchain be used to track and guarantee that charity finances or food or medical supplies went to the right people?
Automation. Combine environments that are becoming more complex and more dynamic (think DevOps, agile, containers, cloud etc.), increasing numbers of attacks, along with the much reported skills shortage and you have a perfect storm! Automation will be key for organisations to stay secure. Automating more of the basic security tasks will also enable better careers for the SecOps guys – they will have more time to focus on more advanced security issues and hunting for threats etc.
Simplification. In a similar vein to the above, simplification must be a key strategy. I’m talking from a security perspective, but this generally makes sense as well! How many security conversations have started or ended talking about implementing a tool / solution? We should be having more conversations about how we can rationalise the tooling we use. How we can meet the security requirements of our organisation with the minimum set of tools and processes, and thus with the maximum simplicity.
Likely millions of things will happen, that we can’t predict, but these are the current themes I am thinking about.
It would be great to hear your thoughts on the key security themes for 2017!
Hopefully it is fairly obvious from the last couple of posts how I think a mobile application can be made ‘secure enough’ to replace hardware security devices and enable many other capabilities from mobiles / tablets etc. However I thought it may be useful to provide an overview of how the detailed components will work together to provide this capability.
Many organisations such as banks have launched, or are launching, payment applications that enable you to make payments with your phone rather than needing your bank card, and of course there are Apple Pay and Samsung Pay etc.
So it’s clear people are becoming comfortable with mobile devices for some use cases, sometimes purely software, sometimes with hardware components involved such as Knox or TEE (Trusted Execution Environment). This is likely helped by the rise of ‘contactless’ payments in many parts of the world.
While hardware components and secure operating system components can form part of a secure mobile application solution, they are by no means a silver bullet. As you still need some part of the application to run in normal, untrusted space, you still face the same problems as if there were no hardware solution in place. What is to stop a malicious application attempting to man-in-the-middle the communications between the secure and insecure environment? Indeed what is to stop a malicious application from just impersonating the secure component to the insecure one?
Hardware based solutions also face challenges around support and different capabilities on different devices.
This is why I have focussed on a software only proposal.
If we get to the point where we can trust and monitor a software only solution, this opens up so many possibilities – as long as you are on a supported O/S version, you can run our secure application(s) on any device, anywhere.
While we have the above mentioned payment applications, there are much wider use cases when we get to the point that we really do trust the mobile application. I mentioned some of these in my original post on this topic.
As a recap, these were;
Become your payment instrument. Not like Apple Pay that still uses your card in the background, but actually being your card(s).
This can also provide a much richer user experience such as alerting the user every time there is a transaction on the ‘card’
Take payments in stores without the need for a physical card payment solution.
EMV (chip and pin) becomes EMV mobile devices and PIN / other
Replace your drivers license / passport / age card etc. as a valid form of ID.
Enable secure signing of legal / contractual documents.
Combine with technology like RFID and GPS etc. to revolutionise the retail experience.
‘Card not present’ becomes ‘card present’ (the end of ‘Card not present’ fraud!)
Secure mobile banking becomes actually secure and fully featured
Support (or deny) any disputed transactions by providing more detailed information about the device, location and users involved
Become your mobile medical record – no longer do doctors or hospitals have to look up your records (or not find them), you carry a copy with you, that syncs from the central repository when it is updated
I am sure you can think of many others!
So how do the previously detailed components all come together to provide a secure, monitored environment?
In ‘real time’ there are 5 main components;
The mobile app
Secure decision point
Real time risk engine
Authentication solution
Monitoring
The mobile application – this comprises all of the security components deployed to the mobile device, along with the actual application capabilities of course! These components are the key to understanding the security status of the device. They also provide details of behaviour, from things like location to the user’s activity, and authentication information. These components have the responsibility for securing and monitoring the device and user behaviour, plus ensuring this data and telemetry is securely provided to the secure decision point and monitoring services.
The secure decision point provides a central (resilient of course!) control point for all application traffic to pass through. This enables relevant data to be passed to the correct components such as the risk engine and monitoring solution(s). In addition this provides an added layer of protection for the back end application services. Any time the application or user behaviour is deemed unacceptable, the connection can be blocked before it even reaches the back end services.
Real time risk engine enables risk based decisions to be made based on the information from the other security components. The secure decision point, authentication solution and ‘external’ source like threat intel and the big data platform all feed the risk engine. This can be applied to many activities including authentication, user behaviours, and transactions.
Authentication does what the name implies – it authenticates the user, and likely, to at least some extent, the device. The difference between this and ‘traditional’ authentication is that, as well as authenticating at logon and supporting multiple factors and types of authentication, it can authenticate constantly in real time. Every time the application is used, information about the device, location, user behaviour etc. is passed to the authentication solution, enabling authentication decisions to be made for any application activity. In addition to providing rich risk information for the risk engine this also enables fully authenticated transactions.
Monitoring refers in this case to security monitoring of the system components and their data. This provides expert analysis and alerting capabilities to augment the automated processes of the risk engine, authentication solution and secure decision point. This may be internal staff, a dedicated SoC (Security Operations Centre), a dedicated mobile security monitoring centre, or a combination of multiple options.
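As a rough illustration of how a real time risk engine could combine the signals from these components, here is a small weighted-scoring sketch. The signal names, weights and thresholds are all assumptions made for the example; a real engine would tune these from observed outcomes and the big data platform.

```python
def risk_score(signals, weights):
    """Combine normalised risk signals (0.0 = benign, 1.0 = high risk)
    into a single weighted score. Signal names and weights here are
    illustrative assumptions, not any product's real model."""
    total_weight = sum(weights.values())
    return sum(weights[name] * signals.get(name, 0.0)
               for name in weights) / total_weight

def decide(score, block_at=0.7, step_up_at=0.4):
    """Map a score to an action: allow, step-up authentication, or block."""
    if score >= block_at:
        return "block"
    if score >= step_up_at:
        return "step_up_auth"
    return "allow"

# Signals fed from the mobile app, authentication solution and threat intel
weights = {"device_integrity": 0.4, "location": 0.2,
           "behaviour": 0.3, "threat_intel": 0.1}

# A known device in a usual location, behaving normally
low = risk_score({"device_integrity": 0.1, "location": 0.0,
                  "behaviour": 0.1, "threat_intel": 0.0}, weights)
# A rooted device seen from an unusual country
high = risk_score({"device_integrity": 0.9, "location": 0.8,
                   "behaviour": 0.6, "threat_intel": 0.5}, weights)
print(decide(low), decide(high))
```

The "step_up_auth" outcome is where the continuous authentication component comes in: rather than simply blocking, the secure decision point could challenge the user for an additional factor before letting the transaction proceed.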
As you can see, all these components combine to provide an understood and secure environment on the mobile device, backed up by real time monitoring, risk based decisions and authenticated activities.
These ‘real time’ components are further backed up by external feeds from intelligence sources, and by analytics performed in the big data platform. This enables learning from the behaviour of users and devices in the environment so that the risk based rules and manual alerting can be refined based on previous and current activities and outcomes.
Depending on a combination of the security requirements for your application, and the resources available, you may not need or want to implement every component here. Overall the detailed environment provides a software only solution that is capable of providing enough security to enable pretty much any activity. I’d love to hear your thoughts, and any experiences of deploying and proving secure mobile applications!
I was reticent to write this post as it could turn into buzzword bingo, and who needs a post suggesting yet another acronym?
However I have been thinking recently that SIEM needs to expand, and the term seems to always get people stuck thinking of traditional / historical SIEM, not where it should be going.
Traditionally SIEM systems collect and analyse ‘security’ events. Now this is awesome if the attacker or malicious insider triggers a ‘security’ event. What if they don’t? The whole issue around the much discussed Advanced Persistent Threat (APT) type of attack is that the attackers have time, money and resources to ensure they do not trigger obvious security events.
In order to detect and understand the more subtle attacks, or those hidden amongst other attacks (such as when a large DDoS is used as a diversion), we need much broader and more in-depth sources of data and correlation abilities than traditional SIEM installations provide.
Consider malware installed under the context of an administrator that is not picked up by AV (this is easier than you think) then hides itself from general detection. The ops guys may notice an increase in CPU or RAM use on the server, but without the security viewpoint are unlikely to consider root-kit type malware.
Consider data being exfiltrated relatively slowly. Increases in network traffic that are not related to a change, but also cause no performance issues, are very likely to be overlooked if only considered from an operational perspective; viewed from a security standpoint, however, the same data may warrant further investigation.
Consider data moving between systems where it would not normally move, or accounts logging on at unusual times or from unusual places – these may not generate specific security alerts, but can be much more easily spotted and flagged by a log correlation solution that sees everything in the environment.
To me the answer is obvious and has much wider benefits than just for security. SIEM solutions should no longer be in a silo collecting just security data, and operational log collection systems shouldn’t be just for IT operations. A single solution that collects basically all the logs and other pertinent information into some sort of ‘big data’ redundant and scalable storage back end (likely Hadoop based) will provide huge benefit to the organisation.
If the raw log data is also enriched with contextual information such as the CMDB, network information, threat feeds etc., the alerting can move from generic alerts to being much more organisation specific, and prioritised based on the real risk.
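To show what that enrichment could look like in practice, here is a toy sketch. The CMDB entries, threat-feed IP and priority rules are entirely made up for illustration; the point is the shape of the pipeline, raw event in, context attached, risk-based priority out.

```python
# Toy CMDB: asset context looked up per host (illustrative data)
CMDB = {
    "db-prod-01":  {"owner": "payments", "criticality": "high"},
    "dev-test-07": {"owner": "dev",      "criticality": "low"},
}
THREAT_IPS = {"203.0.113.50"}   # illustrative threat-feed entry

def enrich(event):
    """Attach asset context and threat-feed matches to a raw log event,
    then derive an alert priority from the combination."""
    asset = CMDB.get(event["host"], {"criticality": "unknown"})
    enriched = dict(event,
                    criticality=asset["criticality"],
                    known_bad_ip=event.get("src_ip") in THREAT_IPS)
    if enriched["known_bad_ip"] and enriched["criticality"] == "high":
        enriched["priority"] = "P1"   # bad source hitting a critical asset
    elif enriched["known_bad_ip"] or enriched["criticality"] == "high":
        enriched["priority"] = "P2"
    else:
        enriched["priority"] = "P3"
    return enriched

alert = enrich({"host": "db-prod-01", "src_ip": "203.0.113.50",
                "msg": "failed admin logon"})
print(alert["priority"])   # failed admin logon on a critical host from a bad IP
```

The same raw event on a low-criticality dev box from an unknown IP would come out as a P3, which is exactly the generic-to-risk-based shift described above.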
Logical separation (and physical if required) along with access controls and agreed roles and responsibilities can be used to ensure that different teams only have access to the data and reports they should, and cannot access data they are not supposed to.
Having a single tool for operations, security and likely business reporting is architecturally more simple, easier to support, and likely lower cost than having multiple tools.
So, the solution is obvious to me, but should it still be called SIEM? I think the security use case of the single log collection solution is likely still SIEM, but on steroids as it has so much more data to correlate and search across and likely much more powerful ways of doing this. However it must not be looked at in isolation and we have to get away from the outdated notion of just collecting and alerting on ‘security’ events.
As an example, I was at a presentation recently around big data and SIEM and they did not once mention the broader use cases and benefits; the talk focused purely on the traditional SIEM model, just with more data.
What do you think? Do we need a new term, and if not, how do we move people’s thinking forward and away from only thinking of SIEM in traditional terms?
The diverse and rapidly changing set of both structured and unstructured data can play a key role in identifying the increasingly sophisticated threats that organisations face.
Move from reactive to a more proactive stance by actively searching for indicators that something could be amiss.
As an example, the attacks earlier this year on the New York Times after it ran a story about China’s prime minister;
Not detected for 4 months
45 different pieces of malware were used, with only 1 being picked up by AV
All employee passwords stolen
Computers of 53 employees accessed
University computers were used as proxies to hide the traffic source.
We have a greater need for security intelligence;
Vulnerabilities / risks
Security and threat feeds
Baselines of behaviour (system and user)
Unstructured data such as free text user inputs, feeds from social media, general news sources etc.
Attackers are continuously adapting to leave minimal trace and hide their behaviour in the noise of ‘normal’ activity. Due to the potentially huge volumes of data, these systems must be very scalable.
Traditionally SIEM type solutions have focussed on real time alerting that is Proactive, Formalised (standard queries / searches) and fast. This is great, but can it be in depth enough, and is real time alerting always required when searching for long term APT style attacks?
Move towards adding more Asymmetric / Forensic type capabilities that are more Predictive, Inquisitive, and in depth. These require considerably more skill and in depth understanding to create, and the searches will be much more ‘custom’, but this is the best (only?) way to find the subtle and clever attackers, especially if doing so in a timely manner is required (it is!).
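As one example of the more inquisitive, not-real-time hunting described above, consider the slow exfiltration scenario from earlier: outbound volume creeping up without any related change. Here is a minimal sketch of a rolling-baseline hunt; the window size and deviation factor are assumptions, and a real hunt would also segment by destination and exclude known change windows.

```python
from statistics import mean, stdev

def hunt_slow_exfil(daily_bytes, window=7, factor=2.0):
    """Flag days where outbound volume rises above a rolling baseline.

    Compares each day against the mean + factor * stddev of the
    preceding `window` days. Parameters are illustrative only.
    """
    flagged = []
    for i in range(window, len(daily_bytes)):
        base = daily_bytes[i - window:i]
        m, s = mean(base), stdev(base)
        if daily_bytes[i] > m + factor * s:
            flagged.append(i)
    return flagged

# Gigabytes out per day: steady, then a subtle sustained increase
traffic = [10, 11, 10, 12, 11, 10, 11, 10, 11, 19, 20, 21]
print(hunt_slow_exfil(traffic))   # indices of the suspicious days
```

Note that once the elevated level persists it becomes part of the baseline (the last day is not flagged), which is exactly why this sort of hunt needs analyst follow-up rather than purely automated alerting.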
Current SIEM type security processes may look like;
This has a heavy focus on structured data and performing real time correlation to get to a potential incident to investigate.
Moving more into the ‘big data’ world we will enrich this with a lot more data sources, much of it unstructured;
This will potentially also take outputs from the traditional SIEM tool as one of the feeds and enrich them with other data. An example may be where something that may be an issue, but where there isn’t enough detail to act on in the SIEM, this could be added to the ‘big data’ solution and correlated with a much wider data set to find out if it could be a real issue.
The top part of the above diagram (Real-time Processing and Security Operations) is relatively similar to existing SIEM solutions, focusing on real time analysis and processing, just with a potentially larger data set.
The bottom right (Big Data Warehouse, Big Data Analytics and Forensics) focuses on the much more advanced, not real time analysis and forensic type investigations.
Context is key.
You must be able to derive security relevant semantics from elements of the raw data.
There must be the capability to distil the huge volumes of data down to useful and real insights.
Human knowledge must be able to be added to the solution to improve processing and automate more tasks.
Some key security questions a big data analysis solution will help your organisation answer include;
Another key area these tools can help with is in creating visualisations of attacks and suspicious behaviour. As they will have data from all the systems in the enterprise, along with various external feeds, they can provide visual representations of the behaviour as it moves into and through the organisation.
For me the key consideration is to have one ‘Big Data’ solution that collects all the relevant data for your organisation from traditional log files, through corporate emails to social media and threat feeds.
This also needs to move out of the security realm as people are talking ‘Big Data’ but in reality still have the traditional SIEM mindset. Running a tool like this for security, while the ops guys are also running logging and monitoring tools is massively wasteful in terms of cost, storage, management overhead, and also likely results in situations where some useful information only ends up in one tool, not both.
We need to move forwards to the mindset of an Enterprise ‘Big Data’ solution for storing and correlating ALL the business data – logs, emails, external sources, user and system behaviours etc. etc. This solution then has different dashboards, reporting solutions, search heads or whatever for the different use cases such as ops, security, and business users (system performance, investigating transaction issues etc.). Obviously areas like separation of duties and access controls must be considered here, but I believe this type of solution is the only way for this to really succeed and provide the best value for the business.
On Monday I attended RSA’s first UK Data Security Summit at the Barbican. Unsurprisingly this event had two main focuses;
– ‘Big Data’ – What it is, what it means to businesses and security, and how security can leverage it to look for anomalies and advanced threats.
– Security analytics – The relatively new RSA log correlation and analysis product.
The agenda from RSA was listed as;
Big data and the hype
The changing threat landscape
Cyber criminals, nation states, activists and terrorists
Balancing risk of attack and prevention against ability to perform key tasks
As with my recent Splunk Live! post, the below will be relatively unformatted, but hopefully still of use.
The day started with some keynote talks from Art Coviello, Eddie Schwartz and Andrew Rose;
Art Coviello – Intelligence driven security: A new model using big data
Art’s talk focused on the rapid changes to the IT environment over the last few years, with predictions for the future as well, then moved into the historic and current security model and what this needs to look like in the future.
70’s – terminals – 1000s users
90’s – PCs – millions users
2010 – Mobile Devices – billions users
2007 – 1/4 Zettabyte
2013 – 2 Zettabytes
2020 – 100 Zettabytes
5× more unstructured than structured data, and growing 3× faster.
2007 – web front end apps
2013 – There’s an app for that
2020 – big data apps everywhere..
2007 – Smart phones
2013 – dawn of really smart phones and smart phone / tablet ubiquity
2020 – Internet of things (everything from fridges to coke machines as well as all the usual phone / pc / tablet etc devices)
2007 – MySpace
2013 – Focus on monetizing
2020 – Total consumerisation of social media: absence of privacy..
2007 – holes
2013 – is there a perimeter?
2020 – no direct control over physical infrastructure..
2007 – Complex intrusion attacks
2013 – Disruptive attacks – can’t launch physical attacks over internet yet, but can be very disruptive
2020 – Destructive attacks? with no physical / user interaction required?
Historic security model;
Static / signature based
Firewall, IDS, AV etc – all reactive, don’t play together or support each other
Dynamic / agile
leveragable / contextual
Look for anomalies, be more heuristic / intelligent, work together – correlate events across the enterprise
Impediments to change;
Budget inertia: reactive model
70% on prevention (likely more like 80% in many firms)
20% Detection and monitoring
Skilled Personnel shortage
Information sharing at scale – industry groups, sharing data of attacks and breaches etc at ‘wire speed’
Some commentary about Archer, Silver Tail etc. that RSA has bought or invested in
Look at security maturity model;
Stage 1 – Unaware (wish security would go away, install a box to fix it all)
Stage 2 – Fragmented (compliance gathering – focus on box ticking to get compliance rather than doing security right)
Stage 3 – Top Down (security understood but driven from management down, not yet pervasive)
Stage 4 – Pervasive (good security team, work with c-level on budgets etc)
Stage 5 – Networked (working across the business and integrated with the business)
Big data transforms security;
Scalable to analyse all data
generates a mosaic of information
enables view of attacks in real time
Need this detailed analysis in order to prevent / see sophisticated attacks such as man in the middle and man in the browser
Intelligence driven security needs to be resilient, feed into controls and in and out of GRC stack (grc feeds into and educates controls. controls feed into GRC to confirm compliance)
Eddie Schwartz – Embracing the uncertainty of advanced attacks with big data
PECOTA forecasts – analytics platform used by bookies to work out odds on sports / sports players – baseball – see the movie Moneyball.
– ‘big data analytics’ changed the way baseball players were assessed and consequently paid..
Facebook data mines images as well as text on your page to drive targeted advertising
Amazon etc. – preference engine – you bought this, you want these..
* They are information rich and using high quality analytics. Why are we not using data like this in security?
Why? – too much time having to say yes we are ok, yes we pass xx audit..
Attackers do not have these checklists – they will work hard to breach any opening regardless of whether you are compliant with whatever regulation..
Read ‘the signal and the noise‘ – Nate Silver – why so many predictions fail and some don’t.
The signal is truth, the noise is what distracts us from the truth.
How much do we really know about our adversaries?
Are we researching the tools, techniques and processes of our adversaries
Do we know who they are?
Insiders, hackers, hactivists, criminal organisations, nation states etc.
Do we know what they look like?
Old world (SIEM) – finite, rule sets, wait for rule to be breached
New world – infinite – unknown unknowns, uncertainty, hackers may look like legitimate users – what signs can we look for to identify them?
Do we understand the ‘Kill Chain’ – Prepare, Infect, Interact, Exploit
Cost to remediate goes up dramatically as you move along the chain
detection sweet spot – when they first exploit / attempt to exploit – they have to reveal themselves, so fast detection here will catch / prevent before data exfiltration.
Need to move to more spend and more intelligence on ‘internal’ protection / detection / capture – away from the traditional perimeter.
What are your drivers for IT security investment?
34% compliance, 16% audit
ONLY 6% strategy!
Big data transforms security – 4 areas for shift..
Comprehensive visibility – not just event logs – what are my critical processes, what information do I need to see to understand if they are at risk.
Actionable intelligence – must be available in a timely manner
Agile analytics – security environment must be able to change as the environment changes – your environment is at least somewhat unique, also threat landscape changes
Centralised incident management – can security teams follow an incident from end to end? – many point solutions.. Do logs all go to one place, can they be effectively analysed?
2. Intelligence driven security
Ad-hoc – Bystander – End User – Creator; Crawl – Walk – Run – Advanced – World Class
Monitoring and detection, incident response, threat intelligence, systems and analytics; Where should we be – risk based – do you need to be world class in everything? Where do we need to focus, what are our risks?
Critical Incident Response Centre (CIRC) – Cyber threat intelligence, Advanced tools, tactics and analysis; Critical Incident response team, Advanced specialists
read – Big data fuels intelligence driven security – RSA white paper
US – Data sharing bill – both businesses and liberal groups have objected.
how to share without compromising privacy.
criminals already violating our privacy every day
who should protect our privacy – benign government, corporations, criminals?
laws protecting customer privacy can make it hard not to breach laws protecting employee privacy in the EU?
Andrew Rose – principal analyst – security and risk management – Forrester – ‘An external perspective’
Information classification – how mature
26% have a policy that’s widely ignored, 28% have a policy for some data or systems..
The world we live in (largely as previous presentations)
Increasingly capable attackers (threat is real – activists, china etc..)
Budgets relatively static or slow growth, enough for triage of known issues, not whole treatment and improving security posture.
ROI – hard to define / prove – if not breached are we good or just lucky. No good model seems to exist yet.
Yes rather than no security culture – have to work with business and enable – increase risk and complexity to deal with, but not necessarily staff and budget..
Competitive recruitment environment
Even the best firms have flawed security – e.g. RSA breach – have to prepare to fail!
Forrester and IBM reports have IT at the top of the list of most important reasons for business success.
However business and IT (business especially) do not rate the success / competency of IT very highly – not agile, can’t accommodate change, can’t deliver projects on time etc.
RSA yearly IT security challenges included;
Third highest issue (76%) – changing business priorities
Fourth (74%) – day to day tasks taking too much time
8th (55%) lack of visibility of security – fixing this one will likely improve the other issues a lot.
adoption of ISO / COBIT etc. is not helping – these issues keep getting higher up the scale
Business innovation does not slow down because of security threats…
Complexity vs. manual ability – can better analytics help?
Vendors – vendor space is buzzing..
security commercialisation is in full swing
But what are the differentiators – everyone uses the same buzzwords to sell products (e.g. big data, threat intelligence etc.)
need innovation, not re-hash or updates
services, not more hardware
how many products required to ‘solve’ security
what do I need now
what order should I buy them
what is the value / roi?
how much resource does it take to manage?
too many niche products – e.g. IAM, remove admin rights etc. Need a ‘BIG’ tool / solution, to solve many / most issues and integrate existing products / solutions.
5% get great value, 30% have not implemented, 65% get little or limited value
So is Big data the solution?
Big data just means lots of high velocity, structured and unstructured data – it is there to be used – so it is what you do with it that counts, not the data in itself (my comment, not the speaker’s)
supply chain complexity
internet of things
For me same conclusion as before – need something to aggregate and bring all the data together from apps, security tools, systems and then analyse it. intelligent, fast correlation – look for real connections and real relationships – be mindful of coincidences in the noise.
2 books – Antifragile and The Signal and the Noise.
Common pitfalls –
starting with the data – need context and understanding as well.
overlooking the value of metadata. data tagging increases value of data
believing more data is better
think simplicity and actionability
Take away points;
Understand and identify your data
information classification is key – get this accepted and rolled out across the business
Be ‘hypothesis-led’ – think of what you could do, not just what you know – then see if you can find the data to achieve it
Look for business partners for any big data initiative – again – one engine / DWH etc.
I’ll complete my write up of the day shortly, I hope you’re finding it useful.
I attended the Splunk Live! London event last Thursday. I am currently in the process of assessing Splunk and its suitability as a security SIEM (Security Information and Event Management) tool in addition to a general data collection and correlation tool. During the day I made various notes that I thought I would share; I’ll warn you up front that these are relatively unformatted as they were just taken during the talks on the day.
Before I cover off the day, I should highlight that I use the term SIEM to relate to the process of Security Information and Event Management, NOT SIEM ‘tools’. Most traditional tools labelled as SIEM are inflexible, do not scale in this world of ‘big data’ and are only usable by the security team. This for me is a huge issue and waste of resources. SIEM as a process is performed by security teams every day and will continue to be performed even when using whatever big data tool of choice.
The background to my investigating Splunk is that I believe a business should have a single log and data collection and correlation system that gets literally everything from applications to servers to networking equipment to security tools’ logs / events etc. This then means that everyone from Ops to application support, to the business to security can use the same tool and be ensured a view encompassing the entire environment. Each set of users would have different access rights and custom dashboards in order for them to perform their roles.
From a security perspective this is the only way to ensure the complete view that is required to look for anomalies and detect intelligent APT (Advanced Persistent Threat) type attacks.
Having a single tool also has obvious efficiency, management and economies of scale benefits over trying to run multiple largely overlapping tools.
Onto the notes from the day;
Volume – Velocity – Variety – Variability = Big Data
Machine generated data is one of the fastest growing, most complex and most valuable segments of big data..
Real time business insights
Search and investigation
Enables move from ‘break fix’ to real time operations insight (including security operations).
GUI to create dashboards – write queries and select how to have them displayed (list, graph, pie chart etc.); can move things around on the dashboard with drag and drop.
Dev tools – REST API, SDKs in multiple languages.
More data in = more value.
My key goal for the organisation – One log management / correlation solution – ALL data. Ops (apps, inf, networks etc.) and Security (inc PCI) all use same tool with different dashboards / screens and where required different underlying permissions.
Many screens and dashboards available free (some like PCI and Security cost) dashboards look and feel helps users feel at home and get started quickly – e.g. VM dashboards look and feel similar to VMware interface.
Another example – a Windows dashboard – created by Windows admins, not Splunk – all the details they think you need.
Exchange dashboard – includes many Exchange details around message rates and volumes etc., and also things like outbound email reputation.
VMware – can go down to specific guests and resource use, as well as host details (file use, CPU use, mem use etc.).
Can pivot between data from VMware and email etc. to troubleshoot the cause of issues.
These are free – download them from Splunkbase.
Can all be edited if not exactly what you need, but they are at least a great start.
Developers – from tool to platform – can both support development environments and be used to help teach developers how to create more useful log file data.
Security and Compliance – threat levels growing exponentially – cloud, big data, mobile etc. – the unknown is what is dangerous – move from known threats to unknown threats.
Wired – the internet of things has arrived, and so have massive security threats
Security operations centre, Security analytics, security managers and execs
Look for anomalies – things someone / something has not done before
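The "things not done before" idea boils down to baselining normal behaviour and flagging deviations. A toy sketch of that pattern (my own illustration, not how Splunk itself implements it) using per-host event counts and a z-score threshold:

```python
import statistics

def flag_anomalies(history, latest, threshold=3.0):
    """Flag hosts whose latest event count deviates from their own
    historical baseline by more than `threshold` standard deviations.
    A toy stand-in for 'look for things not done before'."""
    anomalous = []
    for host, counts in history.items():
        mean = statistics.mean(counts)
        stdev = statistics.pstdev(counts) or 1.0  # avoid divide-by-zero
        if abs(latest.get(host, 0) - mean) / stdev > threshold:
            anomalous.append(host)
    return anomalous

history = {
    "web01": [100, 110, 95, 105, 98],
    "db01": [20, 22, 19, 21, 20],
}
latest = {"web01": 104, "db01": 400}  # db01 is suddenly very chatty
print(flag_anomalies(history, latest))
```

Real behaviour analysis is obviously far richer than a single z-score, but the baseline-then-deviate shape is the same.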
can do things like create tasks, take ownership of tasks, report progress etc.
When drilling down on issues there are contextual pivot points – e.g. right click on a host name to do an asset search, Google search, drill down into more details etc.
Even though it costs, like all dashboards it is completely configurable.
Splunk App for PCI compliance – Continuous real time monitoring of PCI compliance posture, Support for all PCI requirements (12 areas), State of PCI compliance over time, Instant visibility on compliance status – traffic lights for each area – click to drill down to details.
Security prioritisation of in-scope assets
Removes much of the manual work from PCI audits / reporting
Application management dashboard
Splunk can do maths – e.g. what is the average stock price, or how many users were on the web site in the last 15 minutes?
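For the "how many users in the last 15 minutes" style of question, the underlying computation is a simple windowed aggregation. In Python terms (a stand-in for the equivalent Splunk stats search, not Splunk syntax):

```python
from datetime import datetime, timedelta

def users_in_last(events, now, minutes=15):
    """Count distinct users seen in the trailing window - the sort of
    question a distinct-count stats search answers in Splunk."""
    cutoff = now - timedelta(minutes=minutes)
    return len({user for ts, user in events if ts >= cutoff})

now = datetime(2013, 4, 18, 12, 0)
events = [
    (now - timedelta(minutes=5), "alice"),
    (now - timedelta(minutes=10), "bob"),
    (now - timedelta(minutes=10), "alice"),  # repeat visit, counted once
    (now - timedelta(minutes=40), "carol"),  # outside the 15-minute window
]
print(users_in_last(events, now))  # 2
```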
Real time reporting on impact of marketing emails / product launches and changes etc.
For WP – reporting on transaction times, points of latency etc. – enables focus on slow or resource intensive processes!
hours / days / weeks to create whole new dashboards, not months.
Links with Google earth – can show all customer locations on a map – are we getting connections from locations we don’t support, where / what are our busiest connections / regions.
Industrial data and the internet of things; airlines, medical informatics (electronic health records – mobile, wireless, digital, available anywhere to the right people – they were used to putting pads down, so things didn’t get charged – Splunk identified this).
Small data, big data problem (e.g. not all big data is actually a massive data volume; it may be complex, rapidly changing and difficult to understand and correlate between multiple disparate systems).
Barclays – 10TB of security data a year.
HPC – 10TB a day
Trading – 10TB a day
VM – >10TB a year
All via Splunk.
DataShift – social networking ‘ETL’ with Splunk. ~10TB of new data a day.
Afternoon sessions – Advanced(ish) Splunk.
– Can create lookup / conversion tables so log data can be turned into readable data (e.g. HTTP error codes read as page not found etc. rather than a number) This can either be automatic, or as a reference table you pipe logs through when searching.
– As well as GUI for editing dashboards, you can also directly edit the underlying XML
– Can have lots of saved searches, should organise them into headings or dashboards by use / application or similar for ease of use.
– Simple and advanced XML – simple has menus, drop downs, drag and drop etc. Advanced requires you to write XML, but is more powerful. The advice is to start in simple XML, get the layout, pictures etc. sorted, then convert to advanced XML if any more advanced features are required.
– Doughnut chart – like a pie chart with inside and outside layers – good if you have a high level grouping, and a lower level grouping – can have both on one chart.
– Can do a rolling, constantly updating dashboard – built in real time option to refresh / show figures for every xx minutes.
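The lookup / conversion tables mentioned above are conceptually just a mapping applied to events at search time. In Python terms (the status codes below are standard HTTP; the pattern as a Splunk analogy is my own illustration):

```python
# A minimal stand-in for a Splunk lookup table that maps raw HTTP
# status codes to human-readable descriptions at search time.
HTTP_STATUS_LOOKUP = {
    "200": "OK",
    "301": "Moved Permanently",
    "404": "Page Not Found",
    "500": "Internal Server Error",
}

def enrich(log_events):
    """Pipe raw events through the lookup, keeping the original code
    for anything the table does not cover."""
    return [
        {**event,
         "status_desc": HTTP_STATUS_LOOKUP.get(event["status"], event["status"])}
        for event in log_events
    ]

events = [{"host": "web01", "status": "404"},
          {"host": "web02", "status": "418"}]
print(enrich(events))
```

In Splunk the same enrichment happens either automatically or by piping a search through a lookup, as the note above describes.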
gives HA, gives fidelity, may speed up searches
Advanced admin course;
can accelerate a qualifying report – more efficiently run large reports covering wide date ranges
must be in smart or fast mode
Lots of free and up to date training is available via the Splunk website.
Splunk for security
Investigation / forensics – Correlation, fast to root cause, look for APTs, investigate and understand false positives
Splunk can have all original data – use as your SIEM – rather than just sending a subset of data to your SIEM
Unknown threats – APT / malicious insider
“normal” user and machine data – includes “unknown” threats
“Security” data or alerts from security products etc. are “known” security issues – this misses many issues.
Add context – increases value and chance of detecting threats. Business understanding and context are key to increasing value.
Get both host and network based data to have best chance of detecting attacks
Identify threat activity
what is the modus operandi
who / what are most critical people and data assets
what patterns and correlations of ‘weak’ signals in normal IT activities would represent abnormal activity?
what in my environment is different / new / changed
what deviations are there from the norm
Sample fingerprints of an Advanced Threat.
Remediate and Automate
Where else do I see the indicators of compromise
Remediate infected systems
Fix weaknesses, including employee education
Turn the Indicators of Compromise into real time search to detect future threats
– Splunk Enterprise Security (2.4 released next week – 20-something April)
– Predefined normalisation and correlation, extensible and customisable
– F5, Juniper, Cisco, Fireeye etc all partners and integrated well into Splunk.
Move away from talking about security events to all events – especially with advanced threats, any event can be a security event.
I have a further meeting with some of the Splunk security specialists tomorrow so will provide a further update later.
Overall Splunk seems to tick a lot of boxes and certainly taps into the explosion of data we must correlate and understand in order to maintain our environments and spot subtle, intelligent security threats.
This week I am at the Cloud Security Alliance (CSA) congress in Orlando. The week has been pretty hectic with meeting people and receiving an award etc. I have made some notes from a few of the talks so will share those here, although they are not as comprehensive as the notes I made at the RSA conference a few weeks ago.
Regarding the conference itself, this has been a bit of a busman’s holiday as I have had to take this week as annual leave due to it not being directly linked to my current day job and the fact it’s my third conference in a couple of months. On a brighter note the CSA actually paid for me to come out here to receive my award, which was an extremely cool gesture.
In terms of organisation and content this one falls somewhere between the Service Technology Symposium and the RSA conference, but much nearer the RSA end of the scale. The conference is obviously a lot smaller than RSA, but was surprisingly well organised. Content was also pretty good; a few too many vendor product focussed talks for my liking, but this is a new conference that has to be financially viable as well as interesting. Overall I would definitely recommend coming to this next year if you have any interest in cloud security.
As with the previous conferences I’ll split the day’s notes into a couple of posts in order to get these up now rather than waiting until I get home and find time to write things up, so please be understanding if some of them are not perfectly formatted or as fully explained as they could be. I will be creating more detailed follow up posts for some of the key issues that have been discussed.
Opening Keynote 1 – The world is changing; we must change with it!
– What do you do if you have a security incident in a faraway country? Your law enforcement / government has no jurisdiction. eBay has directly indicted over 3000 people globally through its security / incident response and investigation teams.
– Have to create capabilities to share vital information globally
– Computation is changing
Exponential data growth and big data
– Adversary is professional, Global and Collaborative
We are all fighting alone
– Threat continues to increase
– Business environment is changing
– Change the way you think!
Can we make attack data anonymous enough that it can be shared in a meaningful way to help others and improve overall understanding and security?
– Look at things like CloudCert
Computing is changing;
– Cloud computing is just the beginning
Shared datacentres, networks, computers etc..
– Driven by cost savings and need to be competitive in a global marketplace
– Virtualisation – Mobile – BYOD (explosion of devices)
– Increasing reliance on Browser
Secure Browser ‘App’ vs. URL (Apps vs. things like HTML5)
Do we start building Apps / Browsers dedicated to specific critical / risky tasks, such as banking or online shopping with card details? This would stop XSS.
Exponential data growth – Big data
– In 2010 humanity’s data passed 1 zettabyte (1 with 21 zeros after it).
– Estimated volume in 2015 – 7.9ZB
– The number of servers is expected to grow by 10× over the next 10 years.
Malware 26M in 2011 – 2.166M/mo. – 71,233/day. 73% Trojans.
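Those per-month and per-day figures are just the quoted 26M annual total divided down, which is easy to sanity-check:

```python
total_2011 = 26_000_000  # malware samples quoted for 2011

per_month = total_2011 / 12
per_day = total_2011 / 365

print(round(per_month))  # 2166667 - i.e. ~2.17M a month
print(round(per_day))    # 71233 a day, matching the quoted figure
```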
Application lifecycle – how long will the legacy apps you use be around?
First attacks on O/S
First mobile drive by downloads
Malicious programs in App stores
First mass Android worm
– Attacks built in the Cloud are invisible, and inexpensive
Role of cloud providers in detecting attack development – what are the implications of this – to prevent attacks CSPs would need some visibility around what you are doing.. Would you want this?
Business Environment Changes
– Drive to innovate
Scrums, agile computing initiatives change the way we work
Security needs to work in a more agile way
– Rapid delivery of features and functions
Build securely – not build and test
– Impact of Intense, Global competition
– SMBs are the foundation of US recovery but need help
– Blurring of home/personal and work
Six Irrefutable Laws of Information Security;
Information wants to be free
Code wants to be wrong
Services want to be on
Users want to click
Even a security feature can be used for harm
The efficacy of a control deteriorates with time
The implications for Cloud Security, shared infrastructures and platforms, virtualisation, the proliferation of mobile devices etc. are clear..
Even small or seemingly less interesting companies are now targets – criminals want as much information as they can get. Again this highlights the point that you will be hacked.
What do we need to do? – We need intelligence!
Director of Georgia Tech Information Security Centre, 2011 –
“We continue to witness cyber-attacks of unprecedented sophistication and reach, demonstrating that malicious actors have the ability to compromise and control millions of computers that belong to governments, private enterprises and ordinary citizens.”
We have limited resources so what should we spend our time and money on – malware defence? Mobile? Big Data?
What is needed to get where we need to be?
– Global perspective
– Global Information Sharing
– Intelligence based security
Strategy and Budget
– We MUST eliminate the obstacles!
Global Information Sharing
– We have been trying for decades
– How do we establish trust
Methods to make data anonymous
Attack data sharing
– Who shares?
Needs of SMBs
– Role of Governments (pass treaties around data sharing and cross boundary working)
– Benefits go far beyond incident response
Incident response in the Cloud;
– Where is your data (does it ever get moved due to problems, bursting within the CSPs infrastructure etc. – need very clear contracts)
– Consider model you use – IaaS / PaaS / SaaS and what this means
– Network control
– Log correlation and analysis – where are these, who owns them, who can access them..
– Roles and responsibilities
– Access to event data, images etc. When will you find out about issues and breaches?
– Application functioning in the cloud – consider impacts of applications running is shared and / or very horizontally scalable environments.
– Virtualisation benefits and issues
– Capabilities and limitations of your provider
– CSA and Cloud CERT
– Government initiatives
– Private initiatives
Breaches can impact all of us, finding ways to work together and share data is critical. Cloud is relatively new – we can make a difference and improve this moving forwards.
Recommendation to read the upcoming book from the CISO of Intel (Malcolm) around security that covers various areas including – understanding the world and providing a reasonable level of protection (inc. BYOD, need to be agile etc.)
– Remove Obstacles
– Build subject matter expertise
– Global sharing is critical to success
Who will attack you, using what methods in 2013?
Where should you spend your time / money?
Intelligence based security
– Security sophistication must keep pace with attack sophistication!
Big data is everywhere, not just at Facebook, Google and CERN – from police cameras constantly photographing licence plates to log data from corporate systems and web sites. Many companies are now having to deal with, or plan to deal with, big data in order to understand their systems, their customers and their users.
What is driving this for ‘ordinary’ organisations?
– Increasingly complex and virtualised IT infrastructures
– Workload mobility
– Bring your own device / computer
– Cloud computing
All require increasing amounts of data to be collected and aggregated in order for an organisation to understand and ensure compliance of their environments.
Cloud computing is both aiding this by making the storage and compute power available to any business that has to deal with big data, and driving this through its scale, virtual and always on nature.
How do we ensure the security and understanding of these complex environments? We must build security into the overall cloud and application architecture. Realise that the cloud has multiple ‘flavours’ from IaaS to SaaS and these are not all the same from a design and architecture perspective. Stop talking and thinking about the cloud as just ‘the cloud’.
From an infrastructure perspective, cloud data centres are fractal, you need to understand what your assets are, but also realise many are the same for example storage and compute. You can monitor all your compute nodes with the same method. Monitoring needs to be in real time and to have analysis and intelligence built in.
If you are running web applications you need to understand how many you have, where they are and how they are being used. You need to look at hardening and understanding this perimeter and correlating logs across these environments. How do we manage code issues, potential exploits and varying methods of authentication? Your developers are working on new code and functionality, and your support staff may not have enough code experience. Do we need a new breed of operations support with reasonably in-depth coding abilities?
Was Philippe referring to DevOps here? This is newish, but not a new idea, many organisations are already using or setting up DevOps teams with the skill sets that were talked about.
Mobile devices are also driving both big data and management challenges for organisations. We need to ensure they are all monitored and managed; single sign on, privacy, corporate policies. How do we do this for 100s / 1000s / 1000000s of thin devices that cannot have very thick applications installed on them? Cloud based services for both device management and aggregation of the collected data can provide these solutions and scale as required.
How do we ensure security remains ‘front and centre’ as we move to the cloud and scale up? Many existing enterprise point solutions do not scale enough or integrate well enough with the cloud. This is being solved by providing managed security services from the cloud; Security as a Service (SecaaS). Obviously blowing my own trumpet here, but this neatly links to my research with the Cloud Security Alliance on SecaaS!
For me the key message of this talk is that real-time ‘Big Data’ is a key element of tomorrow’s security. We need to understand the implications of this and plan our security strategy to take advantage of this and the insight it will bring.
Keynote 2 – The struggle for control of the internet
Misha Glenny – Author and Journalist
Control of the internet focusses on the debate between security and privacy vs. the demand for freedom. The US identifies four areas that need to be managed and prevented; Crime, Hacktivism, Warfare and Terrorism.
How do we balance the need for people to have freedom with the needs for safety and protection online? Is the internet morally neutral?
Crime (cybercrime) quickly took advantage of the internet, from card detail sales sites such as Carderplanet and DarkMarket. Carderplanet was set up >11 years ago. Both these sites have since been taken down, but they paved the way for much more sophisticated criminal organisations.
Criminals now spend a lot of time watching organisations like SOCA and the FBI in order to understand them and anticipate their next moves. So while those trying to catch the criminals are watching them, they in turn are being watched! Hackers have accessed private police files to monitor current investigations and delete intelligence records etc.
There have actually been worldwide ‘carder’ and other criminal activity conferences. For example Carderplanet organised the first worldwide carder conference in 2002. The invite to this conference also alluded to a deal Carderplanet had with the FSB (the Russian secret service): the FSB would not interfere with their ‘work’ as long as they did not attack financial institutions, and would expect them to perform attacks on behalf of the Russian government / secret service as required.
The lines between government spies and criminals are becoming increasingly blurred.
Currently the UK secret services (MI5 / MI6) are dealing with ~500 targeted attacks every day. This is up from ~4 per year 10 years ago! The international spend in the West on cyber security is currently around $100 billion per year. This is set to double over the next few years.
The west wants to work with China and Russia to improve the situation; however they want to be allowed to manage the web within their borders in any way they like if they are to cooperate. This obviously has issues with preventing freedom of speech.
Will the Web break down into massive intranets? Iran has already stated its intent to disconnect itself from the Web and set up just such an internal intranet. China and Russia want to control and largely segregate their internal users from the rest of the Web.
We need original thinking to resolve these issues!
Cloud computing’s impact on future enterprise architectures
This talk was fairly light and I didn’t make a huge amount of notes, but thought there were a few points worth noting;
Definitions and boundaries are changing. Instead of defined boundaries we are used to around traditional architectures whether they are hosted locally or at a data-centre we are moving to much more fluid and interconnected architectures. Consider personal cloud, private cloud, hybrid cloud, extended virtual data-centres, consumerism, BYOD etc. The cloud creates different, co-existing architectural environments based on combinations of these models.
Consider why you should move to the cloud, which characteristics are important for your organisation such as;
– Elastically scalable
– Self service
– Measured services
– Virtualised and dynamic
– Reliability (SLAs, what happens when there are issues etc.)
– Economic benefits (cost reduction – TCO, and / or better resiliency)
Do you understand any potential risks;
– What are the security roles and responsibilities?
IaaS – you
BPaaS (business process as a service) – Them
Sliding scale from IaaS – PaaS – SaaS – BPaaS
– Where is your data?
Your business and regulatory requirements
Jurisdictional rules – who can access your data
Legal / jurisdictional issues amplified
For me some of this talk was outdated, with a lot of focus on ‘where is your data’. While this is a key question, there was too much emphasis on the idea that your data will be anywhere in the world with global CSPs, when most big players now offer guarantees that your data will stay within defined regions if you want it to.
So, what does this mean for your ‘future’ cloud based enterprise architecture principles, concepts etc.?
– Must standardise on ‘shared nothing’ concept
– Standardise on loosely coupled services
– Standardise on ‘separation of concerns’
– No single points of failure
– Multiple levels of protection / security
– Ease of <secure> access to data
– Security standards to protect data
– Centralise security policy
– Delegate or federate access controls
– Security and wider design patterns that are easy to adopt and work with the cloud
Combining these different architectural styles is a huge challenge.
Summary – Dealing with multiple architectures, multiple dimensions and multiple risks is a key challenge to integrating cloud into your environment / architecture!
The slides from this talk can be downloaded here;
SOA (Service Orientated Architecture) environments are a big data problem / Big data and its impact on SOA
Outside of some product marketing for Splunk, the premise of these two talks was basically the same, that large SOA environments are complex, need a lot of monitoring and create a lot of data.
Splunk, incidentally, is a great product for log monitoring / data collection, aggregation and analysis / correlation. Find out more about it here; http://www.splunk.com/
SOA – great for agility, but can be complex – BPEL, ebXML, WSDL, SOAP, ESB, XML, BPM, UDDI, composition, loose coupling, orchestration, data services, business processes, XML Schema, registry etc. This can generate a huge amount of disparate data that needs to be analysed in order to understand the system. Both machine and human generated data may need to be aggregated.
SOA based systems can themselves generate big data!
We all know large web based enterprises such as Google and Facebook etc. have to deal with big data, but should you care? Many enterprises are now having to understand and deal with big data for example;
Retail and web transaction data
GPS in phones
Log file monitoring and analysis
The talks had the following conclusions;
– Big data has reached the enterprise
– SOA platforms are evolving to leverage big data
– Service developers need to understand how to insert and access data in Hadoop
– Time-critical conditions can be detected as data is inserted into Hadoop using event processing techniques – ‘Fast Data’
– Expect big data and fast data to become ubiquitous in SOA environments – much like RDBMS are already.
So I’d suggest you become familiar with what big data is, and with the tools that can be used to handle and manage it such as Hadoop, MapReduce and Pig (these are relatively big topics in themselves and may be covered at a later date).
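For anyone new to the MapReduce model mentioned above, the canonical word-count example can be sketched in plain Python. Hadoop distributes the same map and reduce steps across a cluster; this single-process version just shows the shape of the programming model:

```python
from collections import defaultdict
from itertools import chain

def map_phase(line):
    """Map step: emit (key, 1) pairs - one per word in the input line."""
    return [(word.lower(), 1) for word in line.split()]

def reduce_phase(pairs):
    """Reduce step: sum the values for each key (the framework's
    shuffle/sort guarantees all pairs for a key arrive together)."""
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

log_lines = [
    "error disk full",
    "warning disk slow",
    "error network down",
]
counts = reduce_phase(chain.from_iterable(map_phase(l) for l in log_lines))
print(counts["error"], counts["disk"])  # 2 2
```

Swap the toy log lines for terabytes of machine data spread over a cluster and you have the big data processing problem these talks were describing.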
The slides from these talks can be downloaded from the below locations;
Time for delivery; Developing successful business plans for cloud computing projects
This talk covered some great points around areas to consider when planning cloud based projects. I’ll capture as much as I managed to make notes on, as there was a lot of content for this one. I’d definitely recommend checking out the slides!
Initial things to consider include;
– Defining the link between your business ecosystem and the available types of cloud-enabled technologies
– Identifying the right criteria for a ‘cloud fit’ in your organisation. (operating model and business model fit)
– Strategies and techniques for developing a successful roadmap for the delivery of cloud related cost savings and growth.
– Mobility – any connection, any device, any service
– Social Tools – any community, any media, any person
– Cloud – computing resources, apps and services, on demand
– Big Data – real time information and intelligence
In a nice link with the talk on HPC in the cloud, this one also highlighted the competitive step change that cloud potentially is; small companies can have big company levels of infrastructure, scalability, growth etc. Anyone can access enterprise levels of computational power.
Cloud computing can be used to drive a cost cutting / management strategy and a growth / agility strategy.
Consider your portfolio and plans – what do you want to achieve in the next 6 months, next 12 months etc.
When looking at the cloud and moving to it, what are the benefit cases and success measures for your business? These should be clearly defined and agreed in order for you to both plan correctly, and clearly understand if the project / migration has been a success.
What is your business model, and which cloud service business models will best fit with this? What is the monetisation strategy for your cloud migration project; operational, growth, channel etc.? Initially cloud based projects are often driven by cost saving aspirations; however, longer term benefits will likely be greater if the drivers are ‘better and faster’ – cost benefits (or at least higher profits!) will follow. To be successful, you must decide and be clear on your strategy!
As with all projects, consider your buy vs. build options.
Is IT a commodity or something you can instil with IP? Depending on your business you will be at different places on the continuum. Most businesses can and should derive competitive advantage by putting their skills and knowledge into their IT systems rather than using purely SaaS or COTS solutions without at least some customisation. This of course may only be true for systems relating to your key business, not necessarily supporting and administrative systems.
Cloud computing touches many strategies – you need a complete life-cycle 360 approach.
– Storage strategy
– Compute strategy
– Next gen network strategy
– Data centre strategy
– Collaboration strategy
– Security strategy
– Presence strategy
– Application / development strategy
Consider the maturity of your services and their roadmap to the cloud;
Service Management – Service integration – Service Aggregation – Service Orchestration
This talk highlights just how much there is to think about when planning to migrate to, or make use of, the cloud and cloud based services.
The talk also highlighted a couple of interesting things to consider;
Look up ‘The Eight Fallacies of Distributed Computing’ from 1993, and ‘Brewer’s Theorem’ from 2000 (published in 2002) to understand how much things have stayed the same just as much as how much they have changed!