RSA Conference Europe 2012 – They’re inside… Now what?

Eddie Schwartz – CISO, RSA, and Uri Rivner – Head of Cyber Strategy, BioCatch

The talk started with some discussion of general Trojan attacks against companies, rather than long-term, high-tech APTs, with the tagline: 'If these are random attacks… we're screwed!'

Worth checking the pitch, but there was a series of examples from the RSA lab in Israel of usernames, passwords and other data that Trojans had sent to C&C servers in Russia.  The victims included banks, space agencies, science agencies, nuclear material handling companies and more.

So what do the controllers of these Trojans do with the data?  Remember, these are random attacks collecting whatever personal data they can get, not specifically targeted attacks.  A common option is to sell the data; you can find criminals on message boards and similar forums offering banking, government and military credentials for sale.

Moving on to examples of specifically targeted attacks and APTs.  Examples of targeted attacks include GhostNet, Aurora, Night Dragon, Nitro and Shady RAT.  These have attacked everything from large private companies to critical infrastructure to the UN.  All of the given examples had one thing in common – social engineering.  Every one used spear phishing as its entry vector.

From this I think you need to consider – do you still think security awareness training shouldn't be high on your organisation's to-do list?

The talk went on to discuss Stuxnet and Duqu, along with their similarities and differences – largely what was captured in my last post.  The interesting observation here was their likely different places in the attack process: Stuxnet came at the end and was the actual attack, while Duqu likely sat much earlier in the process, as it was primarily for information gathering.

A whole lot more targeted malware examples were given, including Jimmy, Munch, Snack and Headache.  Feel free to look these up if you want to do some further research.

A very recent example of a targeted attack, only discovered in July of this year, is VOHO.  This campaign was heavily targeted at geopolitical and defence targets in Boston, Washington and New York.  It was a multi-stage campaign heavily reliant on JavaScript.  While focused on specific target types, the attack was very broad, hitting over 32,000 unique hosts and successfully infecting nearly 4,000.  That is actually a very good success rate, and the campaign was no doubt considered a success by those instigating it.

In light of this evidence it is clear we need a new security doctrine.  You will get hacked despite your hard work; if it has not yet happened, it will.  Learn from the event – an honest evaluation of faults and gaps should result in improvements.

Things to consider as part of this new doctrine;

–          Resist – Threat resistant virtualisation, Zero day defences

–          Detect – Malware traces, Big data analytics, behavioural profiling

–          Investigate – Threat analysis, Forensics and reverse engineering

–          Cyber Intelligence – Threat and Adversary intelligence

Cyber intelligence was covered in more specific detail, looking at how we can improve it;

–          External visibility – Industry / sector working groups, Government, trusted friends and colleagues, vendor intelligence;

  • Can this information be quickly accessed?  For speed it should be in a machine-readable format, but use whatever works!

–          Internal visibility – Do you have visibility everywhere it is needed: HTTP, email, DNS, sensitive data etc.?

  • Do you have the tools in place to make use of and analyse all of these disparate data sources?

–          Can you identify when commands like NET and the various task schedulers are being used?

–          Do you have visibility of data exfiltration, scripts running, PowerShell, WMIC (Windows Management Instrumentation Command-line) etc?

–          Do you have the long-term log management and correlation in place to put all the pieces of these attacks together?  (A sketch of this kind of check follows below.)
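
As a concrete (and heavily simplified) illustration of the internal visibility points above, here is a minimal Python sketch of the kind of check a detection pipeline might run: matching DNS log entries against a machine-readable indicator feed, and flagging process events that use NET, scheduler or WMIC-style commands.  The file names, feed format and command list are all hypothetical – substitute whatever your log platform and intelligence sources actually provide.

```python
import csv
import json
import re

# Hypothetical inputs: a JSON indicator feed plus CSV exports of DNS
# queries and process-creation events from your logging platform.
IOC_FEED = "indicators.json"      # e.g. {"domains": ["bad.example.com"]}
DNS_LOG = "dns_queries.csv"       # columns: timestamp, host, query
PROC_LOG = "process_events.csv"   # columns: timestamp, host, command_line

# Commands often seen during lateral movement and persistence
# (an illustrative list, not a complete detection rule).
SUSPICIOUS = re.compile(
    r"\b(net\s+(use|view|group)|schtasks|wmic|powershell)\b",
    re.IGNORECASE,
)

def load_ioc_domains(path):
    """Load a machine-readable feed of known-bad domains."""
    with open(path) as f:
        return set(json.load(f)["domains"])

def scan(dns_log, proc_log, bad_domains):
    """Return (reason, timestamp, host, detail) tuples for review."""
    hits = []
    with open(dns_log, newline="") as f:
        for row in csv.DictReader(f):
            if row["query"].lower().rstrip(".") in bad_domains:
                hits.append(("ioc-domain", row["timestamp"], row["host"], row["query"]))
    with open(proc_log, newline="") as f:
        for row in csv.DictReader(f):
            if SUSPICIOUS.search(row["command_line"]):
                hits.append(("suspicious-cmd", row["timestamp"], row["host"], row["command_line"]))
    return hits

if __name__ == "__main__":
    for hit in scan(DNS_LOG, PROC_LOG, load_ioc_domains(IOC_FEED)):
        print(*hit, sep=" | ")
```

In practice this logic would live in SIEM correlation rules rather than a standalone script, but the prerequisite is the same: centralised, queryable logs across DNS, web, email and endpoint sources.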

Summary recommendations and call to action..

–          Assume you are breached on a daily basis and focus on adversaries, TTPs and their targets

–          Develop architecture and tools for internal and external intelligence for real-time and post-facto visibility into threats

–          Understand current state of malware, attack trends, scenarios, and communications

–          Adjust security team skills and incident management work flow

–          Learn from this and repeat the cycle..

Next steps (call to action!);

–          Evaluate your defence posture against APTs, and take the advice from the rest of this post

–          Evaluate your exposure to random intrusions (e.g. data stealing Trojans), and take the advice from the rest of this post

Useful presentation from a technical and security team standpoint, but it completely missed the human and security awareness training aspect – despite highlighting that all the example APTs used spear phishing to get in the door.  I'd recommend following all the advice of this talk and then adding a solid security awareness program for all employees, really embedding it into the company philosophy / culture.

K

RSA Conference Europe 2012 – Duqu, Flame, Gauss: Followers of Stuxnet

Boldizsar Bencsath, CrySys Lab

Stuxnet – 2010 – modified PLCs (Programmable Logic Controllers) in uranium enrichment facilities.  Most likely government backed, and dubbed 'the most menacing malware in history' by Wired magazine.

Duqu – discovered by CrySyS Lab in the wild when responding to an incident.  Where Stuxnet destroyed Iranian centrifuges, Duqu is for information gathering.

However, they are very similar in terms of design philosophy, internal structure and mechanisms, implementation details and the effort that would have been required to create them.  Additionally, Duqu used a digitally signed driver, as Stuxnet did.

Duqu is so named because it creates temp files whose names start with the string ~DQ.

The actual relationship between the two, and who created Duqu, is not known, but it is suspected that the Stuxnet creators at least had some involvement.

Duqu has a very clean design that automatically downloaded only the modules it required from Command and Control (C&C) servers.  Thus investigators do not know the full extent of its capabilities, as they can only see the modules that were downloaded to the targets they investigated.  The Duqu C&C servers may have hosted the Stuxnet PLC code, for example.

The components of Duqu that were discovered included;

–          Registry data to point to components

–          Keyloggers

–          Multiple encrypted payloads

–          Pointers to how to decrypt the payloads

–          Of note, different payloads were encrypted with different methods

From a CrySyS Lab viewpoint they;

–          Discovered and named Duqu

–          Freely shared their knowledge with AV vendors and Microsoft

–          Identified the dropper

–          Developed the Duqu detector toolkit

  • Focusing on heuristic anomaly detection
    • AV tools already have basic signature detection so no reason to duplicate this
  • Detects live Duqu instances and remnants of old ones
  • Also detects Stuxnet
  • Open source for anyone to use (a toy illustration of remnant scanning follows this list)
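
As a toy illustration of the remnant-scanning idea flagged above – this is not the CrySyS toolkit, which is open source and worth reading, just the simplest heuristic imaginable – a sweep of temp directories for the ~DQ naming pattern could look like this in Python:

```python
import os
import tempfile

def find_dq_remnants(roots=None):
    """Naive sweep for Duqu-style ~DQ* temp files.

    A real detector (like the CrySyS toolkit) layers many heuristics;
    this demonstrates only the simplest one: known file-name patterns.
    """
    roots = roots or [tempfile.gettempdir()]
    suspects = []
    for root in roots:
        for dirpath, _dirnames, filenames in os.walk(root):
            suspects.extend(
                os.path.join(dirpath, name)
                for name in filenames
                if name.upper().startswith("~DQ")
            )
    return suspects

if __name__ == "__main__":
    for path in find_dq_remnants():
        print("possible Duqu remnant:", path)
```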

Moving into 2012, another variant / descendant of Stuxnet / Duqu was discovered.  This is known as Flame / Flamer / sKyWIper.  Flame has been described as the 'most complex malware ever found'; its core component alone is 6MB in size.

Flame appears to follow the same main requirements / specifications as Duqu and Stuxnet, but has been developed in a very different way, using different programming languages etc.  Flame is another information-stealing malware, with functionality such as;

–          activating microphones and web cameras

–          logging key strokes

–          taking screen shots / screen scraping

–          extracting geolocation data from images

–          sending and receiving commands and data through Bluetooth, including enabling Bluetooth when it is turned off

Flame infects computers by masquerading as a proxy for Windows Update, and has infected thousands of victims, mostly across Iran and the Middle East.

Gauss is another information-stealing malware example that is based on the Flame platform.  This was also discovered in 2012, but infections date back to September 2011 – again thousands of victims, mainly in Lebanon, Israel and the Palestinian Territories.

Gauss has been further developed with the Gödel module.  This has an advanced encrypted warhead using RC4, and the decryption key is not available in the malware itself – in contrast to Stuxnet, Duqu and Flame, which used simple XOR masking or byte substitution.  The encrypted warhead can only be decrypted on the target system, making the malware resistant to detailed analysis.  The Gödel module is big enough to contain Stuxnet-like SCADA-targeted attack code as well as the information-stealing functionality found so far.
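
To see why this design frustrates analysts, here is a rough Python sketch of the shape of the scheme – the key is derived from attributes of the victim's environment (reportedly the PATH variable and Program Files directory names, MD5-iterated 10,000 times), so the warhead only decrypts on the intended machine.  The inputs, salt and constants below are invented for illustration; only the overall structure follows the published analysis.

```python
import hashlib
import os

def derive_key(salt: bytes, rounds: int = 10000) -> bytes:
    """Derive a candidate key from attributes of *this* machine.

    Gauss reportedly mixed environment strings into an iterated MD5;
    this copies the shape of that scheme, not its exact inputs.
    """
    digest = os.environ.get("PATH", "").encode() + salt
    for _ in range(rounds):
        digest = hashlib.md5(digest).digest()
    return digest

def rc4(key: bytes, data: bytes) -> bytes:
    """Plain RC4 (illustrative only -- RC4 is broken, do not reuse)."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

if __name__ == "__main__":
    key = derive_key(b"example-salt")
    blob = rc4(key, b"pretend this is the warhead")
    # Only a machine that reproduces the same environment-derived key
    # recovers the plaintext -- RC4 is symmetric, so decrypt == encrypt.
    print(rc4(key, blob))
```

An analyst without the exact environment strings is reduced to brute-forcing the key material, which is precisely the 'resistant to detailed analysis' property described above.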

The talk also had some great graphics highlighting the structure of the various forms of malware discussed.

Lessons learnt from this research;

–          Current approaches for defending systems from targeted attacks are ineffective

  • Code signing is not bullet proof
  • Virus scanners should have improved heuristics and anomaly detection

–          Coordinating international / global threat mitigation and forensic analysis are challenging problems

  • How do we better share information quickly and while preserving evidence?
  • How do we identify and capture C&C servers quickly?
  • How do we track along the C&C proxy chain?

–          Attackers are using ever more advanced techniques

  • MD5 collision attack in Flame
  • Encrypted payload in Gauss

What can you do to better protect your organisation from similar attacks?

–          Extend protection beyond signature based techniques

  • Anomaly detection – Understand normal use patterns (a minimal sketch follows this list)
  • Heuristics
  • Baits, traps, honeypots (I’d say these ones are pretty advanced and likely used by only the most security conscious and savvy organisations)

–          Educate your IT teams to spot and raise anomalies

–          Use Forensics – every organisation should have some forensic capabilities

–          Have an incident response plan, with methods to contact external professionals / experts if required

–          Look into ways to better share information!
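
As flagged in the list above, here is a minimal sketch of the anomaly-detection idea: build a per-host baseline of normal behaviour (a single metric here – daily outbound data volume) and flag large deviations.  The hosts and numbers are invented, and a real deployment would model many more features.

```python
import statistics

def build_baseline(history):
    """history: {host: [daily outbound MB over a training window]}"""
    return {
        host: (statistics.mean(values), statistics.stdev(values))
        for host, values in history.items()
        if len(values) >= 2
    }

def anomalies(today, baseline, threshold=3.0):
    """Flag hosts whose outbound volume is > threshold sigmas above normal."""
    flagged = []
    for host, mb in today.items():
        if host in baseline:
            mean, stdev = baseline[host]
            if stdev and (mb - mean) / stdev > threshold:
                flagged.append((host, mb, mean))
    return flagged

history = {"ws-042": [120, 95, 110, 130, 105], "ws-007": [40, 55, 38, 60, 47]}
today = {"ws-042": 118, "ws-007": 900}   # ws-007 suddenly ships out 900 MB

for host, mb, mean in anomalies(today, build_baseline(history)):
    print(f"{host}: {mb} MB today vs ~{mean:.0f} MB normally")
```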

It is well worth checking the CrySyS Lab blog for further information on the malware mentioned in this talk, plus many related topics;

blog.crysys.hu

This talk did a great job of highlighting how one advanced attack inspires many new variants, and how attacks and attackers are becoming ever more advanced and sophisticated.  What appears in an advanced, state-sponsored attack one day will be found in point-and-shoot hacking toolkits the next.

K

Service Technology Symposium Day 2..

Today was the second day of the Service Technology Symposium.  As with yesterday, I'll use this post to review the keynote speeches and provide an overview of the day.  Where relevant, further posts will follow providing more detail on some of the day's talks.

As with the first day, the day started well with three interesting keynote speeches.

The first keynote was from the US FAA (Federal Aviation Administration) and was titled 'SOA, Cloud and Services in the FAA airspace system'.  The talk covered the program that is under way to simplify the very complex National Airspace System (NAS).  This is the 'system of systems' that manages all flights in the US and ensures the control and safety of all the planes and passengers.

The existing system is typical of many legacy systems.  It is complex, built entirely on point-to-point connections, hard to maintain, and even minor changes require extensive regression testing.

Thus a simplification program has been created to deliver an SOA-based, web-centric, decoupled architecture.  To give an idea of the scale: the program is in two phases, and with phase one already largely delivered it is still scheduled to run through to 2025!

As mentioned, the program is split into two segments to deliver capabilities incrementally and get buy-in from the wider FAA.

–          Segment 1 – implemented a set of federated services, with some messaging and SOA concepts, but no common infrastructure.

–          Segment 2 – common infrastructure – more agile, with the project effectively creating a message bus for the whole system (see the sketch below).
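
To illustrate the decoupling a message bus buys (a toy Python sketch, not the FAA's actual design – the names and topics are invented): publishers and subscribers only know about topics, never about each other, so adding a new consumer no longer means a new point-to-point interface and a full round of regression testing.

```python
from collections import defaultdict
from typing import Callable

class MessageBus:
    """Toy publish/subscribe bus: producers and consumers are decoupled."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]):
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict):
        for handler in self._subscribers[topic]:
            handler(message)

bus = MessageBus()
# Two independent consumers of the same (hypothetical) flight-plan event;
# the publisher knows nothing about either of them.
bus.subscribe("flight.plan.filed", lambda m: print("display system got", m))
bus.subscribe("flight.plan.filed", lambda m: print("billing system got", m))
bus.publish("flight.plan.filed", {"flight": "UA123", "route": "BOS-ORD"})
```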

The project team was aided by the creation of a Wiki, and COTS (commercial off the shelf) software repository.

They have also been asked to assess the cloud – there is a presidential directive to 'do' cloud computing.  They are performing a benefits analysis covering everything from the operational to the strategic.

Key considerations are that the cloud must not compromise the NAS, and that security is paramount.

The cloud strategy is defined, and they are in the process of developing recommendations.  It is likely that the first systems to move to the cloud will be supporting and administrative systems, not key command and control systems.

The second keynote was about cloud interoperability and came from the Open Group.  Much of this was taken up with who the Open Group are and what they do.  Have a look at their website if you want to know more;

http://www.opengroup.org/

Outside of this, the main message of the talk was the need for improved interoperability between different cloud providers.  This would make it easier to host systems across vendors, and easier for customers to change providers.

As a result improved interoperability would also aid wider cloud adoption – Interoperability is one of the keys to the success of the cloud!

The third keynote was titled ‘The API economy is here: Facebook, Twitter, Netflix and YOUR IT enterprise’.

API stands for Application Programming Interface, and a good description of what this means can be found on Wikipedia here;

http://en.wikipedia.org/wiki/Application_programming_interface

The focus of this keynote was that by making their APIs public, and by making use of public APIs, businesses can help drive innovation.

Web 2.0 – lots of technical innovation led to Web 2.0, which in turn enabled human innovation via the game changer that is the open API: reusable components that can be used, accessed and built on by anyone.  Then add the massive, always-on base of smartphone users into the mix, each with more power in their pocket than was needed to put Apollo on the moon.  The opportunity to capitalise on open APIs is huge.  As an example, there are currently over 1.1 million distinct apps across the various app stores!
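
To make the open API point concrete: anyone with a few lines of code can build on a public API.  A minimal example using only Python's standard library against GitHub's public API (any open REST endpoint would serve equally well):

```python
import json
import urllib.request

# GitHub's public API is a convenient example of an open REST endpoint.
with urllib.request.urlopen("https://api.github.com/users/octocat") as resp:
    user = json.load(resp)

print(user["login"], "-", user.get("public_repos", 0), "public repos")
```

That is the whole barrier to entry, which is exactly why open APIs unlock innovation from people the API owner has never met.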

Questions for you to consider;

1. How do you unlock human innovation in your business ecosystem?

–          Unlock the innovation of your employees – How can they innovate and be motivated?  How can they engage with the human API?

–          Unlock the potential of your business partner or channel sales community; e.g. Amazon's web services – merchants produce, provide and fulfil orders for goods, while Amazon provides the framework to enable this.

–          Unlock the potential of your customers; e.g. IFTTT (If This Then That), which has put workflow in front of many of the APIs available on the internet.

2. How do you expand and enhance your business ecosystem?

–          Control syndication of your brand – e.g. the Facebook 'Like' button – everyone knows what it is, and every user gets the same standard button.

–          Expand the breadth of your system – e.g. Netflix used to be just website video on demand; it is now available on many platforms – consoles, mobile, tablet, smart TV, PC etc.

–          Standardise the experience – e.g. Kindle or Netflix – you can watch or read on one device, stop, and pick up from the same place on another.

–          Use APIs to create ‘gravity’ to attract customers to your service by integrating with services they already use – e.g. travel aggregation sites.

This one was a great talk with some useful thought points on how you can enhance your business through the use of open APIs.

On this day I fitted in six talks and had one no-show.

These were;

Talk 1 – Cloud computing's impact on future enterprise architectures.  Some interesting points, but a bit stuck in the past, with a lot of focus on 'your data could be anywhere' when most vendors now provide consumers the ability to ensure their data remains in a specific geographical region.  I won't be prioritising writing this one up, so it may or may not appear in a future post.

Talk 2 – Using the cloud in the Enterprise Architecture.  This one should have been titled the Open Group and TOGAF with 5 minutes of cloud related comment at the end.  Another one that likely does not warrant a full write up.

Talk 3 – SOA environments are a big data problem.  This was a brief talk but with some interesting points around managing log files, using Splunk and 'big data'.  There will be a small write-up on this one.

Talk 4 – Industry orientated cloud architecture (IOCA).  This talk covered the work Fulcrum have done with universities to standardise their architectures and messaging systems to improve inter-university communication and collaboration.  It was mostly marketing for the Fulcrum work and there wasn't a lot of detail, so it is unlikely to be written up further.

Talk 5 – Time for delivery: Developing successful business plans for cloud computing projects.  This was a great talk with a lot of useful content.  It was given by a Cap Gemini director so I expected it to be good.  There will definitely be a write-up of this one.

Talk 6 – Big data and its impact on SOA.  This was another good but fairly brief one; it will get a short write-up, possibly combined with Talk 3.

And there you have it – the overview of day two of the conference.  It looks like I have several posts to write covering the more interesting talks from the two days!

In conclusion, would I recommend this conference?  It's a definite maybe.  Some of the content was very good; some was either too thin or completely focussed on advertising a business or organisation.  The organisation was also terrible, with three talks I planned to attend not happening and the audience left hanging rather than being told the speaker hadn't arrived.

So a mixed bag, which is a shame as there were some very good parts, and I managed to get 2 free books as well!

Stay tuned for some more detailed write ups.

K

Consumerism of IT 2..

Following on from my previous post, which briefly covered what the consumerism of IT and Bring Your Own Device (BYOD) are, I'll now cover some of the things these trends mean for ICT departments.

For any IT business or IT department that thinks they do not need to consider the impacts of consumerism and BYOD – think again!  Regardless of perceived business benefits such as cost savings or flexibility – or even the side benefits around the improved security and management that come from utilising VDI to centralise business-owned user computing resources – as BYOD becomes more mainstream it will become an expected benefit / perk rather than the exception.

As an example of how this is already becoming mainstream, several large companies such as IBM and Citrix are embracing the trend and have well-established BYOD programs.

Ask yourself, do you want to attract the best talent? If the answer is yes then you need to ensure the working environment you offer is up there with the best of your competitors.  This includes offering things like BYOD programs across mobiles, tablets, laptops etc. and / or offering a wider variety of consumer type devices such as tablets and smartphones.

The challenge, as is often the case, will be to understand how these changes and trends can be harnessed to provide business benefits and create an attractive working environment, while still ensuring the security of your and your customers' data and maintaining a stable, manageable ICT estate.

BYOD and the consumerism of IT can and will make sweeping changes to how IT departments manage and provision user devices.  Whether this comes from supporting a wider variety of devices directly, or from relinquishing some control and embarking on a BYOD program, there will be changes.  What they are will depend on the route your company takes and how mature your company currently is regarding technologies such as desktop virtualisation and offering functionality via web services.  If you currently have little or no VDI-type solution and most of your application access is via thick or dedicated client software, the changes are likely to prove very challenging.  On the other hand, if you are at the other end of the scale, with a large and mature VDI (Virtual Desktop Infrastructure) deployment and most applications and processes accessed via a browser, then the transition to more consumer- or BYOD-focussed end user IT will likely be relatively straightforward from a technical standpoint.

Without sounding like a broken record (well, hopefully), the first thing you need to do before embarking on any sort of BYOD program is to get the right policies and procedures in place.  These must ensure company data remains safe, and set clear and agreed rules for how devices can be used, how they can access data, and how access, authentication and authorisation are managed, along with the company's requirements around things like encryption and remote wipe capabilities.

NIST (National Institute of Standards and Technology) has recently released an updated draft policy around managing and securing mobile devices such as smartphones and tablets.  This policy covers both company-owned (consumerism) and user-owned (BYOD) devices, and can be used as a great starting point for the creation of your own policies.  It's worth noting that NIST highlights BYOD as more risky than company-owned devices, even when the devices are the same.  The policy draft can be found here;

http://csrc.nist.gov/publications/drafts/800-124r1/draft_sp800-124-rev1.pdf

Once you have the policies in place you will need to assess the breadth of the program; this must include areas such as;

–         Will you allow BYOD, or only company supplied and owned equipment

–         Which devices are allowed

–         Which O/Ss and applications are permitted; this should include details of O/S minor versions and patch levels etc.

–         How will patching of devices and applications be managed and monitored

–         What levels of access will the users and devices be permitted

–         What architectural changes are required to the environment in order to manage and support the program

–         How will licenses be managed and accounted for

–         What are the impacts to everything from the network (LAN, WAN and internet access), to applications and storage, to desk space (will users have more or fewer devices on their desks), to the provision of power (will there be more devices and chargers etc. on the floors)

This is by NO means an exhaustive list; the point of these posts is to get you thinking about what is coming, and whether your company will embrace BYOD and the consumerism of IT.

CIO.com recently ran an article titled ‘7 Tips for Establishing a Successful BYOD Policy’ that covers some similar points and is worth a read;

http://www.cio.com/article/706560/7_Tips_for_Establishing_a_Successful_BYOD_Policy

There are several useful links from the CIO article that are also worth following.

It would be great to hear your thoughts and experiences on the impacts of consumerism and BYOD.

K

Hackers outwit on-line banking security

If you ever doubted either the inventiveness of criminals or the need to take sensible security precautions, this story should be a wake-up call;

http://www.bbc.co.uk/news/technology-16812064

Hackers have developed 'Man in the Browser' attacks that potentially allow them to circumvent even the relatively new two-factor chip-and-PIN security many banks now implement.  These attacks can also, at least temporarily, evade protection such as AV software and blacklists, as they redirect to new sites that are not yet known to security firms.

In short: stay vigilant, keep your computer(s) protected and up to date, and always use security software such as anti-virus.  And, as Bruce Schneier documented several years ago, we need to look at authenticating each transaction rather than just the session.
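
A sketch of what authenticating each transaction can look like (an invented scheme for illustration, not any real bank's protocol): the approval code is a MAC bound to the exact payee and amount shown to the customer over a second channel, so a man-in-the-browser Trojan that silently rewrites the payee invalidates the code.

```python
import hashlib
import hmac

# Shared secret held by the bank and the customer's token / phone app
# (hypothetical -- key distribution is its own problem).
SECRET = b"per-customer-device-secret"

def transaction_code(payee: str, amount: str, nonce: str) -> str:
    """MAC bound to the exact transaction details shown out-of-band."""
    payload = f"{payee}|{amount}|{nonce}".encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()[:8]

# What the customer sees and approves on their phone:
code = transaction_code("ACME Ltd", "250.00", "nonce-91f3")

# The Trojan rewrites the payee in the browser, but the bank verifies
# the code against the details it will actually execute -- the MAC no
# longer matches, so the tampered transaction is rejected.
tampered = transaction_code("Mule Account", "250.00", "nonce-91f3")
assert code != tampered
print("customer approved:", code, "| tampered details would need:", tampered)
```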

K

ISEB Enterprise and Solutions Architecture – update

Following on from my previous post, I can confirm that the exam was pretty easy – I got a reasonable passing mark after completing it in around 25 minutes.

I have yet to see many job specs that require this certification, so I don't know how CV-enhancing it really is.  However, many job specs do want knowledge of, or familiarity with, architecture frameworks such as TOGAF and Zachman; if you are not already fairly familiar with these, the course does provide a good overview and comparison of several frameworks.

Overall my assessment of the course / exam is as before – I think it is well worthwhile from the point of view of getting an overview of various architecture frameworks and the terminologies used, as well as meeting people from a variety of business backgrounds.  This should assist with any requirement for knowledge of architecture frameworks / methodologies in your current or future roles.  The caveat in terms of career value is that the certification itself seems to be in very low demand.

K