Puppet introduction

Puppet is currently being deployed in the environment where I work, so I thought it would be a good idea to get at least slightly up to speed on how it works.  Like, I am sure, quite a few of you, I was already familiar with Puppet in broad terms: it is an IT automation tool, written in Ruby, that can manage both *nix and Windows systems.

I didn’t, however, know much of the detail around exactly how it works and how it can be configured.  Given that there are probably others in a similar position who either need or want to learn a bit more about Puppet and system management and automation, I thought I’d share a couple of the better introductory resources I found.

If you are completely new to Puppet and want to find out what it does, the ‘What is Puppet?’ page is an excellent starting point;

https://puppetlabs.com/puppet/what-is-puppet/

The next link is a good introduction to coding with Puppet and nicely covers the fact that Puppet is declarative.  This can be a challenge for some people, especially those with coding experience, as most languages are imperative, which is quite a different style of explaining what you want the application to do.  There is a minimal sketch of the distinction after the link.  Read on to find out more;

http://spin.atomicobject.com/2012/09/13/from-imperative-to-declarative-system-configuration-with-puppet/
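To make the declarative / imperative distinction concrete, here is a minimal sketch in Python (not Puppet syntax – the package name and the Debian-style commands are purely illustrative).  The imperative version spells out each step to run; the declarative version just states the desired end state and leaves a tiny ‘engine’ to work out whether anything needs doing;

    import subprocess

    # Imperative: say HOW - run these exact steps, in this order.
    def imperative_install():
        result = subprocess.run(["dpkg", "-s", "ntp"], capture_output=True)
        if result.returncode != 0:  # not installed yet
            subprocess.run(["apt-get", "install", "-y", "ntp"], check=True)

    # Declarative: say WHAT - describe the desired state only.
    desired_state = {"package": "ntp", "ensure": "installed"}

    def converge(state):
        # A toy 'engine' that makes reality match the description,
        # doing nothing if the system already complies (idempotence).
        result = subprocess.run(["dpkg", "-s", state["package"]], capture_output=True)
        if state["ensure"] == "installed" and result.returncode != 0:
            subprocess.run(["apt-get", "install", "-y", state["package"]], check=True)

    converge(desired_state)  # safe to run repeatedly - Puppet manifests behave the same way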

I also found this three part series that covers what you need to set up and get running with Puppet, with the minimum of extra information.  This is great if you need to get up and running quickly, as much of the full documentation is more book-like;

Part 1;

http://justfewtuts.blogspot.co.uk/2012/05/puppet-beginners-concept-guide-part-1.html

Part 2;

http://justfewtuts.blogspot.co.uk/2012/07/puppet-beginners-concept-guide-part-2.html

Part 3;

http://justfewtuts.blogspot.co.uk/2012/08/puppet-beginners-concept-guide-part-3.html

Finally, if you want a full understanding of Puppet and have the time, the Puppet Labs documentation is excellent and should remove any need to buy a reference book;

http://docs.puppetlabs.com/

K

 

Cloud Security Alliance Congress Orlando 2012 pt4

Keynote day 2 – panel discussion around ‘Critical Infrastructure, National Security and the Cloud’.

Discussions around the role of ISPs in protecting the US from attacks, e.g. by dropping / blocking the IP addresses, or blocks of IP addresses, from which attacks such as DDoS originate.

Should they be looking more deeply into packets in order to prevent attacks?  What does this mean for net neutrality and freedom?

How does this apply to Cloud service providers (CSPs)?  What happens when the CSP is subpoenaed by the courts / government to hand over data?  This is another reason why you should encrypt your data in the cloud and ensure you manage the keys.  This means the court / government has to directly subpoena you as the data owner and give you the opportunity to argue your case if they want access to your data.
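As an illustration of that point, here is a minimal sketch of client-side encryption before upload, using the Python cryptography library’s Fernet recipe (the record and the storage step are hypothetical).  Because the key never leaves you, a subpoena served on the CSP yields only ciphertext;

    from cryptography.fernet import Fernet

    # Generate the key yourself and keep it away from the provider.
    key = Fernet.generate_key()
    f = Fernet(key)

    ciphertext = f.encrypt(b"sensitive customer record")  # all the CSP ever stores
    plaintext = f.decrypt(ciphertext)  # recovery requires YOUR key, not the CSP's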

Should the cloud be defined as critical infrastructure, and if so which parts, which providers etc.?  We will need to clearly define what critical infrastructure means when discussing the cloud.

The next discussion point was China.  Continuous economic growth means we are more and more involved in trade with China; however, huge amounts of proprietary data are also being stolen there across multiple industries, including manufacturing data that is used to copy what is made and how.  According to some vendor reports, 95% of all internet-based theft of intellectual property originates from China, from both Chinese governmental bodies and Chinese corporations.

Look up the Internet Security Alliance documentation around securing, monitoring and understanding your global manufacturing supply chain.  This document has been strongly resisted by both the Chinese government and Chinese companies.  There is a clear need to protect sensitive information and to work to reduce global supply chain risk.  The US Government is working on constant monitoring capabilities to help corporations monitor their global supply chains.

It was proposed that IP theft should be on the agenda for the G20 next year.  It was also proposed that the US and other countries should have an industrial policy, if they don’t already, that allows the military and intelligence communities to defend corporations and systems that are deemed part of the critical infrastructure.

Counterfeiting is also moving into cyberspace, what do we do with counterfeit infrastructure or counterfeit clouds?

————

A practical, step by step approach to implementing a private cloud

Preliminary points – have you ever decommissioned a security product?  How many components / agents does the “AV” software on your laptop now have?

Why is security not the default?

Why would you not just put everything in the public cloud? – Risk, Compliance – you cannot outsource responsibility!

This is where ‘private cloud’ options come into play.  You could also consider a ‘virtual private cloud’ – this is where VPN technology is used to create what is effectively a private cloud on public cloud infrastructure.

Many organisations have huge spare server capacity – typical findings are that 80% of servers run at only 20% capacity.  You can create internal elasticity by making this spare capacity part of an internal, private cloud.

5 steps to a private cloud;

  1. Identify a business need – what is your cloud driver?  What will benefit from;
    1. Greater agility
    2. Increased speed to develop and release
    3. Elastic processes that vary greatly over time, such as peak shopping days or month end processing etc.
    4. DevOps
    5. Testing
    6. Rapid prototyping

2. Assess your current infrastructure – is there excess capacity?  Is the hardware virtualisation-ready?  Can your existing infrastructure scale? (Note that a cloud can be physical rather than virtual if required.)  Is new cloud infrastructure needed?  What are your storage requirements?  What are your data recovery and portability requirements?  How will you support a private cloud with your existing security tools and processes (e.g. where do you plug in your IPS?)  Are your processes robust and scalable?  Can you monitor at scale?  Can you manage change at scale?

3. Define your delivery strategy – who are your consumers?  Developers, administrators, general employees, others?  The competency level of your consumers defines the delivery means (e.g. developers and admins may get a CLI, while general employees may get a ‘one click’ web portal).  The delivery mechanism matters!  Create a service catalogue, and ensure ‘back end services’ are in place.

4. Transformation – you cannot forklift into the cloud; legacy applications that do not scale horizontally will not work.  More resources != greater performance.  You need to design in scale and security.  Modernise code and frameworks.  Re-test – simulate cloud scale and failures.  Re-think automation and scale.

5. Operationalize – think about the complete service life-cycle, from deployment to destruction.  Resilience.  Where does security fit into this?  Everywhere – whether applications or services.  Secure design from the ground up – embed it into architecture and design – then security is no longer on the critical path to deployment!

Overall this was an entertainingly presented talk that was a little light on detail / content, but I think the 5 points are worth bearing in mind if you are thinking of implementing a private cloud in your organisation.

—————

Cloud security standards;

This talk gave an overview of some of the current standards relating to cloud security.  Below is a list of some of the cloud security standards / controls / architectures / guidance that you should be aware of if you are working with, or planning to work with, any sort of public cloud solution.

ITU – 

–          Cloud Security Reference Architecture

–          Cloud security framework

–          Guidelines for operational security

–          Identity management of Cloud computing

ISO  –

–          27017 – guidelines on information security controls for the use of cloud computing services, based on ISO/IEC 27002

–          27036-4 – Supply chain security: Cloud

–          27040 – Storage security

–          27018 – Code of practice for data protection controls for public cloud computing services

–          SC7 – Cloud governance

–          SC38

–          Controls for cloud computing security

–          Additional controls for 27001 compliance in the cloud

–          Implementation guidance for controls

–          Data protection implementation guidance

–          Supply chain guidance

NIST – 

–          800-125 – Guide to security for full virtualisation technologies

–          800-144 – Guidelines on security and privacy in public cloud computing

–          NIST cloud reference architecture

OASIS – 

–          Identity in the Cloud

ODCA (Open Data Center Alliance) – 

–          Provider assurance usage model

–          Security monitoring usage model

–          RFP requirements

CSA – 

–          Cloud Controls Matrix

–          Trusted cloud infrastructure

–          Security as a Service

–          Cloud trust protocol

–          Guidance document

The CSA Cloud Controls Matrix maps many of these standards to cloud control areas with details of the specification and the standard components each specification meets / relates to.

While a pretty dry topic, this is a useful reference list if you are looking for more information on cloud / cloud security related standards and guidance.

K

 

An Awarding Week!

I had planned a wrap up post around my thoughts from the RSA conference for this week, but it has been a very busy and surprisingly rewarding week.  A combination of some University coursework due Monday and some great news has meant little time for writing (well, non-university writing anyway).  There will still be a wrap up for RSA, likely early next week, but I wanted to share some exciting news relating to the Security as a Service working group I help lead for the Cloud Security Alliance (CSA).

I found out this week that the CSA are giving me an award for the volunteer work I have done for them over the last year or so.  They are also assisting with getting me to their congress in Orlando from the 6th to 9th November, so I’ll be packing my bags and jetting off to the US for a few days!

The award is called the Ron Knode Service Award, in honour of one of the early members of the CSA who passed away earlier this year.  For me this is a great piece of recognition, as it is the first year these awards have been given out, and of the ~40,000 members of the CSA, only 6 people have been recognised with this award!

Rather than continue on about it myself, I thought I would include the emails I was sent confirming the award, as they probably cover it better than I could;

The first was from Luciano (J.R.) Santos, the CSA’s Global Research Director –

Dear Kevin,

It is my great pleasure to inform you that you have been selected to receive the 1st Annual Ron Knode Service Award recognizing excellence in volunteerism. On behalf of the Cloud Security Alliance, I would like to congratulate you on receiving this award for the EMEA Region.  Ron Knode was an information security expert and member of the Cloud Security Alliance family, who passed away on May 31, 2012. Ron was an innovative thinker and the author of the CSA Cloud Trust Protocol. Ron was a cherished member of CSA, with endless energy and humor to guide his volunteer contributions.  In Ron’s memory, the Cloud Security Alliance in 2012 instituted the annual Ron Knode Service Award, recognizing excellence in volunteerism for 6 honorees from the Americas, Asia-Pacific and EMEA regions.

At this time, the ceremonies are being planned, but exact dates and locations have not been confirmed.   Daniele will be in touch with you when additional details become available.  In the meantime, if you have any questions please don’t hesitate to contact me or Daniele.  Warmest thanks for all of your hard work and outstanding contributions as a member of the Cloud Security Alliance.  We recognize how much time and energy you put into our organization, and we deeply appreciate all of your efforts.  

 We are thrilled to present you with this award.  Our PR Manager Kari Walker will be reaching out to you as we put together a press release officially announcing the winners.  In addition, we’ll need you to send a current photo and bio to our webmaster Evan Scoboria.  Evan will be creating a section on the CSA main site honoring the winners of this award.  We value your volunteer contributions and believe that the devotion of volunteers like you will continue to lead CSA into the future.  Congratulations on a job well done!

 Best Regards,

 Luciano (J.R.) Santos

CSA Global | Research Director

———

The second email was from Jim Reavis, the CSA Executive Director –

Thank you all for your efforts.  To narrow this list down to 6 globally
was a major chore and you should be proud. Volunteerism for the common
good is among the highest callings in our industry, and the CSA family
appreciates your outstanding contributions.  Please let us know if there
is anything that CSA can do for you.  As we continue to grow, we look
forward to working together and being able to do even more for you.

Best Regards,

Jim Reavis
Executive Director, Cloud Security Alliance

———

As you may have guessed, I am extremely pleased to be receiving this award; it really has helped make the work worthwhile, on top of the satisfaction of seeing it all published, of course!

For those of you going to the CSA congress, I look forward to seeing / meeting you in a couple of weeks.  For everyone else, watch this space for the RSA conference wrap up and further writings on security and architecture.

K

RSA Conference Europe 2012 – They’re inside… Now what?

Eddie Schwartz – CISO, RSA and Uri Rivner – Head of cyber strategy, Biocatch

The talk started with some discussion around general Trojan attacks against companies, rather than long term high tech APTs, with the tagline: if these are random attacks.. we’re screwed!

It is worth checking the pitch, but there was a series of examples from the RSA lab in Israel of usernames, passwords and other data that Trojans had sent to C&C servers in Russia.  These included credentials from banks, space agencies, science agencies, nuclear material handling companies etc.

So what do the controllers of these Trojans do with the data?  Remember these are random attacks collecting whatever personal data they can get, not specific targeted attacks.  A common approach is to sell the data; you can find examples of criminals on message boards etc. offering banking, government and military credentials for sale.

Moving on to examples of specifically targeted attacks and APTs..  Examples of targeted attacks include Ghostnet, Aurora, Night Dragon, Nitro and Shady RAT.  These have attacked everything from large private companies, to critical infrastructure, to the UN.  All of the given examples had one thing in common – social engineering.  Every one used spear phishing as its entry vector.

From this I think you need to consider – do you still think security awareness training shouldn’t be high on your organisation’s to-do list?

The talk went on to discuss Stuxnet and Duqu, along with their similarities and differences, largely covering what was captured in my last post.  The interesting observation here was their likely different places in the attack process: Stuxnet came at the end and was the actual attack, while Duqu sat much earlier in the process as it was primarily for information gathering.

A whole lot more targeted malware examples were given including Jimmy, Munch, Snack, Headache etc.  Feel free to look these up if you want to do some further research.

A very recent example of a targeted attack, only discovered in July of this year, is VOHO.  This campaign was heavily targeted at geopolitical and defence targets in Boston, Washington and New York.  It was a multistage campaign heavily reliant on JavaScript.  While focused on specific target types, the attack was very broad, hitting over 32,000 unique hosts and successfully infecting nearly 4,000.  This is actually a very good success rate, and the campaign was no doubt considered a success by those instigating it..

In light of this evidence it is clear we need a new security doctrine.  You will get hacked despite your hard work; if it has not yet happened, it will..  Learn from the event – an honest evaluation of faults and gaps should result in improvements.

Things to consider as part of this new doctrine;

–          Resist – Threat resistant virtualisation, Zero day defences

–          Detect – Malware traces, Big data analytics, behavioural profiling

–          Investigate – Threat analysis, Forensics and reverse engineering

–          Cyber Intelligence – Threat and Adversary intelligence

Cyber intelligence was covered in some more specific detail, around how we can improve it;

–          External visibility – industry / sector working groups, government, trusted friends and colleagues, vendor intelligence;

  • Can this information be quickly accessed?  For speed it should be in a machine-readable format, but use whatever works!

–          Internal visibility – do you have visibility in every place it is needed: HTTP, email, DNS, sensitive data etc.?

  • Do you have the tools in place to make use of and analyse all of these disparate data sources?

–          Can you identify when commands like NET.. and schedulers etc. are being used?  (A toy log sweep along these lines is sketched after this list.)

–          Do you have visibility of data exfiltration, scripts running, PowerShell, WMIC (Windows Management Instrumentation Command-line) etc.?

–          Do you have the long term log management and correlation in place to put all the pieces of these attacks together?
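As a toy illustration of the sort of sweep the last few points describe, the Python sketch below greps a hypothetical endpoint command log for the usual suspects – net, at, schtasks, powershell, wmic.  A real deployment would use a SIEM with long term storage and correlation, but the basic idea is the same;

    import re
    from pathlib import Path

    # Hypothetical log location and patterns - tune both to your environment.
    SUSPICIOUS = re.compile(
        r"\b(net\.exe|net1\.exe|at\.exe|schtasks|powershell|wmic)\b",
        re.IGNORECASE,
    )

    def sweep(logfile):
        hits = []
        for lineno, line in enumerate(logfile.read_text(errors="ignore").splitlines(), 1):
            if SUSPICIOUS.search(line):
                hits.append((lineno, line.strip()))
        return hits

    for lineno, line in sweep(Path("/var/log/endpoint/commands.log")):
        print(f"line {lineno}: {line}")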

Summary recommendations and call to action..

–          Assume you are breached on a daily basis and focus on adversaries, TTPs and their targets

–          Develop architecture and tools for internal and external intelligence for real-time and post-facto visibility into threats

–          Understand current state of malware, attack trends, scenarios, and communications

–          Adjust security team skills and incident management work flow

–          Learn from this and repeat the cycle..

Next steps (call to action!);

–          Evaluate your defence posture against APTs, and take the advice from the rest of this post

–          Evaluate your exposure to random intrusions (e.g. data stealing Trojans), and take the advice from the rest of this post

This was a useful presentation from a technical and security team standpoint, but it completely missed the human and security awareness training aspect – despite highlighting that all the example APTs used spear phishing to get in the door.  I’d recommend following all the advice of this talk and then adding a solid security awareness program for all employees, really embedding this into the company philosophy / culture.

K

RSA Conference Europe 2012 – Duqu, Flame, Gauss: Followers of Stuxnet

Boldizsar Bencsath, CrySyS Lab

Stuxnet – 2010 – modified PLCs (Programmable Logic Controllers) in uranium enrichment facilities.  Most likely government backed, and dubbed ‘the most menacing malware in history’ by Wired magazine.

Duqu – discovered by CrySyS Lab in the wild when responding to an incident.  Stuxnet destroyed Iranian centrifuges; Duqu is for information gathering.

However, they are very similar in terms of design philosophy, internal structure and mechanisms, implementation details, and the effort that would have been required to create them.  Additionally, Duqu also used a digitally signed driver, as Stuxnet did.

Duqu was so named as it creates temp files starting with the string ~DQ.
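As a trivial illustration of that indicator, the Python sketch below sweeps the local temp directory for the ~DQ prefix.  This is a crude string match for teaching purposes only – nothing like the heuristic detection performed by the CrySyS toolkit mentioned below;

    import tempfile
    from pathlib import Path

    # Look for files whose names carry the ~DQ marker Duqu left behind.
    tmp = Path(tempfile.gettempdir())
    suspects = [p for p in tmp.iterdir() if p.name.startswith("~DQ")]

    for p in suspects:
        print("possible Duqu artefact:", p)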

The actual relationship between the two, and who created Duqu, is not known, but it is suspected that the Stuxnet creators at least had some involvement in creating Duqu.

Duqu is a very clean design that automatically downloaded only the modules it required from Command and Control (C&C) servers.  Thus investigators do not know the full extent of its capabilities as they can only see the modules that were downloaded to the targets they investigated.  The Duqu C&C servers may have hosted the Stuxnet PLC code for example.

The components of Duqu that were discovered included;

–          Registry data to point to components

–          Keyloggers

–          Multiple encrypted payloads

–          Pointers to how to decrypt the payloads

–          Of note, different payloads were encrypted with different methods

From a CrySys Lab viewpoint they;

–          Discovered and named Duqu

–          Freely shared their knowledge with AV vendors and Microsoft

–          Identified the dropper

–          Developed the Duqu detector toolkit

  • Focusing on heuristic anomaly detection
    • AV tools already have basic signature detection so no reason to duplicate this
  • Detects live Duqu instances and remnants of old ones
  • Also detects Stuxnet
  • Open source for anyone to use

Moving into 2012, another variant / descendant of Stuxnet / Duqu has been discovered.  This is known as Flame / Flamer / sKyWIper.  Flame has been described as the ‘most complex malware ever found’; its core component is 6MB in size.

Flame appears to follow the same main requirements / specifications as Duqu and Stuxnet, but has been developed in a very different way, using different programming languages etc.  Flame is another information stealing malware, with functionality such as;

–          activating microphones and web cameras

–          logging key strokes

–          taking screen shots / screen scraping

–          extracting geolocation data from images

–          sending and receiving commands and data through Bluetooth, including enabling Bluetooth when it is turned off

Flame infects computers by masquerading as a proxy for Windows Update, and has infected 1000s of victims, mostly across Iran and the Middle East.

Gauss is another example of information stealing malware, based on the Flame platform.  This was also discovered in 2012, but infections date back to September 2011; again there are 1000s of victims, mainly in Lebanon, Israel and the Palestinian Territory.

Gauss has been further developed with the Gauss ‘Godel’ module.  This has an advanced encrypted warhead using RC4, and the decryption key is not present in the malware itself.  This is in contrast to Stuxnet, Duqu and Flame, which used simple XOR masking or byte substitution.  The encrypted warhead can only be decrypted on the target system, making the malware resistant to detailed analysis.  The Godel module is big enough to contain Stuxnet-like SCADA targeted attacks as well as the information stealing attacks found so far.
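To make the ‘key lives on the target, not in the malware’ idea concrete, here is a toy Python sketch.  It is emphatically not Gauss’s actual scheme – the target attribute, hash choice and round count are all illustrative – but it shows why an analyst who lacks the victim’s exact configuration cannot recover the payload;

    import hashlib

    def rc4(key, data):
        # Standard RC4: key scheduling, then XOR with the keystream.
        S = list(range(256))
        j = 0
        for i in range(256):
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        out, i, j = bytearray(), 0, 0
        for b in data:
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            out.append(b ^ S[(S[i] + S[j]) % 256])
        return bytes(out)

    def derive_key(target_attribute, rounds=10000):
        # The key is derived from the victim's own configuration (here a
        # path string), hashed repeatedly - it never appears in the binary.
        digest = target_attribute.encode()
        for _ in range(rounds):
            digest = hashlib.md5(digest).digest()
        return digest

    key = derive_key(r"C:\Program Files\SomeVendorDir")  # hypothetical attribute
    warhead = rc4(key, b"payload only this machine can see")  # 'encrypt'
    assert rc4(key, warhead) == b"payload only this machine can see"  # RC4 is symmetric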

The talk also had some great graphics highlighting the structure of the various forms of malware discussed.

Lessons learnt from this research;

–          Current approaches for defending systems from targeted attacks are ineffective

  • Code signing is not bullet proof
  • Virus scanners should have improved heuristics and anomaly detection

–          Coordinating international / global threat mitigation and forensic analysis are challenging problems

  • How do we better share information quickly and while preserving evidence?
  • How do we identify and capture C&C servers quickly?
  • How do we track along the C&C proxy chain?

–          Attackers are using ever more advanced techniques

  • MD5 collision attack in Flame
  • Encrypted payload in Gauss

What can you do to better protect your organisation from similar attacks?

–          Extend protection beyond signature based techniques

  • Anomaly detection – understand normal use patterns (a toy example follows this list)
  • Heuristics
  • Baits, traps, honeypots (I’d say these ones are pretty advanced and likely used by only the most security conscious and savvy organisations)

–          Educate your IT teams to spot and raise anomalies

–          Use Forensics – every organisation should have some forensic capabilities

–          Have an incident response plan, with methods to contact external professionals / experts if required

–          Look into ways to better share information!
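As promised above, here is a toy sketch of baseline anomaly detection: flag any host whose outbound traffic today sits well outside its own history.  The hosts, byte counts and the 3-sigma threshold are all hypothetical – real products model many more dimensions;

    from statistics import mean, stdev

    history = {  # hypothetical per-host daily outbound bytes
        "host-a": [120e6, 130e6, 110e6, 125e6],
        "host-b": [80e6, 95e6, 90e6, 85e6],
    }
    today = {"host-a": 128e6, "host-b": 900e6}  # host-b looks like exfiltration

    for host, samples in history.items():
        threshold = mean(samples) + 3 * stdev(samples)
        if today[host] > threshold:
            print(f"{host}: {today[host]:.0f} bytes out, above baseline {threshold:.0f}")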

It is well worth checking the CrySyS Lab blog for further information on the malware mentioned in this talk, plus many related topics;

Blog.crysys.hu

This talk did a great job of highlighting how one advanced attack inspires many new variants, and how attacks and attackers are becoming ever more advanced and sophisticated.  What is an advanced, state-sponsored attack one day will be in point and shoot hacking toolkits the next..

K

Service Technology Symposium Day 2..

Today was the second day of the Service Technology Symposium.  As with yesterday, I’ll use this post to review the keynote speeches and provide an overview of the day.  Where relevant, further posts will follow providing more detail on some of the day’s talks.

As with the first day, the day started well with three interesting keynote speeches.

The first keynote was from the US FAA (Federal Aviation Administration) and was titled ‘SOA, Cloud and Services in the FAA airspace system’.  The talk covered the program that is underway to simplify the very complex National Airspace System (NAS).  This is the ‘system of systems’ that manages all flights in the US and ensures the control and safety of all the planes and passengers.

The existing system is typical of many legacy systems.  It is complex, built entirely on point to point connections, hard to maintain, and even minor changes require large regression testing.

Thus a simplification program has been created to deliver a SOA-based, web centric, decoupled architecture.  To give an idea of the scale, the program is in two phases, with phase one already largely delivered, yet it is scheduled to run through to 2025!

As mentioned, the program is split into two segments to deliver capabilities and get buy in from the wider FAA.

–          Segment 1 – implemented a set of federated services, some messaging and SOA concepts, but no common infrastructure.

–          Segment 2 – common infrastructure – more agile; the project is effectively creating a message bus for the whole system.

The project team was aided by the creation of a Wiki and a COTS (commercial off the shelf) software repository.

They have also been asked to assess the cloud – there is a presidential directive to ‘do’ cloud computing.  They are performing a benefits analysis from operational to strategic.

Key considerations are that the cloud must not compromise the NAS, and that security is paramount.

The cloud strategy is defined, and they are in the process of developing recommendations.  It is likely that the first systems to move to the cloud will be supporting and administrative systems, not key command and control systems.

The second keynote was about cloud interoperability and came from the Open Group.  Much of this was taken up with who the Open Group are and what they do.  Have a look at their website if you want to know more;

http://www.opengroup.org/

Outside of this, the main message of the talk was the need for improved interoperability between different cloud providers.  This would make it easier to host systems across vendors, and would also make it easier for customers to change providers.

As a result improved interoperability would also aid wider cloud adoption – Interoperability is one of the keys to the success of the cloud!

The third keynote was titled ‘The API economy is here: Facebook, Twitter, Netflix and YOUR IT enterprise’.

API refers to Application Programming Interface, and a good description of what this means can be found on Wikipedia here;

http://en.wikipedia.org/wiki/Application_programming_interface

The focus of this keynote was that by making their own APIs public, and by making use of public APIs, businesses can help drive innovation.

Web 2.0 – lots of technical innovation led to web 2.0, and this then led to and enabled human innovation via the game changer that is the open API: reusable components that can be used / accessed / built on by anyone.  Then add into the mix the massive, always on user base of smartphone users, with more power in their pockets than was needed to put Apollo on the moon.  The opportunity to capitalise on open APIs is huge.  As an example, there are currently over 1.1 million distinct apps across the various app stores!
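As a flavour of how low the technical barrier to entry now is, here is a minimal sketch of a reusable open API endpoint using Python’s Flask framework (the route and payload are purely illustrative).  Anything that speaks HTTP and JSON – a phone app, a partner system, a mash-up – can build on it;

    from flask import Flask, jsonify

    app = Flask(__name__)

    # One tiny, documented endpoint is enough for others to start building on.
    @app.route("/api/v1/status")
    def status():
        return jsonify({"service": "demo", "status": "ok"})

    if __name__ == "__main__":
        app.run(port=8080)  # consumers hit http://localhost:8080/api/v1/status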

Questions for you to consider;

1. How do you unlock human innovation in your business ecosystem?

–          Unlock the innovation of your employees – How can they innovate and be motivated?  How can they engage with the human API?

–          Unlock the potential of your business partner or channel sales community; e.g. Amazon Web Services – merchants produce, provide and fulfil goods orders, while Amazon provides the framework to enable this.

–          Unlock the potential of your customers; e.g. IFTTT (If This Then That), which has put workflow in front of many of the available APIs on the internet.

2. How to expand and enhance your business ecosystem?

–          Control syndication of your brand – e.g. the Facebook ‘like’ button – everyone knows what this is, and every user has to use the same standard like button.

–          Expand the breadth of your system – e.g. Netflix used to be just website video on demand; it is now available on many platforms – consoles, mobile, tablet, smart TV, PC etc.

–          Standardise the experience – e.g. Kindle or Netflix – you can watch or read on one device, stop, and pick up from the same place on another device.

–          Use APIs to create ‘gravity’ to attract customers to your service by integrating with services they already use – e.g. travel aggregation sites.

This one was a great talk with some useful thought points on how you can enhance your business through the use of open APIs.

On this day I fitted in 6 talks and one no show.

These were;

Talk 1 – Cloud computing’s impact on future enterprise architectures.  Some interesting points, but a bit stuck in the past, with a lot of focus on ‘your data could be anywhere’ when most vendors now provide consumers the ability to ensure their data remains in a specific geographical region.  I won’t be prioritising writing this one up, so it may or may not appear in a future post.

Talk 2 – Using the cloud in the Enterprise Architecture.  This one should have been titled ‘the Open Group and TOGAF’, with 5 minutes of cloud related comment at the end.  Another one that likely does not warrant a full write up.

Talk 3 – SOA environments are a big data problem.  This was a brief talk, but with some interesting points around managing log files, using Splunk and ‘big data’.  There will be a small write up on this one.

Talk 4 – Industry orientated cloud architecture (IOCA).  This talk covered the work Fulcrum have done with universities to standardise their architectures and messaging systems to improve inter-university communication and collaboration.  It was mostly marketing for the Fulcrum work and there wasn’t a lot of detail, so it is unlikely to be written up further.

Talk 5  – Time for delivery: Developing successful business plans for cloud computing projects.  This was a great talk with a lot of useful content.  It was given by a Cap Gemini director so I expected it to be good.  There will definitely be a write up of this one.

Talk 6 – Big data and its impact on SOA.  This was another good, but fairly brief, talk; it will get a short write up, possibly combined with Talk 3.

And there you have it, that is the overview of day two of the conference.  Looks like I have several posts to write covering the more interesting talks from the two days!

As a conclusion, would I recommend this conference?  It’s a definite maybe.  Some of the content was very good, some either too thin or completely focussed on advertising a business or organisation.  The event organisation was also terrible, with 3 talks I planned to attend not happening and the audience left totally hanging rather than being informed the speaker hadn’t arrived.

So a mixed bag, which is a shame as there were some very good parts, and I managed to get 2 free books as well!

Stay tuned for some more detailed write ups.

K

Consumerism of IT 2..

Following on from my previous post, which covered briefly what consumerism of IT and Bring Your Own Device (BYOD) are, I’ll now cover some of the things these trends mean for ICT departments.

Any IT business or IT department that thinks it does not need to consider the impacts of consumerism and BYOD should think again!  Regardless of perceived business benefits such as cost savings or flexibility, or even the side benefits around the improved security and management that come from using VDI to centralise business owned user computing resources, as BYOD becomes more mainstream it will become an expected benefit / perk rather than the exception.

As an example of how this is already becoming more mainstream; several large companies such as IBM and Citrix are embracing this trend and have well established BYOD programs.

Ask yourself, do you want to attract the best talent? If the answer is yes then you need to ensure the working environment you offer is up there with the best of your competitors.  This includes offering things like BYOD programs across mobiles, tablets, laptops etc. and / or offering a wider variety of consumer type devices such as tablets and smartphones.

The challenge, as is often the case, will be to understand how these changes and trends can be harnessed to provide business benefits and create an attractive working environment, while still ensuring the security of your and your customers’ data and maintaining a stable and manageable ICT estate.

BYOD and the consumerism of IT can and will make sweeping changes to how IT departments manage and provision user devices.  Whether this is due to supporting a wider variety of devices directly, or from relinquishing some control and embarking on a BYOD program, there will be changes.  What they are will depend on the route your company takes and how mature your company currently is regarding technology such as desktop virtualisation and offering functionality via web services.  If you currently have little or no VDI type solution, and most of your application access is via thick or dedicated client software, the changes are likely to prove very challenging.  On the other hand, if you are at the other end of the scale, with a large and mature VDI (Virtual Desktop Infrastructure) deployment along with most applications and processes being accessed via a browser, then the transition to more consumer or BYOD focussed end user IT will likely be relatively straightforward from a technical standpoint.

At the risk of sounding like a broken record (well, hopefully not), the first thing you need to do before embarking on any sort of BYOD program is to get the right policies and procedures in place.  These must ensure company data remains safe and that there are clear and agreed rules for how any devices can be used, how they can access data, and how access, authentication and authorisation are managed, along with the company’s requirements around things like encryption and remote wipe capabilities.

NIST (the National Institute of Standards and Technology) has recently released an updated draft policy around managing and securing mobile devices such as smartphones and tablets.  This covers both company owned (consumerism) and user owned (BYOD) devices, and can be used as a great starting point for the creation of your own policies.  It’s worth noting that NIST highlights BYOD as being more risky than company owned devices, even when the devices are the same.  The policy draft can be found here;

http://csrc.nist.gov/publications/drafts/800-124r1/draft_sp800-124-rev1.pdf

Once you have the policies in place you will need to assess the breadth of the program; this must include areas such as;

–         Will you allow BYOD, or only company supplied and owned equipment

–         Which devices are allowed

–         Which O/Ss and applications are permitted; this should include details of O/S minor versions and patch levels etc. (a toy version-check sketch follows this list)

–         How will patching of devices and applications be managed and monitored

–         What levels of access will the users and devices be permitted

–         What architectural changes are required to the environment in order to manage and support the program

–         How will licenses be managed and accounted for

–         What are the impacts to everything from the network (LAN, WAN and internet access), to applications and storage, to desk space (will users have more or fewer devices on their desks), to the provision of power (will there be more devices and chargers etc. on the floors)
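As flagged in the list above, even a simple permitted-platform rule needs to be written down somewhere machine readable before it can be enforced.  Here is a toy Python sketch of such a check; the platforms and minimum versions are entirely hypothetical, and a real MDM (Mobile Device Management) product would check far more;

    # Toy BYOD policy: platform -> minimum permitted O/S version.
    POLICY = {
        "ios": (6, 0),
        "android": (4, 1),
    }

    def is_permitted(platform, version):
        minimum = POLICY.get(platform)
        return minimum is not None and version >= minimum

    print(is_permitted("android", (4, 2)))  # True - meets the minimum
    print(is_permitted("android", (2, 3)))  # False - too old to enrol
    print(is_permitted("symbian", (9, 4)))  # False - platform not permitted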

This is by NO means an exhaustive list; the point of these posts is to get you thinking about what is coming, and whether your company will embrace BYOD and the consumerism of IT.

CIO.com recently ran an article titled ‘7 Tips for Establishing a Successful BYOD Policy’ that covers some similar points and is worth a read;

http://www.cio.com/article/706560/7_Tips_for_Establishing_a_Successful_BYOD_Policy

There are several useful links from the CIO article that are also worth following.

It would be great to hear your thoughts and experiences on the impacts of consumerism and BYOD.

K