Cheap IOPS, Expensive Gigabytes…

Recently we implemented a fast storage solution to meet the needs of a growing (and horribly non-relational and un-normalised) database.  While architecturally a very simple solution, it is one of those products that performs exactly as advertised and has impressed us immensely with its performance, so I wanted to write briefly about it should anyone reading this need a fast server-based storage solution.

The products in question are from a company called Fusion-IO and are called ioDrives; the same drives are also available through HP, who re-brand them as IO Accelerator cards.

The title of this post actually comes from a conversation with one of the guys from Fusion-IO while we were evaluating the performance of their cards.  He pointed out that what the cards effectively do is ‘make IOPS cheap and Gigabytes expensive’.

IOPS = Input/Output Operations Per Second – basically how many read or write operations the device (hard disk / SSD / array) can service per second.
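To relate the two units in the title: throughput is simply IOPS multiplied by the I/O size, so, as a purely illustrative example, a device sustaining 10,000 IOPS at an 8KB block size is moving roughly 10,000 × 8KB ≈ 80MB/s.  Lots of small random I/O therefore stresses IOPS, while large sequential transfers stress bandwidth.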

The published performance figures for the cards were very impressive, but obviously we wanted to prove these for ourselves: in our environment, on our hardware, with a simulation of our typical workload (the mix of reads and writes we typically see from the application).

The cards used in our testing, and subsequently in our production environment, are the 640GB MLC cards, details of which can be found here:

http://www.fusionio.com/products/iodriveduo/

We tested the performance in an HP DL580 G7 with 4 × 8-core Xeon CPUs @ 2.26GHz and 128GB of RAM, using the SQLIO Disk Subsystem Benchmark Tool from Microsoft.
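For reference, an SQLIO run along these lines looks like the following.  The file name, block size, duration and thread/queue counts here are illustrative rather than our exact test parameters, and the test file should be pre-created large enough to defeat any caching.

8KB random writes, 8 threads, 8 outstanding I/Os per thread, 5 minutes, unbuffered, with latency stats:

    sqlio -kW -t8 -o8 -b8 -frandom -s300 -BN -LS testfile.dat

8KB random reads with the same settings:

    sqlio -kR -t8 -o8 -b8 -frandom -s300 -BN -LS testfile.dat

Running a blend of read and write instances like these approximates a mixed workload; SQLIO reports IOPS, MB/s and latency figures for each run.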

The results certainly met our expectations: we saw >90,000 IOPS from a mixed read/write configuration via the SQLIO tool in our real-world scenario.  This is particularly impressive given the use of very ‘normal’, off-the-shelf hardware components.

We were comfortable enough with the results that we recommended these cards for a critical production system in a mirrored configuration (RAID 1 across two cards), and as mentioned we have since implemented this solution with great success.
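I won’t go into the exact build here, but as a general illustration: the cards present to Windows as ordinary block devices, so one common way to mirror a pair is plain Windows software RAID via a diskpart script.  A minimal sketch, assuming the two cards enumerate as disks 1 and 2 (not necessarily our exact configuration):

    rem convert both ioDrives to dynamic disks, then mirror them
    select disk 1
    convert dynamic
    select disk 2
    convert dynamic
    create volume mirror disk=1,2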

It is worth noting that the cards have a considerable amount of built-in resilience and redundancy, so not all implementations would require mirrored cards; in fact, according to the vendor, most implementations are not RAID 1 and they rarely, if ever, see any issues.

Before signing off this post I should mention my colleague Ben Cox, our local DBA extraordinaire, who actually ran the tests and documented the outputs.

K
