HostPresto! recently upgraded the bulk of its services from a centralised Storage Area Network (SAN) solution to 'old school' direct attached storage (DAS) using Solid State Disks.
While at first this seems like a step backwards (for reasons we'll discuss below), after some careful consideration the advantages of going the DAS route become clear, and far outweigh those of the traditional SAN.
It's worth pointing out at this stage that it's only recently, thanks to advances in the quality, capacity and performance of SSDs, that it has become possible to leave behind the SAN architecture and return to direct attached disks. Traditional magnetic media - even enterprise grade 15K RPM SAS drives - could not provide the IOPS (input/output operations per second) to cope with high workloads in small quantities, hence they would be pooled together in their hundreds in a SAN to combine their IOPS.
For example, let's assume we have a host providing VPS services to clients. This server has the capacity to host 64 virtual machines, and has space for 6 x 2.5" hard disk drives. Before SSDs you would drop in 6 x 15K RPM 600GB disk drives in RAID 10, providing 1.8TB of storage and the combined, rather low, IOPS of all 6 disks. If several of the 64 VPSs hosted on that server were running intensive read/write workloads, it would get bogged down very quickly. Your clients wouldn't be too happy that their MySQL queries were taking several seconds to return results - let alone that their website, which runs 5 queries on every page, wasn't loading for visitors in under 10 seconds.
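To see just how thin the IOPS get when sliced 64 ways, here's a rough back-of-the-envelope sketch. The per-disk IOPS figure and the RAID 10 write penalty are assumptions for illustration, not measurements of any particular drive:

```python
# Assumed ballpark: a 15K RPM SAS drive delivers roughly 175-200 random IOPS.
HDD_IOPS = 180   # assumed random IOPS per 15K RPM SAS disk
DISKS = 6
VMS = 64

# RAID 10: reads can be served by any disk; each logical write
# costs two physical writes (one per mirror side).
read_iops = HDD_IOPS * DISKS          # aggregate random read IOPS
write_iops = (HDD_IOPS * DISKS) // 2  # aggregate random write IOPS

per_vm_read = read_iops / VMS         # fair-share read budget per VM
per_vm_write = write_iops / VMS       # fair-share write budget per VM

print(read_iops, write_iops, per_vm_read, per_vm_write)
```

Under these assumptions each VM's fair share is under 20 random reads and under 10 random writes per second - a single busy MySQL instance can consume the whole host's budget on its own.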
This is the reason you would use a SAN: it pools hundreds of the same disks together, providing performance many times that of locally attached disks, albeit with disadvantages which we'll look at below.
Fast forward to today, and we have a vast choice of SSDs to choose from, all very reliable, fast and relatively cheap. Now you can put in 6 SSDs with 800GB of capacity each, providing a total of 2.4TB of capacity and several thousand IOPS per drive. Not only do these 6 SSDs provide thousands more IOPS than even a SAN, but they're local too, meaning there is near-zero latency on the bus (which is just as important!).
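Re-running the same back-of-the-envelope maths with SSDs shows the scale of the jump. The per-drive IOPS figure here is an assumed round number for a typical enterprise SATA SSD, not a benchmark of any specific model:

```python
SSD_IOPS = 40_000  # assumed random 4K IOPS for one enterprise SATA SSD
HDD_IOPS = 180     # assumed random IOPS for one 15K RPM SAS disk
DISKS = 6
VMS = 64

ssd_array_iops = SSD_IOPS * DISKS      # aggregate read IOPS of the 6-SSD array
per_vm = ssd_array_iops / VMS          # fair-share IOPS budget per VM
per_drive_speedup = SSD_IOPS / HDD_IOPS  # one SSD vs one 15K HDD

print(ssd_array_iops, per_vm, round(per_drive_speedup))
```

With these figures, each VM's fair share jumps from tens of IOPS to thousands - a single SSD outperforms a 15K HDD by a couple of hundred times on random I/O.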
This graph shows SSD random 4K IOPS performance versus a traditional magnetic hard disk drive (HDD).
The performance advantage is clear. Now let's compare the advantages of each solution versus the other in the above scenario.
Storage Area Network
A SAN is a dedicated device, or cluster of devices, containing arrays of disk drives mirrored and striped together to provide speed and redundancy. They're connected to hosts over the network (hence the term 'Storage Area Network'), commonly using either the iSCSI or NFS protocol.

A 3Par Enterprise Grade SAN
- Scalability - A SAN is essentially one giant hard disk, with the physical drives abstracted away from what is presented to the host. Once the disks become full, it is simply a case of adding more disks and growing your storage pools. With DAS, once the storage on a host is full, that host can no longer accept new virtual machines even if it has spare CPU and memory capacity.
- Flexibility - As the SAN storage is shared between many hosts, virtual machines do not exist on any one physical server, but rather float on top. This makes it possible to migrate virtual machines from one host to another with only a few seconds of downtime, giving a host a way to seamlessly upgrade, consolidate or fail over without consumers noticing. With DAS, storage is local to the host, meaning virtual machines often have to be shut down to be migrated.
Direct Attached Storage
DAS is storage which is directly attached to the host, just as a hard disk drive is connected to a standard home computer. Using SAS or SATA as the bus protocol, there is no network between the storage and the host.

An Intel DC S3700 Solid State Disk
- Decentralised - With only a few disks per host, any storage-related incident is isolated to that host and the virtual machines on it. Should a SAN have a catastrophic failure, every virtual machine (probably thousands) goes offline, and the recovery process is extremely slow.
- Low Latency - As there is no network (e.g. a network switch) between the disks and the host, latency is very low (microseconds vs milliseconds). Low latency is very important: the higher the latency, the longer the operating system has to wait to read or write files, which results in wasted CPU cycles.
- High Bandwidth - With SATA 3.0, each hard disk gets a dedicated 6Gbps of bandwidth, whereas a SAN may have multiple 10Gb Ethernet links which are shared between many hosts.
- Economical - Enterprise grade SAN solutions are extremely expensive, whereas DAS is a mature, cheap and plentiful technology.
- Power Saving - A host almost always needs local hard disks for its operating system. With a SAN, these local disks sit idle the majority of the time, consuming power without being useful. With DAS, the same disks that hold the OS also provide the primary storage for virtual machines, putting that power to use. A SAN itself will often consist of two large controller hosts plus arrays of disks, consuming several thousand watts on its own.
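The latency and bandwidth points above can be put into rough numbers. All figures in this sketch are assumptions chosen for illustration (in particular the SAN latency and the number of hosts sharing an uplink), not measurements:

```python
# Latency: for a synchronous, queue-depth-1 workload, the most IOPS a
# single thread can achieve is 1 / latency.
LOCAL_LATENCY_S = 0.0001  # assumed ~100 microseconds to a local SSD
SAN_LATENCY_S = 0.001     # assumed ~1 millisecond once a network hop is added

local_iops_ceiling = 1 / LOCAL_LATENCY_S  # synchronous IOPS ceiling, local
san_iops_ceiling = 1 / SAN_LATENCY_S      # synchronous IOPS ceiling, SAN

# Bandwidth: six SATA 3.0 disks each get a dedicated 6Gbps link, while a
# SAN uplink is shared. HOSTS_SHARING is a hypothetical figure.
SATA_GBPS, DISKS = 6, 6
SAN_LINK_GBPS, HOSTS_SHARING = 10, 20

das_total_gbps = SATA_GBPS * DISKS                  # dedicated to one host
san_per_host_gbps = SAN_LINK_GBPS / HOSTS_SHARING   # fair share per host

print(local_iops_ceiling, san_iops_ceiling, das_total_gbps, san_per_host_gbps)
```

Under these assumptions, the extra millisecond of network latency caps a synchronous workload at a tenth of the IOPS, and the host's fair share of a shared SAN uplink is a small fraction of the dedicated local bus bandwidth.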
All in all, you can see why moving from a SAN back to traditional direct storage has become the answer for many hosts, big and small. With the main focus always on reliability, modern SSDs coupled with high-end RAID controllers and strict monitoring mean that disk failures are handled invisibly: failed disks are hot-swapped and arrays rebuilt without the client even noticing.