So today I’m going to talk about, you guessed it, flash storage again. Sooner or later I think it will become clear that Flash is a disruptive force in IT. People have called many things game changers over the years, but I’ll go on record now: Flash as a primary means of storage changes everything about the modern Data Center and user application delivery. You read these press releases all the time: Michael Dell: “EMC DSSD is a game changer,” or Virtualgeek: “DSSD is face melting.”
I’m going to explain why I wrote my first article about Flash Storage, why Brian Ethington wrote about Flash Storage, and why we over at Team Storage talk about Flash Storage more than just about any other technology out there. It’s simple really: Flash gets faster the more parallel the workload becomes. In other words, performance generally doesn’t degrade with load until you hit the technical limits of the underlying controller or SAN architecture. I’m getting a little ahead of myself though. Let’s start with a basic primer on magnetic vs. solid state disk drives.
A Magnetic Hard Disk Drive is a fairly complex piece of machinery that many of us take for granted. Image below thanks to Wikipedia.
The disk platters in the image above (generally 3-4 per disk) store the data magnetically and generally spin between 4,800 RPM and 15,000 RPM. The actuator arm moves the head across the platters, without actually touching them, to read and write the magnetic information physically stored on the platter. As you can probably guess, the head can only interact with a single physical location across the platters at any one time. The way we’ve been making drives faster for years is pretty easy to understand: make the platters spin faster or store more data per square inch on the platters. It’s not rocket science, but then again maybe it is…
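To see why spindle speed matters so much, consider rotational latency: on average, the head has to wait for half a revolution before the data it wants passes underneath it. A quick back-of-the-envelope sketch (my own illustration, not from any specific drive spec) shows how the wait shrinks as RPM climbs:

```python
def avg_rotational_latency_ms(rpm: float) -> float:
    """Average rotational latency in milliseconds for a given spindle speed.

    On average a random read waits for half a platter revolution.
    """
    ms_per_revolution = 60_000.0 / rpm  # milliseconds per full revolution
    return ms_per_revolution / 2        # expected wait: half a turn

for rpm in (4800, 7200, 15000):
    latency = avg_rotational_latency_ms(rpm)
    print(f"{rpm:>6} RPM -> {latency:.2f} ms average rotational latency")
```

Going from 4,800 RPM to 15,000 RPM cuts the average rotational wait from 6.25 ms to 2.00 ms, and that's before you add seek time. Note that no amount of spindle speed changes the fundamental constraint above: one head position, one operation at a time.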
The spinning platter exerts as much as 600,000 Gs of force on some of the drive internals. The hard disk head sits at a distance of about 20 nanometers (equivalent to about 40 atoms) off of the platter surface. When the drive reaches full rotational speed, the spinning of the platter causes air to flow under the sliders of the head, creating lift and keeping the head from touching the platter and destroying the data. This is turning into rocket science pretty quickly, isn’t it? Let’s scale the picture up to better grasp how this works. I don’t think that I can state this any better than Tom’s Hardware did years ago:
The dimensions of the head are impressive. With a width of less than a hundred nanometers and a thickness of about ten, it flies above a platter spinning at up to 15,000 RPM, at a height that’s the equivalent of 40 atoms. If you start multiplying these infinitesimally small numbers, you begin to get an idea of their significance.
Consider this little comparison: if the read/write head were a Boeing 747, and the hard-disk platter were the surface of the Earth:
- The head would fly at Mach 800
- At less than one centimeter from the ground
- And count every blade of grass
- Making fewer than 10 unrecoverable counting errors in an area equivalent to all of Ireland.
That sure sounds quite a bit like rocket science to me. We’ve hit some real physical limits in terms of platter density and spindle speeds, and the only way to make magnetic hard disk drives faster is to use denser platters or implement faster spindle speeds. The next generation of high density drives is going to do things like use lasers to heat the surface of the platter while writing data, to increase the density of data stored on the platters (HAMR, or heat-assisted magnetic recording). Before we all grab our parachute pants and start singing Hammer Time, the reality is that some industry insiders are skeptical the disks will even make it to market since the technology is so incredibly difficult to implement. Imagine trying to fire lasers at grains of sand in an F3 tornado and you have an idea of the technical hurdle folks like Seagate are up against. Hard disk manufacturers have already had to fill drives with noble gases to limit turbulence at the spindle speeds and densities used today.
Magnetic recording has hit some extremely difficult physical limits that aren’t easy to engineer your way out of. Getting heads to accurately read dense platters becomes more and more difficult as spindle speed increases, but high spindle speed is the only way to reduce latency. Add to that the fact that hard drive design inherently limits the ability of the disk to perform concurrent operations, and you can see that the long term future of the Data Center only includes spinning disk in a limited capacity, such as cold storage, dense backup, and other applications where latency simply isn’t a concern. Just two years ago you could make the case that 8 15k disks in RAID 10 would offer similar throughput to a single enterprise class SSD. We’re nearing a point very soon where that argument will no longer hold water. Flash can handle more concurrent operations than traditional disks and actually benefits from concurrent load. For an in-depth look at how Solid State Disks work internally I recommend reading this. It’s a very comprehensive look into the inner-workings of flash-based storage.
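The concurrency point is worth making concrete. Here’s a deliberately simplified toy model (my own assumptions, not benchmark data): a mechanical disk has a single actuator, so extra outstanding requests just queue behind each other, while an SSD can spread requests across multiple independent NAND channels, so throughput climbs with queue depth until the channels are saturated:

```python
# Assumed, illustrative numbers -- not measurements from any real device.
HDD_SERVICE_MS = 8.0   # per-request service time for a random I/O on disk
SSD_SERVICE_MS = 0.1   # per-request service time on flash
SSD_CHANNELS = 8       # independent NAND channels in our hypothetical SSD

def iops(service_ms: float, parallel_units: int, queue_depth: int) -> float:
    """Throughput when requests are spread across independent units.

    Effective parallelism is capped by both the device's internal
    parallelism and the number of requests the host keeps in flight.
    """
    effective = min(parallel_units, queue_depth)
    return effective * 1000.0 / service_ms

for qd in (1, 4, 8, 32):
    hdd = iops(HDD_SERVICE_MS, 1, qd)           # one actuator: no scaling
    ssd = iops(SSD_SERVICE_MS, SSD_CHANNELS, qd)  # scales until channels saturate
    print(f"queue depth {qd:>2}: HDD ~{hdd:>6.0f} IOPS, SSD ~{ssd:>6.0f} IOPS")
```

In this sketch the disk is stuck at ~125 IOPS no matter how much work you throw at it, while the SSD’s throughput grows 8x as queue depth rises from 1 to 8. Real devices are messier, but the shape of the curves is the point.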
If you take away anything about the architectural differences between mechanical and solid state disk drives it should be this: mechanical disks measure latency in milliseconds; NAND (solid state storage) measures it in microseconds. Generally speaking, it’s the time it takes to access data that makes computing feel fast. Throughput is far less important to most applications than access latency, and perhaps the biggest advantage is that as workload goes up, latency on flash should stay relatively flat, within the parameters of the arrays/controllers being used. With mechanical disks, once the drive buffers are full and the workload becomes more random, you will quickly see latency begin to increase exponentially. Access times scale almost linearly with load… until they don’t… and then things become much slower very quickly. A hybrid storage model will continue to make sense until the point where flash storage reaches cost/capacity parity with traditional hard disks. Without some radical scalable solutions in traditional hard disk design we will continue nearing that tipping point at an alarming rate thanks to Moore’s Law and advanced data compression and deduplication techniques.
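That “fine, fine, fine, then suddenly awful” latency curve is exactly what basic queueing theory predicts. A single-server queueing approximation (the classic M/M/1 result, used here purely to illustrate the shape of the curve, with an assumed 8 ms service time for a random disk I/O) shows response time hugging the raw service time at low load and blowing up as the device nears saturation:

```python
def response_time_ms(service_ms: float, utilization: float) -> float:
    """Mean response time for a single-server queue (M/M/1 approximation).

    utilization is the fraction of time the device is busy, in [0, 1).
    As utilization approaches 1, queueing delay dominates and response
    time grows without bound.
    """
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_ms / (1.0 - utilization)

HDD_SERVICE_MS = 8.0  # assumed random-I/O service time for a mechanical disk
for busy in (0.10, 0.50, 0.90, 0.99):
    print(f"{busy:.0%} busy -> {response_time_ms(HDD_SERVICE_MS, busy):6.1f} ms")
```

At 50% busy the disk answers in 16 ms; at 99% busy the same disk takes 800 ms. Flash isn’t immune to this math, but with microsecond service times and internal parallelism, it sits much further from the cliff at any given workload.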
It’s not all unicorn giggles and rainbow sprinkles in the world of flash however. We have some real scalability limitations we need to deal with sooner rather than later. Find out what they are in Part 2.
Jose Adams, Engineer