Why All Flash Arrays Now… a Perfect Storm Perspective

As technology becomes more and more pervasive in our everyday lives, personal and professional, we have come to expect even more of technology in our business world. For example, we now expect the same real-time responsiveness from a business analytics application crunching hundreds of thousands of entries across several databases as we get from a stock update app on an iPhone, pushing these traditionally batch-oriented applications toward an on-demand mode. Why? To deliver an ever faster time to value.

Over the last few decades, processing and networking technologies have doubled their performance roughly every eighteen months, right on schedule according to the common reading of Moore's Law. But the mechanical limitations of hard disk drives have prevented storage performance from advancing at anywhere near the same pace.
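To put that gap in rough numbers, here is a back-of-the-envelope calculation in Python. The eighteen-month doubling comes from the paragraph above; the HDD latency figures are commonly cited ballparks assumed here purely for illustration, not measurements from any specific drive.

    # Compound growth of compute/network performance (doubling every
    # 18 months) over two decades, versus HDD access latency, which is
    # bounded by mechanical seek time and platter rotation.
    years = 20
    doublings = years * 12 / 18              # one doubling per 18 months
    compute_gain = 2 ** doublings            # roughly 10,000x

    # Assumed ballpark: average HDD access time improved from ~15 ms
    # to ~5 ms over the same period (an assumption for illustration).
    hdd_latency_gain = 15 / 5                # roughly 3x

    print(f"compute/network: ~{compute_gain:,.0f}x   "
          f"hdd latency: ~{hdd_latency_gain:.0f}x")

Four orders of magnitude on one side of the I/O path, less than one on the other: that is the gap the rest of this piece is about.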

So, the ability existed to create, capture, and process vast amounts of data but there were limitations because of the speed at which data can be moved on and off of persistent storage to be processed. This led to major input and output (I/O) bottlenecks.

The result? Architectural designs came along to mitigate portions of this performance gap, for example by profiling read access patterns to pre-fetch and cache data in main memory in anticipation of future reads, as the sketch below illustrates.
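As a concrete illustration of that kind of optimization, here is a minimal Python sketch of sequential read-ahead; the class name and policy are hypothetical, not any particular vendor's implementation. When a run of reads looks sequential, the next several blocks are pulled into a memory cache before the application asks for them.

    class ReadAheadCache:
        """Minimal sequential read-ahead sketch (illustrative only)."""

        def __init__(self, window=8):
            self.window = window       # blocks to prefetch ahead
            self.cache = set()         # block numbers held in memory
            self.last_block = None     # last block the client read

        def read(self, block):
            hit = block in self.cache
            # A read that continues a sequential run triggers prefetch.
            if self.last_block is not None and block == self.last_block + 1:
                self.cache.update(range(block + 1, block + 1 + self.window))
            self.last_block = block
            return hit

    cache = ReadAheadCache()
    hits = sum(cache.read(b) for b in range(1000))    # one sequential stream
    print(f"sequential hit rate: {hits / 1000:.0%}")  # nearly every read hits

With a single sequential reader, almost every read is served from memory. The next paragraph explains why virtualization broke that assumption.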

To complicate matters further, virtualization came along, making it possible for a single physical server to run multiple virtual machines. This multiplied the I/O throughput and capacity requirements per physical server, and it rendered optimizations like read-ahead largely ineffective, because many independent streams funneling through one server make the data access pattern look far more random. A rough illustration of the arithmetic follows.
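Here is the consolidation arithmetic as a small Python sketch. Every figure in it is an assumption chosen for illustration: roughly 180 random IOPS is a commonly cited ballpark for a single 15K RPM drive, and per-VM demand varies widely by workload.

    # Why consolidation overwhelms spinning disks (all figures are
    # illustrative assumptions, not measurements).
    HDD_RANDOM_IOPS = 180        # ballpark for one 15K RPM drive
    vms_per_server = 20          # assumed consolidation ratio
    iops_per_vm = 100            # assumed modest per-VM demand

    demand = vms_per_server * iops_per_vm
    drives_needed = -(-demand // HDD_RANDOM_IOPS)    # ceiling division
    print(f"{demand} IOPS of demand -> ~{drives_needed} HDDs needed "
          "for performance alone, regardless of capacity")

Buying a dozen spindles per server just to keep up with random I/O, independent of how much capacity is actually needed, is exactly the kind of pressure that set the stage for what came next.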

The result? 

  •     Higher demands for enterprise application responsiveness;
  •     A widening performance gap between processing, network, and storage;
  •     Broad adoption of virtualization, even for I/O-intensive applications such as analytical processing over databases.

This created a perfect storm for IT.

And that is exactly when flash came to the storage rescue, completely disrupting some very important assumptions under which IT had been operating. It not only dramatically improved application response time but also allowed applications to scale to extraordinary levels of data usage.

What’s so special about flash?

  •     Flash delivers unprecedented data access performance for both random and sequential data, giving virtualized environments superior application throughput and response times;
  •     Flash is a semiconductor technology, so it not only benefits from the same relentless improvements fueled by Moore's Law but also involves no mechanical parts;
  •     Flash consumes much less power, as there are no disks to spin and no mechanical arms to move, and requires much less cooling, helping alleviate overtaxed data center environments.

But is this specialized weapon of a drive expensive? Given the explosive adoption of mobile devices built on flash (iPods, iPhones, iPads, and tablets of all makes and models), flash technology has matured and become affordable, to the point where, with the right storage array technology around it, it can meet the price/performance requirements of today's applications. That is what led to the birth of purpose-built all flash arrays.

What does that mean exactly? It means All Flash Arrays demand a complete re-imagination of the design philosophy that was optimized for mechanical hard disk drives, starting over with one that optimizes for solid-state flash.

All Flash Arrays need to be architected from scratch to address three different points:

  •     First and foremost, exploit the unique native capabilities of flash for performance and random I/O!
  •     Secondly, mitigate the constraints of flash technology, especially the ones not present in HDD arrays. Specifically, while hard disks can be rewritten an essentially unlimited number of times, flash cells endure only a finite number of program/erase cycles, after which the memory can no longer be reliably reprogrammed. All Flash Arrays must be designed to extend the endurance of the flash drives by minimizing the number of times the same data is written, copied, or moved. This in turn means that every part of the array, including how data is laid out, whether data services are processed inline or after data is written, RAID data protection, snapshot and replication architectures, and more, should be re-architected to minimize unnecessary writes and extend the lifecycle of the array (a sketch of one such technique follows this list).
  •     And last but not least, leverage all the lessons learned from HDD arrays about delivering an enterprise array, such as high availability, data protection, data reduction, and other data services, while focusing the design on emerging use cases enabled by adjacent technologies such as virtualization, multi-core processors, and high-speed networks. It means completely rethinking how enterprise shared storage services can be tightly integrated with your applications for things like VM cloning, database test/dev, and self-service. Flash is about much more than performance.
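To make the endurance point concrete, here is a minimal Python sketch of inline deduplication, one way an array can avoid writing the same data twice. All names are hypothetical, and real arrays pair this with compression, redirect-on-write snapshots, and flash-aware data layout; the sketch shows only the core idea of trading a metadata update for a physical write.

    import hashlib

    class InlineDedupStore:
        """Minimal inline-deduplication sketch (illustrative only)."""

        def __init__(self):
            self.physical = {}       # fingerprint -> data (the "flash")
            self.logical = {}        # logical address -> fingerprint
            self.flash_writes = 0    # program/erase cycles consumed

        def write(self, address, data: bytes):
            # Fingerprint the block BEFORE it touches flash.
            fp = hashlib.sha256(data).hexdigest()
            if fp not in self.physical:     # new content: one real write
                self.physical[fp] = data
                self.flash_writes += 1
            self.logical[address] = fp      # otherwise, metadata only

        def read(self, address) -> bytes:
            return self.physical[self.logical[address]]

    # Cloning 100 VMs from one template writes each guest OS block once.
    store = InlineDedupStore()
    for addr in range(100):
        store.write(addr, b"identical guest OS block")
    print(f"logical writes: 100, physical flash writes: {store.flash_writes}")

One physical write serving a hundred logical ones is precisely the kind of write avoidance that stretches flash endurance, and it is also what makes operations like VM cloning nearly free on a purpose-built array.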

In summary, All Flash Arrays: 

  •     Narrow the performance gap between processing, network, and storage, eliminating some long-standing I/O bottlenecks;
  •     Meet the highly random data access needs of strategic applications such as virtualized workloads, virtual desktop infrastructures, and database and analytics platforms, enabling them to scale; and
  •     Allow a wholesale re-design of storage services around what flash technology makes possible, ushering in a new era for storage arrays: the dawn of the all flash array.

Eric Goh is the Managing Director at EMC Singapore