The Next Generation of Flash Strategy

As is generally understood in the industry, CPU performance roughly doubles every 18 months in line with Moore’s Law, while IOPS per spindle (the primary measure of rotating-disk performance) has seen only discouragingly modest gains in recent years. This ever-widening gap between CPU performance and storage performance prevents systems from fully exploiting the growing availability of storage capacity.
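To make that gap concrete, the following is a minimal back-of-the-envelope sketch in Python. Every figure in it is an illustrative assumption rather than a number from this article: a normalized CPU baseline that doubles every 18 months per Moore’s Law, a 180 IOPS baseline for a 15K RPM spindle, and a generous 5% annual IOPS improvement.

    # Illustrative only: an 18-month CPU doubling curve versus near-flat
    # IOPS per rotating spindle (all baseline figures are assumptions).
    disk_iops = 180.0                    # rough IOPS for a 15K RPM drive (assumed)

    for year in range(0, 11, 2):
        cpu = 2 ** (year * 12 / 18)      # CPU performance, doubling every 18 months
        iops = disk_iops * 1.05 ** year  # assume a generous 5% annual IOPS gain
        print(f"year {year:2d}: CPU x{cpu:6.1f} | spindle IOPS {iops:5.0f}")

Under even these favorable assumptions for disk, CPU performance pulls ahead by two orders of magnitude within a decade, which is the gap the rest of this section addresses.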

Industry forecasts indicate that some of the latest IT trends will place further strain on storage performance, particularly the explosive growth in data creation and emerging technologies such as virtualization and cloud computing. IDC projects that the digital universe will reach 40 zettabytes (ZB) by 2020, an amount that exceeds previous forecasts by 5 ZB and represents a 50-fold growth from the beginning of 2010, and that accelerating worldwide spending on public IT cloud services will approach US$100 billion in 2016. A completely different set of dynamics is also at play in the form of new performance-intensive workloads. Without the knowledge these evolving performance needs demand, such as an understanding of workload and storage-array characteristics to inform decisions on storage architecture transformation, IT practitioners and executives will ultimately face untenable challenges.

The Latest Flash Storage Arrays

With the increasing openness of today’s networks, traditional mechanical storage technology can no longer keep up with the performance being demanded of it, and this is where flash technology comes into its own to fill the gap. A mix of flash deployment types is now readily available in the market, each offering different performance characteristics and capabilities, so that enterprises of all sizes and in any industry can benefit. These include:

  • All-flash Arrays – These offer persistent storage, linear scalability, sub-1-millisecond latency, deduplication, and a “zero planning/tuning” model. They suit applications that require highly consistent performance under random I/O, whose data sets are too large for server-based flash, and/or that do not respond well to caching and tiering algorithms; examples include Virtual Desktop Infrastructure (VDI), virtual servers, and database test-and-development work.
  • Hybrid Arrays – A variable proportion of flash, combined with automated data placement (hot data on flash, cold data on HDD; a simplified placement policy is sketched after this list), provides a balance between mid-level performance and relatively low price. These arrays are best for users with large data sets and mixed workloads that can tolerate occasional latency, including data warehousing, online transaction processing (OLTP), and email.
  • Server Flash as Local Storage – This is characterized by a high-capacity PCIe flash card (using MLC media) deployed in the server as a local storage device for application acceleration. By storing data locally on the server, it improves the performance of application reads and writes, reducing latency and increasing throughput. High-transaction-rate and/or high-performance workloads associated with web 2.0 applications, VDI environments, high-performance computing (HPC), and high-performance trading applications are most appropriate for this configuration. It can also be used to accelerate analytics, reporting, data modeling, indexing, database dumps, batch processing, background tasks, and other temporary workloads.
  • Server Flash as Cache – Server-based flash hardware coupled with intelligent server flash caching software accelerates reads while using a write-through algorithm (see the second sketch after this list) to ensure that newly written data persists in the networked storage array for continued high availability, integrity, reliability, and disaster recovery. This deployment best fits web applications, OLTP, customer relationship management (CRM) and enterprise resource planning (ERP) databases, email applications, and other read-intensive workloads with small working sets.
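As a companion to the hybrid-array description above, here is a minimal Python sketch of an automated hot/cold placement policy. It is a simplification under stated assumptions: real arrays track access heat per extent with vendor-specific thresholds and migration schedules, whereas the promote_at and demote_below values here are arbitrary illustrative choices.

    from collections import defaultdict

    class TieringPolicy:
        """Simplified hot/cold data placement: blocks whose access count
        in the current window crosses a threshold are promoted to flash;
        blocks that go cold are demoted back to HDD."""

        def __init__(self, promote_at=100, demote_below=10):
            self.access_counts = defaultdict(int)  # accesses in this window
            self.on_flash = set()                  # block IDs resident on flash
            self.promote_at = promote_at
            self.demote_below = demote_below

        def record_access(self, block_id):
            self.access_counts[block_id] += 1
            if (block_id not in self.on_flash
                    and self.access_counts[block_id] >= self.promote_at):
                self.on_flash.add(block_id)        # hot data -> flash tier

        def close_window(self):
            # Demote anything that went cold, then start a new window.
            for block_id in list(self.on_flash):
                if self.access_counts[block_id] < self.demote_below:
                    self.on_flash.discard(block_id)  # cold data -> HDD tier
            self.access_counts.clear()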
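Likewise, for the server-flash-as-cache model, the following Python sketch shows why a write-through design preserves the array as the authoritative copy: every write lands on the networked array before the cache is updated, so a server failure can never lose acknowledged data. The dict-based “array” is a stand-in assumption, not a real storage API.

    class WriteThroughCache:
        """Minimal write-through flash cache in front of a networked array."""

        def __init__(self, backing_array):
            self.backing = backing_array  # stand-in for the networked array
            self.cache = {}               # stand-in for server-side flash

        def write(self, key, value):
            self.backing[key] = value     # persist to the array first ...
            self.cache[key] = value       # ... then update the local cache

        def read(self, key):
            if key in self.cache:         # cache hit: served at flash speed
                return self.cache[key]
            value = self.backing[key]     # cache miss: fetch from the array
            self.cache[key] = value       # warm the cache for future reads
            return value

Reads of a small working set are repeatedly served from flash, while every write pays the array round-trip, which is why this deployment is recommended above for read-intensive workloads with small working sets.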