Pressures on Storage

The explosive growth in digital data is the key challenge, but it is not the only one. Changes in business and technical demands mean companies need to modify how they build and manage storage. Data protection requirements for applications are no longer fixed for their entire lifetime, so flexibility is key. At the same time, budget constraints mean companies need to find ways to bring in new solutions that complement and extend existing capabilities.

http://www.theregister.co.uk/2015/02/18/is_cloud_the_answer_to_all_your_storage_problems/

Digital Data Growth

Digital data growth continues unabated and, according to a recent IDC report, is estimated to double every two years. This growth, together with greater requirements for data retention, keeps storage efficiency and cost at the top of the list of concerns. Companies must find ways to reduce storage demands, replicate data, and eliminate the risks associated with physical media. The largest segment of data growth is in capacity-optimized storage systems. Digital data growth easily outpaces overall IT budgets, which have been generally flat. As a result, companies and managed service providers (MSPs) need cost-effective, easy-to-manage, and flexible storage solutions in order to sustain this growth.

Flexibility and Responsiveness

The speed and protection requirements of data change over time, often rapidly. Future storage solutions need to be more flexible when it comes to meeting the service level requirements of business services. It is essential that storage solutions can be managed dynamically as business requirements vary. Data protection requirements for applications are no longer fixed for their entire lifetime, so flexibility is key.

Keeping Current and Future Proof

It is unlikely that the current ways of building and operating storage systems will be sustainable in the future. However, few companies can afford a complete infrastructure replacement, and so they need to find ways to bring in new solutions that complement and extend existing capabilities.

Solutions

Software-Defined Storage (it's a software world)

Software-Defined Storage (SDS) disrupts the market for enterprise storage by allowing companies to purchase next-generation software designed specifically for commodity hardware, with immediate and flexible hardware upgrades. This allows enterprises and MSPs to meet their storage growth needs while avoiding the lock-in and inflexibility associated with high-cost, traditional storage solutions. Analysts predict an acceleration of current trends, leading to a dramatic shrinkage of the traditional storage market dominated by EMC and NetApp. The disruption coming to the IT storage industry is similar to the disruption the Internet brought to telecom or Amazon brought to retail: a complete change in the value chain. As a result, traditional vendors will lose billions of dollars in business, while new ways of doing things will attract billions of dollars in business opportunity. Software running on commodity hardware is the prime enabler of this disruption.[1]

Deduplication (eliminating redundant data and allowing WAN replication for DR)

Data deduplication is a key technique for reducing the amount of storage space a company needs for its data. In most companies, storage systems contain duplicate copies of many pieces of data. For example, the same file may be saved in several different places by different users, or two or more files that are not identical may still include much of the same data. Another example is virtual machine images based on a single operating system version. Deduplication eliminates these multiple copies by saving just one copy of the data and replacing the other copies with pointers that lead back to the original copy. Companies frequently use deduplication in backup and disaster recovery applications.
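
To make the "single copy plus pointers" idea concrete, here is a minimal sketch of content-hash-based deduplication in Python. It is an illustration only, not Hummingbird's implementation; the fixed 4 KB chunk size and SHA-256 hashing are assumptions for the example.

    import hashlib

    CHUNK_SIZE = 4096  # illustrative fixed-size chunks; real systems often use variable-size chunking

    class DedupStore:
        """Stores each unique chunk once; files become lists of chunk hashes (pointers)."""

        def __init__(self):
            self.chunks = {}   # hash -> chunk bytes (stored once)
            self.files = {}    # file name -> list of chunk hashes

        def put(self, name, data):
            hashes = []
            for i in range(0, len(data), CHUNK_SIZE):
                chunk = data[i:i + CHUNK_SIZE]
                digest = hashlib.sha256(chunk).hexdigest()
                # Only store the chunk if its content has not been seen before.
                self.chunks.setdefault(digest, chunk)
                hashes.append(digest)
            self.files[name] = hashes

        def get(self, name):
            # Rebuild the file by following the pointers back to the unique chunks.
            return b"".join(self.chunks[h] for h in self.files[name])

    store = DedupStore()
    payload = b"the same report" * 1000
    store.put("/users/alice/report.doc", payload)
    store.put("/users/bob/report_copy.doc", payload)   # duplicate file: no new chunks stored
    assert store.get("/users/bob/report_copy.doc") == payload
    print(f"logical bytes: {2 * len(payload)}, unique chunk bytes stored: "
          f"{sum(len(c) for c in store.chunks.values())}")

Storing the same file under two names consumes the space of one copy plus a small list of hashes, which is the effect deduplication delivers at scale.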

 

More efficient use of disk space also allows for longer disk retention periods, thereby offering a better recovery point objective (RPO) over a longer window. Data deduplication also reduces the amount of data that must be sent across a WAN for remote backups, replication, and disaster recovery. Very often, replication and disaster recovery would not be possible at all without the reduction in WAN bandwidth that deduplication provides.
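
Continuing the DedupStore sketch above, the following hypothetical snippet shows why deduplication-aware replication saves WAN bandwidth: only chunks the remote site does not already hold are transferred, while the file "recipe" itself is tiny compared to the data.

    def replicate(local_store, remote_store, name):
        """Replicate a file to a remote site, sending only chunks the remote lacks (sketch only)."""
        hashes = local_store.files[name]
        missing = [h for h in hashes if h not in remote_store.chunks]
        for h in missing:                        # only unique, unseen data crosses the WAN
            remote_store.chunks[h] = local_store.chunks[h]
        remote_store.files[name] = list(hashes)  # the list of pointers is small
        return len(missing), len(hashes)

    remote = DedupStore()
    sent, total = replicate(store, remote, "/users/alice/report.doc")
    sent2, total2 = replicate(store, remote, "/users/bob/report_copy.doc")
    print(f"first file: sent {sent}/{total} chunks; duplicate file: sent {sent2}/{total2} chunks")

The second, duplicate file replicates without sending any chunk data at all, which is what makes remote backup and disaster recovery feasible over constrained links.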

Deduplication Versus Compression (20:1 versus 3:1)

Deduplication is sometimes confused with compression, another technique for reducing storage requirements. Compression uses algorithms to encode data more concisely, while deduplication eliminates redundant data: it removes only the extra copies, and none of the original data is lost. Compression, on the other hand, does not get rid of duplicated data; the storage system could still contain multiple copies of compressed files.

Deduplication often has a larger impact on backup size than compression. In a typical enterprise backup situation, compression may reduce backup size by a ratio of 2:1 or 3:1, while deduplication can reduce it by up to 20:1, depending on how much duplicate data is in the systems. Enterprises often use deduplication and compression together in order to maximize their savings.
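
As a rough back-of-the-envelope illustration (the 100 TB starting point is an assumption; the ratios are the ballpark figures above):

    raw_backup_tb = 100                              # assumed raw backup set of 100 TB
    dedup_then_compress = raw_backup_tb / 20 / 2     # 20:1 dedup followed by 2:1 compression -> 2.5 TB
    compression_alone = raw_backup_tb / 3            # 3:1 compression alone -> ~33.3 TB
    print(f"dedup + compression: {dedup_then_compress} TB, compression alone: {compression_alone:.1f} TB")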

Deduplication and Erasure Coding (can work together)

Erasure Coding (EC) is a method of data protection in which data is broken into fragments, expanded and encoded with redundant data pieces, and stored across a set of different locations or storage media. RAID 5 and RAID 6 are the most common forms of EC. The goal of erasure coding is to enable data that becomes corrupt at some point in the disk storage process to be reconstructed by using information about the data that is stored elsewhere in the array. Erasure codes are often used instead of traditional RAID because of their ability to reduce the time and overhead required to reconstruct data. The drawback of erasure coding is that it can be more CPU-intensive, which can translate into increased latency. Deduplication can be used together with EC, so the two techniques complement rather than exclude each other.
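
A minimal sketch of the simplest erasure code, single XOR parity as used in RAID 5, shows the core idea: any one lost fragment can be rebuilt from the surviving fragments plus the parity. Real erasure codes (for example Reed-Solomon, used for RAID 6 and wider layouts) tolerate multiple failures; this toy example is for illustration only.

    def xor_bytes(a, b):
        """Byte-wise XOR of two equal-length fragments."""
        return bytes(x ^ y for x, y in zip(a, b))

    def encode(fragments):
        """Compute a single parity fragment (RAID 5-style) over equal-length data fragments."""
        parity = bytes(len(fragments[0]))
        for f in fragments:
            parity = xor_bytes(parity, f)
        return parity

    def reconstruct(surviving, parity):
        """Rebuild the one missing data fragment from the survivors plus parity."""
        missing = parity
        for f in surviving:
            missing = xor_bytes(missing, f)
        return missing

    data = [b"AAAA", b"BBBB", b"CCCC"]    # three data fragments spread across devices
    parity = encode(data)                 # parity stored on a fourth device
    # Suppose the device holding the second fragment (b"BBBB") fails:
    rebuilt = reconstruct([data[0], data[2]], parity)
    assert rebuilt == b"BBBB"
    print("reconstructed:", rebuilt)

The reconstruction work (reading the survivors and recomputing) is where the extra CPU cost and latency mentioned above come from.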

Object Storage (simple, scalable, cost effective)

Object storage is becoming an important and accepted alternative to traditional file and block storage.  It is vastly more scalable and cost-effective than traditional file system storage because it is simple and focused.  It can use low-cost commodity servers, hard drives, and networking, which are all important characteristics for SDS systems.  Leading object storage vendors, such as Scality and Cleversafe, are selling successfully to MSPs and to enterprises building private cloud storage solutions.  The revenue opportunity for object-based systems is projected to grow to $42 billion by 2020, with the software opportunity at $5.5 billion.[2]

Just One Data (hyperscale data reduction and Hummingbird Deduplication NAS)

Just One Data has created a cost-effective, high-performing, web-scale storage platform that is easy to manage and flexible enough to address the rapid data growth challenge. Hummingbird, a deduplication NAS that supports traditional and object storage with deduplication-aware WAN replication, is our first product showcasing this platform.

[1,2] http://www.scality.com/wp-content/uploads/2014/11/Market-Study-The-Future-of-Storage-by-Jerome-Lecat-Nov-2014-v1-15A.pdf