Cloud Optimized Storage Solutions: Part 2 – How is content stored?

by dave on January 5, 2009


In part one of the COSS series, we discussed the nature of content within the cloud.  After determining the nature of the content being stored, it is important to understand how this unstructured and structured content will be stored.  The mechanism for storage has a significant impact on a provider’s Service Level Agreement (SLA) to the end user and can also promote the concept of tiered storage within the cloud.  To evaluate this storage mechanism, you need to pull apart the storage path to determine what influence it has on the content being stored.  Technologies such as encryption, deduplication, compression, object-model storage (a la XAM), and even the underlying file system can play a large part in storage and subsequent retrieval.

COSS Part 2: How is Content being Stored?

Filesystems

One of the first paths to evaluate is the role of the underlying file system within the COSS environment.  Hooks within the cloud storage environment typically dictate the use of open IP protocols such as NFS (Network File System) or CIFS (Common Internet File System) as an underlying method of storage.  These protocols are easily portable and have widespread adoption and connectivity across a variety of host types and operating systems.  Other file systems that might be present within a COSS environment are what could be considered “open file systems,” best represented by NTFS, XFS, ZFS, EXT3, ReiserFS, and the like; these can also be vendor- or client-specific and proprietary.  Additional file systems would fall under the label of “closed file systems” and include those native to HP-UX, AIX, mainframe system storage, and so on.

Objects

Another method of content storage is object-based placement, best represented by XAM.  Within the object method of storage, content is identified by metadata and placed within a general pool of storage, similar to an open volume.  This type of storage allows data to be placed anywhere within the storage device, with a dependency only on a hashing or metadata index to point to the actual physical storage location.  The flexibility of this model is that it truly could be “storage anywhere,” a key tenet of cloud computing.  By localizing an index on a physical device with knowledge (or awareness) of a remote system, the metadata or hash could live in one location (or many, based on intrinsic replication) and the object in multiple locations.  This model would be particularly effective in “edge” devices that serve simply as access points to geographically localized data.
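To make the index-plus-pointer idea concrete, here is a minimal sketch of a content-addressed object store in Python.  The class, method names, and location labels are illustrative assumptions (not XAM API calls); it only shows how a local metadata/hash index can point at an object replicated across multiple locations.

```python
import hashlib

# Minimal sketch: the index maps a content hash (plus metadata) to one
# or more physical locations, so the object itself can live "anywhere"
# while the index stays local. Names are illustrative, not from XAM.
class ObjectStore:
    def __init__(self):
        self.index = {}    # hash -> {"metadata": ..., "locations": [...]}
        self.devices = {}  # location -> {hash: bytes}

    def put(self, data: bytes, metadata: dict, locations: list[str]) -> str:
        object_id = hashlib.sha256(data).hexdigest()
        for loc in locations:  # replicate the object to each device
            self.devices.setdefault(loc, {})[object_id] = data
        self.index[object_id] = {"metadata": metadata, "locations": locations}
        return object_id

    def get(self, object_id: str) -> bytes:
        # Any replica will do; an "edge" node would pick the nearest one.
        loc = self.index[object_id]["locations"][0]
        return self.devices[loc][object_id]

store = ObjectStore()
oid = store.put(b"hello cloud", {"owner": "dave"}, ["us-east", "eu-west"])
print(oid[:12], store.get(oid))
```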

Content Manipulation

A third dynamic to COSS storage is the nature of content manipulation.  When looking at compliance-driven cloud storage models later in this series, the impact of data manipulation becomes a crucial tipping point in determining the validity of a cloud storage model as it pertains to enterprise fit.  For now, it is important to understand the types of content manipulation that could conceivably be put into place by a COSS, namely: deduplication, encryption, compression, and optimization.

Deduplication has been the media darling of data storage of late due to its ability to “reclaim” storage space on the primary storage target while maintaining data integrity and portability.  This technology can readily be found in products such as EMC’s Avamar, NetApp’s FAS Deduplication, and Data Domain’s DDX series, as well as other smaller product sets.  As an overview, deduplication is a process that runs against file data looking for commonality (repeated binary strings, as it were) and strips redundant strings from the source files, replacing them with pointers.  By virtue of this process, the original data, while still the “same” from any observable point, is technically “changed,” and a second iteration of that data becomes the operational copy.  Consequently, any rehydration of the data reverts the second iteration to the first, again effectively changing the source data.

Another factor in deduplication is where the deduplication process is inserted in the data stream.  There are currently two approaches to processing data for deduplication: inband and out-of-band.  Inband deduplication dictates that data must be processed by an appliance (or other technology) prior to being stored on the target.  Out-of-band deduplication (also known as data-at-rest deduplication) processes the data after it arrives at its designated storage space, thus keeping the file or object in place and reducing file/object placement issues (if no policy is enacted at the time).
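To illustrate the strip-and-pointer mechanism described above, here is a minimal sketch of fixed-size, chunk-level deduplication in Python.  Commercial products such as Avamar or Data Domain use variable-length chunking and far more sophisticated indexes; the function names and chunk size below are assumptions for illustration only.

```python
import hashlib

# Fixed-size chunks fingerprinted with SHA-256; only unique chunks are
# stored, and the file is kept as a list of pointers to those chunks.
CHUNK_SIZE = 4096

def dedupe(data: bytes, chunk_store: dict) -> list[str]:
    """Split data into chunks, store unique chunks, return pointer list."""
    pointers = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        fingerprint = hashlib.sha256(chunk).hexdigest()
        if fingerprint not in chunk_store:   # only new chunks consume space
            chunk_store[fingerprint] = chunk
        pointers.append(fingerprint)
    return pointers

def rehydrate(pointers: list[str], chunk_store: dict) -> bytes:
    """Rebuild the original byte stream from its pointer list."""
    return b"".join(chunk_store[p] for p in pointers)

store: dict[str, bytes] = {}
original = b"A" * 8192 + b"B" * 4096        # two identical 4 KB "A" chunks
refs = dedupe(original, store)
print(len(store), "unique chunks for", len(refs), "pointers")  # 2 for 3
assert rehydrate(refs, store) == original
```

Note that the stored form is no longer the original byte stream, which is exactly why rehydration counts as a second change to the source data.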

Encryption is another method of content manipulation that can be present within the COSS based on security and/or compliance requirements.  Typically, encryption is processed either inband or out-of-band based on architectural design and encompasses various levels of security.  For the purpose of this series, the AES algorithm (as implemented in RSA’s key management products) will be used as an exemplar of encryption methodologies.  AES encryption is governed by two factors: key size and block cipher mode.  Key sizes of 128 and 256 bits are common, though 192-bit keys are also part of the standard.  For the cipher mode, any number of block cipher modes could be used: cipher-block chaining (CBC), cipher feedback (CFB), et al.  In the end, these methods of encryption can adversely affect other storage-level processes like compression and deduplication because they prevent direct access to the byte layer of the file or object.
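As a concrete illustration of the two factors above, here is a minimal AES-CBC sketch using the Python “cryptography” package (an assumption of this example, not RSA’s key manager).  It shows a 256-bit key and the CBC block cipher mode, and hints at why the resulting ciphertext defeats downstream deduplication and compression.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# Assumes the third-party "cryptography" package is installed.
# Padding is sidestepped by using an exactly block-aligned payload.
key = os.urandom(32)             # 256-bit key (128- and 192-bit also valid)
iv = os.urandom(16)              # CBC requires a 16-byte initialization vector
plaintext = b"sixteen byte blk"  # exactly one 16-byte AES block

cipher = Cipher(algorithms.AES(key), modes.CBC(iv))
encryptor = cipher.encryptor()
ciphertext = encryptor.update(plaintext) + encryptor.finalize()
decryptor = cipher.decryptor()
recovered = decryptor.update(ciphertext) + decryptor.finalize()

# Ciphertext is effectively random bytes, which is why deduplication and
# compression gain little once encryption has been applied upstream.
print(ciphertext.hex())
assert recovered == plaintext
```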

Encryption, like deduplication, can be done inband or out-of-band.  EMC’s PowerPath software, for example, uses hooks into the RSA AES key manager to encrypt data from the host to storage over any path, a method of host-based inband encryption.  Cisco’s SSM module encrypts data from within the fabric, another method of inband encryption.  Out-of-band encryption could be accomplished by any appliance that operates on files/objects already committed to storage.

Compression is a third method of data manipulation, again aimed at reclaiming “space” on a storage target.  Compression is “the process of encoding information using fewer bits (or other information-bearing units) than an unencoded representation would use through use of specific encoding schemes.”  As with deduplication, a change is effected on the underlying data object such that its relative bit size on disk is reduced, and this operation can take place both inband and out-of-band.  Products like EMC’s RecoverPoint utilize compression during replication to allow for better bandwidth utilization.
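For illustration, here is a minimal sketch of lossless compression using Python’s standard zlib (DEFLATE) module; the repetitive payload is an assumption chosen purely to make the space savings obvious.

```python
import zlib

# Lossless compression: the bytes on the target change, but the original
# is fully recoverable, mirroring the inband/out-of-band trade-off
# discussed for deduplication.
original = b"cloud optimized storage " * 200      # repetitive payload
compressed = zlib.compress(original, level=9)     # fewer bits on the target
restored = zlib.decompress(compressed)

print(f"{len(original)} bytes -> {len(compressed)} bytes")
assert restored == original
```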

Optimization is a level of data manipulation that is COSS-bound, meaning that it is explicitly initiated and run by the COSS in order to maintain the underlying filesystem and indexes.  A symptom of a storage system that might require optimization is filesystem fragmentation; the consequent optimization would be defragmentation, which reorganizes the data layout for quicker reads and writes.  Other optimizations that could be present are dirty page flushing, cache commits and flushes, and other system-level processes handled at the operating system layer.

As noted, content storage on a COSS can be as simple or as complicated as the architecture demands.  Multiple mechanisms exist within the array not only to store data effectively but also to optimize and secure it in the fashion dictated by the service provider or customer.  With this level of data engagement, how can the performance of the underlying COSS be tailored from an SLA standpoint?


Feedback, as always, is most welcome!
Cheers,
Dave Graham

  • Would SLAs ever come into action with COSS? I would say that the type of app/workload likely to use it could most likely never be classed as requiring a stringent SLA (maybe for now anyway).

    Maybe the lack of definition of SLA for cloud to date is why cloud is being made so tangible; once tools and architecture are implemented within “cloud” architecture it moves back into the dark ages (i.e. today) and back to the hosting model.

    Hopefully I've not missed the point here
