Cloud computing

The Destination

by Dave Graham on October 22, 2010


So, it appears that I’ve been relatively successful in casting some level of intrigue around where I’m going after EMC. I’ve heard everything from VMware to Acadia to heaven-knows-what, and while each of these companies is AWESOME at what it does, they’re not what’s grabbing my attention.

When I started into the cloud space, I was amazed at the capabilities that “the cloud” offered. Whether you choose the public, private, or (dare I say it?) hybrid moniker for how you implement a cloud ecosystem, one fact still remains: data needs to be moved. Obviously, EMC has put a considerable amount of time and effort into building a solid product set in Vblock and Atmos, and the recent acquisition of Bycast by NetApp, the partnerships with Caringo and Cleversafe by others, et al. all serve to drive this point home. Face it, the cloud is here and it’s not going away.

With that in mind, I decided (and it wasn’t an easy decision) to look at some of the technologies being developed in the cloud space and jump in with both feet. The initial brush with this was Atmos. As one of the guys responsible for developing and sustaining Atmos Virtual Edition (I won’t claim this was my idea by any stretch…there’s a LOT of talent wrapped up in the Atmos group that had input here), I recognized early on that easing transitions to the cloud by “re-using” hardware that was already present was a good thing. Virtualization made this even easier, as everyone these days is thinking along those lines…Once I saw the impact that Atmos Virtual Edition had, the next logical step was the migration of block assets to the cloud, whether that be Atmos, S3, Iron Mountain Digital, or another technology. To that end, on October 25th, 2010, I will become a Senior Systems Engineer at Cirtas Systems, Inc.

I’ll be writing more on Cirtas and their exciting Bluejet appliance later (as I get into the thick of things) but I’m very impressed with the capabilities that they offer and look forward to working with you (EMC, NTAP, HP, Dell, et al) in this exciting space!

See you on the other side…

cheers,

Dave


Atmos 1.3 Released!

by Dave Graham on February 16, 2010


[Removed by Request]


Why Policy is the future of storage

by Dave Graham on September 20, 2009


As many of you may know, I work for EMC’s Cloud Infrastructure Group as part of the Atmos solution team. In this role, I’ve been blessed with getting a closer look at where the future of cloud storage is going as well as some of the drivers that will get it there. In this post, I’d like to talk a bit about policy and how this will shape the future of storage. I’m going to keep this as abstracted from product as possible, but where appropriate, I’ll try to show you how products are implementing this technology TODAY.

What is Policy?

By definition, policy is “[an] action or procedure conforming to or considered with reference to prudence or expediency” (dictionary.com for that definition).  Viewed in the context of storage systems and management, policy, then, is the set of actions (scripted or otherwise) applied to data to provide for its retrieval, performance, or manipulation by systems.  In other words, policy is an engine that manages data from start to finish.  To see why this is important, we need to look at what the typical management stack looks like today.

Data is created by users accessing programs that are tied to physical and virtual resources.  This generated data is then processed and stored by the programs and their underlying storage I/O layers (LVMs, hypervisor I/O stacks, etc.) onto some sort of storage device (SAN, NAS, DAS, etc.), where it sits until the next access.  In essence, once data is created it is considered to be “at rest” until it is next accessed (if ever).  Within this data generation and storage continuum, the process is fundamentally simple: generated data is put directly to storage.  However, if the data continues to sit in the same place endlessly, it typically becomes inefficient to retrieve and access.  Managing this data has traditionally been a manual process in which data, LUNs, and their topologies had to be moved around using array- or host-based tools to provide a better “fit” for data at rest or better performance on access.  This is where policy steps in.

Policy uses hooks into data (also known as metadata) in order to enact controls.  Please see this post for a more detailed explanation of metadata.
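To make those metadata “hooks” a bit more concrete, here’s a minimal sketch in Python of the kind of attributes a policy engine might evaluate alongside an object. The field names are purely illustrative and don’t reflect any particular product’s schema:

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class StoredObject:
    """An object plus the metadata a policy engine could act on (hypothetical fields)."""
    object_id: str
    size_bytes: int
    created_at: datetime
    tags: dict = field(default_factory=dict)  # free-form, application-supplied tags

    def age_days(self) -> float:
        """Object age in days -- the hook an age-based policy would key off."""
        return (datetime.now(timezone.utc) - self.created_at).total_seconds() / 86400

# An example object carrying metadata that a policy could evaluate.
obj = StoredObject(
    object_id="report-q3-2009",
    size_bytes=512_000,
    created_at=datetime(2009, 9, 1, tzinfo=timezone.utc),
    tags={"department": "finance", "retention": "7y"},
)
print(round(obj.age_days(), 1))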

Why use Policies?

If the previous example shows anything, it’s that the management of data is fundamentally…well, boring and manual.  Policy provides a method of controlling the stack of data ingest AND data management while allowing the business to continue to generate, retrieve, and manipulate data.  For example, a simple policy that could be enacted against data might read as follows:

if data < 14 days old, store on EFD drives, LUN 11; if > 14 days old, store on SATA drives, LUN 33

Obviously, that’s a high-level abstraction of what the actual process for data control would look like, but it drives the point home.  What used to be a manual LUN migration exercise, done to “performance” or “store” data, is now set by a logical control structure that can be automagically enacted on the storage system itself.  A working example of this type of policy can be seen in the automated tiering provided by Compellent and by EMC’s FAST for storage management.  Pretty cool, huh?
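Purely for illustration, here’s what that 14-day rule might look like expressed as code. This is a toy sketch of the idea, not how FAST, Compellent, or any other product actually implements tiering; the media types and LUN numbers are simply carried over from the example above:

# Toy age-based placement rule; EFD/SATA and LUNs 11/33 come from the example above.
def choose_placement(age_days: float):
    """Map an object's age (in days) to a media type and LUN."""
    if age_days < 14:
        return ("EFD", 11)   # hot data stays on flash
    return ("SATA", 33)      # older data moves to cheaper, slower disk

# A 3-day-old object stays on EFD; a 60-day-old object lands on SATA.
assert choose_placement(3) == ("EFD", 11)
assert choose_placement(60) == ("SATA", 33)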

An alternative method of control that isn’t necessarily tied to the storage array is the recent introduction of VMware’s Storage DRS (Distributed Resource Scheduler), which is enacted against the storage I/O stack of VMware’s vSphere hypervisor.

The Future of Policy

Obviously, my examples are very simplistic in nature, but hopefully they make policy technology somewhat more accessible.  As far as the future is concerned, policy is where storage technologies (and even host process management) are headed.  Going forward, simple policy creation and enforcement will be a necessary part of storage pool creation and integration, as well as of the ongoing maintenance and support of storage arrays.

As always, feedback is welcome!

edit: 9/21/09: removed a mis-aligned reference to Atmos storage policy.


Micro-burst: Retrofit or Net-New?

August 12, 2009

I’ve been ruminating on a conversation that I was part of at the recent Cloud Camp – Boston “un-conference.”  In this particular case, a customer (a VAR; NOT a manufacturer) was talking about leveraging cloud storage for a particular customer of theirs who had the following “essential criteria” that needed design help:  multiple petabytes of [...]
