
Going Tapeless in Enterprise

by dave on January 28, 2009


I was asked the other day about the potential for finally going tapeless in the Commercial and Enterprise spaces. Truth be told, this is becoming a more common occurrence as those mechanical beasts hit the tail end of their maintenance windows. With that in mind, what are some (not all) of the business drivers that move enterprises from tape to disk (or other media)?





Obviously, I do actively read and manage my blog. To that end, one of the nifty little features of WordPress (and undoubtedly other blogging platforms) is the ability to “see” what search terms people use to land on your blog posts. One of the most fascinating searches had to do with the phrase “Centera vs. Symmetrix.” There are other good search metrics I’ve seen, but I thought I’d delve into this one for a second.
Centera vs. Symmetrix
As you’ve undoubtedly read before, I did a quick drive-by of the Nextra and, in it, promoted the concept that Nextra could become a significant competitor to EMC’s Centera. While this may be slighting both the Nextra and the Centera somewhat, it does point to the fundamentals of nearline archive being a significant battleground in the coming years. So, to flip this on its head a little, let’s look at the Centera vs. the Symmetrix as holistic entities dedicated to storing YOUR information.

The Symmetrix is a purpose-built, multi-tiered storage system with infinite expandability (well, finite, really, but hyperbole works well, right? ;) ), broad connectivity, and AT LEAST three tiers of discrete information storage (Tier 0 [SSDs], Tier 1 [Fibre Channel], Tiers 2-5 [SATA]). The Symmetrix will connect to anything from mainframes to lowly Windows 2003 servers. It has completely redundant pathways to your data and features a high-speed internal bus interconnecting the blades.

The Centera is a system based on the RAIN (Redundant Array of Independent Nodes) principle. By itself, a Centera node is realistically nothing more than a purpose-built 1U server with specialized policy-based software sitting on top of a very stable Linux OS. (The Centera guys will more than likely want to harm me for distilling it down that far.) However, moving the Centera “nodes” from standalone to clusters (aka 4-node “base” units) really changes things and highlights the power of the OS and hardware. Connectivity is limited to IP only (GigE, please!), and the nodes communicate with each other over IP (a dedicated private LAN) as well. It’s not quite as flexible on front-end connectivity and definitely not the champion of speed by any stretch of the imagination (thanks to SATA drives), but it is very serviceable when using the API to communicate directly. Remember, the Centera is geared toward archive, not Tier 0-3 application sets (though it appears to function quite well at the Tier 2-5 levels depending on the application).
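Since you talk to a Centera through its API rather than through a LUN, the core idea worth internalizing is content-addressed storage: you hand the cluster an object, and it hands you back an address derived from the content itself. Here’s a minimal, hypothetical sketch of that pattern in Python. To be clear, this is not the actual Centera SDK (whose real API differs considerably); it just illustrates the write-once, address-by-digest model.

```python
import hashlib

class ContentAddressedStore:
    """Toy content-addressed store illustrating the Centera-style
    pattern: objects are retrieved by a digest of their content,
    not by a filename or a LUN/block address."""

    def __init__(self):
        self._objects = {}  # content address -> immutable content

    def write(self, data: bytes) -> str:
        # The "content address" is derived from the data itself,
        # so identical objects are stored exactly once.
        address = hashlib.sha256(data).hexdigest()
        self._objects.setdefault(address, data)
        return address

    def read(self, address: str) -> bytes:
        return self._objects[address]

store = ContentAddressedStore()
addr = store.write(b"2008 invoice batch")
assert store.read(addr) == b"2008 invoice batch"
print(f"stored at content address {addr[:16]}...")
```

That address-by-digest model is exactly why Centera suits archive: the application keeps the address, and the cluster worries about placement, replication, and integrity underneath.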

Hopefully, you’re seeing a pattern here that will answer this particular tag search. If not, here’s the last distillation for you:
Symmetrix: multi-protocol, multi-tier, high-speed storage system
Centera: single-protocol, single-tier, archive storage system

Capiche? ;)

SAS vs. Fibre Challenge

Again, as I’ve pontificated before, I challenge anyone to point out SAS’s shortcomings in reliability and performance versus Fibre Channel drives. I see the market turning to SAS as the replacement for Fibre drives and, well, we’ll see where that goes. To that end, I’ve got an interesting challenge for you readers:

The Challenge:
a.) I need someone with a CX3-10 and someone with an AX4-5 base array, with fibre drives and SAS drives respectively.
b.) I need the fibre and SAS drives in a RAID5 4+1 config with a single LUN bound across it (no contention of spindles).
c.) I need you to run either the latest version of IOMeter or OpenSourceMark (the FileCopy Utility) against that LUN and report back the information.
d.) I’ll compile the table of data results and, if I receive valid results from multiple people, I’ll send the first responders an EMC t-shirt for your time.

Sound like a deal? GREAT! (I’d do it myself, but I have no budget for these things…) For anyone who’d rather script a sanity check than drive a GUI, a rough timing-harness sketch follows below.
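Here’s a minimal, hypothetical harness in Python for that sanity check. It assumes the bound LUN is presented as a writable filesystem path (TEST_PATH is a placeholder you’d point at the test volume), and it only measures coarse sequential throughput, so treat it as a companion to IOMeter or OpenSourceMark, not a replacement.

```python
import os
import time

TEST_PATH = "/mnt/test_lun/bench.dat"  # placeholder: a file on the LUN under test
BLOCK_SIZE = 1024 * 1024               # 1 MiB per write/read
TOTAL_BYTES = 1024 * 1024 * 1024       # 1 GiB working set

def sequential_write(path: str) -> float:
    """Write TOTAL_BYTES sequentially and return throughput in MB/s."""
    block = os.urandom(BLOCK_SIZE)
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(TOTAL_BYTES // BLOCK_SIZE):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # push the data past the page cache
    return TOTAL_BYTES / (time.time() - start) / 1e6

def sequential_read(path: str) -> float:
    """Read the file back sequentially and return throughput in MB/s.
    Note: this pass may be served from cache; drop caches between runs."""
    start = time.time()
    with open(path, "rb") as f:
        while f.read(BLOCK_SIZE):
            pass
    return TOTAL_BYTES / (time.time() - start) / 1e6

if __name__ == "__main__":
    print(f"write: {sequential_write(TEST_PATH):.1f} MB/s")
    print(f"read:  {sequential_read(TEST_PATH):.1f} MB/s")
```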

Checking out now…

Dave




Thoughts #1

by dave on February 14, 2008


Being a slave to technology really isn’t as bad as everyone thinks. For example, I’m currently sitting in a car dealership, waiting for my oil to be changed and, well, banging away at the tiny keys on this BlackBerry 8800. To that end, I’m able to take some time to review some of the stories that I feel have had some measure of impact in the storage world.

First off, XIV going to IBM (linked here, here, here, and here). I never knew Moshe Yanai, and honestly, the particular markets I work in don’t really benefit from the Symmetrix or its architecture. So, fundamentally, I couldn’t care less about IBM taking on the Symm in Enterprise (versus some of the other folks out there blowing hot air about it). That being said, in reviewing the Nextra white papers and the comments/blogs of folks more intelligent than I, I can see where Nextra could have a trickle-down impact on other discrete EMC (or competitor) products, namely nearline archive or A-level Commercial accounts.

The devil is in the details, though. For example, Centera has always operated on the RAIN (Redundant Array of Independent Nodes) principle, and consequently the architecture it encompasses has a very long product lifecycle. Changes can be made at the evolutionary level (i.e., shifting from NetBurst P4 processors to Sossaman Xeons and accompanying board logic) rather than the revolutionary, and literally, software/OS changes can cause the most impact on performance. At the end of the day, the differentiation is at the software level (and API integration, lest we forget :)), not hardware.

Where Nextra seems to throw itself into the ring is in fundamental flexibility of hardware. Don’t need quite the same processing threshold as Company X but want more storage? Use fewer compute nodes and more storage nodes. Need more ingest power? Add connectivity nodes. Etc., etc. This blade- or node-based architecture allows for “hot” config changes when needed and appears to allow for pretty linear performance/storage scaling. Beyond the “hot” expansion (and not really having any clean insight into the software layer running on Nextra), one has to assume that, at minimum, there is a custom clustering software package floating above the hardware.
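To make the “mix and match node roles” idea concrete, here’s a toy Python model of that kind of composition. The node roles and the capacity/ingest figures are purely illustrative assumptions on my part, not published Nextra (or Centera) specs; the point is simply that scaling capacity and scaling ingest become independent decisions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NodeType:
    name: str
    raw_tb: float        # storage contributed per node (assumed figure)
    ingest_mbps: float   # ingest bandwidth contributed per node (assumed figure)

# Illustrative node roles; the numbers are made up for the example.
STORAGE = NodeType("storage", raw_tb=12.0, ingest_mbps=0.0)
CONNECTIVITY = NodeType("connectivity", raw_tb=0.0, ingest_mbps=400.0)
COMPUTE = NodeType("compute", raw_tb=0.0, ingest_mbps=100.0)

def cluster_profile(counts: dict) -> tuple:
    """Aggregate raw capacity (TB) and ingest (MB/s) for a given node mix."""
    raw = sum(nt.raw_tb * n for nt, n in counts.items())
    ingest = sum(nt.ingest_mbps * n for nt, n in counts.items())
    return raw, ingest

# Two very different clusters built from the same building blocks:
deep_archive = cluster_profile({STORAGE: 20, CONNECTIVITY: 2, COMPUTE: 2})
fast_ingest = cluster_profile({STORAGE: 8, CONNECTIVITY: 8, COMPUTE: 4})
print(f"deep archive: {deep_archive[0]:.0f} TB raw, {deep_archive[1]:.0f} MB/s ingest")
print(f"fast ingest:  {fast_ingest[0]:.0f} TB raw, {fast_ingest[1]:.0f} MB/s ingest")
```

Contrasting this with Centera, then, what really is the difference?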

A.) Drive count vs. processing/connectivity count. With Nextra, you can bias an array toward storage or connectivity. With Centera, each node you add has the potential to be both storage AND connectivity, but requires reconfiguration at the master cluster level.
B.) Capacity. Nextra is designed to scale to multiple PBs by introducing 1TB drives as the baseline storage config. Centera, while currently shipping with 750GB drive configs, will obviously roadmap to 1TB drives for 4TB per node (16TB per 4-node cluster; that’s raw storage prior to penalties for CentraStar, CPP or CPM, etc.; see the rough capacity math after this list). Again, Centera is designed from a clustering standpoint so, feasibly, a multi-petabyte cluster is within reason, provided the infrastructure is there.
C.) Power. No contest, really. Centera is designed to function within a very strict “green” envelope and, from the hardware perspective, is very “performance per watt” oriented. (Granted, I believe they could eke more performance out of a low-power Athlon64 processor while keeping within the same thermal/power guidelines… but I digress.) Nextra, again by design, fits into an enterprise-data-center-grade power threshold and, consequently, even using SATA drives, will have much higher power consumption and overhead. If they use spin-down on the disks, then perhaps they can achieve better ratios, but if the usage profile per customer doesn’t fit, they’ve negated its advantages.
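To put rough numbers on point B, here’s a back-of-the-envelope calculation. The protection overheads are my assumptions about how the two Centera schemes behave (CPM keeping two full copies of each object; CPP storing six data fragments plus one parity fragment), so treat the usable figures as illustrative, not quoted specs.

```python
# Back-of-the-envelope: raw vs. usable capacity for a 4-node Centera
# cluster with four 1TB drives per node, under two protection schemes.
# Overhead assumptions are mine, not published figures.

NODES = 4
DRIVES_PER_NODE = 4
DRIVE_TB = 1.0

raw_tb = NODES * DRIVES_PER_NODE * DRIVE_TB   # 16 TB raw, as in point B

usable_cpm = raw_tb / 2        # CPM: two full copies -> ~50% usable
usable_cpp = raw_tb * 6 / 7    # CPP: 6 data + 1 parity fragment -> ~86% usable

print(f"raw:        {raw_tb:.1f} TB")
print(f"CPM usable: {usable_cpm:.1f} TB (mirrored)")
print(f"CPP usable: {usable_cpp:.1f} TB (parity)")
# Neither figure accounts for CentraStar's own overhead, spares, or metadata.
```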

Anyhow, I’ll probably revise this list as we move along here, but…I just wanted this to be food for thought.

cheers,

Dave

