February 2008

Thoughts #2

by Dave Graham on February 19, 2008


So, in an effort to simplify my life, I’ve taken to keeping the subject lines on my posts as simple as possible (well, that and the fact that I’m usually responding to 15 different blogs within the context of one big post). Today, I’ll try to break things out as cleanly as I can. Deal?

Storagezilla: Where PCs go to Die
Grand title, as you’re aware, and I definitely feel there are two different threads to take a look at. Like Storagezilla, I maintain an active subscription to National Geographic. In last month’s issue, there was an article on computer disposal and the issues that arise once your Mac/PC/whatever is picked up at the curb. The most damning information concerned where your electronic waste ends up: typically in third-world countries (sovereign African nations most notably) with little to no import restrictions in place regarding hazardous material. To put this into context (as Storagezilla so notably explains), we’re having a little issue with a US spy satellite that has 1,000 pounds of Hydrazine in its fuel cells. All this furor is being created about the relative toxicity of Hydrazine when, just a scant 5,000 miles away from the US, we have millions of folks who are being exposed to hazardous fumes, lead, mercury, and any number of other chemicals/carcinogens that can (and do) cause birth defects, cancer, etc. While I won’t go so far as to say “Don’t buy a computer,” I think that Greenpeace (toxicity report is here) is on to something by releasing toxicity reports for some mainstream consumer electronic gadgets. So, what can we do?

The answer is both simple and complex. The simplistic approach dictates mandated recycling programs that keep commercial, consumer, and enterprise “trash” within the US (or host country). Tax the hell out of the things if you need to maintain the programs, but constrict the ability of exporters to dump our waste in other locales. The complex answer is…well, tighten export controls on electronic waste, audit recycling companies for compliance with federal/international “waste” standards (RoHS, et al.), tax the hell out of consumer electronic waste (even $5.00 on the sale of every computing system would be enough, I’m sure), and actually MONITOR the process. Not to turn this political at all, but this is a bipartisan matter that won’t survive if it’s strictly a Republican or Democratic or Green Party thing.

Thoughts?

Rough Type: EMC’s “very massive” Storage Cloud

So, this is a new blog I got turned on to (the leap came from Storagezilla to O’Reilly Radar to Rough Type), and it has a lead article on EMC’s cloud storage initiative. While Nick Carr spends most of the blog post regurgitating news from Chuck Hollis, I did find it refreshing to read the following comment:

“Like other traditional IT component suppliers, EMC sees cloud computing as both threat and opportunity. On the one hand, it could put a large dent into individual businesses’ demand for on-site storage systems, EMC’s bread and butter. On the other hand, somebody has to build and run the storage cloud, and EMC has the scale and expertise to be a big player in this new business.” (Nick Carr)

Quite simply, cloud storage (if that’s the defined term we’re using) is a natural extension of Software as a Service and Storage as a Service (both are SaaS, right? ;) ). EMC’s first foray into this world was through the acquisition of Mozy and now, with a more SMB focus in hand, they’re moving to other avenues of the “as a Service” model. The purported integration with SAP, for example, would promote the Software service end, while EMC could quite easily pick up the Storage service end. Is it a perfect union? Time will tell, but, as Amazon found out, service-based storage/grid/etc. isn’t a bed of roses. What EMC needs to plan for (just as with any array they manufacture) is redundancy, availability, and performance. Literally, we need to practice what we preach when it comes to the deliverables. We’re constantly injecting ourselves into the customer’s ILM strategies and advising them on best practices, but if we can’t implement them ourselves (and at a wider scale than we have ever done), we’re hypocrites of the greatest kind. I’ll leave it up to the smarter people I work with to figure that mechanism of protection out, but…let it be known that the world’s gauntlet has been thrown in our direction.

Thoughts?

I’ll update later as time permits.

cheers,

Dave




Thoughts #1

by Dave Graham on February 14, 2008


Being a slave to technology really isn’t as bad as everyone thinks. For example, I’m currently sitting in a car dealership, waiting for my oil to be changed and, well, banging away at the tiny keys on this Blackberry 8800. To that end, I’m able to take some time to review some of the stories that I feel have had some measure of impact in the storage world.

First off, XIV going to IBM (linked here, here, here, and here). I never knew Moshe Yanai, and honestly, the particular markets I work in don’t necessarily benefit from the Symmetrix or its architecture. So, fundamentally, I couldn’t care less about IBM taking on the Symm in the Enterprise (versus some of the other folks out there blowing hot air about it…). That being said, in reviewing the Nextra white papers and the comments/blogs of folks who are more intelligent than I am, I can see where Nextra could have a trickle-down impact on other discrete EMC (or competitor) products, namely nearline archive or A-level Commercial accounts.

The devil is in the details, though. For example, Centera has always operated on the RAIN (Redundant Array of Independent Nodes) principle, and consequently the architecture it encompasses has a very long product lifecycle. Changes can be made at the evolutionary level (i.e., shifting from NetBurst P4 processors to Sossaman Xeons and the accompanying board logic) rather than the revolutionary, and literally, software/OS changes cause the most impact on performance. At the end of the day, the differentiation is at the software level (and API integration, lest we forget :) ), not hardware. Where Nextra seems to throw itself into the ring is in the fundamental flexibility of its hardware. Don’t need quite the same processing threshold as Company X but want more storage? Use fewer compute nodes and more storage nodes. Need more ingest power? Add connectivity nodes. Etc., etc. This blade- or node-based architecture allows for “hot” config changes when needed and appears to allow for pretty linear performance/storage scaling. Beyond the “hot” expansion (and not really having any clean insight into the software layer running on Nextra), one has to assume that, at minimum, there is a custom clustering software package floating above the hardware. Contrasting this to Centera, then, what really is the difference?

A.) Drive count vs. processing/connectivity count. With Nextra, you can have an array biased towards storage or connectivity. With Centera, each node upgrade that you add has the potential to be both storage AND connectivity, but it requires a reconfig at the master cluster level.
B.) Capacity. Nextra is designed to scale to multiple PBs by introducing 1TB drives as the baseline storage config. Centera, while currently shipping with 750GB drive configs, will obviously roadmap to 1TB drives for 4TB per node (16TB per 4-node cluster; that’s raw storage prior to penalties for CentraStar, CPP or CPM, etc.; the sketch after this list runs those raw numbers). Again, Centera is designed from a clustering standpoint, so a multiple-petabyte cluster is feasibly within reason, provided the infrastructure is there.
C.) Power. No contest, really. Centera is designed to function within a very strict “green” envelope and, from the hardware perspective, is very “performance per watt” oriented. (Granted, I believe they could eke more performance out of a low-power Athlon64 processor while keeping within the same thermal/power guidelines…but I digress.) Nextra, again by design, fits into an enterprise data center-grade power threshold and, consequently, even using SATA drives, will have much higher power consumption and overhead. If they use spin-down on the disks, then perhaps they can achieve better ratios, but if the usage profile per customer doesn’t fit, that advantage is lost.
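As a quick illustration of points A and B, here’s a back-of-the-envelope sketch in Python. The drive sizes and the 4-node cluster come straight from the numbers above; the four-drives-per-node figure is inferred from “1TB drives for 4TB per node,” and the node-mix example at the end is purely hypothetical, meant only to show the storage-vs.-compute bias Nextra is described as allowing, not any actual product configuration.

```python
# Back-of-the-envelope model of node-based capacity scaling, as described above.
# Numbers are illustrative only: 4 drives per node (inferred), 750GB drives for
# the current Centera config, 1TB drives for the roadmap / Nextra-style nodes.

DRIVES_PER_NODE = 4  # assumption, from "1TB drives for 4TB per node"

def raw_node_capacity_tb(drive_size_tb: float) -> float:
    """Raw capacity of a single storage node, before any protection overhead."""
    return DRIVES_PER_NODE * drive_size_tb

def raw_cluster_capacity_tb(storage_nodes: int, drive_size_tb: float) -> float:
    """Raw cluster capacity; protection schemes (CPP/CPM, etc.) reduce usable space."""
    return storage_nodes * raw_node_capacity_tb(drive_size_tb)

if __name__ == "__main__":
    # The post's example: a 4-node cluster of 1TB drives -> 16TB raw.
    print(raw_cluster_capacity_tb(4, 1.0))    # 16.0

    # Today's 750GB drives in the same 4-node cluster -> 12TB raw.
    print(raw_cluster_capacity_tb(4, 0.75))   # 12.0

    # Hypothetical Nextra-style bias: a 16-node chassis split 12 storage /
    # 2 compute / 2 connectivity. Raw capacity tracks storage nodes only;
    # compute and connectivity nodes scale ingest/processing independently.
    print(raw_cluster_capacity_tb(12, 1.0))   # 48.0
```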

Anyhow, I’ll probably revise this list as we move along here, but…I just wanted this to be food for thought.

cheers,

Dave



I’m Back!!!

by Dave Graham on February 5, 2008


So, I was reminded today that I’ve been woefully remiss in actually blogging according to my insanely optimistic schedule from last year (September, iirc). The thing that jogged my memory was seeing Chuck Hollis enter the meeting that I was attending today with EMC’s Inside Sales – America group. To that end, I’ve ripped through my favourite blogs today (see my blogroll below) and will hopefully start posting again with some “meaty” posts…

ta,

Dave


