March 2008

More on the SAS vs. Fibre debate

by Dave Graham on March 25, 2008


Connectivity Reliability

At some point, I had typed up a bit about the physical interfaces present on both SAS and Fibre drives. It appears that I ran roughshod over that particular point which, upon further thought, is a very important dimension of drive reliability.

As noted previously, SAS drives use an amended SATA data+power connectivity scheme. Instead of the notch between the data and power connections present on SATA drives, SAS drives simply “bridge” that gap with an extra helping of plastic. This not only turns the somewhat flimsy SATA connector into a more robust solution, it also requires that the host connector support that bridging. An interesting note here: a SAS host connector supports SATA drives, but a SATA host connector will not support SAS drives. This is somewhat mitigated by various host implementations (i.e. using a SAS connector on a backplane with discrete SATA data connectivity from the backplane to the mainboard), but generally, that is the rule. SAS drives feature a male connectivity block which mates to a female SAS connectivity block on the host system. Pretty basic stuff.
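
If it helps to see that rule spelled out, here’s a trivial, purely illustrative sketch (not any real tool or library; just the asymmetric compatibility rule expressed in code form):

    # Illustrative only: the asymmetric SAS/SATA plug-compatibility rule.
    # A SAS host/backplane connector accepts both SAS and SATA drives; a SATA
    # host connector won't seat a SAS drive, because the bridged ("keyed") area
    # on the SAS drive can't fit into the notched SATA receptacle.
    def drive_seats_in_host(drive, host):
        if host == "SAS":
            return drive in ("SAS", "SATA")  # SAS hosts take either drive type
        if host == "SATA":
            return drive == "SATA"           # SATA hosts take SATA drives only
        raise ValueError("unknown host connector type: " + host)

    assert drive_seats_in_host("SATA", "SAS")      # SATA drive, SAS backplane: fine
    assert not drive_seats_in_host("SAS", "SATA")  # SAS drive, SATA port: won't seat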

Fibre drives, on the other hand, use an SCA (Single Connector Attachment) interface that is, again, male on the drive side and female on the host side. It’s definitely simpler in design and implementation (and is featured within all current EMC arrays) and, honestly, when push comes to shove, something I inherently trust more. The same idea is present with SCA-80 Ultra320 SCSI drives as well. The fit here is definitely more secure, with less mechanical stress placed on the physical connector (and thus the PCB itself) than with the SAS solution.

There are always caveats with distinct designs, however, and I’d like to highlight some below.
a.) The SAS data+power connector is inherently MORE secure than the standard SATA interface. Truth be told, I’ve broken SATA data connectors. It’s really not hard, since the data connection is a discrete “tab” separate from the power interface (which I’ve broken as well). The addition of the plastic “bridge” between the data and power connections on SAS drives promotes a stronger bond between the connector (whether SFF or backplane based) and the drive itself. It also keeps folks from mistakenly connecting SAS drives to SATA ports. ;)
b.) The SAS interface is still prone to breakage as compared to SCA-40/80 connections. There’s a reason why we do a conversion within our drive caddies from SATA to Fibre (outside of the obvious protocol translation and sniffer obligations): it’s more secure. The mating mechanism of the SCA interface places no single point of stress on the connector, since there is a nesting process that takes place. Not so with the SAS interface: you have a significant protrusion into the caddy area that, if improperly aligned, can cause damage. If you misalign the SCA interface, you simply can’t make the connection, and there are no protrusion difficulties.

Note: The good news in all of this (at least from my perspective @ EMC) is that we’re not going to allow you to screw this connectivity up. ;) We mount the drives in our carriers, put them in the array and, well, we’ve got you covered. ;)

In any case, this is really just further clarification of yesterday’s post. Hopefully it gives a little more food for thought.



SAS vs. Fibre

One thing I hear about constantly (within the hallowed halls of EMC and elsewhere) is the general “inferiority” of SAS drives vs. Fibre. This usually comes complete with a somewhat stale argument that because SAS is a natural extension of SATA, it is therefore a “consumer” drive and not “good enough” for the Commercial or Enterprise disk space.

Really?

What most people fail to realize is the following:
a.) The platters, drive motors, heads, etc. are the same. If people actually spent the time looking into these products (vs. cutting at them with a wide swath of generalized foolishness), they’d see that the same mechanical “bits” make up both the “enterprise”-class Fibre and SAS drives. Looking at the Seagate Cheetah 15K.5 drive line, we see that it’s offered in SCA-40 (Fibre), SAS, SCA-80 (U320), and 68-pin interfaces. The spec sheet shows that outside of differing transfer rates (and slightly lower power draw at load/idle for SAS than for Fibre), the SCSI and SAS drives are the same.
b.) The primary differentiators are the PCBs, ASICs, physical connectors to the “host” system, and transfer rates. Flipping the drives over, you’ll obviously note the differences in PCBs, onboard ASICs, and physical connectors. That’s a wash, as it has little to nothing to do with reliability. So what you’re left with is the transfer rate conundrum. Honestly, given how particularly bad customers are at actually filling a 4 gigabit per second pipe with data (esp. on the commercial side of the house), a 1 gigabit per second difference (roughly 100 MB/s; quick math below) is minimal. Oh, for the record, our STEC SSDs will only have a 2 Gb/s connection to the Symmetrix, last I heard. ;)
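
To put a rough number on that transfer-rate point, here’s the quick back-of-the-envelope math. The only assumption is the usual 8b/10b encoding on both links, which means each 1 Gb/s of line rate carries roughly 100 MB/s of payload:

    # Rough payload bandwidth: 4 Gb/s Fibre Channel vs. 3 Gb/s SAS.
    # Both links use 8b/10b encoding, so ~1 Gb/s of line rate moves ~100 MB/s of data.
    payload_mbs_per_gbit = 100                 # approx. MB/s of payload per Gb/s of line rate

    fc_payload  = 4.0 * payload_mbs_per_gbit   # ~400 MB/s for 4 Gb/s Fibre Channel
    sas_payload = 3.0 * payload_mbs_per_gbit   # ~300 MB/s for 3 Gb/s SAS
    print(fc_payload - sas_payload)            # ~100 MB/s difference

Whether a commercial workload ever actually saturates either pipe is, of course, the real question.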

I think those two points just about cover it. ;) MTBF, etc. are exactly the same, btw, so don’t expect any differences in hardware longevity.

Seagate and SSDs: WE SUE YOU!

Engadget is one of my favourite reads during the day and, consequently, I need to blog about articles located there more often. That being said, I almost fell out of my seat this morning when I read one of the latest postings: “Seagate warns it might sue SSD makers for patent infringement.” Yippee. In my opinion, this is more of the same “sue happy” nitwitery (is that a word?) that happens every single time Apple decides to release a new “product.” Some Rip van Winkle patent hound comes out of years of slumber and states, “I patented the EXACT same technology using specious language and vague intimations of what I thought could work,” much to the chagrin of everyone around. Now, in the case of Seagate and Western Digital, I believe they’re just looking to diversify their holdings in the emerging SSD market. Remember, on price per gigabyte/terabyte, spinning disk is still the king and will be for quite some time. However, in terms of power draw and raw IOPS, you can’t beat SSDs. In any case, file this whole article under the “we want in (and the money wouldn’t be a bad thing either)” category. ;)

EDIT: (4/10/08 @ 11:03pm EST) New entry added above on the Seagate vs. STEC lawsuit.

Sun: We’re going optical (with LASER BEAMS!!!!)

Next on the hit list is the re-emergence of optical interconnects between processors, as noted by Sun (and its recent DARPA grant). Great news for Sun, really, but IBM has already been doing this for some time. Optical links ARE the wave of the future for processor interconnects, especially where quantum computing (and its massive data loads) is concerned. Definitely something to pay attention to. Who knows? Maybe EMC will use optical transmission in its Symmetrix line between the blades. ;) A boy can hope.

That’s all for now.

Peace,

Dave


Piggy-back concepts of “Greening” the datacenter

by Dave Graham on March 21, 2008


I’ve had a LOT of fun lately reading Mark Lewis’ blog (found here) as he delves into green data center concepts. To rehash some of what has already been talked about on how to “green” your data center:

a.) Tier your storage. Higher-speed spindles, by nature, consume more power. (Compare the specs for the Seagate Barracuda ES.2 Enterprise SATA drive to those of the Seagate Cheetah 15K.5 FC/SAS drives.) By moving your data from higher-speed spindles to lower-speed spindles based on usage/access patterns within a larger system policy framework, you can keep power consumption low overall. Better yet, archive it off to a Centera and remove the need for tiering within the array to begin with. ;)
b.) Virtualize, Virtualize, Virtualize. Sure, it’s the “trendy” thing to do these days, but with the ability to collapse 30:1 (physical to virtual) in some cases, simply investing in VMware (of course) will cut down on your power footprint and requirements. From the host side, using devices like Tyan’s EXCELLENT Transport GT28 (B2935) with AMD’s quad-core Opteron processors allows rack-dense ESX clusters to be created that can scale to (get ready for it) 160 physical sockets / 640 cores per 40U rack and 320 Gigabit Ethernet ports (quick math at the end of this post). I also forgot to mention that within these 1Us, you can install low-profile 2-port QLogic QLE2462 4 Gb/s Fibre Channel cards to allow multi-protocol attached storage to be used. *hint, hint* I think this would be a GREAT platform for the next EMC Celerra. ;)
c.) Use different storage media. By “different storage media,” I am referring to the availability of SLC/MLC flash drives and the pervasive use of 2.5″ Fibre/SAS drives within the data center. I’ve already waxed eloquent before on the merits of using 2.5″ drives (lower power consumption, fewer moving parts, typically faster access times than comparable 3.5″ drives, etc.) and I’m anxiously waiting to see if EMC will adopt these drives for their arrays. With 2.5″ drives coming close to their 3.5″ counterparts in platter density (500GB 2.5″ SATA drives are already available in the market), I think there is less of a reason to continue using 3.5″ drives for nearline storage. Flash, on the other hand, while available in smaller capacities, takes the speed and power equation to a whole different level. I’ll let the Storage Anarchist explain the details:

“As you’ve probably read by now, the STEC ZeusIOPS drives themselves are in fact optimized for random AND sequential I/O patterns, unlike the lower cost flash drives aimed at the laptop market. They use a generously sized SDRAM cache to improve sequential read performance and to delay and coalesce writes. They implement a massively parallel internal infrastructure that simultaneously reads (or writes) a small amount of data from a large number of Flash chips concurrently to overcome the inherent Flash latencies. Every write is remapped to a different bank of Flash as part of the wear leveling, and they employ a few other tricks that I’ve been told I can’t disclose to maximize write performance. They employ multi-bit EDC (Error Detection) and ECC (Error Correction) and bad-block remapping into reserved capacity of the drives. And yes, they have sufficient internal backup power to destage pending writes (and the mapping tables) to persistent storage in the event of a total power failure.”
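
To make the “every write is remapped” idea a little more concrete, here’s a toy sketch of a flash translation layer in that spirit. To be clear, this is not STEC’s firmware (those details are exactly the bits that can’t be disclosed); it’s just a minimal illustration of remapping logical blocks to fresh physical locations, interleaved across banks, with a table tracking the indirection:

    # Toy flash translation layer: every write of a logical block lands on a new
    # physical location, rotated across banks. Real drives add garbage collection,
    # erase handling, ECC, and far smarter wear leveling than this sketch.
    import itertools

    class ToyFTL:
        def __init__(self, banks, blocks_per_bank):
            # Physical locations interleaved across banks, so back-to-back writes
            # hit different banks (crude wear spreading and parallelism).
            self.free = itertools.cycle(
                (bank, blk)
                for blk in range(blocks_per_bank)
                for bank in range(banks)
            )
            self.map = {}  # logical block -> (bank, physical block)

        def write(self, logical_block, data):
            # Remap on every write; the previously used location would be queued
            # for erase/reclaim in a real drive. (Actual programming of `data`
            # into flash is elided in this toy.)
            self.map[logical_block] = next(self.free)

        def read(self, logical_block):
            return self.map.get(logical_block)  # where the data currently lives

    ftl = ToyFTL(banks=8, blocks_per_bank=1024)
    for _ in range(5):
        ftl.write(0, b"same logical block, rewritten")  # five rewrites of block 0
    print(ftl.read(0))  # (4, 0): a different bank than the first write used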

In any case, these are some quick notes from me this AM. Definitely am looking forward to delving into the Tyan GT28/AMD Quad Core stuff in the next few days.
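
As a teaser for that post, here’s the back-of-the-envelope math behind the rack-density numbers quoted above. I’m working backwards from those totals, and the per-node breakdown (two dual-socket nodes per GT28 1U, four GbE ports per node) is my assumption, so check it against Tyan’s spec sheet:

    # Back-of-the-envelope density for a 40U rack of Tyan GT28 (B2935) systems.
    # Assumed per-chassis breakdown: two dual-socket nodes per 1U, quad-core
    # Opterons, four GbE ports per node (my read; verify against the spec sheet).
    rack_units         = 40
    nodes_per_1u       = 2
    sockets_per_node   = 2
    cores_per_socket   = 4
    gbe_ports_per_node = 4

    sockets = rack_units * nodes_per_1u * sockets_per_node    # 160 physical sockets
    cores   = sockets * cores_per_socket                      # 640 cores
    gbe     = rack_units * nodes_per_1u * gbe_ports_per_node  # 320 GbE ports
    print(sockets, cores, gbe)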

Happy Friday!


On Iomega (and other musings)

March 20, 2008

So, for due diligence purposes, I’m going to remind you to read that little disclaimer stuck in the upper right hand corner of this blog. Since that little bit is over with, let’s get on with the rest of this blog.
DailyTech – EMC Walks Back to Iomega With Revised Offer for Acquisition
If you read [...]
