In Parts 1-4 of the Future Storage Systems articles, we focused on the SAN-facing technologies that would enable scalable processing growth, purpose-built technologies for deduplication and encryption, as well as the fabric that would tie nodes together. However, in each of those articles, I never got into WHERE that information would eventually be stored. Today, I’m hoping to remedy that. I’ll be referencing the diagram below as usual.
There are a couple of basic things to observe about this layout. First, the topology is decidedly generic compared to the archetypal backend bus architecture that most storage systems use. Again, I’m not necessarily trying to be vague here, but I’m assuming that I’ll want to dive deeper into that technology in a “Part 6b.” 🙂 Secondly, you’ll notice the presence of “fibre channel” with a question mark. Fibre Channel, as we know it, has really reached the peak of its usage as a drive technology. As a fabric, interconnect technologies such as FCoE still use Fibre Channel as an encapsulation protocol, and I don’t see that changing any time soon. Without getting too deep into that conversation, let’s take a look at what the FSS will support.
Backend Connectivity: The Backbone of Information Management
The first item you should notice on the FSS diagram above is the connectivity between the I/O Complex (aka “nodes”) and the backend disk storage. As stated previously, the single line drawn between the two portions simply represents an abstracted connection, not the link count or topology. The second item to notice is the relative absence of fibre channel as a disk topology outside of 3.5″ disk. The reason is very simple: there are no immediate designs available for next-generation fibre channel disk (i.e. 8Gb/s fibre channel drives) from any manufacturer. This is not to say there won’t be 8Gb/s disk; on the contrary, disk of that nature may very well be coming. I just don’t see the applicability of fibre as a connectivity medium for disk lasting long term.
Let’s take a look at several of the FSS backend storage dimensions in detail.
Physical:
By “physical” I mean the actual physical layout and topology of the expansion chassis and disk. Each of the disk enclosures can be broken out by type of disk and expansion room. For example, due to physical size, 3.5″ drive enclosures are limited to 3U in height. You could obviously use smaller 2U enclosures instead (reference the AX4-5 with its 12-drive enclosures), and the resulting scale is greater than the 3U chassis with its 15-16 drives (36 drives in 6U vs. 30). I think this comes down to engineering preference more than anything else.
Moving from the 3.5″ drive form factor to the 2.5″ form factor, you gain even greater scale. In a typical 2U disk enclosure, you can fit up to 24 drives (most of the designs I’ve seen out there are built around this basic layout). Consider the scale: 72 drives in the 2.5″ form factor in 6U vs. 30-36 in the 3.5″ form factor. The same holds true for Solid State Disks (SSDs, EFDs, etc.) in these enclosures. Massive scale in a minimal footprint.
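To make that density math concrete, here’s a quick back-of-the-envelope sketch. The enclosure heights and drive counts are simply the figures assumed above (3U/15-drive and 2U/12-drive 3.5″ shelves vs. a 2U/24-drive 2.5″ shelf), not vendor specifications:

```python
# Back-of-the-envelope drive density comparison for a fixed 6U footprint.
# Enclosure sizes and per-shelf drive counts are illustrative assumptions.

RACK_BUDGET_U = 6  # compare everything in the same 6U of rack space

enclosures = {
    '3.5" 3U shelf (15 drives)': (3, 15),
    '3.5" 2U shelf (12 drives)': (2, 12),
    '2.5" 2U shelf (24 drives)': (2, 24),
}

for name, (height_u, drives_per_shelf) in enclosures.items():
    shelves = RACK_BUDGET_U // height_u          # whole shelves that fit
    total_drives = shelves * drives_per_shelf
    print(f"{name}: {shelves} shelves in {RACK_BUDGET_U}U -> {total_drives} drives")

# Output:
# 3.5" 3U shelf (15 drives): 2 shelves in 6U -> 30 drives
# 3.5" 2U shelf (12 drives): 3 shelves in 6U -> 36 drives
# 2.5" 2U shelf (24 drives): 3 shelves in 6U -> 72 drives
```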
Expansion and Performance:
Expansion is one of those gray areas for storage vendors. In a good design, you’d minimize the bandwidth loss inherent in FC-AL designs, where traffic has to traverse a complete loop circuit from enclosure 1 to enclosure X. It just makes sense. Practically, this works itself out in a couple of ways: use high-bandwidth point-to-point switching within the enclosures themselves (FC-SW), or figure out a way to tie enclosure X to a specific node (keeping the loops simple). Both ideas have merit, and in trying to pick one type of expansion over another, you’ve got to toss the performance metric into the ring. Practically, an internal switched design in the enclosure is the way to go from a performance standpoint. However, at the “ends,” you’re going to be limited by your interconnect design. Being able to take the massive RAID group bandwidth and channel it right back to the processing node requires an interconnect (like FCoE or, better yet, InfiniBand) with low latency and high bandwidth. Companies like Isilon have already implemented that sort of technology in their arrays and, while their business model hasn’t been successful, the technology end is decidedly interesting.
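To put that bottleneck argument in rough numbers, here’s a small illustrative sketch. The per-drive throughput, RAID group size, link speeds, and the flat 20% protocol/encoding haircut are all assumptions I’ve picked for illustration, not benchmarks:

```python
# Rough sketch: does the aggregate throughput of a striped RAID group exceed
# what the interconnect back to the processing node can carry?
# All figures are illustrative assumptions, not measurements.

def usable_mbs(link_gbps, overhead=0.8):
    """Convert nominal link speed (Gb/s) to usable MB/s with a ~20% protocol haircut."""
    return link_gbps * 1000 / 8 * overhead

drives_in_group = 16
per_drive_mbs = 90                      # assumed sustained sequential MB/s per drive
raid_group_mbs = drives_in_group * per_drive_mbs

links = {
    "4Gb/s FC-AL shared loop": usable_mbs(4),
    "8Gb/s FC switched link": usable_mbs(8),
    "10Gb/s FCoE uplink": usable_mbs(10),
    "4x DDR InfiniBand (16Gb/s data)": usable_mbs(16),
}

print(f"RAID group aggregate: ~{raid_group_mbs} MB/s")
for name, mbs in links.items():
    verdict = "interconnect-bound" if mbs < raid_group_mbs else "drive-bound"
    print(f"{name}: ~{mbs:.0f} MB/s usable -> {verdict}")
```

Under those assumptions, a single shared loop (or even a single 8Gb/s or 10Gb/s link) caps out well below what the RAID group can deliver, which is exactly why the “ends” of the design matter so much.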
I’ve kept this general for a reason. Each of the disk technologies underlying the FSS requires its own subset article, and I’ll take the time to discuss those in the future. Especially pertinent is the emergence of SSDs (Solid State Disks) as powerful alternatives to mechanical disk drives. In any case, if you have any questions about what you’ve read here, please let me know.