FCoE: “Soft” FCoE Integration

by dave on July 2, 2009


There appears to be quite the interesting discussion going on over at Scott Lowe’s blog regarding FCoE (Fibre Channel over Ethernet) and its relationship to the data center, VMware, and pretty much everything else. I love watching people put their thoughts to electronic ink (via the comments section), but recently I’ve had some pretty interesting conversations around FCoE that focus on different approaches to it within the data center.

It all started with a phone call…

I have a habit of wanting to talk on the phone more than emailing. There’s the obvious human connection (of course), but it also allows me to pontificate without leaving anything tangible behind (thus limiting my ignorance to the ether versus hard copy 😉 ). In any case, I received a phone call from Randy Bias (formerly of GoGrid) late one evening, and he wanted to chat about FCoE. Now, I’m not quite the fabric technologist that my friend Stu Miniman is, but I can generally “talk the line.” One of Randy’s principal questions was around using “soft” FCoE technologies like Open-FCoE to grab the frames and use the physical system interfaces (NICs and PHYs) as simple passthroughs to the CPU for frame decode/encode operations. The rest of what followed was in a similar vein, so we can skip that for now. After processing that conversation, I was able to sit down today and ruminate a bit more on the idea.

Creating a “soft” FCoE system

One of the primary reasons for utilizing FCoE CNAs (converged network adapters) in systems is to offload the I/O from the central processing system within a given host or array. By offloading this processing, you are (of course) freeing up system resources to be dedicated to programs and operating system processes. This is especially pertinent where virtualization is in use, because system resources become pooled (potentially) amongst many different operating systems and programmatic requirements (e.g. multiple SQL, Exchange, and Oracle instances across a single physical box). However, for single systems where resources are spread a bit thicker, there’s no real impetus to look at a hardware solution (provided several basic resources are in place). So, what does this “soft” FCoE system look like?

Basic Requirements: Hardware

Given that today’s CPUs are increasing in physical (and logical) core counts, actually utilizing those resources to their fullest extent becomes rather tricky. What used to require a scaled multi-processor system with many physical sockets can now be accomplished with a rather rudimentary single-CPU system exposing several logical processors. The implications for doing FCoE processing in software on the host versus offloading it to a CNA are enormous. For example, on my current Dell R610 in the lab, I have two Intel Xeon 5520 processors, each with 4 physical cores. Each of those cores can run 2 distinct threads (aka Hyper-Threading), giving me a logical CPU count of 16. Now, depending on how I’ve set things up (in my case I’m virtualizing via VMware’s ESX3i), I will have MANY of those CPUs idle at any given point. So, one of the contention points in favor of hardware offload is removed at that level of CPU count.
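If you want to see what your own box presents, a quick sketch like the following will tally physical cores versus logical CPUs. It assumes a Linux host exposing the standard sysfs CPU topology under /sys/devices/system/cpu; it’s illustrative only and not part of any FCoE stack.

```python
#!/usr/bin/env python
# Rough sketch: count physical cores vs. logical CPUs on a Linux host.
# Assumes the standard sysfs topology layout /sys/devices/system/cpu/cpuN/topology/.
import glob
import os

def cpu_topology():
    physical = set()   # distinct (package_id, core_id) pairs -> physical cores
    logical = 0        # cpuN directories that expose topology -> online logical CPUs
    for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*"):
        topo = os.path.join(path, "topology")
        if not os.path.isdir(topo):
            continue
        logical += 1
        with open(os.path.join(topo, "physical_package_id")) as f:
            pkg = f.read().strip()
        with open(os.path.join(topo, "core_id")) as f:
            core = f.read().strip()
        physical.add((pkg, core))
    return len(physical), logical

if __name__ == "__main__":
    cores, threads = cpu_topology()
    # On a dual Xeon 5520 box this should report 8 cores and 16 logical CPUs.
    print("physical cores: %d, logical CPUs: %d" % (cores, threads))
```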

Another point of pressure has been the system bus and its bandwidth relative to the data and system traffic traversing it. With the advent of PCIe (PCI Express), the amount of data that can be passed through a single expansion slot increased dramatically. With a PCIe Gen 2.0 x4 slot able to pass approximately 2GB/s of data in each direction, the requirement to handle 10GbE, FCoE, and iSCSI is easily met (10GbE tops out at roughly 1.25GB/s of data at line rate). While bandwidth sufficiency is great, it is meaningless if the system bus cannot effectively hand off the data to be processed. The northbridge has historically handled this hand-off (and been where the bottleneck forms), but as data I/O grows ever more rapid, that hand-off will inevitably start queuing and choking system performance. In recognition of this, both Intel and AMD are integrating key components of the northbridge onto the physical CPU die to allow for more direct I/O handling and processing. (Note: amongst current platform providers, AMD simply has the more robust offering here, using HyperTransport and the HTX physical interface layer.)
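To put some rough numbers behind that claim, here’s the back-of-the-envelope arithmetic. These are nominal line rates; real-world throughput will be lower once TLP headers and Ethernet/FCoE framing overhead are accounted for.

```python
# Back-of-the-envelope bandwidth check: PCIe Gen 2.0 x4 vs. a 10GbE link.
PCIE2_GT_PER_LANE = 5.0e9        # 5 GT/s per lane (PCIe 2.0)
PCIE2_ENCODING    = 8.0 / 10.0   # 8b/10b encoding overhead
LANES             = 4

pcie_bytes_per_dir = PCIE2_GT_PER_LANE * PCIE2_ENCODING * LANES / 8
ten_gbe_bytes      = 10.0e9 / 8  # 10 Gbit/s line rate in bytes/s

print("PCIe 2.0 x4, per direction: %.2f GB/s" % (pcie_bytes_per_dir / 1e9))  # ~2.00 GB/s
print("10GbE line rate:            %.2f GB/s" % (ten_gbe_bytes / 1e9))       # ~1.25 GB/s
```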

Another key area worth looking at is the memory subsystem. Again, what used to be bottlenecked by the northbridge (I’m talking about anything pre-Nehalem on the Intel side) has now been resolved by placing the memory controller directly on the physical CPU die. This “uncore,” as it is commonly referred to, makes use of directly connected links to the memory banks, ensuring a healthy amount of memory-to-CPU bandwidth in addition to lower page access times.

Finally, you have the actual physical interface to the network. This can be either copper (common) or optical (rare from a system perspective). It obviously provides the connectivity to your switch gear as well as the rest of your environment. Behind this physical port sits the PHY, which handles the basic operations of the protocol you’re supporting. By layering a software stack over this relatively simple circuit, you can support different types of protocols.

Basic Requirements: Software

There’s a basic assumption that Linux (whichever distro you choose) is closer to God’s purpose for I/O. To that end, my focus here is to promote the idea that a base system running a 2.6.29 (or later) kernel, where Open-FCoE support is baked in, will be most advantageous. My personal preference would be something along the lines of Ubuntu’s LTS server packages, but simply put, it would be easy enough to do via one of the other robust distributions.
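As a quick sanity check before going down this road, something like the sketch below can confirm that the running kernel is at 2.6.29 or later and that an fcoe module is loaded or at least present on disk. The module name and the /lib/modules paths are assumptions based on the mainline Open-FCoE merge; your distro may package things differently.

```python
#!/usr/bin/env python
# Minimal sanity check for a "soft" FCoE host: is the kernel new enough
# (>= 2.6.29, where Open-FCoE landed upstream), and is an 'fcoe' module
# loaded or available? Module name and paths are assumptions; adjust per distro.
import os
import platform

MIN_KERNEL = (2, 6, 29)

def kernel_version():
    # platform.release() looks like "2.6.32-24-server"; keep the numeric part.
    release = platform.release().split("-")[0]
    return tuple(int(p) for p in release.split(".")[:3])

def fcoe_available():
    loaded = os.path.isdir("/sys/module/fcoe")            # module currently loaded?
    modules_dep = "/lib/modules/%s/modules.dep" % platform.release()
    on_disk = False
    if os.path.exists(modules_dep):                        # module shipped with this kernel?
        with open(modules_dep) as f:
            on_disk = any("/fcoe.ko" in line for line in f)
    return loaded or on_disk

if __name__ == "__main__":
    ok = kernel_version() >= MIN_KERNEL
    print("kernel %s: %s" % (platform.release(),
          "ok" if ok else "too old for in-tree Open-FCoE"))
    print("fcoe module: %s" % ("found" if fcoe_available() else "not found"))
```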

Concluding Thoughts

All in all, I think the choice of whether or not to embrace FCoE is a decision that is not without weight. Obviously, there is cost tied to moving an entire infrastructure, and there could be use cases where FCoE isn’t the best fit. Additionally, the choice between a “soft” FCoE implementation and a hardware implementation using CNAs isn’t always clear. Hopefully some of the thoughts I’ve put down here will help you make decisions that are prudent for your business as well as your infrastructure.

For more information on Open-FCoE, I recommend visiting the Open-FCoE project page (open-fcoe.org).

