Atmos Foundations: Hardware

by dave on November 11, 2008


Yesterday, I took a look at Atmos from a software standpoint.  While Atmos is truly a “software” solution, there is an element of hardware to examine as well.  Truth be told, it’s not as interesting as the core Atmos offerings, but there are some notables.  We’ll start with a quick look at 3 basic configurations for Atmos.

[Diagram: 3 sample Atmos configs]

In reviewing the diagram above, there are several things to note.  The hardware supporting Atmos is based on servers (running the Atmos software), switches (Gigabit or 10GbE), and storage (SAS/SATA).  We’ll break each of these categories down into specific components a bit later.  Additionally, the “model #’s” (as it were) are simply pre-packaged configurations at this point.  As the Atmos software continues to spread through the market, you can expect different configurations to be put together, some smaller and some larger.

Servers

Not surprisingly, Atmos runs on clustered servers designed to handle ingest and processing.  The MDS (Metadata Services), MDLS (Metadata Location Services), JS (Job Services), and PM (Policy Manager) components of Atmos are very much bound to processing power within the Atmos host systems.  Figure it this way: these nodes have to calculate objects, hashes, metadata records, etc. and have them ready for quick retrieval.  Thus, it’s not surprising to see that the cluster is based on 1U Dell servers with a decent amount of memory and processing power.  As a side note, these servers are simply validated solutions and are not representative of ultimate customer configurations.  If you don’t like Dell (and there may be a few of you), it’s reasonable to expect that EMC will validate HP or IBM server solutions to run the Atmos software.
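
To make that CPU-bound ingest work a little more concrete, here’s a minimal sketch of the kind of hashing and metadata-record generation an ingest node takes on per object.  All names here are hypothetical illustrations, not the actual Atmos implementation:

```python
import hashlib
import time

def ingest(payload: bytes, user_meta: dict) -> dict:
    """Hash an object and build a metadata record for later lookup.

    A hypothetical illustration of the per-object, CPU-bound work
    (hashing, metadata generation) that MDS/MDLS-style nodes perform
    on ingest; not the actual Atmos code.
    """
    object_id = hashlib.sha256(payload).hexdigest()  # content hash as object ID
    record = {
        "object_id": object_id,
        "size": len(payload),
        "ingested_at": time.time(),
        "user_meta": user_meta,   # caller-supplied key/value metadata
    }
    return record

# Every PUT burns CPU on hashing before the object is retrievable.
print(ingest(b"some VOD segment", {"title": "episode-01"}))
```

Multiply that by thousands of objects per second and you can see why a decent amount of memory and processing power per node matters.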

Switches

Nothing too fancy here.  Bog-standard GigE or 10GbE switches handle the inter-node communication as well as infrastructure-facing connectivity.  Where bandwidth matters (large VOD files, perhaps?  Blu-ray-level content?), 10GbE is an obvious choice (as is a big pipe between locations, but I digress).  For lighter duties, GigE should suffice.
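
To put rough numbers on that choice, here’s a back-of-envelope transfer-time calculation.  The ~80% usable-throughput figure is my assumption to account for protocol overhead, not a spec:

```python
# Rough transfer-time math behind the GigE vs. 10GbE choice.
def transfer_seconds(size_gb: float, link_gbps: float, efficiency: float = 0.8) -> float:
    """Seconds to move size_gb gigabytes over a link of link_gbps gigabits/s."""
    return (size_gb * 8) / (link_gbps * efficiency)

for size in (25, 50):  # Blu-ray-class content sizes, in GB
    print(f"{size} GB over  1 GbE: {transfer_seconds(size, 1):7.1f} s")
    print(f"{size} GB over 10 GbE: {transfer_seconds(size, 10):7.1f} s")
```

At ~250 seconds per 25 GB file over GigE versus ~25 seconds over 10GbE, the “obvious choice” for heavy VOD-style workloads is exactly that.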

Storage

The storage component, while innately boring to most, is where the real excitement of the Atmos hardware platform lives.  You may be looking at me sideways by now, but seriously, seeing our Ultrapoint DAEs using SAS channels portends something… 🙂  In any case, the Atmos hardware uses dedicated x4 SAS channels for approximately 12 Gb/s of bandwidth per path.  If the LCCs in the DAEs support SAS expanders, there’s a good chance you’re seeing that full bandwidth from host to target and back.
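
For those wondering where that ~12 Gb/s figure comes from, the arithmetic is straightforward: SAS 1.0 signals at 3 Gb/s per lane, an x4 wide port aggregates four lanes, and 8b/10b encoding leaves roughly 300 MB/s of payload per lane:

```python
# Where the ~12 Gb/s per-path figure comes from.
lanes = 4
line_rate_gbps = 3                                  # SAS 1.0 per-lane signaling rate
payload_mb_s = line_rate_gbps * 1000 * 8 / 10 / 8   # 8b/10b encoding, bits -> bytes

print(f"aggregate line rate: {lanes * line_rate_gbps} Gb/s")             # 12 Gb/s
print(f"usable payload: ~{lanes * payload_mb_s:.0f} MB/s per x4 path")   # ~1200 MB/s
```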

Another notable is the presence of “stacked” DAEs (2 x 3U of space) that effectively double the storage density within a rack.  This design is purposefully implemented to ensure proper front-to-back airflow as well as thermal dissipation.  While perhaps not as elegant as a 48-disk 4U chassis, it makes servicing the underlying disks very easy.  Cabling (power and data) is integrated into the overall design to ensure snagless operation.
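
To put rough numbers on that tradeoff, here’s a quick disks-per-rack-unit comparison.  I’m assuming 15 disks per 3U DAE (typical of EMC DAEs of this era; treat it as my assumption, not an Atmos spec).  The 4U chassis wins on raw density, but the stacked DAEs win on airflow and serviceability:

```python
# Rough disks-per-rack-unit math behind the serviceability tradeoff.
def disks_per_ru(disks: int, rack_units: int) -> float:
    return disks / rack_units

stacked_pair = disks_per_ru(2 * 15, 6)   # 2 x 3U stacked DAEs, 15 disks each (assumed)
dense_4u     = disks_per_ru(48, 4)       # the 48-disk 4U chassis mentioned above

print(f"stacked DAE pair: {stacked_pair:.1f} disks/RU")  #  5.0
print(f"48-disk 4U:       {dense_4u:.1f} disks/RU")      # 12.0
```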

Closing thoughts

In my mind (and based on conversations with the EMC Atmos team), it’s very important to understand that Atmos, while able to exist in a single data center, really starts to show benefits across multiple data centers.  Being able to leverage the Atmos policy software, sitting above generic (as I’ve made abundantly clear) hardware, with your application sets creates a powerful data management system.  Inherent data replication facilities that can push/pull content, objects, and metadata based on your design really speak to the power Atmos can bring.  Hardware is just hardware, frankly, and Atmos is the brain behind this operation.
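
To make the policy idea concrete, here’s a purely hypothetical sketch of what a location-aware replication rule might express.  The names and structure are mine for illustration; Atmos’s actual policy language differs:

```python
# Hypothetical sketch of a metadata-driven replication policy;
# not Atmos's real policy syntax.
policy = {
    "name": "vod-gold",
    "applies_to": {"tier": "gold"},            # match on object metadata
    "replicas": [
        {"site": "dc-east", "type": "sync"},   # local synchronous copy
        {"site": "dc-west", "type": "async"},  # pushed to a second data center
    ],
}

def sites_for(object_meta: dict) -> list:
    """Return target sites when the object's metadata matches the policy."""
    match = all(object_meta.get(k) == v for k, v in policy["applies_to"].items())
    if match:
        return [r["site"] for r in policy["replicas"]]
    return ["dc-east"]  # default: single local copy

print(sites_for({"tier": "gold"}))   # ['dc-east', 'dc-west']
print(sites_for({"tier": "bronze"})) # ['dc-east']
```

The point is that placement decisions ride on object metadata rather than on any particular box, which is why the hardware underneath can stay generic.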

As I stated before, the hardware fundamentally isn’t too exciting, but certain things point to possible future innovations at EMC.  Stay tuned…the future is about to get more interesting.

  • Hey Dave,

    Thanks for your post. A quick clarification: it should be stressed that since the Atmos group was driven by customer requirements, any future server vendor decisions will be based on that. So we've not made any commitments beyond our current hardware configurations.

    Thanks again for a great post.

    Clive

  • dave_graham

    Clive,

    Completely understood. I think that historically, EMC has tried to follow customer requirements. I think one of the biggest steps for the cluster side of the hardware platform would be blade servers; that way you could increase cluster availability while decreasing (depending on the solution) the overall RU footprint. Does that make sense?

    Personally, (and you know this all too well) I'm excited to see what and where Atmos goes. Hardware is just hardware at the end of the day. 😉

    cheers,

    Dave Graham

