I’m typically in the position of designing storage solutions for external customers based upon established protocols and best practices here at EMC. We design based on our work with Microsoft, Oracle, SAP, and others, and on the solutions development that happens with these partners. It’s definitely an exciting challenge to take a customer’s needs and wrap them into a solution set that allows for performance, capacity, and future growth. What’s even more exciting, however, is getting the chance to design and implement a storage solution for an internal “customer.” I figured this would be a good chance to lay out the work that goes into designing a storage solution coupled with a VMware host solution as well.
I’ll break this journey into several parts and include the appropriate Visios (as permitted). So, without further ado, let’s get started!!!
Part 1: Understanding the Project and its Phases
This project was motivated by an aging storage environment with no disaster recovery procedures or processes, along with isolated hosts that suffered from poor resource utilization and limited DAS storage. The challenges in this initial environment can be summarized as follows:
a.) Bringing the environment up to current technology standards will require extensive research, architecture work, and involvement from the team as well as from allocated resources.
b.) Cost factors prevent a complete BC/DR overhaul. As such, a phased approach to technology enablement must be used.
These challenges highlight the obstacles that will need to be overcome. Designing a project plan and a timeline around resources, implementations, and so on will help keep the project focused and on schedule.
So, based on the project “problem statement,” I divided the solution set into three distinct phases: Remediation, Expansion, and Completion. We’ll be looking at Remediation today.
Phase 1: Remediation: When what we got ain’t working right…
Phase 1 is broken out as follows:
1.) Phase 1 will seek to examine and alleviate current infrastructure issues, including:
i.) Server platform usage
ii.) Operating System/Application usage
iii.) Data protection (both host-level & application-level)
iv.) Storage system updates (administration + performance)
v.) Replication for data protection (host level protection)
vi.) Rebuild of OS/Application layers + additional “spare” OS/Application images for load
I’ll touch on the first two points today and the others as I continue to roll out the project.
Server Platform Usage really points to what is being utilized from a hardware perspective. It does tie into the next point (Operating System/Application usage) but really attempts to answer the following questions:
- What hardware do I have in place? (processors, RAM, etc.)
- What connectivity do I have in place? (IP, Fibre Channel?)
Basic questions, to be sure, but absolutely essential when you look at moving from a bare metal host to a virtualized platform.
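Answering those two questions usually starts with a scripted inventory of each host. As a minimal sketch (not part of the project itself, just an illustration using Python's standard library), something like this captures the basics that matter for a physical-to-virtual move:

```python
import os
import platform
import socket

def host_inventory():
    """Collect a basic hardware/OS snapshot for virtualization planning."""
    return {
        "hostname": socket.gethostname(),
        "os": f"{platform.system()} {platform.release()}",
        # 32- vs 64-bit matters when sizing the virtualized platform
        "architecture": platform.machine(),
        "logical_cpus": os.cpu_count(),
    }

if __name__ == "__main__":
    for key, value in host_inventory().items():
        print(f"{key}: {value}")
```

Run across the server estate, a report like this gives the raw processor/architecture data; RAM totals and HBA/NIC connectivity would come from vendor or OS-specific tooling.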
Operating System/Application usage speaks to what is done on the hosts currently. In my case, the servers are running a combination of Windows 2003 Standard and Enterprise (32-bit) as well as SQL 2000 Enterprise (32-bit). There’s no standardization between the two systems, and the software loads differ between them as well. The standard questions I ask for this are (list is not exhaustive):
- What applications do you use?
- How are those applications used? Reporting? DSS-style workloads?
- What are some of the issues that you’re having with the current platform?
In addition to these questions, I’ll always request some level of Performance Monitor statistics (usually 24 hours or more, depending on the utilization cycle) and grab a host report (hardware and driver report; system state) to note any outstanding driver issues.
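Once the Performance Monitor data is exported to CSV, summarizing it is straightforward to script. A hedged sketch (assuming the standard PerfMon CSV layout of a timestamp column followed by one column per counter; the file path and counter names are illustrative):

```python
import csv
import statistics

def summarize_perfmon_csv(path):
    """Summarize a Performance Monitor CSV export as mean/max per counter.

    Assumes the usual PerfMon export layout: first column is the timestamp,
    remaining columns are counter paths (e.g. \\HOST\Processor(_Total)\...).
    """
    with open(path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        counters = header[1:]
        samples = {name: [] for name in counters}
        for row in reader:
            for name, value in zip(counters, row[1:]):
                try:
                    samples[name].append(float(value))
                except ValueError:
                    pass  # skip blank or non-numeric cells
    return {
        name: {"mean": statistics.mean(vals), "max": max(vals)}
        for name, vals in samples.items()
        if vals
    }
```

Averages tell you the steady-state load, but the max values over the full utilization cycle are what actually drive the sizing of the virtualized replacement.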
That’s the update for today. I’ll post some architectural documents in the next set of posts.