A Vision of Failure

by dave on March 10, 2009

I’m probably the last person in the world you’d expect to write a post titled “A Vision of Failure,” but perhaps that’s due more to my inability these days to provide anyone with a sense of hopefulness about where cloud computing and the like are headed.  Visions, much like prophecies, exist to denote the “what ifs,” not necessarily the “what will be” approaching on the horizon.  Much of it is conjecture, built on the gross emotions that dictate where we feel the market should be; truthfully, this is the platform from which analysts spend their days calling out meaningless drivel that hopes to represent the market at large.  At the end of this cattle call, however, lies the very real possibility that we got it all wrong.  I’m not claiming cataclysmic events will follow from this, but reality checks are needed.  So, what’s this Vision of Failure?

The “New Toy” Phenomenon

Everyone loves a new toy to play with.  The initial feelings of happiness, of pleasure, of meaning are quickly elevated when you get something “new,” whether virtual or physical.  The same emotions are tied to business: what you WANT to happen is a necessary predicate of what you BELIEVE will happen.  Thus, WANTING cloud computing to succeed is predicated on the BELIEF that it is the answer to the datacenter failures that have plagued us since the beginning of computing.  Perception never outstrips reality, though it can mask it for a day, a month, a year, a decade.  One need only look at the “evolution” of virtualization from the 1970s to the present to see how everything old becomes new again.  So, how does this lead to failure?

Early Adopters ‘R Us

Jokes are always made about early adopters, simply because their seemingly reckless abandon toward a new product or concept looks so narrow.  Those who are overly cautious (pessimistic, perhaps?) sit back and let others fall on the sword as a necessity.  When the dust of this process clears, what is left is usually a small shell of what existed before, albeit a hardened approach that carries significant weight as a “proven” design.  That design can then be incorporated as the framework or cornerstone of future development, with each significant addition receiving a fair amount of shakedown at the hands of, you guessed it, early adopters.  Along each step of the way, failure is created as a byproduct of necessity.  Startups, SMBs, appliances, connectors, middleware, PaaS, IaaS, SaaS, etc. all fail for lack of applicability and flexibility.  Where failure doesn’t arrive, acquisition becomes a very real possibility, with intrinsic absorption and a very real “failure to thrive” as part of the grafting process.  We see this most readily in SAP’s acquisition of Coghead.  When your business model depends on the kindness of strangers, you need to re-evaluate your entire reason for existence.

Risks and Rewards: Trivializing the Argument for/against

Make no mistake: the risk/reward ratio within the cloud computing space is very real.  Your model thrives when people want to be connected; your model fails when people realize there are better/faster/cheaper ways of doing things.  You’ve only to turn on the television and look at the failing business models highlighted on the evening news to understand this.  GM’s failure was doing too much of the same thing, starving the innovation model.  AIG’s failure was promiscuous financial behaviour, starving the sustainability model.  The “as a Service” (aaS) space is risky and its innovation slight, if only because the players in the middle are squirming under the thumb of capitalization and financing.  What I’m proposing here is a shift away from the failures-to-be among the “aaS” providers and their limited resources, toward something a bit more sedate and less spectacular: private clouds.

I know there are arguments for and against public and private clouds, with neither side being truly persuasive in its reasoning.  It stands to reason, then, that there has to be some compelling reason to shift one way or the other.  Amazon wouldn’t have succeeded with its EC2/CloudFront/S3 model had there been no underlying technology promoting the idea that it was “new, exciting, and useful.”  Under the covers, however, there is nothing new: commodity hardware driving a virtualized software/storage layer that contains the data and harnesses scalable resources.  This type of elastic functionality has been present in several forms for a while now.  What really drives this home, however, is how YOU can avoid having to pay others for infrastructure and functionality that you’ve already got in your DC…and this is no mean feat.
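The “pay others for what you already own” argument boils down to simple arithmetic. The sketch below compares rented public-cloud capacity against the amortized cost of hardware you run yourself; every rate, server count, and dollar figure is a hypothetical placeholder for illustration, not a quote from any provider.

```python
# Back-of-the-envelope: renting compute vs. running it in-house.
# All figures are hypothetical placeholders -- substitute your own rates.

def public_cloud_cost(instances, hourly_rate, hours_per_month, months):
    """Total rental cost for a fleet of always-on instances."""
    return instances * hourly_rate * hours_per_month * months

def private_dc_cost(servers, capex_per_server, monthly_opex_per_server, months):
    """Hardware capex plus ongoing power/cooling/admin opex."""
    return servers * (capex_per_server + monthly_opex_per_server * months)

months = 36  # a typical hardware depreciation window
cloud = public_cloud_cost(instances=20, hourly_rate=0.40,
                          hours_per_month=730, months=months)
owned = private_dc_cost(servers=20, capex_per_server=5000,
                        monthly_opex_per_server=150, months=months)

print(f"36-month public cloud spend: ${cloud:,.0f}")
print(f"36-month private DC spend:  ${owned:,.0f}")
```

With these made-up numbers the two come out roughly even, which is the point: for steady, predictable workloads you already own, the elasticity premium buys you little.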

The Full Circle

When looking at your resources within the datacenter, the question has to be asked: what am I under- or over-utilizing today that lends itself to risk?  If you consider that your file and application servers (unless significantly specialized) are nothing more than legacy x86 platforms attempting to do their jobs with fixed and inflexible resources, you should probably be asking yourself how to streamline your operational efficiency.  Step one has always been to consolidate servers and eliminate or minimize your footprint in power/cooling.  By reclaiming this space using virtualization technologies (which are inherently more useful than bare-metal OSes) that provide better utilization (and performance per kilowatt) than your legacy OS platforms, you’re looking at an optimized environment.  We can accomplish this in several ways in software (VMware VDC-OS (ESX 3.5/4.0), for example) and, as a further step, in hardware: Intel and AMD have provided optimizations in their processing logic to accelerate virtualization routines (AMD-V and RVI, for example) that allow for greater functionality and enhanced processing for your workloads.

From a storage perspective, using EMC NAS (CIFS/NFS emulation) allows you to decrease your physical server footprint for menial tasks like fileserving, freeing you from licensing and additional infrastructure costs.  Coupling the host side to the storage side, you again open up the idea that converging fabrics onto a single carrier will simplify and reduce infrastructure expenditures (a la the Cisco CEE/DCE vision).  Funny enough, you’re now running a private cloud and are not dependent on anything external to YOUR business.  The rise and fall of the aaS layer is inconsequential since you own it; you’re not beholden to anything external except what you choose to expose yourself to.
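To make the consolidation step concrete, here is a rough sketch of the arithmetic behind it: how many underutilized legacy boxes collapse onto well-utilized virtualized hosts, and what that does to the power draw. The server counts, utilization levels, and wattages are illustrative assumptions, not vendor benchmarks.

```python
import math

# Illustrative consolidation math -- all figures are assumptions.
legacy_servers = 100
avg_utilization = 0.10      # typical underutilized x86 file/app server
watts_per_legacy = 400

vm_host_capacity = 0.70     # target utilization per virtualized host
watts_per_host = 600        # beefier box, but far fewer of them

# Total useful work, in "fully utilized server" units.
useful_work = legacy_servers * avg_utilization
hosts_needed = math.ceil(useful_work / vm_host_capacity)

legacy_power_kw = legacy_servers * watts_per_legacy / 1000
virtual_power_kw = hosts_needed * watts_per_host / 1000

print(f"Hosts after consolidation: {hosts_needed}")
print(f"Power draw: {legacy_power_kw:.1f} kW -> {virtual_power_kw:.1f} kW")
```

Under these assumptions, 100 lightly loaded servers collapse to a handful of virtualized hosts, and the performance-per-kilowatt gain follows directly from running each host hot instead of idle.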

It’s a simplistic vision that, if implemented and understood correctly, promotes operational efficiency while using resources already at your fingertips.  Extending this model to an external site simply means understanding the glue that connects your infrastructures and acting accordingly.  There is no need to reshape, repurpose, or recreate the jobs and processes that have already been crafted by time and experience.  There are no third-party applications that need to “translate” your data for you; each site acts as a node of a complete solution.  Data replication is understood to be global and is baseline, not optional.  Each application added to a site is understood to have global applicability and global reach within your business (permissions understood as a process here); literally a “pluggable” application model.

Concluding Failure

I’ve outlined where I think failure is inevitable and where recovery is possible.  Am I right? Who’s to say? Am I wrong? Time will be my judge.  What I hope each of you can take away from this is the idea that you don’t have to tie yourself to what the cloud pretends to be.  Rather, you can create the cloud using what is already present and accounted for: it just requires YOU.
