Once More Into the Fray

by Administrator on September 21, 2011

Just back from Storage Decisions in NYC, where I got to deliver two talks to a big crowd — bigger than last year’s, possibly reflecting an increased need to find economical storage solutions.

I was disturbed to find this article in my inbox, from Arthur Cole at IT BusinessEdge. Seems like we are right back to the old confusion over what storage virtualization is, and, of course, the twisting of its meaning to fit one’s preexisting views.

The problem seemed to come down to a rumination over what a “hypervisor” is. DataCore and others have started calling their storage virtualization functionality “storage hypervisors,” and Cole takes exception to this. Server hypervisors, he argues, are a different animal in that they allow folks to use resources previously unused and possibly not even known to exist. Storage hypervisors, he spends a lot of ink arguing, do not free up unused resources.

I had dinner on Monday night with a few storage virtualization users who might disagree with Mr. Cole.

FACT: A lot of storage is being wasted: allocated as capacity, but never used or even prepared for use (by overlaying it with a file system, for example) by the app administrator. This has been called “dark storage” — shown as allocated by storage admin tools, but forgotten by the application admin altogether. That’s unused/unexposed resource in my book. And a storage hypervisor can reclaim it.

FACT: Vendors sell gear with installed spare drives or use certain RAID schemes that purport to safeguard our most expensive spinning rust with more of our most expensive spinning rust. That is unused/unexposed resource in my book. A storage hypervisor can reclaim it.

FACT: Some vendors hold back as much as 30% of their formatted capacity (the EMC T-bits versus B-bits thing, but others do it too). This is space that consumers often know nothing about, since tools are not provided to enable them to see it. The vendor argues that this is covered by an airy codicil in a warranty and maintenance agreement giving the vendor rights to withhold a portion of capacity for software they have sold the customer on the rig, or that they hope to sell in the future. One of my clients discovered that a full third of his capacity was being withheld. Again, unused/unexposed resource. A storage hypervisor can reclaim it.
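For readers who like to see the mechanics, here is a minimal Python sketch of the reconciliation behind the first FACT above: compare what the array’s admin tools say is allocated against what the hosts’ file systems actually consume. Every LUN name and number in it is made up for illustration; real figures would come from your own array and host reporting tools.

    # Back-of-the-envelope "dark storage" reconciliation.
    # Allocation figures (GB) as reported by the array's admin tools,
    # versus capacity actually consumed by host file systems.
    # All names and numbers below are hypothetical.

    allocated_by_array = {       # LUN -> GB the array shows as allocated
        "lun_erp_01": 2048,
        "lun_mail_02": 1024,
        "lun_scratch_03": 512,
    }

    used_by_hosts = {            # LUN -> GB the application hosts report in use
        "lun_erp_01": 1400,
        "lun_mail_02": 300,
        "lun_scratch_03": 0,     # allocated, never formatted: classic dark storage
    }

    dark_gb = 0
    for lun, allocated in allocated_by_array.items():
        used = used_by_hosts.get(lun, 0)
        idle = allocated - used
        dark_gb += idle
        print(f"{lun}: {allocated} GB allocated, {used} GB used, {idle} GB idle")

    total_gb = sum(allocated_by_array.values())
    print(f"Dark/idle capacity: {dark_gb} GB of {total_gb} GB ({dark_gb / total_gb:.0%})")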

If I put a storage hypervisor over these rigs, and perhaps turn off some of the value-add BS that the vendors have layered on, I can reclaim about 18 percent of the capacity of every spindle on average (based on my study of the storage infrastructures of over 3,000 companies, large and small). That is unused resource restored.
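The reclaim math itself is back-of-the-envelope stuff. Here it is as a tiny sketch, with purely hypothetical capacity and pricing figures; the 18 percent is the averaged finding cited above, not something the snippet derives.

    # Illustrative reclaim arithmetic (hypothetical numbers, not the study data).
    raw_tb = 500                  # total spindle capacity behind the hypervisor
    reclaim_fraction = 0.18       # ~18% average reclaim cited in the post

    reclaimed_tb = raw_tb * reclaim_fraction
    print(f"{reclaimed_tb:.0f} TB recovered from {raw_tb} TB of spindles")

    # Put differently: disk you do not have to buy this budget cycle.
    assumed_cost_per_tb = 400     # assumed street price in USD/TB; adjust to taste
    print(f"Deferred purchase: roughly ${reclaimed_tb * assumed_cost_per_tb:,.0f}")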

What enables me to turn off certain value-add functionality is that it is smarter and more efficient to perform these functions at the storage hypervisor layer, where services can be deployed and made available to all disk, not just to one stand bearing a vendor’s three-letter acronym on its bezel. Doesn’t that make sense?

As I will discuss in the next Storage Virtualization for Rock Stars webcast tomorrow, big cost savings in TCO can be realized from virtualizing your storage infrastructure — together with greater efficiencies.  Hope you will register and tune in.

Here is the full invite that went out in email:

Going Triple Platinum: Using Virtualization to Deliver the Full Business Value of Storage Assets

The Storage Virtualization for Rock Stars Webcast Series Continues This Week with Part 5 in the Series

The webcast series that offers a “curriculum” covering storage virtualization continues as we turn to the business value and return on investment (ROI) that storage virtualization delivers. DataCore Software continues its Storage Virtualization for Rock Stars Webcast Series with Part 5: “Going Triple Platinum: Using Virtualization to Deliver the Full Business Value of Storage Assets.”

This is a FREE EVENT in which we will explore the business value case for storage virtualization and DataCore Software will explain how a storage hypervisor delivers real business value. I will serve as host and moderator of this LIVE webcast, which runs from 11:30 AM – 12:30 PM EDT on Thursday, September 22.

In this episode, you’ll learn how storage virtualization software reduces capital (CAPEX) and operating (OPEX) expenses through:

• Cost containment
• Investment protection
• Improved uptime

You’ll also get tips for negotiating better deals on hardware expansion and upgrades using DataCore SANsymphony-V Storage Hypervisor to make storage devices largely interchangeable.

Virtualizing storage is a key driver of storage efficiency. Cost containment value abounds, including:

• Enabling the purchase of less expensive arrays.
• Increasing the options available for hardware sourcing, breaking proprietary vendor lock-ins, and facilitating best-of-breed, purpose-built acquisition models.
• Providing tools for provisioning and re-provisioning resources.
• Providing a platform for deploying new storage services.
• And much, much more…

Bottom-line: Software-based storage virtualization solutions can lower total cost of storage and empower the full business value of virtualization.

Register for Part 5 of the Storage Virtualization for Rock Stars Series today! Go to Registration Page.

Back to the blog.

To make the case, Cole cites a guy named Dan Kusnetzky.  I Googled Dan, who says that DataCore Software’s SANsymphony-V ain’t a “hypervisor” for storage or anything else because “[it does] not provide for a fully independent operating, or in this case storage, environment the way a bare-metal or OS-based hypervisor does.” 

Mr. K has lots of references, sometimes for his own analysis firm, other times for the 451 Group, etc.  He used to work for IDC, and before that — a lot of years ago — at Digital Equipment Corp.   Always in software and always in marketing, it seems.  Never on hardware itself. 

Not sure who he works for these days, but Mr. K seems to have a narrow view of what a hypervisor is. By his definition, neither Citrix Xen nor Microsoft Hyper-V is a hypervisor: the one true god of hypervisordom is that clusterf**k of microkernels favored by the V-Party. Their stated direction, per my previous blog, is to add more microkernels, which they call a “storage hypervisor,” that effectively gut everything from bare-metal storage (RAID, LUNs, services) and move it into that Jenga!-fied stack that VMware calls a hypervisor.

Pretty narrow definition from a pretty narrow source.

To my way of thinking, “hypervisor” is a marketing term with no technical meaning whatsoever. Virtualization is getting to the same lowly status: it means almost whatever a vendor wants it to mean. I think of it as an abstraction layer. We abstract software functionality away from commodity hardware components so that we can be more flexible in delivering services in software, rather than isolating that functionality on specific hardware boxes. The latter creates islands of functionality, increasing the number of widgets that must be managed and requiring constant growth in the labor force needed to manage an ever-expanding kit. This is true for servers, for networks and for storage.
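If it helps to see that abstraction layer as code rather than as a marketing term, here is a toy Python sketch: one provisioning interface presented over dissimilar back ends, with the placement decision living in the layer rather than in any one box. The class names, vendors and capacities are invented for illustration.

    from abc import ABC, abstractmethod

    class Backend(ABC):
        """Any box that can hold blocks, regardless of the logo on the bezel."""
        @abstractmethod
        def free_gb(self) -> int: ...
        @abstractmethod
        def carve(self, gb: int) -> str: ...

    class VendorArray(Backend):
        def __init__(self, name: str, free: int):
            self.name, self._free = name, free
        def free_gb(self) -> int:
            return self._free
        def carve(self, gb: int) -> str:
            self._free -= gb
            return f"{self.name}:vol-{gb}g"

    class StoragePool:
        """The abstraction layer: one interface over commodity and legacy gear alike."""
        def __init__(self, backends):
            self.backends = backends
        def provision(self, gb: int) -> str:
            # Place the volume on whichever back end has the most free space.
            target = max(self.backends, key=lambda b: b.free_gb())
            return target.carve(gb)

    pool = StoragePool([VendorArray("legacy_rig", 900), VendorArray("white_box", 1500)])
    print(pool.provision(200))   # lands on the white box; the caller neither knows nor cares

Swap in a different placement policy or a different back end and the caller never notices; that is the whole point of the layer.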

If you want to call that abstraction layer a hypervisor, well, just do it.  Call it a cupcake.  Only, VMware would be hard pressed to sell server cupcakes, I suppose.

Can we please get past the BS discussion of what qualifies as a hypervisor in some guy’s opinion and instead focus on how we are going to deal with the reality of cutting budgets by 20% while increasing service levels by 10%? That, my friends, is the real challenge of our times.

End of Line.  See you at the Webcast.

 

Comments

Peter Martin March 19, 2012 at 12:37 pm

OK, so I’m six months late replying. Sorry. But I share your point of view:

1. There were only two “Virtual” things in the 70s – virtual memory (using disk to augment chip memory – making two different types of memory look like one bigger type of memory) and virtual machines (making one big CPU look like lots of smaller ones). Oh. They mean the opposite of each other. Not a great start.

Now we have hundreds of virtual whatevers, and the only common element of meaning I can discern in “Virtual” is “not real”. Doesn’t convey much about the benefits or anything else. And as everything we deal with is virtual, starting from the model of the world we construct in our heads and then reconcile with our senses and working upwards, that doesn’t get us very far ;-)

Virtualisation is, as you say, usually an abstraction layer. And usually an extra layer (if it were part of the OS or the storage device you probably wouldn’t call it anything). Alarm bells! Extra layers after the fact are liable to be kluges, and there will be a trade-off.

We tolerate it because we have a problem to solve, usually because one or more of the layers of stuff we already have isn’t doing its job properly. For example, in the case of servers, because the core OS is surprisingly still designed mainly for running laptops used by one person. In the case of storage, actually… for the same reason. Lots of OSs running independently on lots of machines can’t manage storage efficiently between them. OK, not quite that simple, but….

Who cares? We do seem to be obsessed with putting things in categories. See Dawkins, “The Tyranny of the Discontinuous Mind”, somewhere or other. Funnily enough, it is probably a kluge in our own software!

Of course, at some point, when no-one can make any money out of hardware any more, part of virtualisation will disappear into that layer, as it has already done with grown-up iron. And the rest will disappear into the OS, which will run the whole data centre. I can dream.

That’s virtualisation done. Let’s move on to “cloud” (shared services, anyone?) and put that term into the same bin.

Must have got out of the wrong side of the bed this morning.

Peter Martin March 19, 2012 at 12:49 pm

Just had a happy thought. Discussing what virtualisation means is the IT equivalent of… theology.
