
Let’s Talk Storage Economics

by Administrator on August 6, 2015

Do you ever get the feeling — maybe as you sit down to work on your quarterly, semi-annual or yearly budget — that you are flushing your storage budget down the proverbial loo?

The truth is that disk drives are not doing the same price-drop-by-50%-per-GB-every-12-months thing that they have done since the mid-1980s.  The flash guys are pulling our legs by using dedupe ratios to deflate the cost per GB of their rigs, which is still pretty fracking high.  And hardly anyone wants to talk about tape technology, despite its vastly superior cost metrics compared with other media.  Too old school.

So, a lot of vendors have taken the discussion of storage economics up a notch, to look at overall infrastructure cost of ownership rather than component pricing — hoping, I suppose, that the new crowd of IT practitioners are a bit more business savvy than their predecessors.  Still, there is a lot of prevarication and BS coming out of too many vendors.  Software-defined storage may indeed reduce CAPEX, but it only reduces OPEX (the cost to administer storage) if we have excellent management and a capability to place data on the infrastructure where it is best hosted, given rates of re-reference and modification.  All of those advocates of flat, mindless, multi-node storage, directly attached to each server node in a virtualized HA cluster, aren’t really selling better OPEX or CAPEX, just a different way of shuffling the same old deck of cards.

How do you do more than rearrange the deck chairs on the Titanic?  Simple.  Virtualize your storage infrastructure.  Use what you have, plus whatever you plan to buy, to its fullest.  Centralize the management of capacity and hardware resources as well as the value-add services so you can allocate and de-allocate both — quickly and efficiently — in response to changing requirements.  I had a great chat with George Teixeira, CEO at DataCore Software, about storage economics.  When I finished up with edits and sent them for review and approval, he went ahead and started posting them on HIS blog.

Here is the whole set for anyone who wants to see them here (I know, DataCore’s website is a lot prettier)… We begin with George telling me about the history of DataCore’s storage software and how it presaged the appearance of what is now called software-defined storage…

Here is part 2 of the interview…  George talks about the various ways that storage virtualization functionality was pitched to customers over the past 17 years, drawing from other popular memes to help prospects understand the technology without requiring them to become storage engineers in the process.

And part 3…  George explains the differences between software-defined storage the way that hypervisor vendors define it and the capabilities of a robust SDS stack that includes capacity virtualization…


In part 4, George breaks down the real economic value that should accrue to software-defined storage, but doesn’t in too many cases.  He begins to outline what savvy storage planners should consider to help bend the storage cost curve…


In part 5, George talks about the rise of hyperconverged storage appliances.  Will they determine which vendor wins the software-defined storage wars?


Concluding the presentation, George talks about storage OPEX costs and what really needs to be done to contain them.  OPEX costs are hardly ever as well defined as CAPEX spend, but customers, according to DataCore surveys, are getting a lot more savvy.  Good observations from a guy who has been in the business long enough to have seen it all…


Thanks to George Teixeira and to DataCore for allowing me the time to conduct this interview.  I also want to invite folks to listen and watch a webinar replay that I did with DataCore recently around the topic of storage economics.  It is available on demand HERE.  It was great fun doing this one with the DataCore folks.



