Posts Tagged ‘EMC’

Is DeBeers the Next EMC?

Thursday, October 27th, 2016

I received a call a couple of days ago from a New York Times writer on deadline asking what I thought about the breakthrough in storage being hailed by some researchers at the City University of New York.  They were talking about storing data in diamonds.  Crappy diamonds with a lot of imperfections.  But diamonds still.

HERE is the NYT article.  I suppose I was a bit more curmudgeonly than usual.  After all, I am working with clients who are desperately trying to find practical solutions for coping with the data deluge…and the data apocalypse that analysts place just down the road in 2020.  It would be great if diamond storage, or DNA storage, or carbon nanotubes, or that tech we have been awaiting since the Kennedy administration, holographic storage, were commercially viable in time to handle the 60 zettabytes of data we will need to find a way to store in just a couple of years.  Sadly, I think all of these technologies will come too late to store all the bits and pixels.

Right now, two key ingredients for weathering the zettabyte apocalypse are tape and cognitive data management.  On the tape front, IBM and Fujifilm have already demonstrated that LTO media with Barium Ferrite coatings can hold 220TB per cartridge uncompressed with current technology.  Cognitive data management, for its part, involves automating things like intelligent tiering and data handling keyed to data value and compliance requirements.
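To put the scale in perspective, here is a quick back-of-the-envelope calculation.  The 60 zettabyte and 220TB figures come from the paragraphs above; the arithmetic is mine and purely illustrative, not a sizing recommendation.

```python
# Back-of-the-envelope: how many 220TB LTO cartridges would 60 zettabytes need?
ZB = 10**21          # bytes in a (decimal) zettabyte
TB = 10**12          # bytes in a (decimal) terabyte

data_to_store = 60 * ZB            # projected data volume from the analysts
cartridge_capacity = 220 * TB      # demonstrated uncompressed LTO capacity

cartridges_needed = data_to_store / cartridge_capacity
print(f"{cartridges_needed:,.0f} cartridges")   # roughly 272,727,273
```

Call it roughly 273 million cartridges before any compression or data reduction, which is why the management half of the equation matters at least as much as the media.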

I have a workshop scheduled for January with Virtualization Review on the latter topic and I will be talking tape (again) at CA World in a couple of weeks.  Hope to see some of you there.

Money Talks. BS Walks.

Monday, July 20th, 2015

That is a pretty straightforward summary of the webinar I am involved in this week with DataCore Software.  Fighting the Hidden Costs of Data Storage is the official title (though my original suggestions were a bit more flamboyant, shall we say) and our goal is to help folks to understand (1) why storage costs so dang much, (2) why there is nothing about software-defined or hyperconverged storage that is automatically going to bend the cost curve, and (3) what you can do about cost containment with the grey matter between your ears and some software goodness from an independent software developer, DataCore Software.

As you can see at left, I dreamed up some creative for the event, including a mock fight announcement poster and a deck theme based on boxing.  But, I have been told not to adopt too pugilistic a tone — we are all so PC now.  So, don’t expect me to say something like, “The only industry with the same product cost dynamic as storage is cocaine:  the more the vendor ‘cuts’ the product the more money he can make.”  And don’t expect me to diss Evil Machine Corporation or its peers:  let’s face it, VMware is engaged in a much grander scheme of consumer lock-in than EMC ever conceived.

But I won’t say any of those things.  We will look at the elements that contribute cost to storage, which today accounts for between $.33 and $.70 of every dollar spent on IT hardware, depending on the analyst you consult.  Here is my simple calculus for storage TCO (I wrote a paper about it that I think everyone who attends the DataCore event can download at the show)…

[Figure: storage TCO calculation]
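Since the graphic doesn't reproduce here, the sketch below gives one generic way to decompose storage TCO along the lines we discuss in the webinar.  It is my illustrative placeholder: the function, the cost categories and every number in the example are assumptions, not necessarily the exact calculus in the figure.

```python
# A generic storage TCO sketch (illustrative only; not the exact formula in the figure).
def storage_tco(hw_capex, sw_capex, annual_admin, annual_power_cooling,
                annual_maintenance, annual_downtime_cost, migration_cost,
                service_life_years=5):
    """Total cost of owning a storage asset over its useful service life."""
    annual_opex = (annual_admin + annual_power_cooling +
                   annual_maintenance + annual_downtime_cost)
    return hw_capex + sw_capex + annual_opex * service_life_years + migration_cost

# Placeholder numbers: a $250K array can easily cost several times its
# acquisition price to own once labor, power, downtime and migration are counted.
print(storage_tco(hw_capex=250_000, sw_capex=50_000, annual_admin=120_000,
                  annual_power_cooling=15_000, annual_maintenance=40_000,
                  annual_downtime_cost=25_000, migration_cost=60_000))
# -> 1360000 over a five-year service life
```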

The good news is that, whether I get to rant or not, DataCore promises to bring some real cost-containment value to storage infrastructure, as it has demonstrated repeatedly in every one of the tens of thousands of companies where it is used (including mine).  It is a pleasure to do this presentation with them and I hope some of you will register to attend.  Who knows, you might even get a few ideas for containing your storage costs.  The show starts on Tuesday (tomorrow) 21 July at 10AM ET.  Be there!


Live. From The Jazz at Lincoln Center in New York. It’s the z13 Mainframe!

Saturday, January 17th, 2015

I shot a bit of video of the internals of the z13 Mainframe when I attended the launch party that IBM convened at Jazz at Lincoln Center in New York City.  Here is what can best be described as a few beauty passes for all the gearheads out there.

Note:  I couldn’t get anyone from IBM to tell me whether the kit is tricked out with blue neon lights like EMC storage gear.  If so, it will make a very attractive display at corporate cocktail parties.

Hope you enjoy it.  More information to come.

VSAN in a Nutshell

Friday, June 13th, 2014

If you agree that shared storage is better than isolated islands of storage, that some sort of virtualized SAN beats the socks off of server-side DAS, that hypervisor and storage hardware agnosticism beats hypervisor or hardware lock-ins, that aggregated storage capacity AND aggregated storage services make for better resource allocation than service aggregation alone, and that a pure software company in Ft. Lauderdale, FL is less predatory and more inclusive than a hypervisor vendor owned 87% by a storage hardware peddler, then VSAN is a de-evolution of storage and not an advance.

There.  I said it.

Iz a Puzzlement…

Friday, April 25th, 2014

I always loved the musical The King and I, the one with Yul Brynner as the King of Siam.  I liked the scenes when he was cogitating on some imponderable, like why President Lincoln had no elephants in his army.

"Iz a puzzlement."

I am feeling the same way as vendor after vendor reports flat-line growth in their sales of external storage products (arrays, that is).  On 4/24, Storage Newsletter reported that EMC storage revenues were down 22% Q/Q, 3% Y/Y.  This followed a 4/17 report from the same source of a "catastrophic" drop in IBM storage hardware revenues, off 23% Q/Q.

Why EMC's numbers are not catastrophic at a 22% decline, while IBM's are considered such (at 23%), is a point of some puzzlement for me.  IBM is a much more diversified company than EMC, so I would think that a drop in storage sales there is likely made up for by a hike in Big Data analytics software or mainframes or whatever…  But Mr. Maleval calls them like he sees them.

Anyway, when I match up these reports with others, like El Reg's reportage on sagging networked storage sales and the story about Wikibon (carried in April by the Register) pronouncing the death of enterprise storage in favor of little server-side cobbles like VSAN, I am left with a big question.

What are companies doing about the explosive data growth that IDC and Gartner were pegging just last year at 300% to 650% per year in highly server-virtualized environments?  How and where are we storing all of those bits?

Iz a puzzlement.

Data doesn't store itself.  We don't manage either data or storage very well, so we definitely aren't making the best use we can of the assets we have.  Last year, IBM's storage czar said on stage that everyone deploys too much Tier One storage and that this had to change before it bankrupted companies' IT budgets.  Last year, storage clouds claimed a huge uptake in storage capacity, but closer inspection revealed that this capacity was mostly being consumed by compute cloud operators, not by "users."

If I read the back and forth between Chuck Hollis and George Crump correctly (see previous post), the VSAN thing is representative of the problem created by VMware in terms of capacity demand growth.  While VSAN can be deployed as a mirrored pair of storage servers, Howard Marks and others correctly point out that an HA configuration really needs three VSAN toys.  So, that accounts for IDC's 300% capacity growth curve from last summer.  Gartner said server virtualization was jacking up capacity demand by 650%, presumably calculating in not only replicated data but also the multiple backup copies and the additional replication that is made behind any server that might possibly be tasked to host a given workload.
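For what it is worth, here is rough arithmetic showing how multipliers like those could arise.  The copy counts are my illustrative assumptions, not figures published by IDC, Gartner or any vendor.

```python
# Rough sketch of how server-side replication inflates raw capacity demand.
primary_tb = 10                      # logical data a workload actually needs

ha_replicas = 3                      # the three-node layout cited for real HA
raw_for_ha = primary_tb * ha_replicas
print(raw_for_ha / primary_tb)       # 3.0 -> roughly the 300% figure

backup_copies = 2                    # e.g., a local backup plus an off-site copy
standby_replicas = 1.5               # extra copies staged behind candidate host servers
raw_with_protection = primary_tb * (ha_replicas + backup_copies + standby_replicas)
print(raw_with_protection / primary_tb)   # 6.5 -> in the neighborhood of 650%
```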

Why is VSAN necessary — other than to solve a stupid VMware-caused problem, that is?  Iz a puzzlement.

Why not just virtualize your existing storage (no, not "software-define" it, which some purists insist does not include the aggregation of capacity or its presentation as virtual volumes, but actually virtualize it) and associate virtual volumes with your VMs?  That way, when the VM moves from server A to server B, you don't need to fix the application's back-end connections to its storage from each new perch.  The virtualized storage provider will do that for you.  That's how I do it with DataCore Software and it is presumably how I would do it with an IBM SVC.
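As a purely hypothetical sketch (invented class names, not DataCore's or IBM's actual APIs), the core idea looks like this: the VM addresses a stable virtual volume, and both VM moves and physical data moves happen underneath it without any re-plumbing.

```python
# Hypothetical sketch of storage virtualization keeping the VM-to-volume mapping stable.
class VirtualVolume:
    def __init__(self, name, backing_devices):
        self.name = name
        self.backing_devices = backing_devices    # pooled physical LUNs/disks

    def migrate_backing(self, new_devices):
        """Move data to different physical devices; the volume identity never changes."""
        self.backing_devices = new_devices


class VM:
    def __init__(self, name, volume):
        self.name = name
        self.volume = volume      # the VM only ever addresses the virtual volume

    def move_to_host(self, host):
        # No storage re-plumbing needed: the logical connection travels with the VM.
        print(f"{self.name} now runs on {host}, still using volume {self.volume.name}")


vol = VirtualVolume("crm-data", backing_devices=["array-A/lun3", "array-B/lun7"])
vm = VM("crm-app", vol)
vm.move_to_host("server-B")              # storage connection unchanged
vol.migrate_backing(["array-C/lun1"])    # physical data can move without touching the VM
```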

Iz a puzzlement.

I would love for real-world operators to tell me how they are provisioning for the increasing storage burden if they aren't buying new capacity.  Are we throwing our mission critical data onto big SATA hard disks inside our servers or in DAS chassis?  How do you handle the vendor lock-ins that constrain your ability to replicate data from one brand-name box to another?  What kind of bandwidth consumption are you seeing in lots of DAS-to-DAS replication processes?  How do you test intelligently whether you have all of the data you need to fail over or to host a VM on a specific physical server?

See how the questions build up when you begin with the premise that data is growing exponentially, but storage sales are flat?

Iz a puzzlement.

I'd really like to get IBM's take on this when I am at IBM Edge 2014 from May 19 through May 23.  Hope to see some of you there.

Edge 2014

 

Note:  The questions above are in no way vetted, approved or otherwise monitored by Big Blue.  However, I will be supporting their social media efforts and delivering five training sessions at Edge, activities for which I am being compensated.  This disclaimer is required by the FTC.

 

Stirring the…er…Stuff

Thursday, April 24th, 2014

I can’t get an exchange out of my head that occurred, like, a month ago while I was traveling abroad.  Of course, I keep Twitter running on my mobile phone so, you know, I can get sticker shock for data services used while traveling in other countries.  That bill came today.  OUCH!

Anyway, while commuting, I saw a tweeted-out post by Storage Switzerland that criticized VMware/EMC VSAN in some nuanced ways (he is Swiss after all, so he never has any show-stopping gripe with any vendor who might hire his pen in the future).  However, even the slightest prick is enough to awaken the MIGHTY EMC BLOGGER Chuck Hollis, who took umbrage at the Swiss one.  What caught me about the exchange was not the substance (of which there was very little), but rather Mr. Hollis' strategy for responding to the "insult."  Wrote Hollis:

One aspect of our industry that I find especially annoying is the “pay-to-say” analyst model.  The usual scenario is that one vendor wants to discredit one or more other vendors to make themselves look better.  They contract with a freelance analyst, who hopefully brings more expertise and the appearance of independence to the table.

The few analysts who use this model fiercely brand themselves “independent”, perhaps in the sense that they are not affiliated with one of the big name industry analyst firms.

What Mr. Hollis seems to be doing in response to a critique of his preferred technology, in error or not, is attacking the sincerity and independence of the spokesperson — an ad hominem attack, as it were.  The implication: GridStore must have paid for the critical statements made by the Swiss.

Truth is, it doesn't matter who paid whom.  If George and his group were in it just for the money, they probably would have sold their pen to EMC, the firm with the deep pockets at the moment.  Lord knows EMC has bought and paid for so much pay-to-say from analysts (IDC in particular:  the Exploding Digital Universe papers, for example) that many vendors felt locked out of the whole game.

Swiss-guy invited a professional debate over the facts, which Hollis, after a fashion, did provide, but not without significant encouragement in his comments section from both Howard DeepStorageNet and Kelly from GridStore.

Having occasionally incurred the wrath of EMC for my commentary, I am sensitive to these personal attack methods.  IMHO, you don't win the hearts and minds of the blogosphere or the IT consumer simply by bullying anyone who naysays your products.  Certainly you do not attack pay-to-say when you buy more of it than anyone else.  And when it comes to prevarication, hype and bullshit in technology product discourse, you probably shouldn't point out the speck in the eye of your competitor while ignoring the log in your own.

I could list a bunch of BS products and strategies that EMC has championed over the years, but I’d rather not.  That would be like an ad hominem attack.

Go ahead, attack me now.  It’s okay.  I have a lot of hair to pull out, unlike the Swiss guy.

IBM Edge 2014 On The Radar

Tuesday, April 22nd, 2014

I am busily preparing for my week-long adventure at IBM Edge 2014 in Las Vegas.  I will be there from May 19 through May 23 mostly attending the show and hanging out at the Social Media Lounge, but also teaching a few sessions at Tech Edge.

One thing I am keen to pursue is Big Blue’s intentions and strategies with respect to the so-called “software-defined data center.”  I have been writing and speaking on this subject in several venues recently and just delivered a piece on the subject to Enterprise Systems Journal.  An extract…

[S]oftware-defined data centers are supposed to be something new and, more importantly, a sea change from traditional data centers.  I have been researching this supposed change for a few months and, slow learner that I am, I don’t see it.

Software-defined servers, essentially server kit running a hypervisor to abstract workloads so they can move from box to box as needed for resource optimization, sort of make sense.  However, this virtualization/abstraction is nowhere near as resilient as mainframe LPAR-based virtualization and multi-tenancy – Big Iron, after all, has had 30 years to work out the bugs in such strategies.  Yet, to hear the SDDC advocates talk, Big Iron is old school and not fresh enough to fit with contemporary concepts of server abstraction.

Then there is the software-defined network.  This notion of separating the control plane from the data plane in network hardware to create a unified controller that simply programs generic networking boxes to route packets wherever they need to go looks interesting on paper.  However, Cisco Systems just delivered an overdue spoiler by announcing that it wasn't about to participate in an "open source race to the bottom" as represented by the OpenFlow effort:  network devices should have value-add features on the device itself, not a generic set of services in an open source controller node, according to the San Jose networking company.  The alternative proposed by Cisco is already being submitted to the Internet Engineering Task Force for adoption as an anti-SDN standard.

Finally, there is software-defined storage.  EMC is claiming this notion as its own thing, even though storage virtualization has been available for over a decade and a half from companies ranging from DataCore Software, with its hardware- and hypervisor-agnostic SANsymphony-V, to IBM, with its hardware-centric SAN Volume Controller.  In its reinvention of the idea to create "software-defined storage," we are told that those other guys are doing it wrong.  SDS is about centralizing storage services, not aggregating storage capacity so it can be parsed out as virtual volumes the way that DataCore and IBM and a few others do it today.  There is no real explanation offered by EMC for why storage virtualization doesn't qualify as software-defined storage, but clearly the idea doesn't fit EMC/VMware's strategy of breaking up SANs in favor of direct-attached storage, or the still-evolving VSAN shared direct-attached architecture.  With all of the proprietary replication that will be required in a VSAN environment to facilitate vMotion and HA failover, it would appear that the strategy should sell a lot of hardware.

Bottom line:  the whole software-defined data center thing looks like a house of cards that wouldn’t be able to withstand even the slightest breeze.  So, the idiots who claim that the architecture is highly available and thus obviates the need for continuity planning are pulling our collective leg.

 I will be doing a session at Edge on just this point:  the data protection requirements for burgeoning SDDCs.  And while I am at Edge, I really want to get a clear understanding of what IBM is up to in this space.

On the one hand, IBM seems keen to hold SDDC at arm's length.  In January, eWeek reported that the company was trying to sell off its "software-defined network" business unit, and it remained mum when Cisco broke ranks with OpenFlow a couple of weeks back.

For the record, I kind of agree with Cisco Systems (despite their patently self-serving play with OpFlex and their whole Application Centric Infrastructure alternative to OpenFlow) to the extent that OpenFlow SDN may well be “a race to the bottom.”  It seems to me that generic network functionality could be segregated from commodity networking devices and placed into a common service controller, assuming that the controller itself is scalable.  That said, Cisco’s point is also well taken:  they do “add value” to commodity functionality and charging a premium for that value add is the foundation for their revenues.  Without the incentive to develop service-quality-improving technology that can be blended with commodity hardware and sold at a substantial profit, Cisco will simply cease to exist (as will Juniper and others) and we will all be doomed to living within currently-defined-and-commoditized concepts of networking.  I don’t defend price gouging, of course, but I have yet to see a company create new technology without some expectation of profiting from their efforts on the back end.
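To make the control plane/data plane idea concrete, here is a toy sketch.  The classes are hypothetical stand-ins of my own, not the actual OpenFlow protocol or any vendor's controller API.

```python
# Toy sketch of control/data plane separation (hypothetical, not the OpenFlow API).
class FlowRule:
    def __init__(self, match, action):
        self.match = match        # e.g., {"dst_ip": "10.0.0.5"}
        self.action = action      # e.g., "forward:port2"


class Switch:
    """Data plane: applies whatever rules the controller pushed down."""
    def __init__(self, name):
        self.name = name
        self.flow_table = []

    def install(self, rule):
        self.flow_table.append(rule)

    def handle(self, packet):
        for rule in self.flow_table:
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule.action
        return "send_to_controller"   # unknown traffic gets punted upstream


class Controller:
    """Control plane: holds network-wide policy and programs the switches."""
    def __init__(self, switches):
        self.switches = switches

    def push_policy(self, match, action):
        for sw in self.switches:
            sw.install(FlowRule(match, action))


sw1, sw2 = Switch("edge-1"), Switch("edge-2")
ctrl = Controller([sw1, sw2])
ctrl.push_policy({"dst_ip": "10.0.0.5"}, "forward:port2")
print(sw1.handle({"dst_ip": "10.0.0.5"}))   # forward:port2
print(sw2.handle({"dst_ip": "10.9.9.9"}))   # send_to_controller
```

The switches stay dumb and interchangeable while the policy lives in one place, which is exactly the commoditization Cisco is resisting.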

Anyway, with all of the hemming and hawing over northbound and southbound APIs, and with all of the difficulties even defining what the terms mean, I see the entire SDDC thing as an accident waiting to happen.  In the storage realm, idiots are fighting over whether software-defined storage includes storage virtualization or not.  Says one writer/pundit/analyst/blogger who shall not be named, "Storage virtualization products like IBM SVC and DataCore Software's SS-V, aggregate capacity and serve it up as virtual volumes.  SDS doesn't aggregate capacity, it only aggregates functionality (thin provisioning, de-duplication, replication, mirroring, snapshot, etc.)."

My question is why?  And who says so?   Why not aggregate and parse out capacity in addition to services?  Isn’t this what resource pooling — one of the three foundational components of software-defined infrastructure: abstraction, pooling, automation (remember VMware? These were in your slide deck!) — was supposed to enable?

Aggregating capacity and providing the means to deliver up virtual volumes makes all kinds of sense.  It is non-disruptive, for one thing, and it is certainly more efficient than physical volume allocation.  It most certainly obviates the need to do idiot infrastructure re-engineering stuff as is required by VMware with VSAN.  This insight is what made the recent Twitter-based "battle of the witless" between Chuck Hollis, EMC blogger, and Storage Switzerland on this point so amusing and so disheartening at the same time.

Bottom line:  EMC likes the SDS-is-different-than-storage-virtualization argument because it helps to sell more hardware, right?  But anyone who points that out, says Hollis, is not an independent analyst, but rather part of a vast anti-VIPR conspiracy.  Hmm…

As you can probably tell, I am trying to figure out the correct policy position around SDDC and I keep getting blitzed by the politics.  I like the idea of making data centers more agile, responsive and dynamic.  Hell, who doesn't?  But accomplishing this goal doesn't really require a myopic focus on virtualization; it requires a laser focus on the thing most ignored in this entire tempest in a teapot:  management.

Two threads I will develop in my preso for IBM Edge:

First, I will refer everyone to some great work by IBM scientists in this paper on Quantifying Resiliency of IaaS Clouds (http://mdslab.unime.it/documents/IBM_Duke_Cloud_Resiliency.pdf).  This is a remarkable bit of scientific analysis that shows the complexity involved in delivering any set of resources and services to a business – whether from a traditional data center or some sort of newfangled one.  It gets to the heart of a huge vulnerability in the software-defined approach:  vendors' neglect of really fundamental requirements for data center service delivery – management and administrative processes.

[Figure: resiliency model from the IBM/Duke paper]
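The underlying point is easy to illustrate with toy availability math.  The numbers below are my own assumptions, not results from the IBM/Duke paper: the more serially-dependent pieces in the delivery chain, management and admin processes included, the faster composite availability erodes.

```python
# Toy availability math: serially-dependent components multiply.
def composite_availability(component_availabilities):
    """Availability of a chain of components that must all be up."""
    result = 1.0
    for a in component_availabilities:
        result *= a
    return result

# A notional SDDC delivery chain: hypervisor, SDN controller, SDS layer,
# orchestration, plus the management/monitoring processes everyone forgets.
chain = [0.999, 0.999, 0.999, 0.995, 0.99]
availability = composite_availability(chain)
print(f"{availability:.4f}")                                     # ~0.9821
print(f"~{(1 - availability) * 8760:.0f} hours of expected downtime per year")  # ~157
```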

This becomes clear against the backdrop of NIST's definition of an IaaS cloud.  Here is my illustration based on NIST docs…

[Figure: NIST IaaS reference model illustration]

As you can see, to deliver IaaS or SDDC (same thing, IMHO) you need more than virtualized resource pools and a means to orchestrate their allocation and de-allocation; according to NIST, you also need a management layer, an operations layer and a service delivery layer.

Now, is it just me, or do these additional requirements suggest that service delivery is just as complex and daunting in an SDDC as it is in a traditional data center?  And given the myriad things that can go wrong to interrupt services in either, how can the software-defined crowd suggest that Disaster Recovery/Business Continuity Planning is no longer necessary in the HA environment of a Software-Defined Data Center?  I will be first in line to call bullshoot on that idea.

My only concern is that I got an email today offering the download of an IBM-sponsored paper or something suggesting that SoftLayer was eliminating the need for DR.  I don't believe that data is any safer in a cloud – especially when tape isn't provided by the cloud service.  But that is another story.

See you in Vegas.  Registration for IBM Edge 2014 is HERE.

Edge 2014
Post Scriptum:  I have been advised that a version of this blog has been picked up by the Storage Community, for which I am grateful.  The disclaimer has been added that it is a compensated post, which I suppose it is.  I am picking up a check while at Edge for live tweeting and otherwise supporting the show with text blogs, video blogs and tweets from event venues and the Social Media Lounge.  I didn’t think this needed to be stated, but there it is.

My thought is that such a disclaimer on this post was unnecessary.  What I wrote in the paragraphs above were my own views and not approved or even cleared in advance with IBM.  But, the FTC is the FTC, I guess.  Read more about their guidelines HERE.

Signal and Noise

Wednesday, March 6th, 2013

While I wasn’t in the room to hear the comment directly, IBM’s new storage chief, Ambuj Goyal, is reported by The Register to have stated that his objective was to move transaction storage away from disk to all-flash arrays.  Ultimately, he envisions an IBM that sells less storage.

Later in the article, he clarifies that he isn't even suggesting flash-assisted disk or hybrid arrays, but all-flash only — probably leveraging the RamSan arrays from Texas Memory Systems (recently acquired by Big Blue).

I have my doubts about the readiness of Flash for anything like the heavy lifting of big transaction systems, especially given the memory wear problem that vendors choose (not) to deal with by simply adding a ton of additional memory to substitute in whenever a cell fails and a group of cells is marked as bad.  One credit card company executive told me recently that his card processing systems, which handle over 1 million card-swipe transactions per second, would burn out Flash SSDs within a few minutes of installation given the write limits that currently exist in SLC and MLC memories.
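Whether "a few minutes" is literal or hyperbolic, the arithmetic behind the concern is easy to sketch.  Every figure below is an assumption for illustration only; real results hinge on drive capacity, over-provisioning, P/E rating and write amplification.

```python
# Back-of-the-envelope flash endurance math (all parameters are assumptions).
tx_per_sec = 1_000_000            # transaction rate cited above
bytes_per_tx_write = 8 * 1024     # assume each transaction writes ~8KB
write_amplification = 5           # assume a modest WAF from the flash translation layer

flash_write_rate = tx_per_sec * bytes_per_tx_write * write_amplification  # bytes/sec

drive_capacity = 200e9            # assume a 200GB MLC drive
pe_cycles = 3_000                 # assume ~3,000 program/erase cycles per cell
rated_endurance = drive_capacity * pe_cycles      # ~600TB of writes before wear-out

hours_to_exhaust = rated_endurance / flash_write_rate / 3600
print(f"{hours_to_exhaust:.1f} hours on a single drive")   # ~4.1 hours
```

Spread the load over a pool of drives and the number grows; assume heavier write amplification and it shrinks.  Either way, the order of magnitude is hours to days rather than years, which explains the skepticism about parking this class of workload directly on SLC or MLC flash.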

Of course, last time I checked, TMS was also building SSDs out of DRAM. DRAM SSDs don’t have the memory wear issues of Flash, but DRAM memory is volatile, while Flash is not.  And DRAM rigs tend to be substantially more expensive per GB, not that anyone ever accused IBM of being the low cost leader.

Goyal's other talking points didn't raise my hackles.  He thinks that storage virtualization (a la SAN Volume Controller, or SVC) should be deployed to enable non-disruptive deployments of storage and the applications that use it.  I like storage virtualization too.

He also hangs his hat on IBM’s burgeoning Virtual Storage Center, a management console.  I have to agree that management is the missing link in efficient storage and I am delighted that IBM is developing yet another tool set for managing storage.  But is it a storage service management play or a storage resource management play or both?  And does it work with all storage gear, with all IBM storage gear, or only with select IBM rigs?   I hope to learn more about this at Edge 2013 to see whether it is ready for prime time.

I like IBM, having taken my earliest training in IT at IBM schools.  And I have known some top-notch engineers from Big Blue over the last three decades.  I am not sure the Omni et Flash thing makes any more sense to me than the Omni et Orbis (everything on disk) mantra that the array makers have been preaching for the last 20 years.

I would like to learn how they justify this direction.  Or are we all just chasing the goofy folks behind Evil Machine Corp's XtremIO announcement?

Guys, if you think going really fast for a really short time is cool, why not buy a top fuel drag racer?

 

Oh, that’s right.  Top fuel dragsters have a tendency to blow up.

Oh well, I’m starting to flash back to Ed “Big Daddy” Roth and his Rat Fink illustrations from my youth.  But I have long since set aside such childish things.

 

Just Back from SHARE 2013 in San Francisco

Wednesday, February 6th, 2013

I was the guest of Tributary Systems, great folks offering an appliance called Storage Director that provides a drop-in data protection service engine for System z, iSeries, and distributed computing environments.  I wrote a paper that you can get from their site.  While you are there, you can replay a webcast I did for them before the show that covers roughly the same turf as the talk I delivered at SHARE.

Thanks to Ed Ahl, of Tributary Systems, for having me out.

 

Couple of quick observations.

First, nobody seemed to be basking in the blue neon glow of the VMAX from Evil Machine Corp.  Not sure if this changed over the course of the night, but the couple of times I walked past their booth, the staff seemed to be chatting with each other.  I did manage to snap this quick phone photo.  Lousy quality, I'm afraid, and it doesn't show the Windows Server 2008 R2 boot screen — but the blue neon is awesome.

[Photo: EMC VMAX booth]

Maybe IBM should trick out its latest DS with blue neon lights or something, to make it look trippier…

 

Or maybe they should go to the folks who are designing the cabinets for the latest System Z:

[Photo: System z cabinet]

I really like the look of this rig, though what really matters is what Big Blue has going on inside the Z…

 

The cabinet reminds me of some of the sets used in the original Star Trek television series.  But there is nothing old school about this platform or about the cobble IBM is encouraging between the mainframe and open systems workloads.  The zEnterprise play has been keeping me interested…

Here is the latest z BladeCenter as an extension to the z System mainframe.

[Photo: z BladeCenter extension]

The z BladeCenter extension provides a great hosting environment for workload that you don’t choose to virtualize inside an LPAR of your mainframe.

I've said before that I am not enamored of the SNMP-based management of the BladeCenter extension, preferring as I do the rock-solid direct management built into z/OS for the components of its own kit, or, at a minimum, REST.  Still, it seems like a nice reunification strategy for the mainframe and distributed environments.

At the show, I learned that IBM has probably given away more zEnterprise solutions than it has sold — priming the pump so to speak.  At the same time, I ran into a couple of customers who had bought and deployed the technology themselves, or who were preparing to.  They won’t talk about it in the media because they regard the strategy as a secret weapon that affords them business advantage over competitors.

Interesting.

Wise and Foolish Builders

Tuesday, October 30th, 2012

Far be it from me to advance religious doctrine.  After all, I am a perfect Catholic – which means that I really don’t need to practice much anymore.

However, watching the devastation brought by Frankenstorm Sandy last night and today, I found myself recalling some sage wisdom from the Good Book.  Specifically, Matthew 7:24-27:

24 “Therefore everyone who hears these words of mine and puts them into practice is like a wise man who built his house on the rock. 25 The rain came down, the streams rose, and the winds blew and beat against that house; yet it did not fall, because it had its foundation on the rock. 26 But everyone who hears these words of mine and does not put them into practice is like a foolish man who built his house on sand. 27 The rain came down, the streams rose, and the winds blew and beat against that house, and it fell with a great crash.”

This advice, it occurs to me, is not just relevant to homebuilding, but to building just about anything, including IT.

Replace "sand" with "server hypervisor" or "cloud" and you have a sense of what I am getting at.  I have encountered so many folks in this business who have bemoaned the way that senior managers have bought into the woo peddled around VMware, clouds, etc., and have placed their firms on a path to an expensive and potentially devastating debacle.

The hypervisor and cloud peddlers argue that hardware commoditization naturally encourages the abstraction of software away from hardware – which may be very true in the long term.  However, the enabling technologies, IMHO, are not nearly robust or developed enough to provide a "rock solid" alternative to hardware platforms.

I recently ranted about a guy (I think from VMware or EMC – same thing, I guess) who offered that we should not be judging clouds by what they are today, but rather by what they will evolve into.  Fair enough, I suppose, but also a good reason not to entrust them with my mission critical workload.

I also find a certain irony in the fact that a few weeks ago, while I was in NYC doing a Storage Decisions event, some attendees told me that Disaster Recovery Planning had been defunded at their firms because the VMware/EMC woo about DR being trumped by VMware's high-availability features and disk-to-disk WAN-based replication over MPLS had resonated with senior management.  I have to wonder how all of that is working out today.

The only thing that saved the financial industry post-9/11 was the fact that the data centers supporting the firms affected in the Twin Towers were mostly across the river in NJ – out of harm's way.  This storm had a much broader footprint, and a lot of NJ data centers supporting a lot of firms in NY, Connecticut and elsewhere were directly affected.  The Huffington Post went offline when its NJ data center was taken down.

Think about these things as you rebuild your operations after Sandy.  Let’s build our next IT house on a rock, not on sand.