
“Storage is an Anachronism in the Modern Data Center” and Other Shrewd Observations

by Administrator on May 27, 2014

Twitter is such a rich fount of provocation for opinion pieces and blogs.  Someone tweeted this morning the observation quoted in the headline of this post — that storage has become an anachronism in the modern data center.  I haven’t been able to get it out of my mind since.

Truth is, I certainly find myself defending storage a lot these days — at conferences and trade shows and in the press.  Technically speaking, I find myself defending methods for improving storage rather than defending, say, a box of Seagate disk drives or SSDs.

Here’s where I start.  Whether you like it or not, we produce a lot of data.  Some of the data has a transient value while other data drives our business over time or must be retained for reasons of historical value, legal/regulatory obligation or intellectual property value.  So, we hold on to it.  Where?  On storage infrastructure, of course.

I will be the first to agree that our storage infrastructure is poorly managed, as is the data we store to it.  That combination of mismanaged assets creates the lion’s share of cost in IT, produces the most carbon, and shortens the most careers.  However, it is not the fault of storage that these bad outcomes have become manifest; the fault is almost entirely ours, in how we use storage technology.

 

[Image: storage_evolution]

 

Were/are SANs a great storage topology?  They might have been, had they ever been delivered to market.  Today’s SANs are just switched fabrics of serial SCSI-attached devices.  We never even got close to realizing the ENSA vision (Enterprise Network Storage Architecture) from Digital Equipment Corp.  That would have been a real storage network — one with an ISO-style network model and a real management layer.

Why wasn’t a real SAN ever delivered?  I have video interviews with some of the ENSA guys who said, when the acquisition was completed and they went to work at Compaq, the product line bosses there were having none of that ENSA stuff.  They were sure that once you created an open storage network, the Chinese would sweep the market with EVA-like kit, with lots of extra value-add software encrusting their array controllers.

So, SANs were simply never delivered to market.  Instead, we were pushed by the vendors into a switched, but direct-attached, storage model.  The switch merely made and broke direct-attached links at high speed.  A major deficit was the lack of a management layer, which, besides making it possible to unplug EMC and plug in HDS or IBM on a dime, would also have made the infrastructure more resilient.

While I am not defending the kind of SAN storage that the vendor community eventually provided, I have to admit that it became pretty serviceable as the underlayment for storage virtualization software.  Products like DataCore Software’s SANsymphony-V abstracted the value-add software away from the individual kit connected to the SAN and centralized it, enabling more efficient sharing via virtual volumes created from aggregated capacity.  DataCore and a few others spearheaded the idea of storage service management, with service-customized volumes presented to the workload on the fly.  That was cool — and remains so today.  It would also qualify in my mind as software-defined storage, but not to the purists in hypervisor land, or even IBM.
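For readers who have never touched this class of product, here is a minimal sketch of the idea in Python.  It is purely illustrative — the names and numbers are mine, not DataCore’s — but it shows the core move: pool capacity from dissimilar arrays, then carve service-customized virtual volumes out of the aggregate, mirroring across boxes when the requested service level calls for it.

```python
# Illustrative sketch only -- not any vendor's code.  The point: once a
# virtualization layer owns the pooled capacity, volumes are carved from
# the aggregate and the brand on any given box stops mattering.
from dataclasses import dataclass, field

@dataclass
class PhysicalArray:
    name: str          # e.g. "EMC-01", "HDS-02" -- hypothetical labels
    free_gb: int
    tier: str          # "flash" or "disk"

@dataclass
class VirtualVolume:
    vol_id: str
    size_gb: int
    mirrored: bool
    extents: list = field(default_factory=list)  # (array name, gb) slices

class StoragePool:
    """Aggregates capacity across arrays; the workload never sees the kit."""
    def __init__(self, arrays):
        self.arrays = arrays
        self.serial = 0

    def provision(self, size_gb, tier="disk", mirrored=False):
        copies = 2 if mirrored else 1
        candidates = [a for a in self.arrays
                      if a.tier == tier and a.free_gb >= size_gb]
        if len(candidates) < copies:
            raise RuntimeError("pool cannot satisfy the service request")
        self.serial += 1
        vol = VirtualVolume(f"vvol-{self.serial}", size_gb, mirrored)
        for array in candidates[:copies]:   # mirror across separate boxes
            array.free_gb -= size_gb
            vol.extents.append((array.name, size_gb))
        return vol

pool = StoragePool([PhysicalArray("EMC-01", 500, "disk"),
                    PhysicalArray("HDS-02", 500, "disk"),
                    PhysicalArray("IBM-03", 200, "flash")])
db_vol = pool.provision(100, tier="disk", mirrored=True)  # spans two vendors
print(db_vol.extents)   # [('EMC-01', 100), ('HDS-02', 100)]
```

The toy makes the economics obvious: the mirrored volume happily spans two different vendors’ arrays, which is exactly the kind of flexibility the value-add-on-the-controller model was designed to prevent.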

At the IBM Edge conference last week, they held their annual “meet the storage experts” session, and I was delighted to see Tony Pearson and the other IBM storage gurus in fine voice.  I asked a question about SAN Volume Controller — something like, “Are virtual storage volumes with customized value-add services software-defined storage?”  I was quickly dismissed:  “IBM supports both virtualized storage and software-defined storage.”  The latter promised coherent management of both services and plumbing “if you use OpenStack.”  It sounded a lot like a company trying to straddle an ever-widening gulf between storage marketecture and storage architecture.

But I digress.

In addition to SAN, I sometimes find myself having to defend NAS.  Frankly, whenever people use the acronym, I always think “file server.”  Take a PC mainboard, run your favorite RAID software and file system, skinny down some of the general-purpose OS functions to create a “thin server OS” optimized for file storage and NFS or CIFS access, add outbound NICs, and shoehorn in an HBA to join a direct-attached storage array to the server.  Oh, and don’t forget to front-end the whole cobble with a bunch of RAM so you can spoof the fact that the network delivers data a lot faster than the kit can write it.  Voila!  Instant NAS.
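To make that last trick concrete, here is a toy write-back cache in Python.  The names and timings are invented, not any filer vendor’s actual design: writes are acknowledged the instant they land in RAM, and a background flusher drains them to the slower backing store afterward.

```python
# Toy write-back cache -- my illustration of the "front-end RAM" trick,
# not any vendor's code.  Writes "complete" the moment they land in
# memory; a background drain trickles them out to the slow disk.
import collections
import threading
import time

class WriteBackCache:
    def __init__(self, backing_store, flush_interval=0.1):
        self.backing_store = backing_store      # callable: (block, data)
        self.dirty = collections.OrderedDict()  # blocks not yet on disk
        self.lock = threading.Lock()
        threading.Thread(target=self._flusher, args=(flush_interval,),
                         daemon=True).start()

    def write(self, block, data):
        """Returns immediately -- the client believes the write is done."""
        with self.lock:
            self.dirty[block] = data

    def _flusher(self, interval):
        while True:
            time.sleep(interval)
            with self.lock:
                items = list(self.dirty.items())
                self.dirty.clear()
            for block, data in items:
                self.backing_store(block, data)   # the slow part

def slow_disk(block, data):
    time.sleep(0.05)                              # pretend seek + write
    print(f"flushed block {block}")

cache = WriteBackCache(slow_disk)
for i in range(5):
    cache.write(i, b"payload")   # all five "complete" instantly
time.sleep(1)                    # give the flusher time to drain
```

The obvious catch: pull the plug and the dirty blocks in RAM are gone, which is why real filers battery-back that front-end memory rather than trusting plain DRAM.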

Despite the rhetoric of early “innovators,” like Network Appliance, NAS was always just a direct-attached storage file repository accessed through a network server.  Like SAN, NAS was DAS.

That doesn’t mean NetApp was being deceitful.  Early on, NetApp struck me as a good storage company.  They took some of the tech they “borrowed” from an early NAS head maker, threw in some file layout mumbo jumbo and expensive PAM cards, and went out to conquer the world.  I would note to their guys that vendor XYZ had just entered the NAS market with new kit and their response would be, “That’s great!  They are validating our model!”  They were the anti-EMC — the essence of merchantilism (the market is constantly expanding) versus EMC’s kammeralism (the value of the market its fixed and competition is a zero sum game).  However, over time, NetApp became what it beheld:  at last check, they were — in terms of their attitudes at least — just like their old arch nemesis.

So be it.  NAS became a trending storage meme and a lot of folks deployed a lot of little file storage servers.  It wasn’t a very consequential play until the mid-2000s, when the preponderance of corporate data started to be stored as files and managing a filer infrastructure became like surfing the web.  Again, efficiency suffered as management failed to keep up with scaling and as vendors made it difficult to migrate the files trapped on their boxes onto competitors’ rigs.

That brings us to how anachronistic storage is being addressed today.

Frankly, the current fascination with server-side and VSAN storage topologies seems like a bit of a devolution — that is, de-evolution, as in moving backwards.  Call it whatever you want; software-defined storage is the current fan fave, but what we are really talking about is shared direct-attached storage again.  The push to abandon the SAN that server-side advocates are making has little to do with SANs per se and much to do with the diminishing skills of server admins and the dumbing-down of storage required by the hypervisor vendors.

In traditional data centers, there was at least somebody (or a couple of somebodies) who actually knew their way around SAN/NAS.  They knew how to troubleshoot the fabric when things broke down and, even with the bare minimum of coherent and universal management utilities, they could keep the plumbing up and running.  The problem with hypervisor computing is that the consolidation of workload onto fewer servers, and the capability to move workload (virtual machines) from server A to server B, created ramifications for storage and challenges for the storage-unsavvy server admin.  Not only did (s)he know precious little about SAN conventions or the capabilities enabled by storage virtualization; (s)he believed that the current storage architecture was too brittle and inflexible, requiring applications to be reconfigured after every move from box to box, with new storage addressing or routing to ensure that data got stored to and retrieved from the same fixed location on the SAN.  Truth be told, virtualized storage presents virtual volumes that can move with the workload, transparently resolving the routes to the physical devices containing the app’s data.
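Here is that indirection at work, as a Python sketch with invented names: the application binds to a stable virtual volume ID, and a resolver owns the mapping to the current physical route, so a workload landing on a new host asks the same question and gets the current answer with no reconfiguration.

```python
# Invented names, illustrative only: the point is the level of indirection.
# The app binds to a stable volume ID; the mapping layer owns the physical
# route, so moving the VM to another host changes nothing the app can see.
PHYSICAL_MAP = {
    "vvol-payroll": ("array-HDS-02", "lun-17"),   # current physical home
}

def resolve(volume_id):
    """Host-side lookup: stable ID in, current physical route out."""
    return PHYSICAL_MAP[volume_id]

def run_app(host, volume_id):
    array, lun = resolve(volume_id)
    print(f"{host}: I/O for {volume_id} routed to {array}/{lun}")

run_app("server-A", "vvol-payroll")   # before the VM moves
# ...hypervisor migrates the VM; the data may even migrate too...
PHYSICAL_MAP["vvol-payroll"] = ("array-EMC-01", "lun-4")
run_app("server-B", "vvol-payroll")   # same volume ID, new route, no change
```

That one extra lookup is the whole argument: the “brittle, fixed-location” storage the server admins feared was already solved by the abstraction layer they never learned.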

Despite this fact, the hypervisor vendors succeeded in selling the story that you didn’t need expertise in anything but their software layer to do IT.  In the process, we have lost a lot of storage experts in our data centers (who needs them?  Storage is anachronistic, right?).  And the old way of deploying direct-attached storage behind every server, then replicating the hell out of the data from one DAS rig to another (usually using on-array replication services, which in turn means that you can only buy rigs from the same vendor), has become new again.

I believe that the only reason so many companies are even considering VSAN and the like is because the virtual server admins are too young to remember the last instantiation of this direct-attached model (pre-ENSA, pre-1997) and its myriad problems.

Interestingly, this same lack of historical knowledge is playing into the resurgence of tape technology.  A lot of the kids I talk to today have never used tape, not even to make mix tapes for popular music, so this discussion of BaFe tape with huge capacities sounds like a cool new technology to them, rather than a despised medium used in 1960s Sci-Fi movies.

Bottom line:  hypervisor vendors are dumbing down storage and de-evolving it to meet the requirements of their version of application-template cut-and-paste.  And IQs are dropping sharply in IT, as evidenced by the whole “storage is anachronistic” thing.

My two centavos.  Let the feeding frenzy commence.

 
