
IBM Edge 2014 On The Radar

by Administrator on April 22, 2014

I am busily preparing for my week-long adventure at IBM Edge 2014 in Las Vegas.  I will be there from May 19 through May 23, mostly attending the show and hanging out at the Social Media Lounge, but also teaching a few sessions at Tech Edge.

One thing I am keen to pursue is Big Blue’s intentions and strategies with respect to the so-called “software-defined data center.”  I have been writing and speaking on this subject in several venues recently and just delivered a piece on it to Enterprise Systems Journal.  An extract…

[S]oftware-defined data centers are supposed to be something new and, more importantly, a sea change from traditional data centers.  I have been researching this supposed change for a few months and, slow learner that I am, I don’t see it.

Software-defined servers, essentially server kit running a hypervisor that abstracts the hardware so workloads can move from box to box as needed for resource optimization, sort of make sense.  However, this virtualization/abstraction is nowhere near as resilient as mainframe LPAR-based virtualization and multi-tenancy; Big Iron, after all, has had 30 years to work out the bugs in such strategies.  Yet, to hear the SDDC advocates talk, Big Iron is old school and not fresh enough to fit with contemporary concepts of server abstraction.

Then there is the software-defined network.  This notion of separating the control plane from the data plane in network hardware, so that a unified controller simply directs generic networking boxes to route packets wherever they need to go, looks interesting on paper.  However, Cisco Systems just delivered an overdue spoiler by announcing that it wasn’t about to participate in an “open source race to the bottom” as represented by the OpenFlow effort:  network devices should have value-add features on the devices themselves, not a generic set of services in an open source controller node, according to the San Jose networking company.  The alternative proposed by Cisco is already being submitted to the Internet Engineering Task Force for adoption as an anti-SDN standard.

Finally, there is software-defined storage.  EMC is claiming this notion as its own thing, even though storage virtualization has been available for over a decade and a half from companies ranging from DataCore Software, with its hardware- and hypervisor-agnostic SANsymphony-V, to IBM, with its hardware-centric SAN Volume Controller.  In EMC’s reinvention of the idea as “software-defined storage,” we are told that those other guys are doing it wrong:  SDS is about centralizing storage services, not aggregating storage capacity so it can be parsed out as virtual volumes the way that DataCore and IBM and a few others do it today.  EMC offers no real explanation for why storage virtualization doesn’t qualify as software-defined storage, but clearly the idea doesn’t fit EMC/VMware’s strategy of breaking up SANs in favor of direct-attached storage, or the still-evolving VSAN shared direct-attached architecture.  With all of the proprietary replication that will be required in a VSAN environment to facilitate vMotion and HA failover, the strategy should sell a lot of hardware.

Bottom line:  the whole software-defined data center thing looks like a house of cards that wouldn’t be able to withstand even the slightest breeze.  So, the idiots who claim that the architecture is highly available and thus obviates the need for continuity planning are pulling our collective leg.

I will be doing a session at Edge on just this point:  the data protection requirements for burgeoning SDDCs.  And while I am at Edge, I really want to get a clear understanding of what IBM is up to in this space.

On the one hand, IBM seems keen to hold SDDC at arm’s length.  In January, eWeek reported that the company was trying to sell off its “software-defined network” business unit, and it remained mum when Cisco broke ranks with OpenFlow a couple of weeks back.

For the record, I kind of agree with Cisco Systems (despite their patently self-serving play with OpFlex and their whole Application Centric Infrastructure alternative to OpenFlow) to the extent that OpenFlow SDN may well be “a race to the bottom.”  It seems to me that generic network functionality could be segregated from commodity networking devices and placed into a common service controller, assuming that the controller itself is scalable.  That said, Cisco’s point is also well taken:  they do “add value” to commodity functionality, and charging a premium for that value-add is the foundation of their revenues.  Without the incentive to develop service-quality-improving technology that can be blended with commodity hardware and sold at a substantial profit, Cisco will simply cease to exist (as will Juniper and others) and we will all be doomed to living within currently-defined-and-commoditized concepts of networking.  I don’t defend price gouging, of course, but I have yet to see a company create new technology without some expectation of profiting from its efforts on the back end.
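Stripped to its essentials, the controller-plus-commodity-boxes model looks something like the toy sketch below.  This is my own illustrative Python, not OpenFlow, OpFlex or any vendor’s API; the class names and the exact-match “flow table” are invented for the example.  The point is simply that all of the decision-making lives in one controller while the switches do nothing but match and forward.

# Toy illustration of control-plane / data-plane separation.
# Hypothetical classes for this blog post; not OpenFlow, OpFlex, or any vendor API.

class CommoditySwitch:
    """Data plane: stores flow rules pushed by the controller and forwards packets."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}          # dst prefix -> egress port

    def install_rule(self, dst_prefix, egress_port):
        self.flow_table[dst_prefix] = egress_port

    def forward(self, dst_ip):
        # A real switch would do longest-prefix matching; simple prefix match keeps the toy small.
        for prefix, port in self.flow_table.items():
            if dst_ip.startswith(prefix):
                return f"{self.name}: forward {dst_ip} out port {port}"
        return f"{self.name}: no rule for {dst_ip}, punt to controller"

class Controller:
    """Control plane: holds the network-wide view and programs every switch."""
    def __init__(self):
        self.switches = []

    def register(self, switch):
        self.switches.append(switch)

    def push_policy(self, dst_prefix, egress_port):
        # One decision, made centrally, installed everywhere.
        for sw in self.switches:
            sw.install_rule(dst_prefix, egress_port)

controller = Controller()
s1, s2 = CommoditySwitch("edge-1"), CommoditySwitch("edge-2")
controller.register(s1)
controller.register(s2)
controller.push_policy("10.1.", egress_port=3)

print(s1.forward("10.1.4.7"))      # forwarded per the controller's rule
print(s2.forward("192.168.0.9"))   # no rule: punted back to the controller

Everything interesting is in the controller and the switches are interchangeable, which is precisely the “race to the bottom” Cisco objects to, and also why the scalability (and resiliency) of the controller itself matters so much.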

Anyway, with all of the hemming and hawing over northbound and southbound APIs, and with all of the difficulty of even defining what the terms mean, I see the entire SDDC thing as an accident waiting to happen.  In the storage realm, idiots are fighting over whether software-defined storage includes storage virtualization or not.  Says one writer/pundit/analyst/blogger who shall not be named, “Storage virtualization products like IBM SVC and DataCore Software’s SS-V, aggregate capacity and serve it up as virtual volumes.  SDS doesn’t aggregate capacity, it only aggregates functionality (thin provisioning, de-duplication, replication, mirroring, snapshot, etc.).”

My question is why?  And who says so?  Why not aggregate and parse out capacity in addition to services?  Isn’t this what resource pooling, one of the three foundational components of software-defined infrastructure alongside abstraction and automation (remember, VMware?  These were in your slide deck!), was supposed to enable?

Aggregating capacity and providing the means to deliver up virtual volumes makes all kinds of sense.  It is non-disruptive, for one thing, and it is certainly more efficient than physical volume allocation.  It most certainly obviates the need for the idiot infrastructure re-engineering that VMware requires with VSAN.  That insight is what made the recent Twitter-based “battle of the witless” between EMC blogger Chuck Hollis and Storage Switzerland on this point so amusing and so disheartening at the same time.
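For what it is worth, here is a back-of-napkin sketch of what “aggregate capacity and serve it up as thin-provisioned virtual volumes” means in practice.  This is my own toy Python, not DataCore’s or IBM’s actual code, and the device sizes are made up:  physical devices go into one pool, virtual volumes are carved out of the aggregate, and physical capacity is consumed only as data is actually written.

# Toy storage virtualization pool: aggregates capacity from several physical
# devices and serves thin-provisioned virtual volumes out of the aggregate.
# Hypothetical names and numbers, purely illustrative.

class StoragePool:
    def __init__(self, physical_devices_gb):
        self.physical_capacity = sum(physical_devices_gb)  # aggregated capacity
        self.consumed = 0                                   # capacity actually written
        self.volumes = {}                                   # volume name -> provisioned size

    def create_virtual_volume(self, name, size_gb):
        # Thin provisioning: the volume is promised, not pre-allocated,
        # so allocation is non-disruptive and over-subscription is possible.
        self.volumes[name] = size_gb

    def write(self, name, gb_written):
        if self.consumed + gb_written > self.physical_capacity:
            raise RuntimeError("pool exhausted: add spindles or reclaim space")
        self.consumed += gb_written

    def report(self):
        provisioned = sum(self.volumes.values())
        return (f"physical {self.physical_capacity} GB, "
                f"provisioned {provisioned} GB, consumed {self.consumed} GB")

pool = StoragePool(physical_devices_gb=[2000, 2000, 4000])  # three arrays, one pool
pool.create_virtual_volume("oracle_data", 3000)
pool.create_virtual_volume("vm_datastore", 5000)
pool.write("oracle_data", 800)
print(pool.report())  # 8000 GB physical, 8000 GB provisioned, 800 GB consumed

Layering services such as snapshots or replication on top of those virtual volumes is the comparatively easy part; the point is that nothing about this requires breaking up the SAN or re-plumbing the infrastructure around direct-attached storage.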

Bottom line:  EMC likes the SDS-is-different-than-storage-virtualization argument because it helps to sell more hardware, right?  But anyone who points that out, says Hollis, is not an independent analyst, but rather part of a vast anti-ViPR conspiracy.  Hmm…

As you can probably tell, I am trying to figure out the correct policy position around SDDC and I keep getting blitzed by the politics.  I like the idea of making data centers more agile, responsive and dynamic.  Hell, who doesn’t?  But accomplishing this goal doesn’t really require a myopic focus on virtualization; it requires a laser focus on the thing most ignored in this entire tempest in a teapot:  management.

Two threads I will develop in my preso for IBM Edge:

First, I will refer everyone to some great work by IBM scientists in this paper on Quantifying Resiliency of IaaS Clouds (http://mdslab.unime.it/documents/IBM_Duke_Cloud_Resiliency.pdf).  This is a remarkable bit of scientific analysis that shows the complexity involved in delivering any set of resources and services to a business, whether from a traditional data center or some newfangled variety.  It gets to the heart of a huge vulnerability in the software-defined approach:  vendors’ neglect of really fundamental requirements for data center service delivery, namely management and administrative processes.

[Figure: resiliency]

This becomes clear against the backdrop of NIST’s definition of an IaaS cloud.  Here is my illustration based on NIST docs…

[Figure: NIST IaaS]

As you can see, to deliver IaaS or SDDC (same thing, IMHO) you need more than virtualized resource pools and a means to orchestrate their allocation and de-allocation:  according to NIST, you also need a management layer, an operations layer and a service delivery layer.

Now, is it just me, or do these additional requirements suggest that service delivery is just as complex and daunting in an SDDC as it is in a traditional data center?  And given the myriad things that can go wrong to interrupt services in either, how can the software-defined crowd suggest that Disaster Recovery/Business Continuity planning is no longer necessary in the HA environment of a Software-Defined Data Center?  I will be first in line to call bullshoot on that idea.
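To put a rough number on the point, here is a trivial back-of-envelope calculation.  It is my own illustration, not the math from the IBM paper and not anything NIST publishes, and the per-layer availability figures are invented:  if service delivery depends on every layer in the stack being up, the composite availability is the product of the layer availabilities, and it is always worse than the weakest layer.

# Back-of-envelope serial availability across the layers an IaaS/SDDC stack
# needs per my reading of the NIST docs. The figures below are made up for
# illustration; they are not measurements or vendor claims.

from functools import reduce

layer_availability = {
    "virtualized resource pools": 0.9995,
    "orchestration/automation":   0.999,
    "management layer":           0.998,
    "operations layer":           0.998,
    "service delivery layer":     0.999,
}

# Service delivery requires every layer, so availabilities multiply.
composite = reduce(lambda a, b: a * b, layer_availability.values())

hours_per_year = 24 * 365
downtime_hours = (1 - composite) * hours_per_year

print(f"composite availability: {composite:.4%}")           # roughly 99.35%
print(f"expected downtime: {downtime_hours:.1f} hours/yr")  # roughly 57 hours/yr

Even with optimistic per-layer numbers, the composite comes out to a couple of days of expected downtime per year, which is exactly why an HA story does not obviate DR/BC planning.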

My only concern is that I got an email today offering the download of a paper or something, sponsored by IBM, suggesting that SoftLayer eliminates the need for DR.  I don’t believe that data is any safer in a cloud, especially when the cloud service doesn’t provide tape.  But that is another story.

See you in Vegas.  Registration for IBM Edge 2014 is HERE.

Post Scriptum:  I have been advised that a version of this blog has been picked up by the Storage Community, for which I am grateful.  A disclaimer has been added noting that it is a compensated post, which I suppose it is.  I am picking up a check while at Edge for live tweeting and otherwise supporting the show with text blogs, video blogs and tweets from event venues and the Social Media Lounge.  I didn’t think this needed to be stated, but there it is.

My thought is that such a disclaimer on this post was unnecessary.  The views in the paragraphs above are my own and were not approved or even cleared in advance with IBM.  But, the FTC is the FTC, I guess.  Read more about their guidelines HERE.
