Getting My Presos…and Luggage…Together

by Administrator on March 9, 2015

Looks like a crazy travel schedule coming up. Tomorrow, I am visiting Ft. Lauderdale to conduct some video interviews with George Teixeira and Ziya Aral, my friends of many years and the bosses of DataCore Software. I am looking forward to getting their take on software-defined storage and virtual SAN and to catching up with all of my friends in SANsymphony land.

The following week, I head over to Paris to chat with my DataCore friends there, including Pascal Lecunff and Said Boukhizou.  Three days on the ground seems way too brief a period of time to be visiting Paris in the Spring.  Ah well.  Work.  C’est la vie!

Mid-April will find me in London for my 2015 Disaster Recovery Seminar with TechTarget, then over in Las Vegas to chat with tape mavens at a Fujifilm Channel Partner event. With the LTO folks jump-starting their cartridge capacities, and data burgeoning in most shops today, I see a bright future ahead for that technology.

The London seminar is summarized as follows:

BUSINESS CONTINUITY IN A NON-STOP WORLD: AN ASSESSMENT OF CURRENT CAPABILITIES AND TECHNIQUES [or… WHY TRADITIONAL METHODS MIGHT NOT BE ENOUGH]

ABSTRACT: Despite the hype around software-defined and cloud-centric data centers, the need to preserve and protect data and to ensure the continuity of key business processes against unplanned interruption events has never been greater. In this two-part seminar, DR/BCP expert Jon Toigo addresses the current capabilities and techniques for data protection and business continuity to identify both the limitations and the promise of the latest disaster prevention and disaster recovery tools and memes.

PART 1: THE DATA IS THE THING: PROTECT THE RIGHT STUFF WITH THE RIGHT TOOLS

This session will examine the core problem of disaster recovery and business continuity: the protection, preservation, and privacy of data itself. As traditional backup vendors struggle to reinvent their copy-based approaches to data protection, and as security vendors seek ways to mitigate the risk of unauthorized access to and disclosure or corruption of data assets, the volume of data being created is increasing at an unprecedented rate. The case can be made that simplistic strategies such as array mirroring or nightly tape backup are increasingly non-viable, at least for mission-critical data. A bigger issue is understanding which data is mission critical, in order to avoid applying prohibitively expensive and inefficient one-size-fits-all data protection and security schemes to the entirety of the storage junk drawer. Separating irreplaceable information assets from data that supports less important processes is a challenge, but there are numerous technologies – from data management suites to object storage approaches – that seem to be gaining mindshare. Only after data is properly classed can we effectively provide appropriate protection and security services.

Public clouds are increasingly embraced as a stopgap for distributed enterprises, but service providers too must have processes in place that have been audited and tested for their adequacy and suitability to purpose. Bottom line: just fielding multi-nodal storage clusters – virtual SANs – is not enough to protect your most irreplaceable data assets.

PART 2: PROTECTING VIRTUAL AND PHYSICAL PLANT

This session looks at the strategies and technologies that are currently being promoted by the industry to enable operational continuity in the wake of an interruption event. HA clustering has moved down the stack from the application hosting server to the software-defined storage infrastructure, at least in vendor “chalk talks.” This session will feature a “sanity check” on what new infrastructure models portend for real-world disaster avoidance and for disaster survivability when the worst-case scenario occurs. Innovative approaches for consolidating infrastructure components for ease of management and monitoring — from IBM’s latest System z offering to the software-defined data center models promoted by various distributed computing leaders — will be discussed. The session will also examine the intersection of client mobility and cloud computing from the standpoint of the capabilities they enable and the limitations they impose on continuity planners.

 

The London arm of TechTarget puts on a nice event.  I am looking forward to working with them again.  When they send me registration details, I will post them here.

First week of May, I will be the guest of StarWind Software at Microsoft’s Ignite conference in Chicago.  The following week will see me in Las Vegas again, this time at the Venetian, where I will be delivering five TechEdge sessions at IBM Edge 2015.

Lots of travel coming up and a lot of writing and webinars to get done in the meantime. Hope to see some of you during my travels.


Software-Defined…well…Everything

by Administrator on March 9, 2015

It seems like most of the storage companies I am encountering these days are falling in line with the software-defined aka hyper-converged aka virtual SAN aka direct-attached-to-clustered-servers trope. Not that all of these products or technologies are the same, mind you, but everyone seems on board with the idea of pulling value-add functionality off of array controllers and re-instantiating it in a software layer running on server heads. I see this, for the most part, as a deconstruction of the storage array — a change I can believe in.

But it does raise the question of which functions need to be done close to the storage device and which can be abstracted away into a software layer. I did a video interview with Clod Barrera, IBM’s Storage Savant, at IBM InterConnect that I look forward to posting here just as soon as it is approved and released by Big Blue. We had an interesting conversation about functionality that needs to be delivered in close proximity to controllers and functionality that could be stacked up as a software app, which is what XIV originally was.

I am also busily looking at PernixData, which is clustering together server DRAM and flash to create a sort of universal memory cache and buffer that looks kind of interesting and that they are calling software-defined storage. And I am doing some writing and webinars sponsored by other SDS purveyors, including StarWind Software and StorMagic — both SDS solutions with significant differences from one another, but sharing the brag of being more cost-effective and efficient than the SDS stacks proffered by VMware and Microsoft. And I am waiting for a copy of the new SUSE Linux SDS stack to review when it hits the streets shortly.
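
For readers who want a more concrete picture of what a host-side acceleration tier does, here is a toy sketch in Python. To be clear, this is not PernixData's code and makes no claim about how their product actually works; it just illustrates the generic idea of serving hot blocks from a local DRAM/flash tier and going back to the shared array only on a miss. The class name, block size, and capacity figure are my own illustration values.

from collections import OrderedDict

class HostSideReadCache:
    """Toy LRU read cache standing in for a host-side DRAM/flash tier."""

    def __init__(self, capacity_blocks, backend_read):
        self.capacity = capacity_blocks
        self.backend_read = backend_read   # callable that fetches a block from the array
        self.blocks = OrderedDict()        # LRU order: least recently used first

    def read(self, block_id):
        if block_id in self.blocks:            # hit: serve from the local tier
            self.blocks.move_to_end(block_id)
            return self.blocks[block_id]
        data = self.backend_read(block_id)     # miss: go to the back-end array
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:   # evict the coldest block
            self.blocks.popitem(last=False)
        return data

# Example: a stand-in "array" that returns a 4 KB placeholder block
cache = HostSideReadCache(capacity_blocks=1024,
                          backend_read=lambda bid: b"\x00" * 4096)
cache.read(42)   # miss, fetched from the array
cache.read(42)   # hit, served from the host-side tier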

I should note that StarWind Software has agreed to sit for a Brown Bag Webinar on IT-SENSE.org in about 30 days.  Here is the registration, if you want to get it on your calendar now.

 

[Webinar registration banner image]

 

As always, the webinar will feature my take for the first 15 minutes, then a 15-minute interview session (“Between Two Ferns” style) with StarWind’s Anatoly Vilchinsky, followed by an open Q&A session for attendees that will also last 15 minutes. It is your basic lunchtime brown bag event and will be available for replay forever, or until I stop paying for my BrightTalk Channel.

I put a lot of effort into the cover slide, reacquainting myself with Photoshop every time we do a webinar at IT-SENSE!  Here are some of the base arts that I hope you will find amusing…

[Gallery: software-defined snake oil cover art images]

By the way, I am not throwing the whole concept of software-defined storage out the window, just critiquing the capabilities of the current crop of technology from the brand name server hypervisor vendors, some of whom seem to know very little about storage.  I am busily identifying the limitations of the server hypervisor SDS kit and exploring alternatives from among the independent software developer communities.

Tomorrow, I fly down to Ft. Lauderdale to discuss the differences between virtual SAN and SAN virtualization with the folks at DataCore Software.  I am also working to understand more about Caringo’s SWARM technology, which can also be legitimately regarded as software-defined storage.

Stay tuned.


Gearing Up for IBM InterConnect Day 2

by Administrator on February 24, 2015

I have been reviewing sessions today at the IBM InterConnect show in Las Vegas and have ID’d one called The New z13: Redefining Digital Business through the Integration of Cloud, Mobile and Analytics that I will attend in about an hour. This might be a bit of a rehash of the presentations I saw at the z13 launch, which were outstanding. I will know a few minutes after the session begins. But, at a minimum, the session will allow me to rest my brain from all the software-defined storage stuff that has been rolling around between my ears since yesterday.

To restate, I think IBM is doing pretty well thus far navigating the BS that seems to cluster around software-defined storage. I wish everyone would just admit that SDS is rarely a proper response to virtualized application performance issues. App performance problems in x86 hypervisors are more often the result of application code or hypervisor code, or a mix of the two, than of anything related to storage infrastructure. This is obvious when storage queue depths on the virtual host are unremarkable. Just because your hypervisor vendor recommends ripping and replacing your “legacy” storage to fix your VMs doesn’t mean that doing so addresses the root cause of the problem.
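
To make that sanity check concrete, here is a minimal triage sketch in Python. The metric names and thresholds are my own assumptions, not any hypervisor vendor's API or recommended values; the point is simply that shallow storage queues and low device latency, combined with high CPU-ready time, point the finger above the storage layer.

def diagnose_vm_slowness(metrics):
    """Rough guess at the bottleneck for one virtual host (illustrative only)."""
    queue_depth = metrics.get("storage_queue_depth", 0)      # outstanding I/Os to the device
    device_latency_ms = metrics.get("device_latency_ms", 0)  # array/fabric response time
    cpu_ready_pct = metrics.get("cpu_ready_pct", 0)          # time vCPUs waited for a core

    if queue_depth > 32 or device_latency_ms > 20:
        return "storage-bound: deep queues or slow devices; look at the array and fabric"
    if cpu_ready_pct > 10:
        return "compute-bound: starved vCPUs; look at application code or host oversubscription"
    return "no storage chokepoint evident; profile the application and hypervisor stack"

# Example: shallow queues and low latency, but lots of CPU-ready time
print(diagnose_vm_slowness(
    {"storage_queue_depth": 4, "device_latency_ms": 3, "cpu_ready_pct": 18}))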

IBM, to its credit, is not making such a claim or reinforcing such a view. App performance doesn’t even enter into the rationale I heard yesterday for SDS the IBM way, using Spectrum Accelerate.

It bothers me that Big Blue has elected to stand up its product inside a VMware host, though I guess that makes sense given the current market dominance of the EMC “mini-me.”  I suspect that they will expand the hypervisor compatibility at some point.

From where I’m sitting, the best SDS architecture is one that is neither hardware dependent nor hypervisor dependent.  Going forward, enterprises will likely have multiple hypervisors and some apps that aren’t virtualized at all.  What we DON’T need is a bunch of isolated islands of storage, each one dedicated to a particular hypervisor stack, plus some good old fashioned raw storage to manage as well.  From what I can glean, most virtualization admins have extremely limited storage skills to begin with.  I could be unkind and say that they don’t know storage from shinola as a rule (always wondered what the Shinola folks thought of that expression).

Anyway, the SDS craze is driving me crazy. I have to get my mojo back, so it is off to the zSystem preso for me. And here is my very first selfie. I took it yesterday at the IBM InterConnect Expo. The smart-looking thing in the background is the IBM z Systems z13 mainframe.

[Photo: selfie at the IBM InterConnect Expo with the z13 in the background]


IBM InterConnect Update Day 1

by Administrator on February 23, 2015

Being one of the older folks at the IBM InterConnect show, I need to confess to being a bit dazed and confused. This show is definitely a mix of older technology norms and newer memes. The opening keynotes were as good as any I have seen at a show of this type — a good mix of vendor thought leadership and illustrative presentations by customers. Of course, it helps when your customers are the likes of the Mayo Clinic and Airbus.

Basically, we kept being told that everything was changing and that we old codgers needed to get on the bandwagon. Mobile computing has already arrived, and innovative companies are using it to get up close and personal with their customers. For the first time, I think it was during a presentation this morning by Citi, I had an epiphany: what everyone is saying is that the world has gotten faster and less personal. The younger generation still craves the personal attention that our older kin enjoyed with their doctor, grocer, shopkeeper, and baker. We are trying to use technology tricks to reestablish that connection, to know our customers so well that we can anticipate their needs and wants. Interesting.

It really hits home when I think of medicine. There are now so many doctors and medical researchers and therapies and experimental drugs: an almost insane amount of new information is flooding medical practitioners, and just separating the woo from the peer-reviewed stuff is almost too much for anyone. Why not use analytics to process a lot of that material, to help characterize a disease, to spot potential drug conflicts — not just with respect to the other 8 to 16 meds we take as we age, but with our diet, demographics and genetics? Why not democratize medicine so that better outcomes are not limited to those who are blessed to live in an area where a few particularly astute doctors specializing in your condition also happen to live?

Interestingly, a presentation on this subject by a physician with a big name oncology firm concluded awkwardly:  while the firm was anxious to work with IBM to share their knowledge and wisdom with a democratizing database and delivery system, she was pretty sure that they would still keep enough wisdom to themselves to maintain a high dollar practice!  Made me think that eventually, there would be a medical service system for the masses and another for the elite who have deeper pockets to pay for the best talent.  Oh wait.  Maybe that is the system we have today.  In any case, your next course of medical therapy may be prescribed by (or at least reviewed and validated by) some sort of Doctor Watson AI.

Okay. So there was a lot of visionary stuff today, especially coming from the corporate customer spokespersons. Today’s corporate leaders, one person tweeted in real time, are preaching the mantra that used to be the purview of the entrepreneurial evangelists of some Silicon Valley startups. True enough.

But like most folks my age, I want meat to go with the potatoes.  Vision is one thing.  How do we execute it?

I started to attend breakout sessions today and did some trolling in the Expo hall to find smart IBM folk to get me up to speed on the technology particulars that would transform the vision into reality. It struck me as cute to see some young IT folk taking selfies with a z Systems z13 mainframe.

I found some very smart folks on the floor who talked about what their customers were saying they wanted these days and what IBM could provide out of its thick book of products and services to support them.  I managed to get Distinguished Engineer Clod Barrera to do a vblog with me that I hope to put up online soon regarding IBM’s latest Software Defined Storage play, involving their XIV software rebranded as IBM Spectrum Accelerate.  His words built on what I had learned in a good presentation by Rami Elron, Senior Staff Member and Product Technology Director for IBM XIV Storage.

Rami and Clod both steered clear of relating SDS to application performance, which was the saw that VMware used to make SDS a household name a couple of years ago. Truth be told, most virtualized workloads do not demonstrate problematic performance because of chokepoints in storage I/O, but because of I/O processing above the storage layer. So, while you may have many compelling reasons to consider SDS, solving what ails application performance is rarely a valid rationale. We were spared any such correlation in IBM’s presentation.

Big Blue is clearly focused on the enterprise data center and cloud service providers. They are now offering the means — with Spectrum Accelerate and IBM Hyper-Scale Manager — to deploy, manage, and control up to 144 clusters of XIV storage, totaling about 46 PB of capacity. That’s good for big hyper-scale consumers.

I found myself wondering, though, where all of the “best business models are those that behave like small businesses” stuff from the morning keynotes fits in. Where is the support for the smaller firm, or for the branch offices and remote offices that can’t afford a hyper-scale architecture requiring a minimum of three nodes and a lot of DRAM and SSD cache? I suspect Clod is right when he asserts that over time there will be reference models advanced for various use cases covering both hyper-scale and hypo-scale implementations. The good news is that the storage repository created with the Spectrum Accelerate technology can be used to host data from different hypervisors and from applications that aren’t virtualized at all. And a lot of kit is supported, so hardware dependencies are minimal.

One more thing: I liked the attention to disaster recovery rather than a myopic focus on high availability. From what I could glean, the Spectrum Accelerate technology delivers both, but the DR story is often left out of other SDS vendor pitches or treated like some sort of obsolete functionality. To hear Clod and Rami tell the tale, the wedding of snapshot technology with asynchronous replication does a lot to solve the challenges of data replication over distance for DR and enables users to establish meaningful recovery point objectives (RPOs). That is a story worth pursuing.
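
To put a rough number on why that combination matters, here is a back-of-the-envelope sketch; the arithmetic is mine, not anything Clod or Rami presented. With periodic snapshots shipped asynchronously to a remote site, the worst-case data loss window is roughly the snapshot interval plus the time needed to push the changed data across the WAN.

def worst_case_rpo_minutes(snapshot_interval_min, changed_data_gb, wan_throughput_mbps):
    """Estimate the achievable recovery point objective, in minutes."""
    # Time to replicate one snapshot's worth of changed blocks over the link.
    transfer_min = (changed_data_gb * 8 * 1024) / wan_throughput_mbps / 60
    return snapshot_interval_min + transfer_min

# Example: 15-minute snapshots, 20 GB of changed blocks, a 100 Mbps WAN link
print(round(worst_case_rpo_minutes(15, 20, 100), 1), "minutes worst-case RPO")  # ~42.3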

More to come. This day isn’t over yet. IBM InterConnect is shaping up to be a pretty interesting and informative event.


z13 Momentum Building

February 22, 2015

I can’t remember the last time that the mainframe received the kind of attention in “mainstream” tech publications that it has in recent weeks.  Following a brilliant launch event for IBM’s new z System offering in mid-January, Big Blue has taken the show on the road throughout the world.  Able evangelists include IBM VP Kathryn […]

Read the full article →

Tape in 2015: Still a Comer

January 27, 2015

I recently read a blog at Storage Switzerland that got me a little aggravated.  Even more aggravating is the fact that my recent switchover from BlackBerry to a Samsung Galaxy Note 4 “phablet” left behind my link to the irritating piece.  I have been all over George’s site, he is the proprietor of Storage Switzerland, […]

Read the full article →

Again with the Data Lifecycle Management

January 20, 2015

Okay, so I am being swept up a bit by the pro-French meme that has found its way into social media of late.  I prepared a slide deck a while back called Les Data Miserables that my sponsor asked me not to use during my last trip to Europe, so I have some art that is available for […]

Read the full article →

z13 Virtual Machine Hosting: Still a Bit Cloudy

January 20, 2015

Okay, so I have been doing something of an excited jig around the new IBM System z mainframe, the z13.  I am impressed by how much performance, capacity and durability the IBMers have engineered into the rig.  And I am impressed by the economics we have been told to expect from the box.  Imagine standing […]

Read the full article →

Still Loving Tape in 2015

January 17, 2015

Regular readers of this blog know probably too well my long standing affection for tape technology.  I know.  I know.  Cloud (mostly disk-based) storage is getting pretty cheap as the price wars between the industrial farmers in the cloud — Google, AWS and Microsoft — continue to exert downward pressure.  That’s all good, until it […]

Read the full article →

Live. From The Jazz at Lincoln Center in New York. It’s the z13 Mainframe!

January 17, 2015

I shot a bit of video of the internals of the z13 Mainframe when I attended the launch party that IBM convened at the Jazz at Lincoln Center in New York City.  Here is what can best be called a few beauty passes for all the gear heads out there. Note:  I couldn’t get anyone […]

Read the full article →