My Contribution to IBM Edge 2015

by Administrator on May 26, 2015

First off, and for the record, I was at IBM Edge 2015 for the entire week as a guest of IBM.  I doubt that they sprang for my room and board because of my pleasant personality, or for that matter in exchange for my skills as a fat-fingering live Tweeter.  Those things they could get from any number of folks.

Rather, I try to sweeten the pot by offering to sing for my supper — that is, to present at the IBM Technical University as a guest speaker.  At this Edge, as at the three prior iterations of the event, it was my pleasure to contribute my two centavos in exchange for a nice room in one of Sheldon Adelson’s nice hotels.  Maurice “Mo” McCullough made sure that I had five afternoon slots on the itinerary, while Mary Hall and Sarah Katsenmaier made sure that I lost no weight despite the long walks between my hotel room and the Venetian Convention Center venue.  Special thanks to these folks for helping to make the trip so successful.

One other person who really deserves mention is Lizbeth Ramirez Letechipia, who lined me up with IBM executives and business partners so I could do more video interviews.  We are in the process of cutting up the videos now and should have them ready to post as early as next week, provided IBM’s reviewers are able to give the nod to my clips.

Back to the event.

I was thoroughly impressed with the general sessions on days one and two, though the day-two session, which featured more customers than IBMers, was the one that really landed for me.  In particular, I liked the demonstration of evolving genomics tools that apply Big Data analytics to the problem of finding the right combination of meds for a person with a given genetic and behavioral profile.  Way cool.

Walmart also impressed in the day-one session, stressing the central role of mainframes in these days of mobile commerce and high transaction volumes.  They underscored the point that IBM was making throughout the conference regarding the Starburst Effect created by mobile user transactions:  a single transaction spawns between 10 and 100 back-end processes, and generates a lot of data along the way.  According to Walmart, the z13 mainframe is just what the doctor ordered to coordinate all of these processes with appropriate levels of uptime and throughput.
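That fan-out claim is easy to turn into a rough capacity estimate. The sketch below is purely illustrative, using only the 10-to-100 figures quoted in the session; the function name and the 5,000 transactions-per-second input are my own hypothetical numbers, not anything from IBM's or Walmart's actual systems:

```python
def backend_load(mobile_txns_per_sec, fanout_low=10, fanout_high=100):
    """Estimate back-end invocations per second from the 'Starburst
    Effect': each mobile transaction spawns 10 to 100 back-end
    processes (the range quoted at IBM Edge 2015)."""
    return (mobile_txns_per_sec * fanout_low,
            mobile_txns_per_sec * fanout_high)

# A hypothetical 5,000 txns/sec mobile front end implies a back end
# coordinating 50,000 to 500,000 process invocations every second.
low, high = backend_load(5_000)
print(f"{low:,} to {high:,} back-end processes/sec")
```

Even at the low end of the range, the back end is doing an order of magnitude more work than the mobile tier that triggered it, which is the point Walmart was making about throughput.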

This isn’t the first time I have heard the story of the Starburst Effect, but Walmart did a good job of illustrating it with practical detail.  Just about every IBM customer in the hall was nodding along, or glancing nervously around to see whether their bosses had noticed that the “dinosaur” technology (mainframes) they had recommended kicking to the curb a few years ago is suddenly the darling of the hybrid data center and a critical element in any viable m-commerce strategy.

Bottom line:  this IBM Edge conference had a good signal-to-noise ratio and provided a content-rich environment for those seeking to plumb the technology products and expertise of Big Blue as they brave the world of x86 virtualization, data lakes, software-defined storage, and the mainframe.  The only disappointment for me was the lack of any mention of tape technology in the two general sessions, despite the collective productization of the Linear Tape File System, the GPFS file system, and IBM tape automation technology in a bucket called Spectrum Archive.

Re the whole Spectrum re-branding effort, I am not sure that I really like it…not that my opinion counts for much.  I first heard references to Spectrum when I interviewed an IBM engineer on this blog about the different implementation options IBM had come up with for its XIV software.  You could deploy it simply as software-defined storage (SDS) software (which is what it was before IBM joined it to a proprietary controller to create the XIV array), or, in the future, as a component of the IBM SAN Volume Controller, or you could buy it pre-installed on an XIV array.  Each of these alternatives, I was told somewhat jokingly, would have its own Spectrum product name.

Okay, I get the need or desire for branding consistency, but by the 400th time I heard someone stumbling over which Spectrum [insert product name here] category a certain IBM product fit into, I was over it.

The smartest thing I heard at the show came in passing at the end of a TechU session.  A woman who works on System Managed Storage observed that IBM invented software-defined storage because, way back when, all of the value-add software for storage was hosted on the mainframe, and all of the direct-attached storage devices were simply cabled via bus-and-tag, ESCON, or FICON to the backplane of the box.  Truer words were never spoken.

Here are my decks from IBM Edge Technical University in PDF form in case anyone wants to review them.  Just click on the deck and it should download to your browser directly.  I hope everyone will register for next year’s show, which takes place in the same venue — but in October.  Watch this space for more details.











Interview with DataCore’s Ziya Aral Part 1

by Administrator on May 26, 2015

In case you haven’t seen the notices or been poked on Twitter, LinkedIn, or email, I am doing a webinar with DataCore Software on 3 June at 2PM EDT on the subject of Data Availability.  Registration is HERE if you have a mind to attend.

This is the third of the four webinars I agreed to do with my old friends in Ft. Lauderdale, FL, as they seek, once again, to explain the business value case for their storage virtualization (aka software-defined storage) technology, SANsymphony-V, which I happen to use in my own business.  I say “once again” because I remember clearly how DataCore had to go on defense in the late 1990s against the storage hardware vendors who did not want their value-add software moved into a server software layer.  Now the company finds itself fighting another battle, this time with the SDS folks in the hypervisor community who want to segregate storage virtualization products away from software-defined storage products.  The only reason I can see for making such a bogus distinction is that VMware is 87% owned by EMC.

Frankly, I don’t understand why SDS could not include everything from mainframe System Managed Storage (SMS was the grandfather of all SDS, IMHO) to Caringo SWARM’s object storage (see the preceding posts).  I am absolutely flabbergasted by the decision of the software-defined storage crowd to exclude capacity management from the services that they abstract away from array controllers and place in a software layer on the server.  Why draw an artificial line there?  Just to exclude Spectrum Virtualize (the old SAN Volume Controller plus XIV software from IBM) and DataCore SANsymphony from the list?

With this little bit of bile churning away in my gut, I decided to sit down with Ziya Aral, Chairman of the Board and Co-Founder of DataCore Software, to get his perspective on the software-defined storage phenom and the future of storage generally.  Here is that interview, which I heartily recommend to anyone who is thinking about jumping on the SDS/Hyper-Converged bandwagon…

In this first clip, Ziya and I discuss the evolution of storage and the appearance of “software-defined storage”…

Furthering our analysis of current storage trends, Ziya talks about the natural tendency to migrate storage into software…

Aral goes on to discuss what he sees as the drivers of virtual SAN technology…

“Going back to direct-attached storage is insane,” says Aral.  We explored this provocative, but nuanced, view in this clip…

Ziya continues to discuss the need for virtualization up and down the application stack for storage to become a resource that can be readily provisioned to workload…

The rest of this interview will be provided in the next post.  If you want to learn more about software-defined storage and hyper-converged infrastructure, especially what is required to ensure that data itself is as highly available as server components, I hope you will tune in for the talk I will give on the subject on 3 June at 2PM EDT.  Registration for the Data Availability webinar is HERE if you can spare the time to attend.

Special thanks to DataCore Software and to Ziya Aral for giving me this interview.


The Object of Objects Part 2

by Administrator on May 26, 2015

The name of the game these days is efficiency.  IT is supposed to deploy resources and infrastructure efficiently, respond to changing workload demands with elastic efficiency, provision data protection and archive services efficiently, utilize storage and other resources efficiently…you get it.  There is nothing new about this mantra, though that doesn’t mean we have ever paid it much heed.

I am often called an iconoclast for stating the obvious:  we place mostly unmanaged data on mostly unmanaged infrastructure, and that is a disaster waiting to happen.  We need better infrastructure management (the kind that is application-facing), and we probably need to start considering the replacement of the file system as data proliferates and hierarchical structures begin to hurt efficient data placement, access, and protection more than they help.  So, this post continues my education from Caringo CEO Jonathan Ring regarding the wonderful world of objects.

As I mentioned in the previous post, Ring gave me an on-camera chalk talk when I was last out at Caringo HQ in Austin, TX.  Here is the fourth segment, which covers how to transition from file systems to object storage — and even keep the file system-based access methods that your users probably know and find comfortable and convenient to use…


Ring goes on to demystify how you can implement Caringo object storage (SWARM technology) to unify both your data and the processes by which the data is used…


Replication and erasure coding are two data protection techniques supported by Caringo SWARM.  Ring explains how these services can be applied to data to ensure that data assets are preserved and available when needed.
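To make the trade-off between those two techniques concrete, here is a back-of-the-envelope sketch of my own, not Caringo's implementation: it compares the raw-capacity overhead and failure tolerance of N-way replication against a k+m erasure code. The 3-copy and 10+4 parameters are common illustrative choices, not SWARM defaults:

```python
def replication_profile(copies):
    """N-way replication: consumes N x the raw data size and
    survives the loss of N-1 copies."""
    return {"overhead": float(copies), "tolerates_failures": copies - 1}

def erasure_profile(k, m):
    """k+m erasure coding: data is split into k fragments plus m
    parity fragments; any k of the k+m fragments can rebuild the
    object, so up to m fragments may be lost."""
    return {"overhead": (k + m) / k, "tolerates_failures": m}

rep = replication_profile(3)   # classic 3-copy replication
ec = erasure_profile(10, 4)    # 10 data + 4 parity fragments
print(rep)  # 3x the capacity, survives 2 lost copies
print(ec)   # 1.4x the capacity, survives 4 lost fragments
```

The sketch shows why object stores let you pick per object: erasure coding protects large, cool data far more cheaply, while replication keeps rebuilds simple for small or hot objects.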

Finally, Caringo CEO Jonathan Ring concludes that Caringo SWARM is, in the truest sense, software-defined storage.  We see his point.  Truth be told, most file systems are fairly aged constructs, implemented in the face of engineering realities that made them the best fit for data storage at the time.  The question is whether the flattening of file systems that we have seen in the web world, the retrieval methods preferred in big data analytics, and a host of other factors are setting the stage for the replacement of the file system altogether.


Again, my thanks to Caringo for giving me the opportunity to learn more about Caringo SWARM and object storage generally.  Look for a more in-depth presentation next month, and also in a few articles that I have been asked to do for next week.


The Object of Object Storage Part 1

by Administrator on May 26, 2015

Like many folks, I suppose, I have been equally enamored of and repelled by the advent of object storage.  It seems like a great way to begin sorting out the storage junk drawer:  storing data as objects seems much more efficient than storing data as files in a hierarchical file system structure.  However, every time it seems like file systems are running out of juice and object storage is waiting in the wings to swoop in and take its rightful place on the perch, the file system folks innovate their way to greater scale or fix whatever ails them.  So, we put off getting serious about objects for another day.
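The contrast can be reduced to a toy model. In this sketch of my own devising (not SWARM's actual interface, and the patient/retention metadata is purely hypothetical), an object store is a flat namespace of opaque IDs in which descriptive metadata travels with the data itself, rather than being encoded in a directory path:

```python
import uuid

class ToyObjectStore:
    """Minimal flat-namespace object store: objects are addressed
    by opaque IDs and located by metadata, not by path."""
    def __init__(self):
        self._objects = {}

    def put(self, data, **metadata):
        """Store data with arbitrary metadata; return its object ID."""
        oid = str(uuid.uuid4())
        self._objects[oid] = {"data": data, "meta": metadata}
        return oid

    def get(self, oid):
        return self._objects[oid]["data"]

    def find(self, **criteria):
        """Locate objects whose metadata matches all given criteria."""
        return [oid for oid, obj in self._objects.items()
                if all(obj["meta"].get(k) == v for k, v in criteria.items())]

store = ToyObjectStore()
oid = store.put(b"scan results", patient="12345", retention="7y")
assert store.get(oid) == b"scan results"
assert store.find(retention="7y") == [oid]
```

Nothing here is hierarchical: there are no directories to traverse or rebalance, which is why flat namespaces scale out so readily, and why policies (like that retention tag) can ride along with each object.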

However, this past New Year’s, I resolved to get to know the object space a bit better so I could understand what the wide-eyed advocates were saying and better gauge the fit for the technology in the future.  This led me to Austin, TX a month or so back for a bit of education from Jonathan Ring, an undisputed Jedi Master of object storage and CEO of Caringo.  I will be writing a big article about this visit and what I have learned about object storage generally as we prep another issue covering data and infrastructure management.  Until that issue is launched, here is an explanation of object storage presented by Jonathan Ring using nothing but a whiteboard.  If you watch it, you may just realize that there is a lot less mystery and complexity to object storage than you might have thought.  Plus, innovations like Darkive might just make sense for archiving in a flat storage infrastructure.

Jonathan kicks things off with a discussion of the core components of Caringo SWARM, its object storage technology…

Ring goes on in the next clip to discuss object formats, a topic that I was admittedly confused about but now understand so much more clearly…

A key reason for considering object storage is to more effectively associate data protection services with specific data that need them.  Data protection isn’t one size fits all, regardless of what your storage vendor or virtualization hypervisor vendor might suggest.  In this clip, Ring discusses how readily data protection policies can be associated with data objects…


I will follow up shortly with the remaining clips from this educational chalk talk.  I want to thank Jonathan Ring, Mark Goros, their staff, and the folks at JPR Communications for affording me the opportunity to learn more about object storage and about Caringo SWARM, which I have tracked since the company was launched about a decade ago.  Thanks, Caringo.


Tape Capacity Leaps Ahead…Again!

April 10, 2015

Imagine how amused I was when, in researching the presentation I will be delivering in London this Tuesday on Business Continuity in a Non-Stop World, I ran across a guest blog by a DRaaS vendor dissing tape.  Mr. Ledbetter of Zetta argued that you needed to use a cloud-based DRaaS provider because cloud backup […]

Read the full article →

Software-Defined Storage Meets LTFS Tape

April 1, 2015

Continuing my interview from IBM InterConnect 2015 with Clod Barrera, Distinguished Engineer and Chief Technical Strategist for IBM System Storage, our conversation took an unconventional direction.  Given the propensity of SDS to “flatten” storage infrastructure (eliminating tiers of storage by replacing them with simple direct-attached storage nodes), I was curious to learn how Big Blue would bring […]

Read the full article →

Getting Ready for Another IT-SENSE Brown Bag Webinar

April 1, 2015

Mark your calendars for High Noon (Eastern Time) Wednesday 8 April — one week from today — for the next Brown Bag Webinar from IT-SENSE.  This time, we are going to tackle the topic of Avoiding Snake Oil in Software-Defined Storage.  My guest will be Anatoly Vilchinsky of StarWind Software and together we will try […]

Read the full article →

Another Perspective on Software-Defined and Hyper-Converged

March 31, 2015

At IBM InterConnect 2015, I had the opportunity to sit down with Clod Barrera, Distinguished Engineer and Chief Technical Strategist for IBM System Storage, for an ad hoc discussion of software-defined storage, hyper-converged infrastructure, and IBM’s evolving strategy for both.  We were supposed to focus on XIV, which IBM has returned to a pure software […]

Read the full article →

Thank you…Thank you…Thank you!

March 31, 2015

Muchas gracias!  Molte grazie! Merci beaucoup!  Danke schoen!  Arigato gozaimasu!  Большое спасибо!  And so forth… To everyone who made today’s webcast, A Hype-Free Guide to Hyper-Converged Infrastructure, a big success.  I am told that attendance was four times what was expected and most everyone stayed to the end.  Sponsor DataCore Software was delighted (they called me a […]

Read the full article →

From InterConnect to Edge 2015

March 31, 2015

Seems like I was just in Las Vegas covering the IBM InterConnect event.  It was a great show and a remarkable cobble of new school and old school types working together to address the needs of organizations in the face of the M-Commerce revolution.  Now that I have had some time to think about everything […]

Read the full article →