21st Century Software at IBM Edge 2015

by Administrator on May 27, 2015

One thing I really enjoyed about the just-concluded IBM Edge 2015 conference was that I got to catch up with a lot of folks in the industry and in private and public IT shops whom I do not see too often.  I will shortly be posting an interview I conducted with Ed Walsh, an old friend who now helms copy data management software vendor Catalogic Software.  I was going to shoot a new video of Rebecca Levesque, CEO of 21st Century Software, but I realized that we had some unfinished business (and video awaiting approval) since the last SHARE event.  So we dug out the video, got approval to run with it, and here it is…

21st Century Software first caught my eye when they were traveling around the country telling mainframers that their confidence in the disk-to-disk data mirroring solutions from their preferred storage hardware vendor might not be grounded in reality.  Their PowerPoint deck contained at least five screenshots of EMC customers mirroring/replicating the wrong data — or no data at all — with EMC’s SRDF, usually because the business continuity volume had been moved and no one had updated the mirror function with the new coordinates of the data.  Since mirrors are rarely checked, these companies had no way of knowing that they were replicating blank space.  21st Century’s software could detect that condition so it could be resolved.
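For readers who want the failure mode in concrete terms, here is a minimal sketch in Python of the kind of sanity check involved.  To be clear, this is my own illustration, not 21st Century's code: the volume names, the catalog layout, and the mirror-config format are all invented for the example.

```python
# Hypothetical illustration: verify that every dataset's current volume is
# actually covered by a mirror pairing. Structures are invented, not
# 21st Century Software's.

# What the replication setup *thinks* it is mirroring (source -> target)
mirror_config = {
    "VOL001": "BCV001",
    "VOL002": "BCV002",
}

# Where the catalog says each dataset actually lives today
catalog = {
    "PROD.PAYROLL.DATA": "VOL001",
    "PROD.ORDERS.DATA": "VOL003",   # moved since the mirror was configured!
}

def audit_mirrors(catalog, mirror_config):
    """Flag datasets whose current volume has no mirror pairing at all."""
    for dataset, volume in sorted(catalog.items()):
        if volume in mirror_config:
            print(f"OK:      {dataset} on {volume} -> {mirror_config[volume]}")
        else:
            print(f"WARNING: {dataset} on {volume} is not being replicated")

audit_mirrors(catalog, mirror_config)
```

In the example above, PROD.ORDERS.DATA was moved to VOL003 after the mirror was set up, so the old volume keeps getting copied while the real production data goes unprotected, which is exactly the condition those EMC screenshots documented.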

I liked the idea and liked the company even more over the years as they built more and more functionality into their software for improving the data protection and disaster recovery strategies of mainframe-using companies.  Late last year, they secured me a spot on the SHARE conference agenda to wax philosophical about mainframe DR and to talk about the vulnerabilities in recovery strategies that continue to go unaddressed.  After my talk, I used the remaining time to shoot an interview with Rebecca and her CTO.  Here it is.

 

At Edge, Rebecca updated me on the 21st Century Software product portfolio, including both the DR/VFI package that first drew me to the company and that has been steadily improved in functionality, and a product called Total Storage 1.6 (formerly ICE-PAK), which I do not know much about…yet.  From what I can glean, Total Storage provides the data management solution that EMC promised but never delivered with its ILM play a few years ago.  I am looking forward to getting with the 21st Century Software “brainiacs” on this one and will report back.


Speak of the Devil: Caringo Gets a Patent!

by Administrator on May 27, 2015

Readers of this blog may recall that I have been getting concerned about how we deal with the flattening of the storage infrastructure (the consolidation of storage tiers to just flash and disk) and still pursue must-have capabilities like data archive.  Face it: without archive, we will not be able to bend the storage cost curve in any meaningful way.

De-duplicate and compress data all you want, but that is just squeezing more junk into the junk drawer that is storage.  The only real way to free up storage capacity has always been to migrate less frequently accessed bits over to an archival repository.  Only now, with Hadoop and hyper-converged infrastructure, I wonder how we will be able to do that.
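The core of any archive play is embarrassingly simple: find the data nobody has touched in a while and move it somewhere cheaper.  Here is a bare-bones sketch in Python; the archive mount point and the 180-day threshold are assumptions for illustration, and a real HSM product would leave a stub or pointer behind rather than simply relocating the file.

```python
# Bare-bones cold-data migration sketch. ARCHIVE_ROOT and the 180-day
# threshold are illustrative assumptions, not any vendor's defaults.
import os
import shutil
import time

ARCHIVE_ROOT = "/mnt/archive"      # assumed archive tier mount point
COLD_AFTER = 180 * 24 * 3600       # 180 days, expressed in seconds

def archive_cold_files(source_root):
    """Move files not accessed in COLD_AFTER seconds to the archive tier."""
    now = time.time()
    for dirpath, _, filenames in os.walk(source_root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if now - os.stat(path).st_atime > COLD_AFTER:
                dest = os.path.join(ARCHIVE_ROOT,
                                    os.path.relpath(path, source_root))
                os.makedirs(os.path.dirname(dest), exist_ok=True)
                # A real HSM would leave a stub behind for transparent recall.
                shutil.move(path, dest)
```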

About a month ago, I visited my friends at Caringo in Austin, TX, where CEO Jonathan Ring provided me with an overview of his technology, including something he called Darkive.  Darkive is essentially archiving data to disk, then powering down the disk until the data is needed.  (More precisely, Darkive is a technology for using the power modes available in enterprise disk today, for reasons ranging from reduced energy consumption to enabling longer-term archival retention of data.)  In any case, I just learned from Storage Newsletter that Caringo has finally been granted a patent on Darkive™.
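For the curious, the general pattern is easy to sketch, though I should stress that this is not Caringo's implementation, just the basic idea expressed with the stock Linux hdparm utility and an assumed device path:

```python
# Illustrative only: park archival data on a drive, then use standard disk
# power modes to stop the spindle until the data is needed. This is NOT
# Caringo's Darkive code; device path and timeout are assumptions.
import subprocess

ARCHIVE_DEVICE = "/dev/sdb"   # hypothetical dedicated archive drive

def spin_down_now(device):
    """Put the drive into standby immediately (spindle stopped)."""
    subprocess.run(["hdparm", "-y", device], check=True)

def set_idle_timeout(device, value=241):
    """Ask the drive to enter standby on its own after idling.

    Per hdparm convention, -S 241 means roughly 30 minutes of idle time.
    """
    subprocess.run(["hdparm", "-S", str(value), device], check=True)

set_idle_timeout(ARCHIVE_DEVICE)
spin_down_now(ARCHIVE_DEVICE)
```

The patentable part, presumably, is everything around that kernel: knowing which objects can safely sleep, and waking the right drives when a read request arrives.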

Congrats to Caringo.


My Contribution to IBM Edge 2015

by Administrator on May 26, 2015

First off, and for the record, I was at IBM Edge 2015 for the entire week as a guest of IBM.  I doubt that they sprang for my room and board because of my pleasant personality, or for that matter in exchange for my skills as a fat-fingering live Tweeter.  Those things they could get from any number of folks.

Rather, I try to sweeten the pot by offering to sing for my supper — that is, to present at the IBM Technical University as a guest speaker.  At this Edge, as at the three prior iterations of the event, it was my pleasure to contribute my two centavos in exchange for a room in one of Sheldon Adelson’s nice hotels.  Maurice “Mo” McCullough made sure that I had five afternoon slots on the itinerary, while Mary Hall and Sarah Katsenmaier made sure that I lost no weight despite the long walks between my hotel room and the Venetian Convention Center venue.  Special thanks to these folks for helping to make the trip so successful.

One other person who really deserves mention is Lizbeth Ramirez Letechipia who lined me up with IBM executives and business partners so I could do more video interviews.  We are in the process of cutting up the videos now and should have them ready to post as early as next week provided IBM’s reviewers are able to give the nod to my clips.

Back to the event.

I was thoroughly impressed with the general sessions on days one and two, though the day-two general session, which featured more customers than IBMers, really went over well with me.  In particular, I liked the demonstration of evolving genomics tools that apply Big Data analytics to the problem of finding the right combination of meds for a person with a given genetic and behavioral profile.  Way cool.

Walmart also impressed in the first-day session, stressing the central role of mainframes in these days of mobile commerce and high transaction volumes.  They underscored the point that IBM was making throughout the conference regarding the Starburst Effect created by mobile user transactions: a single transaction spawns between 10 and 100 back-end processes and generates a lot of data along the way.  According to Walmart, the z13 mainframe is just what the doctor ordered to coordinate all of these processes with appropriate levels of uptime and throughput.
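The arithmetic behind that claim is worth a moment.  A quick back-of-the-envelope calculation (my figures, purely illustrative) shows why front-end transaction counts understate the real back-end load by one to two orders of magnitude:

```python
# Back-of-the-envelope Starburst math. The front-end volume is an
# illustrative assumption, not a Walmart figure.
mobile_transactions_per_day = 5_000_000

low  = mobile_transactions_per_day * 10    # 10 back-end processes each
high = mobile_transactions_per_day * 100   # 100 back-end processes each
print(f"Back-end work: {low:,} to {high:,} processes per day")
# -> Back-end work: 50,000,000 to 500,000,000 processes per day
```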

This isn’t the first time I have heard the story of the Starburst Effect, but Walmart did a good job of illustrating it with practical detail that left just about every IBM customer in the hall nodding their heads or glancing nervously around to see whether their bosses might be hearing that the “dinosaur” technology (mainframes) they had recommended kicking to the curb a few years ago is suddenly the darling of the hybrid data center and a critical element in any viable m-commerce strategy.

Bottom line:  this IBM Edge conference had a good signal-to-noise ratio and provided a content-rich environment for those seeking to plumb the technology products and expertise of Big Blue as they brave the world of x86 virtualization, data lakes, software-defined storage, and the mainframe.  The only disappointing thing to me was the lack of any mention of tape technology in the two general sessions, despite the collective productization of the Linear Tape File System, the GPFS file system, and IBM tape automation technology in a bucket called Spectrum Archive.

Re the whole Spectrum re-branding effort, I am not sure that I really like it…not that my opinion counts for much.  I first heard references to Spectrum when I interviewed an IBM engineer on this blog who mentioned the different implementation options IBM had come up with for the XIV software.  You could deploy it simply as software-defined storage (SDS) software (which is what it was before IBM joined it to a proprietary controller to create the XIV array), or in the future as a component of the IBM SAN Volume Controller, or you could buy it pre-installed on an XIV array.  Each of these alternatives, I was told somewhat jokingly, would have its own Spectrum product name.

Okay, I get the need or desire for branding consistency, but by the 400th time I heard someone stumbling over which Spectrum [insert product name here] category a given IBM product fell into, I was over it.

The smartest thing I heard at the show came in passing at the end of a TechU session.  A woman who works on System Managed Storage stated that IBM invented software-defined storage because, way back when, all of the value-add software for storage was hosted on the mainframe and all of the direct-attached storage devices were simply cabled via bus-and-tag, ESCON, or FICON to the backplane of the box.  Truer words were never spoken.

Here are my decks from IBM Edge Technical University in PDF form in case anyone wants to review them.  Just click on the deck and it should download to your browser directly.  I hope everyone will register for next year’s show, which takes place in the same venue — but in October.  Watch this space for more details.

IBM50Shades

IBMarchive

DRIBM1

DRIBM2

IBMDRP3



Interview with DataCore’s Ziya Aral Part 1

by Administrator on May 26, 2015

In case you haven’t seen the notices or been poked on Twitter, LinkedIn, or email, I am doing a webinar with DataCore Software on 3 June at 2PM EDT on the subject of Data Availability.  Registration is HERE if you have a mind to attend.

This is the third of the four webinars I agreed to do with my old friends in Ft. Lauderdale, FL, as they seek, once again, to explain the business value case for their storage virtualization (aka software-defined storage) technology, SANsymphony-V, which I happen to use in my own business.  I say “once again” because I remember clearly how DataCore had to go on defense in the late 1990s against the storage hardware vendors who did not want their value-add software moved into a server software layer.  Now, the company finds itself fighting another battle with the SDS folks in the hypervisor community who want to segregate storage virtualization products away from software-defined storage products.  The only reason I can see for making such a bogus distinction is that VMware is 87% owned by EMC.

Frankly, I don’t understand why SDS could not include everything from mainframe System Managed Storage (SMS was the grandfather of all SDS, IMHO) to Caringo’s SWARM object storage (see preceding posts).  I am absolutely flabbergasted by the decision of the software-defined storage crowd to exclude capacity management from the services that they abstract away from array controllers and place in a software layer on the server.  Why draw an artificial line there?  Just to exclude Spectrum Virtualize (the old SAN Volume Controller + XIV software from IBM) and DataCore SANsymphony from the list?
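To be clear about what I mean by capacity management in the SDS layer, here is a toy sketch of the idea: pool raw capacity from heterogeneous back-end devices and carve virtual volumes out of it, so the application never knows or cares which array the blocks live on.  The class and device names are mine, not DataCore's or IBM's.

```python
# Toy capacity-management sketch: aggregate back-end capacity into one pool
# and provision virtual volumes from it. Names are invented for illustration.
class CapacityPool:
    def __init__(self):
        self.devices = {}   # device name -> free capacity in GB
        self.volumes = {}   # volume name -> list of (device, GB) extents

    def add_device(self, name, capacity_gb):
        self.devices[name] = capacity_gb

    def provision(self, volume, size_gb):
        """Satisfy a request from whatever back-end devices have room."""
        extents, needed = [], size_gb
        for dev in self.devices:
            if needed == 0:
                break
            take = min(self.devices[dev], needed)
            if take:
                self.devices[dev] -= take
                extents.append((dev, take))
                needed -= take
        if needed:   # a real allocator would roll back the partial takes
            raise RuntimeError(f"pool exhausted; {needed} GB short")
        self.volumes[volume] = extents
        return extents

pool = CapacityPool()
pool.add_device("array_a", 500)
pool.add_device("jbod_b", 1000)
print(pool.provision("sql_data", 800))   # spans both devices transparently
```

That, in miniature, is the service the hypervisor crowd wants to carve out of the SDS definition.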

With this little bit of bile churning away in my gut, I decided to sit down with Ziya Aral, Chairman of the Board and Co-Founder of DataCore Software, to get his perspective on the software-defined storage phenom and the future of storage generally.  Here is that interview, which I heartily recommend to anyone who is thinking about jumping on the SDS/Hyper-Converged bandwagon…

In this first clip, Ziya and I discuss the evolution of storage and the appearance of “software-defined storage”…

Furthering our analysis of current storage trends, Ziya talks about the natural tendency to migrate storage into software…

Aral goes on to discuss what he sees as the drivers of virtual SAN technology…

“Going back to direct-attached storage is insane,” says Aral.  We explored this provocative, but nuanced, view in this clip…

Ziya continues to discuss the need for virtualization up and down the application stack for storage to become a resource that can be readily provisioned to workload…

The rest of this interview will be provided in the next post.  If you want to learn more about software-defined storage and hyper-converged infrastructure, especially what is required to ensure that data itself is as highly available as server components, I hope you will tune in for the talk I will give on the subject of Data Availability on 3 June at 2PM EDT.  Registration is HERE if you can spare the time to attend.

Special thanks to DataCore Software and to Ziya Aral for giving me this interview.


The Object of Objects Part 2

May 26, 2015

The name of the game these days is efficiency.  IT is supposed to deploy resources and infrastructure efficiently, respond to changing workload demands with elastic efficiency, provision data protection and archive services efficiently, utilize storage and other resources efficiently…you get it.  There is nothing new about this mantra, but it doesn’t mean we ever paid […]

Read the full article →

The Object of Object Storage Part 1

May 26, 2015

Like many folks, I suppose, I have been equally enamored and repelled by the advent of object storage.  It seems like a great way to begin sorting out the storage junk drawer — storing data as objects seems much more efficient than storing data as files in a hierarchical file system structure.  However, every time it […]

Read the full article →

Tape Capacity Leaps Ahead…Again!

April 10, 2015

Imagine how amused I was when, in researching the presentation I will be delivering in London this Tuesday on Business Continuity in a Non-Stop World, I ran across a guest blog by a DRaaS vendor dissing tape.  Mr. Ledbetter of Zetta argued that you needed to use a cloud based DRaaS provider because cloud backup […]

Read the full article →

Software-Defined Storage Meets LTFS Tape

April 1, 2015

Continuing my interview from IBM InterConnect 2015 with Clod Barrera, Distinguished Engineer and Chief Technical Strategist for IBM System Storage, our conversation took an unconventional direction.  Given the propensity of SDS to “flatten” storage infrastructure (eliminating tiers of storage by replacing them with simple direct-attached storage nodes), I was curious to learn how Big Blue would bring […]

Read the full article →

Getting Ready for Another IT-SENSE Brown Bag Webinar

April 1, 2015

Mark your calendars for High Noon (Eastern Time) Wednesday 8 April — one week from today — for the next Brown Bag Webinar from IT-SENSE.  This time, we are going to tackle the topic of Avoiding Snake Oil in Software-Defined Storage.  My guest will be Anatoly Vilchinsky of StarWind Software and together we will try […]

Read the full article →

Another Perspective on Software-Defined and Hyper-Converged

March 31, 2015

At IBM InterConnect 2015, I had the opportunity to sit down with Clod Barrera, Distinguished Engineer and Chief Technical Strategist for IBM System Storage, for an ad hoc discussion of software-defined storage, hyper-converged infrastructure, and IBM’s evolving strategy for both.  We were supposed to focus on XIV, which IBM has returned to a pure software […]

Read the full article →