If you are like me, you are still trying to get your head around the whole software-defined storage thing.  Some folks insist that it is an entirely new architecture, but at IBM's Edge show last year I ran into one of the people who manage IBM's System Managed Storage, and she shook her head wearily and noted that SMS was doing what SDS is supposed to do all the way back in 1993.  Once more, everything old is new again.

At IBM Interconnect 2016, I had the wonderful opportunity to chat with IBMers about their go-to-market strategy for software-defined storage.  Specifically, I met with Diane Benjuya, Marketing Manager for Spectrum Accelerate, and Doug Petteway, Senior Offering Manager for Spectrum Accelerate, regarding their forthcoming products.  Anyone familiar with IBM wares will recognize the root technology, XIV, which IBM purchased from its Israeli developers a few years ago and fielded as a grid storage array.  Over the years, it has become quite popular, according to both Benjuya and Petteway in the video interview below — so much so, they say, that XIV software is being ported to an x86 server to provide the basis for a Hyper-Converged Infrastructure appliance (hypervisor + server + storage + XIV).

Here is the interview.



While I thank Diane and Doug for their time and enthusiasm (despite tremendous jet lag and the last-minute invite), I was sorry that the interview left me with more questions than we had time to address properly.  I wanted to understand how IBM arrived at its definition of that amorphous term, software-defined storage.  Like many other vendors in this space, they seemed to be using the term to describe a server-side stack of storage value-add services customarily found on an enterprise array controller.  Nothing more.  I had hoped for a more robust model, especially since IBM has storage virtualization technology in SAN Volume Controller that might help to differentiate IBM's SDS stack from most of its competitors'.  VMware excluded storage virtualization from its definition of SDS, and the whole industry seems to have been unwilling to deviate from that definition.  However, being able to carve volumes of storage from a virtual pool — which you can do with a product like SVC from IBM or SANsymphony-V from DataCore — would seem to me to be very much in line with the true spirit of SDS.

Clod Barrera seemed to think so, too, when I interviewed the IBM Distinguished Engineer at an IBM event a couple of years ago.  (The video interview is here.)  Clod was speculating at the time that XIV software might make a really good complement to SVC, turning the whole thing into what we now call an HCI appliance.  But, this time around, the Spectrum Accelerate folks didn't want to go there.

Also missing was any discussion of adding file and/or object system elements to the IBM Spectrum Accelerate stack, which seems to be a likely evolutionary path for SDS going forward.  Nor was there any mention of anything like DataCore's parallelized I/O handling, which seems to make a lot more sense for speeding up raw I/O than does throwing a lot of flash at the problem.

It will be very interesting to see the Spectrum Accelerate story unfold over time.  Given the great patent work already done on XIV, and IBM’s extensive IP in storage generally, there is a lot of technology innovation they could throw at SDS without a lot of extra work.

For the record, I was IBM’s guest at IBM Interconnect 2016, where I received payment to live tweet and blog at the show.  These video interviews are, however, my own work.  They have not been edited in any way by Big Blue.



It's an issue that has been nagging at me for quite a while.  Who gets to decide what functionality belongs in a software-defined storage stack and what should belong to the array controller or the operating system?  I found my discussion on this topic with Chandra Mukhayala, IBM's Portfolio Marketing Manager for File and Object Storage, so interesting that I had to get it on video.

Here is the result.  I can’t thank Chandra enough for taking time to share his thoughts with us.  Very insightful commentary.


This interview was shot at IBM Interconnect 2016, where I was contracted to live tweet sessions and support other social media efforts. IBM did not edit this video in any way. Opinions are those of the interviewees, and hopefully — in this case — of Big Blue as well!


It isn't every day that we score an interview with the General Manager of zSystems and LinuxONE, Ross Mauri.  He is a very busy guy, working to bridge the constituencies — AppDev and Ops — within the data center while adapting IBM's mainframe narrative to the brave new world of hybrid clouds.

So, when you get the chance to pull him into a video interview, you take it.  The extraordinary thing about Mauri is how readily he switches his focus to the topic at hand and frames his ideas as though he has had a lot of time to prepare for questions that were not provided to him in advance.  What also comes through is how personable the fellow is.  You find yourself wanting to hear more of his thoughts whenever he talks.

We have divided Mauri’s interview into five short parts.  Each one contains insights and wit that you may just find useful as you build your own hybrid cloud initiative.  So, enjoy!

Ross begins by addressing the cultural gap between the mainframers and the appdev folks that exists in many data centers.  He notes how IBM Interconnect 2016 is providing a forum for the two communities to interact…



The mainframe remains a fixture of the contemporary hybrid cloud datacenter. Mauri lists some of the mainframe attributes that keep Big Iron so relevant.


This year's Interconnect covered Systems of Insight in greater depth than we have heard at prior events. A large ecosystem of vendors and technologies appears to be taking shape. We asked Mauri what the challenges are for IBM in wrangling so many players, technologies and APIs.


Finally, we asked what IBM sees itself becoming in the hybrid data center. Hardware technology innovator? Essential software and services provider? Trusted advisor? Mauri gives his view.

For the record, these interviews were recorded at IBM Interconnect 2016.  I was engaged by IBM to live tweet from some sessions at the show and the company picked up the expenses for my transportation and lodging at the show, as well as the cost of the show ticket.  This interview, the questions I posed, and the edits I made of the answers are completely my own.

Special thanks to Ross Mauri, Mary Hall, and all the gang at IBM’s Social Media Organization for making these interviews possible.


IBM Spectrum Storage Explained…

by Administrator on April 13, 2016

I have to admit, I am not very good at remembering brand names.  I am even less good at remembering brand names when the vendor changes them.  A year or two back, IBM decided to regroup and rebrand its storage offerings under the "Spectrum" umbrella.  It made for pretty artwork, but it confused me…and probably many of Big Blue's customers.

IBMers have a tendency to drop "Spectrum Accelerate" or "Spectrum Protect" or "Spectrum Virtualize" or "Spectrum XYZ" into their dialog from time to time, requiring trade press writers and bloggers to go on a Google-quest to find out which products they are talking about.  But, at IBM Interconnect 2016, Eric Herzog, Vice President of Marketing for IBM Storage Systems (and a long-time friend, even in pre-Big Blue years), took the time to sit for a video interview and straighten me out, once and for all, on the whole Spectrum thing.  Here is the interview, with Eric decked out in his colorful Hawaiian shirt!


I attended IBM Interconnect 2016 as a guest of IBM.  For the record, they covered my attendee fee, room, board and transportation and also provided a stipend for live tweeting sessions I attended at the show.  This video and its edited content are entirely my own.

Thank you, Eric, for doing the interview. And thanks for clarifying the Spectrum Storage family of products from IBM.


Cloud vs Tape? Says who? Hybrid Clouds Leverage Tape and LTFS

April 13, 2016

To all of the pundits, analysts and vendors of disk and flash who once again claim that tape is dead, killed this time by cloud, I want you all to take a deep breath.  Hold it for a second.  Release it.  Repeat three or four times. Then watch the video interview below with Shawn Brume […]

Read the full article →

Time in a Bottle — Storing Archival Data in DNA

April 13, 2016

Okay.  So we are bringing this up again after about eight or nine years.  Here is the base article behind today’s post.  Apparently, Hollywood is looking into using DNA to store digital movie data.  Cheap, capacious, durable data storage is the lure and at least one start-up is now striving to perfect the technology to […]

Read the full article →

Security in Hybrid Clouds Begins with a Mainframe

April 13, 2016

At IBM Interconnect 2016, I enjoyed picking up a conversation where I last left it with Kathryn Guarini, Vice President of Offering Management for z Systems and LinuxONE at IBM Systems.  We first met when she took me on the grand tour of the z13 platform when it was released last year.  This time, with […]

Read the full article →

Everybody is Talking Management All of a Sudden….

April 12, 2016

I am really happy about a trend I am seeing in start-ups and established players talking about the need for analytics to support the management, availability and delivery of resources like storage.  I plan to write an Infrastruggle column about it at Virtualization Review.  Please stand by. In preparation, you might want to take a […]

Read the full article →

IBM LinuxONE Rockhopper and the Enterprise Linux Ecosystem

April 12, 2016

Okay, so I admit it.  I really like what IBM is doing with Linux.  At a macro level, it is a really smart strategy for refreshing the IBM brand and associating it with the next generation of computer geeks coming through the ranks, while maintaining the strengths of IBM technology and its cadre of dedicated […]

Read the full article →

Mainframes are Coming on Strong in a Hybrid Cloud World

April 11, 2016

I have said many times here and in other articles and columns that there is really very little “new” in the concept of cloud.  I have also quipped to anyone who would listen that mainframes are now and always have been clouds “in a box.”  I didn’t need to attend IBM Interconnect 2016 to have […]

Read the full article →