Again with the Data Lifecycle Management

by Administrator on January 20, 2015

Okay, so I am being swept up a bit by the pro-French meme that has found its way into social media of late.  I prepared a slide deck a while back called Les Data Miserables that my sponsor asked me not to use during my last trip to Europe, so I have some art that is available for use on this blog.

I have chosen this masterpiece because it goes to the heart of my biggest problem with most IT discussions today:  the failure to consider (much less resolve) the two biggest issues we face in contemporary computing — lack of infrastructure management and lack of data management.  Simply put, until we stop putting mostly unmanaged data on mostly unmanaged infrastructure, IT is going to cost too much.  All the virtualization, all the software-defined everything, all the clouds in the world aren’t going to change that fact.

So, in my humble opinion, every hardware pitchman who tries to sell me on his or her latest rig, and every software pitchman who tries to sell me on his or her latest code, is behind the eight ball from the outset unless he or she can point to some meaningful contribution the technology will make toward helping me manage the hardware layer of my shop and herd the data cats that users insist on pumping into every nook and cranny of my storage.

That preamble is necessary for readers to understand a rather serious issue that prevents me from jumping all over the value case for IBM’s new z13 mainframe.  They opened the door in their analyst pre-brief to a discussion of how they planned to wrangle all the data that would be processed through their rig — they even called the slide Data Lifecycle Management — but they left me hanging when it came to a real definition of the term or an operational description of how it would work.  Here’s the slide…

[Slide: IBM's "Data Lifecycle Management"]

As you can see, they use the words data lifecycle management.  However, precious little information was provided to explain this diagram.  Noticeably absent is any of the stuff I used to see on data management slides from IBM in the earlier part of the Aughties:  where are the references to archive and backup?  What about the four components of a “real” lifecycle (a slide I saw IBM use many times when EMC was trying to peddle information lifecycle management woo in the early Aughties):  (1) a data classification scheme, (2) a storage classification scheme, (3) a policy engine and (4) a data mover?  Without these four ingredients, IBM would rant, you didn’t have a data lifecycle management solution.  (EMC was only selling a data mover.)
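
Since those four ingredients get name-checked far more often than they get explained, here is a minimal sketch, in Python, of how they fit together.  Every class name, tier and policy threshold below is hypothetical (my own illustration, not any vendor's actual product or API).

```python
# A minimal sketch of the four ingredients of "real" data lifecycle
# management. All names, tiers and thresholds here are hypothetical,
# invented for illustration -- not any vendor's actual product or API.

from dataclasses import dataclass

@dataclass
class DataObject:
    name: str
    data_class: str        # (1) data classification, assigned to the object
    days_since_access: int

# (2) a storage classification scheme: tiers ranked by cost and speed
STORAGE_CLASSES = ("flash", "disk", "tape_archive")

def policy_engine(obj: DataObject) -> str:
    """(3) a policy engine: maps data class + age to a storage class."""
    if obj.data_class == "transactional" and obj.days_since_access < 30:
        return "flash"
    if obj.days_since_access < 365:
        return "disk"
    return "tape_archive"

def data_mover(obj: DataObject, target: str) -> None:
    """(4) a data mover: relocates the bits to the chosen tier."""
    print(f"migrating {obj.name} -> {target}")

for obj in (DataObject("orders.db", "transactional", 3),
            DataObject("q3_2009_report.pdf", "reference", 400)):
    data_mover(obj, policy_engine(obj))
```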

Are we changing the terminology?  Is this a special use case just for Big Data?  Or just for the data that is used to perform analytics?  Consider this another bleg:  we need more information on this critical component of IBM’s z13 story.  I just want to understand what you are doing with the data.

Here’s the problem.  Most of the x86 Big Data analytics platforms use clustered servers and storage to stand up “all data all the time.”  This data is never backed up, only replicated to more and more nodes so that, if one node fails, you have a copy.  After a nodal failure, you install new node hardware and copy the data yet again.  The whole thing strikes me as wasteful and expensive.  It is a strategy invented by a child who wants what he wants when he wants it, without any consideration given to longer-term cost.  Move that wasteful x86 clustering model to a z13 and I should think you will be talking about big money.  The only data protection/mainframe protection strategy I heard from the stage at the launch was a cluster-with-failover solution that just sells more mainframes.  Is that really what we are going for with respect to the data protection part of your Big Data story?
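
To put rough numbers behind the "wasteful and expensive" charge, here is a back-of-the-envelope comparison.  It assumes the Hadoop/HDFS-style default of three replicas; the dollar figures are placeholders I made up purely for illustration.

```python
# Back-of-the-envelope cost of "all data all the time" clustering versus
# a conventional protected archive. Prices are made-up placeholders.

logical_tb = 500                    # data you actually have
replicas = 3                        # HDFS-style default replication factor
raw_tb = logical_tb * replicas      # raw capacity the cluster must provide

cost_per_disk_tb = 100              # hypothetical $/TB, clustered disk
cost_per_tape_tb = 10               # hypothetical $/TB, tape

cluster_cost = raw_tb * cost_per_disk_tb
disk_plus_tape = logical_tb * cost_per_disk_tb + logical_tb * cost_per_tape_tb

print(f"{logical_tb} TB of data needs {raw_tb} TB raw in the cluster")
print(f"cluster spend:            ${cluster_cost:,}")
print(f"one disk copy + one tape: ${disk_plus_tape:,}")
```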

What about data preservation?  You know, what we used to call archive.  It seems to me that your latest innovations with respect to tape technology, including the demonstrations of 154TB and 185TB tape cartridges announced around IBM Edge last May, would make you hot to go with tape-based archive.  Instead, your data lifecycle management discussion was noticeably devoid of any mention of archive.  I recently speculated that architectures like Hadoop might force us to look for an “archive in place” strategy that did not require data replication or data movement to an archive repository at all.  Alternatively, IBM’s interest in “FLAPE” (flash plus tape) sounded like some smart people were considering how to cobble together a storage paradigm that would solve the archive and production data requirements in an elegant way.  But, again, no mention of flape, tape or archive.
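
For readers wondering what FLAPE might look like in practice, here is a toy sketch of the idea as I understand it:  the index and metadata stay on flash for fast lookup, while the bulk bits sit on tape and are recalled on demand.  The data structures are hypothetical and mine alone, not IBM's design.

```python
# A toy sketch of the FLAPE (flash plus tape) idea: locators on flash,
# bulk data on tape. Structures are hypothetical, for illustration only.

flash_index = {}   # object name -> (tape_id, position, size): flash-resident
tape_shelf = {}    # tape_id -> {position: payload}: stands in for tape media

def archive(name: str, payload: bytes, tape_id: str, position: int) -> None:
    """Write bulk data to tape; keep only a small locator on flash."""
    tape_shelf.setdefault(tape_id, {})[position] = payload
    flash_index[name] = (tape_id, position, len(payload))

def read(name: str) -> bytes:
    """The lookup is flash-fast; the payload costs a tape mount and seek."""
    tape_id, position, _size = flash_index[name]
    return tape_shelf[tape_id][position]

archive("2009_audit_logs.tar", b"...cold bits...", "TAPE0042", 7)
print(read("2009_audit_logs.tar"))
```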

Frankly, I am dismayed and confused on this one.  How can you call what you represent on the slide data lifecycle management at all?  Formal “bleg” — please explain it to me, because I am a little dense.  I will reprint your explanation here.

z13 Virtual Machine Hosting: Still a Bit Cloudy

by Administrator on January 20, 2015

Okay, so I have been doing something of an excited jig around the new IBM System z mainframe, the z13.  I am impressed by how much performance, capacity and durability the IBMers have engineered into the rig.  And I am impressed by the economics we have been told to expect from the box.  Imagine standing up 8000 virtual machines in this platform with complete insulation:  one app crashes, the whole stack doesn’t come tumbling down.  (Try claiming that with VMware or Hyper-V!)

If IBM’s claims hold water, this could actually make the whole cloud thing viable…or at least, less non-viable.

Still, I have a few concerns before I can fully endorse the rig.  Let me put them in the form of a “bleg” (a blog-based beg for information).  Without the following info, it will be difficult for me to assess fully the claims proffered by IBM at their recent launch party.

First, with respect to virtualization and hosting, your speaker on stage at the launch noted that z13 could stand up 8000 VMs using KVM, MVS or z/OS.  Please clarify the differences in these methods for “standing up” virtual machines, not only for me but for the great unwashed who know only what VMware tells them about hypervisor operations and virtualization.

Next, the 8000 VM claim seems to be tied to Linux workloads.  How do you want to handle Microsoft or VMware workloads?  Do we need to migrate these VMs into KVM?  Are you going to host these “alien” VMs on a blade server a la zEnterprise slideware of a few years ago?

Next, the real cost of IT infrastructure is storage.  Not servers.  Not processors.  Not memory.  Not even networks.  Storage.  Spinning rust, mixed with a bit of Mylar and flash.  There is zero mention of the storage component of your z13 architecture that I can find, other than a faster FICON fabric connection.  So, clearly, to assess the economics of your cloud computing story, we need info on the what and the how with respect to storage.

Last year, I attended the Edge conference in Las Vegas.  (Hope to again this year.)  Your storage big brains were talking about several different storage solutions, from traditional DASD (but maybe with some “smart” XIV arrays or even some flash rigs), to virtualized storage using SAN Volume Controller (maybe with some XIV software functionality added), to a sort of tossed salad of technologies supervised by an uber management console.  Are any of these kits finding their way into your diagrams for z13 VM hosting environments or clouds?  When will we get the information on this?

Bottom line:  I want to believe.  I want to be the smartass in the room during the EMC…er, VMware…sales pitch who counters their nonsensical narrative about fault-tolerant, highly affordable and available virtual SAN clusters behind VMware vSphere hardware/software stacks with a pithy assertion that they are trying to do with x86 tinkertoys what Big Blue has already done with Big Iron.  I want to say that IBM has had 30+ years to work the bugs out of multi-tenant hosting and workload virtualization and that z13 is the ultimate expression of all they have accomplished.  I want to watch smoke billow out of Emperor Vimware’s ears and to watch Evil Machine Corp’s Disk Vader hang his head in shame.

Can you help me out here?  This slide looks pretty but lacks any meat to go with the potatoes.

[Slide: z13 economics]

Still Loving Tape in 2015

by Administrator on January 17, 2015

Regular readers of this blog probably know too well my long-standing affection for tape technology.  I know.  I know.  Cloud (mostly disk-based) storage is getting pretty cheap as the price wars between the industrial farmers in the cloud — Google, AWS and Microsoft — continue to exert downward pressure.  That’s all good, until it goes pear-shaped for one of the aforementioned providers.  When profit-free quarters fail to impress shareholders, there will be an adjustment.  And all of the little guys that source their cloudy storage from the big vendors will hit a wall.

But enough doom and gloom.  I like where tape is going.  Huge capacity, good speeds and feeds, predictable and consistent operation and jitter-free streaming, low cost per GB, low impact on the environment, the whole enchilada.  I also like how AWS and Google have admitted that they are themselves using tape in the back end.  Heck, Google has conceded publicly that its email service would have been non-recoverable in a 2011 outage had it not been for good old tape.

The coatings have improved, with Barium Ferrite providing a springboard for huge capacity improvements in the same form factor cartridge and tape length.  Robotics are getting better and cheaper.  The software for media lifecycle management has improved significantly.  This is a technology that is firing on all thrusters.

The only problem is what Spectra Logic’s Matt Starr likes to call the Carbon Robot (the operator, that is).  One thing a tape can’t handle is being dropped onto a hard floor or other hard surface.  When that happens, all bets are off when it comes to durability, reliability, performance and capacity.  That’s why, if you drop a tape, you owe it to your company and to the technology to set the “shocked” tape to one side.  Don’t use it.  You are asking for trouble.

Seems kind of sad that, for all the high-tech innovation that has been invested in the media and drives, a stupid thing like a shocked tape can screw the pooch.  Sort of Apollo 13-ish, for those old enough to remember the story or to have seen the Ron Howard film.  A simple procedure for stirring the oxygen tanks caused one to explode and put the crew in jeopardy.

Anyway, for a technology with so much going for it, it is a shame that something as simple as accidentally dropping the media, then using it anyway, will shortly put up to 185TB at risk — and that is without compression.  That is pretty bad.

So, what is required to idiot-proof the media cartridge?  What will it take to let the admin or operator know when a cartridge has been dropped or otherwise shocked and should not be used?
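
To make the ask concrete, here is a generic sketch of what that idiot-proofing could look like from the library management software side:  check a shock indicator at mount time and quarantine any flagged cartridge.  This is my own guess at a workflow, not a description of how ShockSense actually works.

```python
# A generic sketch of shock-aware mount logic in library management
# software. The indicator check is a stand-in; this is not ShockSense's
# actual design, just an illustration of the workflow.

QUARANTINE: set[str] = set()

def shock_flag_tripped(cartridge_id: str) -> bool:
    """Stand-in for reading a mechanical/electronic shock indicator."""
    return cartridge_id in {"BAD001"}   # hypothetical tripped cartridge

def mount(cartridge_id: str) -> bool:
    if cartridge_id in QUARANTINE or shock_flag_tripped(cartridge_id):
        QUARANTINE.add(cartridge_id)
        print(f"{cartridge_id}: shock flag tripped; set it aside, do not use")
        return False
    print(f"{cartridge_id}: mounted")
    return True

mount("OK0001")    # clean cartridge mounts normally
mount("BAD001")    # dropped cartridge is quarantined
```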

I had the pleasure of talking about this issue with Wayne Tolliver, formerly of Spectra Logic and now CTO of his own firm, ShockSense Enterprises LLC, while I was doing standup comedy at Storage Decisions in November.  I learned about his fix for the problem and committed this interview to tape.  Here it is for you to digest.

ShockSense is a great idea:  simple in its conception and implementation, brilliant in its ease of use and non-disruptive integration into tape operations.  I love this freaking idea and I am jealous of Wayne for dreaming it up.  Even so, I wish him every success, both in finding investors and in garnering the support of a tape industry that would be asinine not to leverage ShockSense technology to fix tape’s one last vulnerability.

I will be happy to pass along Wayne’s contact information to anyone who wants to explore the possibilities.  Contact me.

Special thanks to Wayne Tolliver, Chief Technology Officer for ShockSense Enterprises LLC, for his creativity and humility.  You are a credit to the American spirit of invention and innovation…as well as being a pretty great guy.

I shot a bit of video of the internals of the z13 Mainframe when I attended the launch party that IBM convened at the Jazz at Lincoln Center in New York City.  Here is what can best be called a few beauty passes for all the gear heads out there.

Note:  I couldn’t get anyone from IBM to tell me whether the kit is tricked out with blue neon lights like EMC storage gear.  If so, it will make a very attractive display at corporate cocktail parties.

Hope you enjoy it.  More information to come.

A New Year and a New Enterprise Compute Platform: Welcome z13

January 14, 2015

It is perhaps appropriate that my first post of 2015 covers the first truly exciting technology news of the year:  the introduction today of the latest IBM System z Mainframe, the z13. I am in New York City today and will shortly head over to Lincoln Center to attend the formal launch event.  Expect […]

Read the full article →

Happy Thanksgiving to Our Loyal Readers

November 25, 2014

We have been busy with travel and development, working on a re-boot of the Data Management Institute to coincide with a free on-line video-based training program called Storage Fundamentals which we hope to complete by end of January.  Several sponsors have joined in and we are looking for additional support from the disk array crowd […]

Read the full article →

My Bad…

October 14, 2014

I inadvertently said that our next IT-SENSE webinar was tomorrow, Wednesday, at noon.  It is in fact on Thursday at noon EST.  I am overly energized, I guess, and anxious to get going with this discussion of the vulnerabilities that are being encouraged by agile mainframe data center hype.  Please join us on THURSDAY at […]

Read the full article →

High Noon on Agile Mainframe Data Centers

October 14, 2014

The agile mainframe data center idea is being promoted by everyone from CA Technologies to IBM to the many hardware and software peddlers that crowd the increasing number of mainframe oriented events happening around the world these days.  (Yes, I said increasing number, in recognition that mainframes are once again mainstream for a number of […]

Read the full article →

Agile: The Unicorn of Contemporary IT

October 14, 2014

There is nothing wrong with the idea of agile computing.  In fact, I would argue that it reflects a set of values and goals that have defined my role in IT over the years. Who doesn’t want an IT service that responds quickly to business needs, one that can turn on a dime as the […]

Read the full article →

Miss Me?

September 3, 2014

August was a busy month and just whipped by.  I did several webcasts for 1105 Media/Virtualization Review/Redmond Magazine.  Wrote a bunch of articles and columns for the trade press and took a staycation that found me mending and painting and building — basically catching up on the honey-do list. Now that September has arrived, I […]

Read the full article →