Tape in 2015: Still a Comer

by Administrator on January 27, 2015

I recently read a blog at Storage Switzerland that got me a little aggravated.  Even more aggravating is the fact that my recent switchover from BlackBerry to a Samsung Galaxy Note 4 “phablet” left behind my link to the irritating piece.  I have been all over George’s site (he is the proprietor of Storage Switzerland) and can’t find the blog.  I only know that it had something to do with tape and backup and possibly clouds, and that it made some assertions I couldn’t abide.  Oh well, we shall blog war another time, Mr. Crump.  But I still want to offer some thoughts here about tape.

I noted a couple of posts back that Wayne Tolliver’s ShockSense was pretty cool tape technology — for those who didn’t watch the video, Wayne has patented a little sensor that attaches to the bar code label on a tape cartridge and provides a visual indicator when a tape has been dropped or otherwise shocked.  That is cool because, if implemented, it could eliminate the one remaining rational complaint about tape today:  the propensity of users to employ tapes that have been damaged by improper handling, especially when the user is unaware that the mishandling has occurred.  IMHO, Tolliver is on to something with relevance not only to tape but to any kind of shock-sensitive merchandise — a simple indication of a potential issue so that remedial action can be taken.

I was talking to Rich Gadomski at Fujifilm the other day and we agreed that tape made a lot of inroads into the marketplace last year.  Rich is always a wealth of information, and he didn’t disappoint this time.  He alerted me to a document from the Tape Storage Council that dropped in December, while I was down with the flu.  It summarized the trends that are fueling tape technology’s current renaissance and future prospects.  I found it an interesting read and wanted you to have the chance to see it here.  The proper approach would be to just link to the Tape Storage Council website and have you go there to download the document.  HERE IS THAT LINK.  (Note that the page would not open for me just now when I went there.  The site may be down for maintenance.)

The other option is to make it easy for you to download and read from my blog.  I hope this is okay with the Tape Storage Council, as I did not ask their permission.

 

2014 Tape Storage Council Memo_FINAL

I have nits to pick with a reference or two in the document, but it provides a pretty complete summary of the tape capabilities and economics that give the technology a long runway going forward in smart IT shops.  (I wish they would lose the references to the widely discredited Digital Universe study by IDC.  Truth be told, the growth of new digital data doesn’t drive squat.  It is the failure to manage that data, or the adoption of infrastructure that replicates that data an obscene number of times, that drives storage capacity demand.)

But I digress.  The document summarizes some announcements deemed to be milestones by the Council members.  These included:

  •  On Sept. 16, 2013 Oracle Corp announced the StorageTek T10000D enterprise tape drive. Features of the T10000D include an 8.5 TB native capacity and data rate of 252 MB/s native. The T10000D is backward read compatible with all three previous generations of T10000 tape drives.
  • On Jan. 16, 2014 Fujifilm Recording Media USA, Inc. reported it has manufactured over 100 million LTO Ultrium data cartridges since its release of the first generation of LTO in 2000. This equates to over 53 thousand petabytes (53 exabytes) of storage and more than 41 million miles of tape, enough to wrap around the globe 1,653 times.
  • On April 30, 2014, Sony Corporation announced that it had independently developed a soft magnetic underlayer with a smooth interface using sputter deposition and created a nano-grained magnetic layer with fine magnetic particles and uniform crystalline orientation. This layer enabled Sony to successfully demonstrate the world’s highest areal recording density for tape storage media, 148 Gb/in2. This areal density would make it possible to record more than 185 TB of data per data cartridge.
  • On May 19, 2014 Fujifilm in conjunction with IBM successfully demonstrated a record areal data density of 85.9 Gb/in2 on linear magnetic particulate tape using Fujifilm’s proprietary NANOCUBIC™ and Barium Ferrite (BaFe) particle technologies. This breakthrough in recording density equates to a standard LTO cartridge capable of storing up to 154 terabytes of uncompressed data, making it 62 times greater than today’s current LTO-6 cartridge capacity and projects a long and promising future for tape growth.
  • On Sept. 9, 2014 IBM announced LTFS LE version 2.1.4, extending LTFS (Linear Tape File System) tape library support.
  • On Sept. 10, 2014 the LTO Program Technology Provider Companies (TPCs), HP, IBM and Quantum, announced an extended roadmap which now includes LTO generations 9 and 10. The new generation guidelines call for compressed capacities of 62.5 TB for LTO-9 and 120 TB for LTO-10, and include compressed transfer rates of up to 1,770 MB/second for LTO-9 and 2,750 MB/second for LTO-10. Each new generation will include read-and-write backward compatibility with the prior generation as well as read compatibility with cartridges from two generations prior, to protect investments and ease tape conversion and implementation.
  • On Oct. 6, 2014 IBM announced the TS1150 enterprise drive. Features of the TS1150 include a native data rate of up to 360 MB/sec versus the 250 MB/sec native data rate of the predecessor TS1140 and a native cartridge capacity of 10 TB compared to 4 TB on the TS1140. LTFS support was included.
  • On Nov. 6, 2014, HP announced a new release of StoreOpen Automation that delivers a solution for using LTFS in automation environments with Windows OS, available as a free download. This version complements the existing Mac and Linux versions and helps simplify the integration of tape libraries with archiving solutions.

Reading this list, I found myself recalling the scene from Ghostbusters (which was also re-released in theatres on Labor Day Weekend in 2014) after the team uses their proton packs and believes that they have destroyed Gozer:

Dr. Ray Stantz: We’ve neutronized it, you know what that means? A complete particle reversal.
Winston Zeddemore: We have the tools, and we have the talent.
Dr. Peter Venkman: It’s Miller time!

So much for that little insight on my misspent youth.  Bottom line:  tape is looking pretty good these days, which is why it kind of irritates me when (1) tape is conflated with backup and (2) the statement is made that tape is giving way to clouds.  Here’s the rub.

Tape and backup are two different things.  Just because backups have often been made to tape media doesn’t mean that the problems of backup have much to do with tape technology.  Backup was always a flawed enterprise:  backup software vendors were trying to automate data protection (against data loss or corruption owing to any number of causes) and ran into just about every hurdle ever imagined.  Servers were too busy to run a data replication process, or to serve data up over an I/O port.  Applications, when operating, didn’t allow data to be copied at all.  Lots of servers introduced a need to superstream data to the tape target, and as shorter tape jobs finished, the superstream unraveled, extending the time required to take the backup.  Data could not be submitted to the tape drive at the jitter-free and persistent clip that the drive wanted.  The list goes on.  None of these things had anything to do with tape; they had to do with the backup application and the way it interoperated with the production environment.  Conflating the two makes me mad.

Truth is, today you don’t need backup software.  Oops.  I said it.  With LTFS from IBM, you could just copy the entire file system directly to tape media without specialty backup containers.  With object storage, you could simply write object name and metadata into one track of a tape and the corresponding data to another track — LTFS on steroids.
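To make the LTFS point concrete, here is a minimal sketch of what “backup without backup software” looks like once a cartridge has been formatted with LTFS and mounted like any other file system.  The source path and mount point are my own hypothetical examples, not anything IBM ships:

```python
import shutil
from pathlib import Path

# Hypothetical paths: a production file share and an LTFS-mounted cartridge.
SOURCE = Path("/data/projects")   # the file system tree you want to preserve
LTFS_MOUNT = Path("/mnt/ltfs")    # where the LTFS-formatted tape is mounted

def copy_to_tape(source: Path, tape_mount: Path) -> Path:
    """Copy a directory tree to an LTFS-mounted tape as ordinary files.

    No proprietary backup container is involved: the cartridge ends up
    holding the same directories and files as the source, readable by
    anything that can mount LTFS.
    """
    destination = tape_mount / source.name
    shutil.copytree(source, destination, dirs_exist_ok=True)
    return destination

if __name__ == "__main__":
    written_to = copy_to_tape(SOURCE, LTFS_MOUNT)
    print(f"Copied {SOURCE} to {written_to}")
```

That is really the whole argument:  once the tape presents itself as a file system, the “backup application” collapses into a copy command plus whatever record you keep of what you copied and when.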

Anyway, hating on backup has always been leveraged by the disk crowd (and now some of the flash kids) to hate on tape technology, much in the way that VMware blamed hosted application workload performance issues on “legacy disk” — another assertion that fails the smell test.  It should stop, and analysts need to stop taking money from disk guys to say that backup problems are due to the inadequacies of tape technology.

Cloud replacing tape is another bogus assertion.  For one thing, the industrial farmers of the cloud world — Amazon, Microsoft and Google — all use tape in the course of operating their storage clouds.  Google was reluctant to admit it for the longest time, but I heard a great presentation by one of their tape mavens at Fujifilm’s conference last September in NYC, and the guys at The Register did a great write-up on it. (HERE)

Moreover, there are now cloud storage services, including dternity, that specialize in using tape technology for cloud storage and archiving.  The service was officially announced at the NAB show in 2014; here is a video interview shot at that event…

 

 

I am planning to head out to NAB this year and to give a talk about tape.  I look forward to hooking up with the folks at Fujifilm, dternity and perhaps a few others to see what the tape mongers will be bragging about this year.  For now, assertions that clouds kill tape are just as stupid as all the other “tape is dead” lines we have heard over the years.

Watch this space.


Again with the Data Lifecycle Management

by Administrator on January 20, 2015

Okay, so I am being swept up a bit by the pro-French meme that has found its way into social media of late.  I prepared a slide deck a while back called Les Data Miserables that my sponsor asked me not to use during my last trip to Europe, so I have some art that is available for use on this blog.

I have chosen this masterpiece because it goes to the heart of my biggest problem with most IT discussions today:  the failure to consider (or in any way resolve) the two biggest issues we face in contemporary computing — lack of infrastructure management and lack of data management.  Simply put, until we stop putting mostly unmanaged data on mostly unmanaged infrastructure, IT is going to cost too much.  All the virtualization, all the software-defined, all the clouds in the world aren’t going to change that fact.

So, in my humble opinion, every hardware pitchman who tries to sell me on his or her latest rig, and every software pitchman who tries to sell me on his or her latest code, is behind the eight ball from the outset, unless he or she can point to some meaningful contribution the technology will make to managing the hardware layer of my shop and to herding the data cats that users insist on pumping into every nook and cranny of my storage.

That preamble is necessary for readers to understand a rather serious issue that prevents me from jumping all over the value case for IBM’s new z13 mainframe.  They opened the door in their analyst pre-brief to a discussion of how they planned to wrangle all the data that would be processed through their rig — they even called the slide Data Lifecycle Management — but they left me hanging when it came to a real definition of the term or an operational description of how it would work.  Here’s the slide…

[Slide: IBM z13 “Data Lifecycle Management”]

As you can see, they use the words data lifecycle management.  However, precious little information was provided to explain this diagram.  Noticeably absent is any of the stuff I used to see on data management slides from IBM in the earlier part of the Aughties:  where are the references to archives and backups?  What about the four components of a “real” lifecycle (a slide I saw IBM use many times when EMC was trying to peddle information lifecycle management woo in the late 1990s):  (1) a data classification scheme, (2) a storage classification scheme, (3) a policy engine and (4) a data mover?  Without these four ingredients, IBM would rant, you didn’t have a data lifecycle management solution.  (EMC was only selling a data mover.)
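For readers who never saw those old slides, here is a minimal sketch of how those four ingredients are supposed to fit together.  Every name, class and number below is a hypothetical illustration of the concept, not code from IBM, EMC or anyone else:

```python
from dataclasses import dataclass

# (1) Data classification scheme: what the data is and how it must be treated.
@dataclass
class DataClass:
    name: str              # e.g. "finished-project-files"
    retention_years: int
    access_profile: str    # "hot", "warm" or "cold"

# (2) Storage classification scheme: what each tier costs and is good for.
@dataclass
class StorageClass:
    name: str              # e.g. "flash", "disk", "tape-archive"
    cost_per_tb: float     # illustrative numbers only
    serves: str            # the access profile this tier suits best

# (3) Policy engine: rules that map a data class to a storage class.
def policy_engine(data: DataClass, tiers: list[StorageClass]) -> StorageClass:
    """Pick the cheapest tier whose access profile matches the data."""
    candidates = [t for t in tiers if t.serves == data.access_profile]
    return min(candidates, key=lambda t: t.cost_per_tb)

# (4) Data mover: actually relocates the bits (stubbed out here).
def data_mover(dataset: str, target: StorageClass) -> None:
    print(f"Migrating {dataset} to {target.name}")

if __name__ == "__main__":
    tiers = [
        StorageClass("flash", cost_per_tb=800.0, serves="hot"),
        StorageClass("disk", cost_per_tb=120.0, serves="warm"),
        StorageClass("tape-archive", cost_per_tb=10.0, serves="cold"),
    ]
    old_project = DataClass("finished-project-files", retention_years=7,
                            access_profile="cold")
    data_mover(old_project.name, policy_engine(old_project, tiers))
```

Strip out any one of the four and you are back to what EMC was selling:  a data mover with no idea of what it is moving or why.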

Are we changing the terminology?  Is this a special use case just for Big Data?  Or just for the data that is used to perform analytics?  Consider this another bleg:  we need more information on this critical component of IBM’s z13 story.  I just want to understand what you are doing with the data.

Here’s the problem.  Most of the x86 Big Data analytics platforms use clustered servers and storage to stand up “all data all the time.”  This data is never backed up, only replicated to more and more nodes so that if one node fails you have a copy.  After a nodal failure, you install new node hardware and copy the data again.  The whole thing strikes me as wasteful and expensive.  It is a strategy invented by a child who wants what he wants when he wants it, without any consideration given to longer term cost.  Move that wasteful x86 clustering model to a z13 and I should think you will be talking about big money.  The only data protection/mainframe protection strategy I heard from the stage at the launch was a cluster-with-failover solution that just sells more mainframes.  Is that really what we are going for with respect to the data protection part of your Big Data story?
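To put a rough number on “wasteful,” here is a back-of-the-envelope sketch of the replicate-everything model.  The dataset size, replication factor and node count are made-up inputs; the point is simply that raw capacity scales with the copy count, and every node failure triggers another wholesale copy:

```python
def raw_capacity_tb(dataset_tb: float, replication_factor: int) -> float:
    """Raw capacity consumed when every byte is kept on N nodes at once."""
    return dataset_tb * replication_factor

def recopy_after_failure_tb(raw_tb: float, node_count: int) -> float:
    """Rough volume that must be re-replicated when one node dies,
    assuming the raw data is spread evenly across the nodes."""
    return raw_tb / node_count

if __name__ == "__main__":
    dataset_tb = 500           # hypothetical analytics data set
    replication_factor = 3     # a common default in these clusters
    node_count = 50

    raw = raw_capacity_tb(dataset_tb, replication_factor)
    print(f"{dataset_tb} TB of actual data consumes {raw:.0f} TB of raw capacity")
    print(f"One node failure forces roughly "
          f"{recopy_after_failure_tb(raw, node_count):.0f} TB to be copied again")
```

Move those multipliers onto mainframe-class storage and it is not hard to see where the big money comes in.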

What about data preservation?  You know, what we used to call archive.  It seems to me that your latest innovations with respect to tape technology, including the 154 TB and 185 TB tape cartridge demonstrations announced around IBM Edge last May, would make you hot to go with tape-based archive.  Instead, your data lifecycle management discussion was noticeably devoid of any mention of archive.  I recently speculated that architectures like Hadoop might force us to look for an “archive in place” strategy that did not require data replication or data movement to an archive repository at all.  Alternatively, IBM’s interest in “FLAPE” (flash plus tape) sounded like some smart people were considering how to cobble together a storage paradigm that would solve the archive and the production data requirements in an elegant way.  But, again, no mention of flape, tape or archive.

Frankly, I am dismayed and confused on this one.  How can you call what you represent on the slide data lifecycle management at all?  Formal “bleg” — please explain it to me, because I am a little dense.  I will reprint your explanation here.


z13 Virtual Machine Hosting: Still a Bit Cloudy

by Administrator on January 20, 2015

Okay, so I have been doing something of an excited jig around the new IBM System z mainframe, the z13.  I am impressed by how much performance, capacity and durability the IBMers have engineered into the rig.  And I am impressed by the economics we have been told to expect from the box.  Imagine standing up 8000 virtual machines in this platform with complete insulation:  one app crashes, and the whole stack doesn’t come tumbling down.  (Try claiming that with VMware or Hyper-V!)

If IBM’s claims hold water, this could actually make the whole cloud thing viable…or at least, less non-viable.

Still, I have a few concerns before I can fully endorse the rig.  Let me put them in the form of a “bleg” (a blog-based beg for information).  Without the following info, it will be difficult for me to assess fully the claims proffered by IBM at their recent launch party.

First, with respect to virtualization and hosting, your speaker on stage at the launch noted that the z13 could stand up 8000 VMs using KVM, MVS or z/OS.  Please clarify the differences in these methods for “standing up” virtual machines, not only for me but for the great unwashed who know only what VMware tells them about hypervisor operations and virtualization.

Next, the 8000 VM claim seems to be tied to Linux workloads.  How do you want to handle Microsoft or VMware workloads?  Do we need to migrate these VMs into KVM?  Are you going to host these “alien” VMs on a blade server a la the zEnterprise slideware of a few years ago?

Next, the real cost of IT infrastructure is storage.  Not servers.  Not processors.  Not memory.  Not even networks.  Storage.  Spinning rust, mixed with a bit of mylar and flash.  There is zero mention of the storage component of your z13 architecture that I can find, other than a faster FICON fabric connection.  So, clearly, to assess the economics of your cloud computing story, we need info on what and how with respect to storage.

Last year, I attended the Edge conference in Las Vegas.  (Hope to again this year.)  Your storage big brains were talking about several different storage solutions, from traditional DASD (but maybe with some “smart” XIV arrays or even some flash rigs), to virtualized storage using SAN Volume Controller (maybe with some XIV software functionality added), to a sort of tossed salad of technologies supervised by an uber management console.  Are any of these kits finding their way into your diagrams for z13 VM hosting environments or clouds?  When will we get the information on this?

Bottom line:  I want to believe.  I want to be the smartass in the room during the EMC…er, VMware…sales pitch who counters their nonsensical narrative about fault tolerant, highly affordable and available virtual SAN clusters behind VMware vSphere hardware/software stacks with a pithy assertion that they are trying to do with x86 tinkertoys what Big Blue has already done with Big Iron.  I want to say that IBM has had 30+ years to work the bugs out of multi-tenant hosting and workload virtualization and that the z13 is the ultimate expression of all that you have accomplished.  I want to watch smoke billow out of Emperor Vimware’s ears and to watch Evil Machine Corp’s Disk Vader hang his head in shame.

Can you help me out here?  This slide looks pretty but lacks any meat to go with the potatoes.

[Slide: z13 economics]


Still Loving Tape in 2015

by Administrator on January 17, 2015

Regular readers of this blog probably know all too well my long-standing affection for tape technology.  I know.  I know.  Cloud (mostly disk-based) storage is getting pretty cheap as the price wars between the industrial farmers in the cloud — Google, AWS and Microsoft — continue to exert downward pressure.  That’s all good, until it goes pear-shaped for one of the aforementioned providers.  When profit-free quarters fail to impress shareholders, there will be an adjustment.  And all of the little guys that source their cloudy storage from the big vendors will hit a wall.

But enough doom and gloom.  I like where tape is going.  Huge capacity, good speeds and feeds, predictable and consistent operation and jitter-free streaming, low cost per GB, low impact on the environment, the whole enchilada.  I also like how AWS and Google have admitted that they are themselves using tape on the back end.  Heck, Google has conceded publicly that its email service would have been non-recoverable in an outage that occurred last year had it not been for good old tape.

The coatings have improved, with Barium Ferrite providing a springboard for huge capacity improvements in the same form factor cartridge and tape length.  Robotics are getting better and cheaper.  The software for media lifecycle management has improved significantly.  This is a technology that is firing on all thrusters.

The only problem is what Spectra Logic’s Matt Starr likes to call the Carbon Robot (the operator, that is).  One thing a tape can’t handle is being dropped onto a hard floor or surface.  When that happens, all bets are off when it comes to durability, reliability, performance and capacity.  That’s why, if you drop a tape, you owe it to your company and to the technology to set the “shocked” tape to one side.  Don’t use it.  You are asking for trouble.

It seems kind of sad that, for all the high-tech innovation that has been invested in the media and drives, a stupid thing like a shocked tape can screw the pooch.  Sort of Apollo 13-ish, for those old enough to remember the story or to have seen the Ron Howard film.  A simple procedure for stirring the oxygen tanks causes one to explode and puts the crew in jeopardy.

Anyway, for a technology with so much going for it, it is a shame that something as simple as accidentally dropping the media, then using it anyway, will soon put up to 185 TB at risk — and that is without compression.  That is pretty bad.

So, what is required to idiot-proof the media cartridge?  What will it take to let the admin or operator know that a cartridge has been dropped or otherwise shocked and should not be used?

I had the pleasure of talking about this issue with Wayne Tolliver, formerly of Spectra Logic and now CTO of his own firm, ShockSense Enterprises LLC, while I was doing standup comedy at Storage Decisions in November.  I learned about his fix for the problem and committed this interview to tape.  Here it is for you to digest.

ShockSense is a great idea:  simple in its conception and implementation, brilliant in its ease of use and non-disruptive integration into tape operations.  I love this freaking idea and I am jealous of Wayne for dreaming it up.  In any case, I wish him every success, both in finding investors and in garnering the support of the tape industry, which would be asinine not to leverage ShockSense technology to fix tape’s one last vulnerability.

I will be happy to pass along Wayne’s contact information to anyone who wants to explore the possibilities.  Contact me.

Special thanks to Wayne Tolliver, Chief Technology Officer for ShockSense Enterprises LLC, for his creativity and humility.  You are a credit to the American spirit of invention and innovation…as well as being a pretty great guy.


Live. From The Jazz at Lincoln Center in New York. It’s the z13 Mainframe!

January 17, 2015

I shot a bit of video of the internals of the z13 Mainframe when I attended the launch party that IBM convened at the Jazz at Lincoln Center in New York City.  Here is what can best be called a few beauty passes for all the gear heads out there. Note:  I couldn’t get anyone […]

Read the full article →

A New Year and a New Enterprise Compute Platform: Welcome z13

January 14, 2015

 It is perhaps appropriate that my first post of 2015 covers the first truly exciting technology news of the year:  the introduction today of the latest IBM System z Mainframe, the z13. I am in New York City today and will shortly head over to the Lincoln Center to attend the formal launch event.  Expect […]

Read the full article →

Happy Thanksgiving to Our Loyal Readers

November 25, 2014

We have been busy with travel and development, working on a re-boot of the Data Management Institute to coincide with a free on-line video-based training program called Storage Fundamentals which we hope to complete by end of January.  Several sponsors have joined in and we are looking for additional support from the disk array crowd […]

Read the full article →

My Bad…

October 14, 2014

I inadvertently said that our next IT-SENSE webinar was tomorrow, Wednesday, at noon.  It is in fact on Thursday at noon EST.  I am overly energized, I guess, and anxious to get going with this discussion of the vulnerabilities that are being encouraged by agile mainframe data center hype.  Please join us on THURSDAY at […]

Read the full article →

High Noon on Agile Mainframe Data Centers

October 14, 2014

The agile mainframe data center idea is being promoted by everyone from CA Technologies to IBM to the many hardware and software peddlers that crowd the increasing number of mainframe oriented events happening around the world these days.  (Yes, I said increasing number, in recognition that mainframes are once again mainstream for a number of […]

Read the full article →

Agile: The Unicorn of Contemporary IT

October 14, 2014

There is nothing wrong with the idea of agile computing.  In fact, I would argue that it reflects a set of values and goals that have defined my role in IT over the years. Who doesn’t want an IT service that responds quickly to business needs, one that can turn on a dime as the […]

Read the full article →