Gearing Up for IBM Interconnect Day 2

by Administrator on February 24, 2015

I have been reviewing sessions today at the IBM Interconnect show in Las Vegas and have ID’d one called The New z13: Redefining Digital Business through the Integration of Cloud, Mobile and Analytics that I will attend in about an hour.  This might be a bit of a rehash of the presentations I saw at the z13 launch, which were outstanding.  I will know that a few minutes after the event begins. But, at a minimum, the session will allow me to rest my brain from all the software-defined storage stuff that has been mulling around between my ears since yesterday.

To restate:  I think IBM is doing pretty well thus far navigating the BS that seems to cluster around software-defined storage.  I wish everyone would just admit that SDS is rarely a proper response to virtualized application performance issues.  App performance problems in x86 hypervisors are more often the result of application code, hypervisor code, or a mix of the two than of anything related to storage infrastructure.  A quick look at storage queue depths on the virtual host usually makes that obvious:  if the queues aren’t backing up, storage isn’t the chokepoint.  Just because your hypervisor vendor recommends ripping and replacing your “legacy” storage to fix your VMs doesn’t mean that the advice addresses the root cause of the problem.
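For the terminally curious, the check I have in mind is nothing fancy.  Here is a minimal sketch, assuming a Linux host or guest, that samples the “I/Os currently in progress” counter from /proc/diskstats; the sampling interval and the threshold for what counts as a notable queue are my own illustrative values, not tuning guidance.

```python
# Minimal sketch: sample in-flight I/O counts from /proc/diskstats on a
# Linux host to see whether storage queues are actually backing up.
# The interval and threshold below are illustrative assumptions.
import time

def inflight_ios():
    """Return {device: I/Os currently in progress} from /proc/diskstats."""
    counts = {}
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            # Layout: major minor device reads ... ; the ninth statistic
            # after the device name is "I/Os currently in progress".
            if len(fields) >= 12:
                counts[fields[2]] = int(fields[11])
    return counts

if __name__ == "__main__":
    SAMPLES, INTERVAL, THRESHOLD = 10, 1.0, 8   # illustrative values
    for _ in range(SAMPLES):
        busy = {dev: n for dev, n in inflight_ios().items() if n >= THRESHOLD}
        print(busy or "no device showing a notable queue right now")
        time.sleep(INTERVAL)
```

If those queues stay shallow while the application crawls, go hunting above the storage layer before you let anyone sell you a rip-and-replace.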

IBM, to its credit, is not making such a claim or reinforcing such a view.  App performance doesn’t even enter into the rationale I heard yesterday for SDS the IBM way, using Spectrum Accelerate.

It bothers me that Big Blue has elected to stand up its product inside a VMware host, though I guess that makes sense given the current market dominance of the EMC “mini-me.”  I suspect that they will expand the hypervisor compatibility at some point.

From where I’m sitting, the best SDS architecture is one that is neither hardware dependent nor hypervisor dependent.  Going forward, enterprises will likely have multiple hypervisors and some apps that aren’t virtualized at all.  What we DON’T need is a bunch of isolated islands of storage, each one dedicated to a particular hypervisor stack, plus some good old-fashioned raw storage to manage as well.  From what I can glean, most virtualization admins have extremely limited storage skills to begin with.  I could be unkind and say that they don’t know storage from shinola as a rule (always wondered what the Shinola folks thought of that expression).

Anyway, the SDS craze is driving me crazy.  I have to get my mojo back, so it is off to the z Systems preso for me.  And here is my very first selfie.  I took it yesterday at the IBM Interconnect Expo.  The smart-looking thing in the background is the IBM z13 mainframe.

[Photo: my selfie at the IBM Interconnect Expo, with the IBM z13 mainframe in the background]


IBM Interconnect Update Day 1

by Administrator on February 23, 2015

Being one of the older folks at the IBM Interconnect show, I need to confess to being a bit dazed and confused.  This show is definitely a mix of older technology norms and newer memes.  The opening keynotes were as good as any I have seen at a show of this type — a good mix of vendor thought leadership and illustrative presentations by customers.  Of course, it helps when your customers are the likes of the Mayo Clinic and Airbus.

Basically, we kept being told that everything was changing and that we old codgers needed to get on the bandwagon.  Mobile computing has already arrived, and innovative companies are using it to get up close and personal with their customers.  Then, I think during the Citi presentation this morning, I had an epiphany:  what everyone is really saying is that the world has gotten faster and less personal.  The younger generation still craves the personal attention our older kin enjoyed from their doctor, grocer, shopkeeper and baker, but that kind of connection has largely slipped away.  We are now trying to use technology tricks to reestablish it, to know our customers so well that we can anticipate their needs and wants.  Interesting.

It really hits home when I think of medicine.  There are now so many doctors and medical researchers and therapies and experimental drugs that an almost insane amount of new information is flooding medical practitioners, and just separating the woo from the peer-reviewed stuff is almost too much for anyone.  Why not use analytics to process a lot of that information, to help characterize a disease, to spot potential drug conflicts — not just with respect to the other 8 to 16 meds we take as we age, but with our diet, demographics and genetics?  Why not democratize medicine so that better outcomes are not limited to those who are blessed to live in an area where a few particularly astute doctors specializing in your condition also happen to live?

Interestingly, a presentation on this subject by a physician with a big name oncology firm concluded awkwardly:  while the firm was anxious to work with IBM to share their knowledge and wisdom with a democratizing database and delivery system, she was pretty sure that they would still keep enough wisdom to themselves to maintain a high dollar practice!  Made me think that eventually, there would be a medical service system for the masses and another for the elite who have deeper pockets to pay for the best talent.  Oh wait.  Maybe that is the system we have today.  In any case, your next course of medical therapy may be prescribed by (or at least reviewed and validated by) some sort of Doctor Watson AI.

Okay.  So there was a lot of visionary stuff today, especially coming from the corporate customer spokespersons.  Today’s corporate leaders, one person tweeted in real time, are preaching the mantra that used to be the purview of the entrepreneurial evangelists at some Silicon Valley startups.  True enough.

But like most folks my age, I want meat to go with the potatoes.  Vision is one thing.  How do we execute it?

I started to attend breakout sessions today and did some trolling in the Expo hall to find smart IBM folk to get me up to speed on the technology particulars that would transform the vision into reality.  It struck me as cute to see some young IT folk taking selfies with a z Systems z13 mainframe.

I found some very smart folks on the floor who talked about what their customers were saying they wanted these days and what IBM could provide out of its thick book of products and services to support them.  I managed to get Distinguished Engineer Clod Barrera to do a vblog with me that I hope to put up online soon regarding IBM’s latest Software Defined Storage play, involving their XIV software rebranded as IBM Spectrum Accelerate.  His words built on what I had learned in a good presentation by Rami Elron, Senior Staff Member and Product Technology Director for IBM XIV Storage.

Rami and Clod both steered clear of relating SDS to application performance, which was the saw that VMware used to make SDS a household name a couple of years ago.  Truth be told, when virtualized workloads do demonstrate problematic performance, the chokepoint is rarely storage I/O; it is usually raw I/O processing above the storage layer.  So, while you may have many compelling reasons to consider SDS, solving what ails application performance is rarely a valid rationale.  We were spared any such correlation in IBM’s presentation.

Big Blue is clearly focused on the Enterprise Data Center and Cloud Service Providers.  They are now offering the means, with Spectrum Accelerate and IBM Hyper-Scale Manager, to deploy, manage and control up to 144 clusters of XIV storage, totaling about 46 PB of capacity.  That’s good for big hyper-scale consumers.

I found myself wondering, though, what happened to all of the “the best business models are those that behave like small businesses” stuff that I had heard in the morning keynotes.  Where is the support for the smaller firm, or for the branch offices and remote offices that can’t afford a hyper-scale architecture requiring a minimum of three nodes and a lot of DRAM and SSD cache?  I suspect Clod is right when he asserts that over time there will be reference models advanced for various use cases covering both hyper-scale and hypo-scale implementations.  The good news is that the storage repository created with the Spectrum Accelerate technology can be used to host data from different hypervisors and from applications that aren’t virtualized at all.  And, a lot of kit is supported, so hardware dependencies are minimal.

One more thing:  I liked the attention to disaster recovery rather than a myopic focus on high availability.  From what I could glean, the Spectrum Accelerator technology delivers both, but the DR story is often left out of other SDS vendor pitches or treated like some sort of obsolete functionality.  To hear Clod and Rami tell the tale, the wedding of snapshot technology with asynchronous replication does a lot to solve the challenges of data replication over distance for DR and enables users to establish meaningful recovery point objectives (RPOs).  That is a story worth pursuing.
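For those who like to put numbers on such things, here is a generic back-of-envelope for the RPO an asynchronous, snapshot-shipping scheme can realistically support.  To be clear, this is my own illustrative model, not a description of Spectrum Accelerate internals; the snapshot interval, change rate and WAN throughput below are all assumptions.

```python
# Generic back-of-envelope for the RPO of async, snapshot-based replication.
# Illustrative model only -- not a description of any vendor's internals;
# every figure below is an assumption.

def worst_case_rpo_minutes(snapshot_interval_min, delta_gb_per_snapshot,
                           wan_mb_per_sec):
    """Worst-case data-loss window: one full snapshot interval plus the time
    needed to ship the last snapshot's changed blocks across the WAN."""
    transfer_min = (delta_gb_per_snapshot * 1024) / wan_mb_per_sec / 60
    return snapshot_interval_min + transfer_min

# Example: snapshots every 15 minutes, ~40 GB of changed blocks per snapshot,
# and a 100 MB/s effective WAN link.
print(round(worst_case_rpo_minutes(15, 40, 100), 1), "minutes worst-case RPO")
```

The moral of the arithmetic:  the RPO you can honestly promise is governed less by the replication software than by how much data changes between snapshots and how fat the pipe to the recovery site is.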

More to come.  This day isn’t over yet.  IBM Interconnect is shaping up to be a pretty interesting and informative event.


z13 Momentum Building

by Administrator on February 22, 2015

I can’t remember the last time that the mainframe received the kind of attention in “mainstream” tech publications that it has in recent weeks.  Following a brilliant launch event for IBM’s new z System offering in mid-January, Big Blue has taken the show on the road throughout the world.  Able evangelists include IBM VP Kathryn Guarini, who gave us a few minutes of her time at the New York Launch Event.

We are off to IBM Interconnect now to learn more about the integration of z Systems architecture with mobile and cloud computing.  More to come…


Tape in 2015: Still a Comer

by Administrator on January 27, 2015

I recently read a blog at Storage Switzerland that got me a little aggravated.  Even more aggravating is the fact that my recent switchover from BlackBerry to a Samsung Galaxy Note 4 “phablet” left behind my link to the irritating piece.  I have been all over George’s site (he is the proprietor of Storage Switzerland) and can’t find the blog.  I only know that it had something to do with tape and backup, and possibly clouds, and made some assertions I couldn’t abide.  Oh well, we shall blog war another time, Mr. Crump.  But I still want to offer some thoughts here about tape.

I noted a couple of posts back that Wayne Tolliver’s ShockSense was pretty cool tape technology — for those who didn’t watch the video, Wayne has patented a little sensor that attaches to the bar code label on a tape cartridge and provides a visual indicator when a tape has been dropped or otherwise shocked.  That is cool because, if implemented, it could eliminate the one remaining rational complaint about tape today:  the propensity of users to employ tapes that have been damaged by improper handling, especially when the user is unaware that the mishandling has occurred.  IMHO, Tolliver is on to something with relevance not only to tape but to any kind of shock-sensitive merchandise — a simple indication of a potential issue so that remedial action can be taken.

I was talking to Rich Gadomski at Fujifilm the other day and we agreed that tape made a lot of inroads into the marketplace last year.  Rich is always a wealth of information, and he didn’t disappoint this time.  He alerted me to a document from the Tape Storage Council that dropped in December, while I was down with the flu.  It summarized the trends fueling tape technology’s current renaissance and future prospects.  I found it an interesting read and wanted you to have the chance to see it here.  The proper approach would be to just link to the Tape Storage Council website and have you go there to download the document.  HERE IS THAT LINK.  (Note that this page would not open for me just now when I went there.  The site may be down for maintenance.)

The other option is to make it easy for you to download and read from my blog.  I hope this is okay with the Tape Storage Council, as I did not ask their permission.

 

2014 Tape Storage Council Memo_FINAL

I have nits to pick with a reference or two in the document, but it provides a pretty complete summary of the tape capabilities and economics that give the technology a long runway going forward in smart IT shops.  (I wish they would lose the references to the widely discredited Digital Universe study by IDC.  Truth be told, the growth of new digital data doesn’t drive squat.  It is the failure to manage that data, and the adoption of infrastructure that replicates that data an obscene number of times, that drive storage capacity demand.)

But I digress.  The document summarizes some announcements deemed to be milestones by the Council members.  These included:

  •  On Sept. 16, 2013 Oracle Corp announced the StorageTek T10000D enterprise tape drive. Features of the T10000D include an 8.5 TB native capacity and data rate of 252 MB/s native. The T10000D is backward read compatible with all three previous generations of T10000 tape drives.
  • On Jan. 16, 2014 Fujifilm Recording Media USA, Inc. reported it has manufactured over 100 million LTO Ultrium data cartridges since its release of the first generation of LTO in 2000. This equates to over 53 thousand petabytes (53 exabytes) of storage and more than 41 million miles of tape, enough to wrap around the globe 1,653 times.
  • On April 30, 2014, Sony Corporation announced that it had independently developed a soft magnetic underlayer with a smooth interface using sputter deposition and created a nano-grained magnetic layer with fine magnetic particles and uniform crystalline orientation. This layer enabled Sony to successfully demonstrate the world’s highest areal recording density for tape storage media, 148 Gb/in2, which would make it possible to record more than 185 TB of data per data cartridge.
  • On May 19, 2014 Fujifilm in conjunction with IBM successfully demonstrated a record areal data density of 85.9 Gb/in2 on linear magnetic particulate tape using Fujifilm’s proprietary NANOCUBIC™ and Barium Ferrite (BaFe) particle technologies. This breakthrough in recording density equates to a standard LTO cartridge capable of storing up to 154 terabytes of uncompressed data, making it 62 times greater than today’s current LTO-6 cartridge capacity and projects a long and promising future for tape growth.
  • On Sept. 9, 2014 IBM announced LTFS LE version 2.1.4.4, extending LTFS (Linear Tape File System) tape library support.
  • On Sept. 10, 2014 the LTO Program Technology Provider Companies (TPCs), HP, IBM and Quantum, announced an extended roadmap which now includes LTO generations 9 and 10. The new generation guidelines call for compressed capacities of 62.5 TB for LTO-9 and 120 TB for LTO-10, with compressed transfer rates of up to 1,770 MB/second for LTO-9 and up to 2,750 MB/second for LTO-10. Each new generation will include read-and-write backward compatibility with the prior generation as well as read compatibility with cartridges from two generations prior, to protect investments and ease tape conversion and implementation. (A quick back-of-envelope on the native equivalents of those compressed figures follows this list.)
  • On Oct. 6, 2014 IBM announced the TS1150 enterprise drive. Features of the TS1150 include a native data rate of up to 360 MB/sec versus the 250 MB/sec native data rate of the predecessor TS1140 and a native cartridge capacity of 10 TB compared to 4 TB on the TS1140. LTFS support was included.
  • On Nov. 6, 2014, HP announced a new release of StoreOpen Automation that delivers a solution for using LTFS in automation environments with Windows OS, available as a free download. This version complements their already existing support for Mac and Linux versions to help simplify integration of tape libraries to archiving solutions.
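About those roadmap numbers, as promised:  they are compressed figures.  Assuming the 2.5:1 compression ratio the LTO program has quoted since LTO-6, the implied native capacities and transfer rates shake out roughly as follows.  This is my arithmetic, not an official specification.

```python
# Back-of-envelope: convert the roadmap's compressed LTO-9/10 figures to
# native equivalents, assuming the 2.5:1 ratio the LTO program has quoted
# since LTO-6. Illustrative arithmetic only, not official specifications.
RATIO = 2.5
roadmap = {  # generation: (compressed TB, compressed MB/s)
    "LTO-9":  (62.5, 1770),
    "LTO-10": (120.0, 2750),
}
for gen, (cap_tb, rate_mb_s) in roadmap.items():
    print(f"{gen}: ~{cap_tb / RATIO:.0f} TB native, "
          f"~{rate_mb_s / RATIO:.0f} MB/s native")
```

That works out to roughly 25 TB and 48 TB native per cartridge, which is why the roadmap extension is a bigger deal than the compressed marketing numbers alone suggest.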

Reading this list, I found myself recalling the scene from Ghostbusters (which was also re-released in theatres on Labor Day Weekend in 2014) after the team uses their proton packs and believes that they have destroyed Gozer:

Dr. Ray Stantz: We’ve neutronized it, you know what that means? A complete particle reversal.
Winston Zeddemore: We have the tools, and we have the talent.
Dr. Peter Venkman: It’s Miller time!

So much for that little insight into my misspent youth.  Bottom line:  tape is looking pretty good these days, which is why it kind of irritates me when (1) tape is conflated with backup and (2) the assertion is made that tape is giving way to clouds.  Here’s the rub.

Tape and backup are two different things.  Just because backups have often been made to tape media doesn’t mean that the problems of backup have much to do with tape technology.  Backup was always a flawed enterprise:  backup software vendors were trying to automate protection against data loss or corruption owing to any number of causes, and they ran into just about every hurdle ever imagined.  Servers were too busy to run a data replication process, or to serve data up over an I/O port.  Applications, when operating, didn’t allow data to be copied at all.  Lots of servers introduced a need to superstream data to the tape target, and as shorter tape jobs finished, the superstream unraveled, extending the time required to take the backup.  Data could not be submitted to the tape drive at the jitter-free and persistent clip that the drive wanted.  The list goes on.  None of these things had anything to do with tape, but with the backup application and the way it interoperated with the production environment.  Conflating the two makes me mad.
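To put a number on that last complaint about jitter:  a tape drive only performs when it can stream, and when the host cannot feed it fast enough the drive stops, repositions and restarts, over and over (the dreaded shoe-shine).  Here is a rough illustration with made-up but plausible figures; none of these numbers come from any vendor’s spec sheet.

```python
# Rough illustration of the streaming-rate mismatch described above.
# All figures are assumptions for the sake of arithmetic, not drive specs.
drive_min_stream = 40.0   # MB/s, assumed lowest speed-matching step
supply = 25.0             # MB/s, what the backup host actually delivers
buffer_mb = 1024.0        # assumed drive buffer, refilled before each restart
job_gb = 500.0            # assumed size of the backup job

if supply >= drive_min_stream:
    print("the host can keep the drive streaming")
else:
    # Starting each burst with a full buffer, the buffer drains at
    # (drive_min_stream - supply) MB/s while the drive streams, so each
    # burst writes a bounded chunk before the drive stops to reposition.
    burst_seconds = buffer_mb / (drive_min_stream - supply)
    data_per_burst_mb = burst_seconds * drive_min_stream
    cycles = (job_gb * 1024) / data_per_burst_mb
    print(f"~{cycles:.0f} stop/start (shoe-shine) cycles to write a "
          f"{job_gb:.0f} GB job at a {supply:.0f} MB/s feed")
```

The data still gets written, but the job crawls along at whatever rate the host can supply while the drive and media take a mechanical beating.  None of that is tape’s fault.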

Truth is, today you don’t need backup software.  Oops.  I said it.  With LTFS from IBM, you could just copy the entire file system directly to tape media without specialty backup containers.  With object storage, you could simply write the object name and metadata into one partition of the tape and the corresponding data into another — LTFS on steroids.
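To make that concrete, here is about all the “backup software” the LTFS approach requires.  This is a minimal sketch, assuming a cartridge already formatted and mounted as an LTFS volume; the source path and mount point are hypothetical placeholders, and a real archive job would add verification and cataloging.

```python
# Minimal sketch of "backup without backup software": copy a directory tree
# to an LTFS-mounted tape cartridge as plain files. Assumes the cartridge is
# already formatted and mounted; both paths below are hypothetical.
# Requires Python 3.8+ for the dirs_exist_ok flag.
import shutil
from pathlib import Path

SOURCE = Path("/data/projects")   # hypothetical source tree
LTFS_MOUNT = Path("/mnt/ltfs")    # hypothetical LTFS mount point

def archive_to_tape(source: Path, tape_root: Path) -> Path:
    """Copy the whole tree onto tape; files land as ordinary, self-describing
    files readable by anything that can mount the LTFS volume."""
    destination = tape_root / source.name
    shutil.copytree(source, destination, dirs_exist_ok=True)
    return destination

if __name__ == "__main__":
    print("copied to", archive_to_tape(SOURCE, LTFS_MOUNT))
```

Big sequential copies like this play straight to tape’s strengths; what you give up is the scheduling, cataloging and deduplication smarts that backup suites bolt on, and that is a trade plenty of shops would happily make.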

Anyway, hating on backup has always been leveraged by the disk crowd (and now some of the flash kids) to hate on tape technology, much in the way that VMware blamed hosted application workload performance issues on “legacy disk” — another assertion that fails the smell test.  It should stop, and analysts need to stop taking money from disk vendors to say that backup problems are due to the inadequacies of tape technology.

Cloud replacing tape is another bogus assertion.  For one thing, the industrial farmers of the cloud world — Amazon, Microsoft and Google — all use tape in the course of operating their storage clouds.  Google was reluctant to admit it for the longest time, but I heard a great presentation by one of their tape mavens at Fujifilm’s conference last September in NYC, and the guys at The Register did a great write-up on it. (HERE)

Moreover, there are now cloud storage services, including dternity, that specialize in using tape technology for cloud storage and archiving.  dternity was officially announced at the NAB show in 2014, and here is a video interview shot at that event…

 

 

I am planning to head out to NAB this year and to give a talk about tape.  I look forward to hooking up with the folks at Fujifilm, dternity and perhaps a few others to see what the tape mongers will be bragging about this year.  For now, assertions that clouds kill tape are just as stupid as the other “tape is dead” lines we have heard throughout the years.

Watch this space.


Again with the Data Lifecycle Management

January 20, 2015

Okay, so I am being swept up a bit by the pro-French meme that has found its way into social media of late.  I prepared a slide deck a while back called Les Data Miserables that my sponsor asked me not to use during my last trip to Europe, so I have some art that is available for […]

Read the full article →

z13 Virtual Machine Hosting: Still a Bit Cloudy

January 20, 2015

Okay, so I have been doing something of an excited jig around the new IBM System z mainframe, the z13.  I am impressed by how much performance, capacity and durability the IBMers have engineered into the rig.  And I am impressed by the economics we have been told to expect from the box.  Imagine standing […]

Read the full article →

Still Loving Tape in 2015

January 17, 2015

Regular readers of this blog know probably too well my long standing affection for tape technology.  I know.  I know.  Cloud (mostly disk-based) storage is getting pretty cheap as the price wars between the industrial farmers in the cloud — Google, AWS and Microsoft — continue to exert downward pressure.  That’s all good, until it […]

Read the full article →

Live. From The Jazz at Lincoln Center in New York. It’s the z13 Mainframe!

January 17, 2015

I shot a bit of video of the internals of the z13 Mainframe when I attended the launch party that IBM convened at the Jazz at Lincoln Center in New York City.  Here is what can best be called a few beauty passes for all the gear heads out there. Note:  I couldn’t get anyone […]

Read the full article →

A New Year and a New Enterprise Compute Platform: Welcome z13

January 14, 2015

 It is perhaps appropriate that my first post of 2015 covers the first truly exciting technology news of the year:  the introduction today of the latest IBM System z Mainframe, the z13. I am in New York City today and will shortly head over to the Lincoln Center to attend the formal launch event.  Expect […]

Read the full article →

Happy Thanksgiving to Our Loyal Readers

November 25, 2014

We have been busy with travel and development, working on a re-boot of the Data Management Institute to coincide with a free on-line video-based training program called Storage Fundamentals which we hope to complete by end of January.  Several sponsors have joined in and we are looking for additional support from the disk array crowd […]

Read the full article →