Miss Me?

by Administrator on September 3, 2014

August was a busy month and just whipped by.  I did several webcasts for 1105 Media/Virtualization Review/Redmond Magazine.  Wrote a bunch of articles and columns for the trade press and took a staycation that found me mending and painting and building — basically catching up on the honey-do list.

Now that September has arrived, I am busting hump on the Fall workload, which is always the biggest pile of work in any given year.  Next week, I will be in NYC for Fujifilm’s Global Executive IT Summit.  Then it’s off to Paris, then London, then back to NYC for Storage Decisions.

I hope to get involved in an upcoming IBM event, and I was intrigued by this little pitch.

[Image: IBM Mainframe Mobile App Throwdown pitch]

IBM is encouraging customers to show off their app-writing skills and how they are using social media and mobile applications to interface with their mainframe kit.  As someone who Tweets from a 3270 console, I’m all over that.  Anyway, to learn more, visit this BLOG on the topic.  I am told that the contest is open until September 17.  Winners have a shot at being invited to IBM Enterprise 2014 with a complimentary pass.

If I were tweeting this:  Win an #iPad + a complimentary pass to #IBMEnterprise. Enter the #Mainframe #Mobile App Throwdown. Here’s how: http://ow.ly/AJSuj

Cheers, and good luck.

{ 0 comments }

Finally, A Sensible Statement About Clouds

by Administrator on July 29, 2014

And from an unexpected source.  A recent comedy starring Cameron Diaz and Jason Segel, Sex Tape, builds its comedic misadventure on the inadvertent upload to a cloud service of an explicit digital video recording made by a middle-aged couple seeking to spice up their love life.  After uploading, the husband discovers that he can’t delete it, because “nobody understands the cloud.”

Painfully true but rarely stated aloud.  You’ll laugh til you cry.

This is so cool.  I never get to do a movie review.


{ 0 comments }

Heads Down Getting Ready for SHARE 2014

by Administrator on July 29, 2014


I will be talking about Disaster Recovery Requirements in the Agile Mainframe Data Center on Friday, August 8, in Pittsburgh. The talk has been selected for live streaming, so the world will be able to tune in. If you are interested, here is a link to register for SHARE. LINK

I am going to count down some myths embedded in the evangelism and marketing around cloud, software-defined, and virtual data centers, and stress that the requirements for comprehensive disaster recovery planning haven’t changed one iota.  I strongly recommend getting to know the 21st Century Software DR/VFI tools for getting a handle on data protection and disaster recovery in mainframe shops.

Just my 2 centavos.


{ 0 comments }

Data Reduction

I know, I know. As a tech writer and blogger, I should be immune to the occasional pushback from readers who hold views that differ from my own. I usually have a pretty thick skin, but I really hate it when I take the time to respond to commenters only to have the comment facility into which I type my response fail when saving my work. That just happened: I spent 35 minutes writing a detailed response to the many comments on an article about de-duplication and compression that I had written in January for TechTarget’s SearchStorage.

While one or two comments agreed with my perspective, several commenters disagreed vehemently or sought to add perspectives that weren’t part of my coverage.  Aside from one commenter, who refers to himself as TheStorageArchitect, accusing me of shoddy reportage, most of the criticisms stayed on topic.  I spent quite a bit of time crafting a response and, since the comment facility ate it, here is a synopsis of what I wrote.

First, a bit of background.  The article was part of a series of tips on storage efficiency.  I had argued in the series that storage efficiency came down to managing data and managing infrastructure so that we can achieve the twin goals of capacity allocation efficiency (allocating space to data and apps in a balanced and deliberate way that prevents a dreaded disk full error from taking down an app) and capacity utilization efficiency (allocating the right kind of storage to the right kind of data based on data’s business context, access/modification frequency, platform cost, etc.).
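
To make the distinction between those two kinds of efficiency concrete, here is a minimal sketch in Python.  The volume names, capacities, and tier labels are entirely invented for illustration; they are not drawn from any client environment or product.

```python
# Toy illustration of the two efficiency measures described above.
# Every volume name, capacity, and tier label here is made up.

volumes = [
    # (name, allocated_gb, used_gb, tier, data_class)
    ("prod_db",   500, 460, "flash",  "active"),
    ("home_dirs", 800, 240, "nl_sas", "mixed"),
    ("old_scans", 600, 590, "flash",  "archival"),  # wrong class of storage for this data
]

# Capacity allocation efficiency: space is parceled out deliberately, with
# enough headroom that no volume hits a dreaded disk-full error.
for name, alloc_gb, used_gb, tier, data_class in volumes:
    pct_used = 100.0 * used_gb / alloc_gb
    warning = "  <-- needs more headroom" if pct_used > 90 else ""
    print(f"{name}: {pct_used:.0f}% of allocation consumed{warning}")

# Capacity utilization efficiency: the *kind* of storage matches the business
# context and access/modification frequency of the data placed on it.
for name, alloc_gb, used_gb, tier, data_class in volumes:
    if data_class == "archival" and tier == "flash":
        print(f"{name}: archival data parked on {tier} -- candidate for migration to cheaper storage")
```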

In this context, I argued that some vendor marketing materials and messages misrepresent the value of de-duplication and compression — technology that contributes on a short-term, tactical basis to capacity allocation efficiency — as a means to achieve capacity utilization efficiency.  I wasn’t seeking to join the tribalistic nonsense out there claiming that XYZ vendor’s de-dupe kit is better than vendor ABC’s de-dupe kit.  My key points are as follows.

  1. De-duplication remains a proprietary, rather than a standards-based, technology.  That makes it a great value-add component that hardware vendors have used to jack up the price of otherwise commodity hardware.  I cited the example of an early Data Domain rig with an MSRP of $410K for a box of about $3K worth of SATA drives, a price tag justified on the basis of a promised data reduction rate that was never realized by any user I have interviewed.  That, to my way of thinking, is one deficit of on-array de-duplicating storage appliances and VTLs.  It is alleviated to some degree when de-dupe is sold as software that can be used on any gear or, better yet, as an open, standards-based function of a file system, mainly because users then avoid proprietary vendor hardware lock-in.  By the way, in response to one commenter: even if it is true that “all storage hardware companies are selling software,” I prefer as a rule to purchase storage software functionality in a way that makes it extensible to all hardware platforms rather than limited to a specific kit.  That, to me, is what smart folks mean when they say “software-defined storage” today.
  2. De-duplication is not a long-term solution to the problem of unmanaged data growth.  It is a technique for squeezing more junk into the junk drawer; even with all of that “trash compacting” value, the drawer still fills up over time.  From this perspective, it is a tactical, not a strategic, technology.
  3. Proprietary de-dupe technology mounted on array controllers limits, in many cases, the effect of de-duplication to data stored on the trays of drives behind that controller.  Once the box of drives with the de-duplicating controller is filled, you need to deploy another box of drives with another de-duplicating controller that must be managed separately.  I think of this as the “isolated island of de-dupe storage” problem, and many of my clients have complained about it.  Some commenters on the article correctly observed that some vendors, including NEC with its HydraStor platform, offer scale-out capabilities in their hardware.  True enough, but unless I am mistaken, even vendors that let the number of trays of drives scale out under the auspices of their controller still require that all of the kit be purchased from them.  Isn’t that still hardware lock-in?  My good friend, TheStorageArchitect, said that I should have distinguished between active (in-line) de-dupe and at-rest de-dupe, and he has a point.  Had I done so, I might have suggested that if you plan to use de-dupe for something like squashing many full backups with a lot of replicated content into a smaller amount of disk space, an after-write de-dupe process, which can be had for free with just about any backup software today, might be the way to go.  But I would also have caveated that, if the VTL is intended to provide a platform for quick restore of individual files, a de-duplicated backup data set might not be the right choice, since every restore requires rehydration of the data, introducing potential delays (see the sketch after this list).  At-rest de-dupe in the VTL context also has me wondering why you wouldn’t use an alternative like LTFS tape or even incremental backups.  As for in-line or active de-duplication, he stole my thunder with his correct point about the CPU demands of global de-duplication services.  But I digress…
  4. My key point was that real capacity utilization efficiency is achieved not by tactical measures like data reduction, but by data management activities such as active and deep archiving and the like.  Archives probably shouldn’t use proprietary data containers that require proprietary data access technologies to be opened and accessed at some future time.  Such technologies just introduce another set of headaches for the archivist, requiring data to be un-ingested and re-ingested every time a vendor changes its data reduction technology.  This may change, of course, if de-dupe becomes an open standard integrated into all file systems.
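
For readers who want to see the mechanics behind the rehydration and “island of de-dupe” points in item 3, here is a minimal sketch of a naive fixed-size-chunk de-dupe engine in Python.  It is not any vendor’s implementation (real products use variable or content-defined chunking, persistent indexes, and much more), but it shows why restores must reassemble data chunk by chunk and why the savings apply only to data written through this one index.

```python
import hashlib

CHUNK = 4096  # naive fixed-size chunking; commercial engines use variable/content-defined chunks

store = {}  # chunk fingerprint -> chunk bytes: the shared "junk drawer" behind one controller

def ingest(data: bytes) -> list:
    """Write data through the de-dupe engine and return a recipe of fingerprints."""
    recipe = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        fingerprint = hashlib.sha256(chunk).hexdigest()
        store.setdefault(fingerprint, chunk)  # only never-before-seen chunks consume space
        recipe.append(fingerprint)
    return recipe

def restore(recipe: list) -> bytes:
    """Rehydrate: every restore walks the index and reassembles the chunks."""
    return b"".join(store[fp] for fp in recipe)

# Two "full backups" with heavily overlapping content reduce nicely...
full1 = b"A" * 20000 + b"B" * 4096
full2 = b"A" * 20000 + b"C" * 4096
recipe1, recipe2 = ingest(full1), ingest(full2)

logical = len(full1) + len(full2)
physical = sum(len(chunk) for chunk in store.values())
print(f"logical: {logical} bytes, physically stored: {physical} bytes")

# ...but reading back even one file means rebuilding it from the index, and the
# savings never extend to data sitting behind some other controller's index.
assert restore(recipe1) == full1
```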

I may have missed a few of the other points I made in my response to the comments on the TechTarget site, but I did want to clarify these points.  To those who said my claim that de-dupe’s star is fading was bogus, I can only offer what I am seeing.  Many of my clients have abandoned de-duplication, either after failing to realize anything like the data reduction value touted by product vendors, or because of concerns about the legal and regulatory permissibility of de-duplicated data.  While advocates are quick to dismiss the question of “material alteration” of data by de-dupe processing, no financial firm I have visited wants to be the test case.  That you haven’t seen more reportage on these issues is partly a function of hardware vendor gag orders on consumers, prohibiting them, under threat of voided warranties, from talking publicly about the performance they get from the gear they buy.

If you like de-dupe, if it works for you and fits your needs, great!  Who am I to tell you what to do?  But if you are trying to get strategic about the problem of capacity demand growth, I would argue that data management provides a more strategic solution than simply putting your data into a trash compactor and placing the output into the same old junk drawer.


Post Post

First, I am informed that my rejected comment submittal has miraculously appeared in the appropriate section of the TechTarget site.  No one understands what happened or how it resolved itself, but there it is.

Second, the original version of this post named the fellow associated with the handle TheStorageArchitect: Chris Evans.  (No, not Captain America.  The other Chris Evans.)  But Chris, whom I follow on Twitter, advised me that he did not make the post.  So, I have redacted his name from the original post here.  Apologies to Chris for the incorrect attribution.


{ 0 comments }

Linear Tape File System: The Only Real Software-Defined Storage?

July 8, 2014

  At the IBM Edge 2014 conference in Las Vegas this past May, I had the chance to reconnect with the IBMers most responsible for the development and marketing of the Linear Tape File System (LTFS). I had interviewed Ed Childers, who hails out of IBM Tucson, before. He has been my go-to guy on […]

Read the full article →

Storage for the Tragically Un-Hip

July 2, 2014

  I have been hearing from readers recently who, while complimenting me on my practical articles on topics such as storage efficiency, common sense storage architecture, and so forth, point out that I am covering topics that the “hip and cool” generation of IT folk don’t really care about. For example, I have just completed […]

Read the full article →

Stupid Pundit Tricks Spill Over into Tech

June 24, 2014

I have been watching with equal parts amusement and disdain as Congressman Issa’s interrogations have proceeded regarding the IRS’ selections of which 501(c)(4) organizations to scrutinize before granting tax exempt status. The amusing part has been, of course, the dazzlingly inane politicization of the event.  It was discovered that a couple of IRS centers were […]

Read the full article →

“Sorting Out the File Junk Drawer” Replay Ready

June 24, 2014

At the beginning of the Summer, we rolled out the IT-SENSE.org Brown Bag Webinar Series, as those of you who joined us already know. We did three shows in three weeks, working to keep your commitment of time to around 45 minutes and our commitment to delivering useful information at 110%. We have paused […]

Read the full article →

Tarmin: It’s All About the Data

June 16, 2014

At the just concluded IBM Edge 2014 conference in Las Vegas, one of the highlights for me was having a chance to catch up with Linda Thomson, Marketing Director for Tarmin. I knew Linda from her time at QStar Technologies, where she ran worldwide marketing, and I had been hearing positive things about Tarmin’s GridBank technology for a […]

Read the full article →

VSAN in a Nutshell

June 13, 2014

If you agree that shared storage is better than isolated islands of storage, that some sort of virtualized SAN beats the socks off of server-side DAS, that hypervisor and storage hardware agnosticism beats hypervisor or hardware lock-ins, that aggregated storage capacity AND aggregated storage services make for better resource allocation than service aggregation alone, and […]

Read the full article →