A bunch of years ago, IDC and Gartner promised that if we were all good little girls and boys and said our prayers every night and abandoned legacy infrastructure for clouds, CAPEX spending for IT would all but end. Plus, there would be peace on earth, a chicken in every pot, and a corner office for every mid-level manager who made the IT department unplug all of its storage and servers and throw them into the cloud. The hype was deafening, and for many firms the expected results — greater agility, greater resiliency, and lower cost — have failed to materialize.
Call me a skeptic, but I originally saw clouds as a re-branding of ASPs and SSPs — outsourcing by another name. As the concept took hold, however, I have played the role of loyal opposition, identifying the foibles in cloud strategies and the misconceptions introduced by overly enthusiastic cloud marketing folk who don’t know technology from shinola. On November 16, DataCore is giving me a platform to talk about clouds and what is really needed to make them work minimally well so that they can actually deliver on their promises.
For anyone who cares, the event is a free BrightTalk webinar that you can sign up for HERE. I hope some of you can attend. In addition to my curmudgeonly viewpoint, you will hear from DataCore’s very smart Augie Gonzales about the real requirement for cloud enablement: an effective storage virtualization technology. Be there or be square.
Having a bit of experience in disaster recovery planning, I have often commented on the failure of the industry to get its collective act together and to combine the discipline of security planning with the continuity practice. For a number of reasons, an artificial distinction has settled in that, frankly, makes sense only to those who wish to sell software or hardware for security and access control. DR planning has very little in the way of administrative software per se, though there are a ton of data protection products for backup, snapshot, continuous data protection, mirroring, etc.
The good news is that some folks are starting to wake up. The cloud-era continuity plan will cover that space in the Venn diagram where the circle marked disaster recovery planning intersects with the circle called security planning. That is what I will be arguing on November 29 in a webinar I will be doing with Redmond Magazine called “The Expanding IT Attack Surface: How to Protect Your Data Assets from Loss, Damage or Ransomware.” REGISTRATION is here and, of course, the event is free thanks to sponsors Arcserve and KnowBe4. Hope some of you can attend.
I am also delighted to be doing another Data Management and Data Protection Workshop for CA Technologies in Herndon, VA a week or two prior to the webinar, and to be presenting on Mainframe DR Requirements at CA World in mid-November as well. Among the topics I will be discussing is some cool software from CA Technologies that can peruse data on the mainframe, which is now demonstrably as vulnerable as open systems, to discover where elements requiring special protection (given regulatory mandates or industry standards) are located so encryption and other protective services can be selectively and intelligently applied.
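To make the idea of selective, discovery-driven protection concrete, here is a toy sketch of pattern-based sensitive-data classification — my own illustration, not CA’s actual tooling, and the patterns and function names are assumptions for the example:

```python
import re

# Illustrative patterns for regulated data elements (assumptions for this
# sketch, not any vendor's actual classification rules)
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def classify(record: str) -> set:
    """Return the set of sensitive-data categories found in a record."""
    return {name for name, pat in PATTERNS.items() if pat.search(record)}

def needs_encryption(record: str) -> bool:
    """Selective protection: flag only records containing regulated data,
    rather than encrypting everything wholesale."""
    return bool(classify(record))
```

The point of the sketch is the workflow — scan first, then apply encryption only where regulated elements actually live — which is far cheaper than blanket encryption of entire volumes.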
At the end of the day, DR and security are twin sons of different mothers. They are both required to help prevent the most common generators of downtime in IT today: localized hardware faults and logic errors including virus code, malware and ransomware.
I received a call a couple of days ago from a New York Times writer on deadline asking what I thought about the breakthrough in storage being hailed by some researchers at the City University of New York. They were talking about storing data in diamonds. Crappy diamonds with a lot of imperfections. But diamonds still.
HERE is the NYT article. I suppose I was a bit more curmudgeonly than usual. After all, I am working with clients who are desperately trying to find practical solutions to coping with the data deluge…and the data apocalypse that analysts place just down the road in 2020. It would be great if diamond storage, or DNA storage, or carbon nanotubes, or that tech we have been awaiting since the Kennedy administration — holographic storage — would be commercially viable in time to handle the 60 zettabytes of data we will need to find a way to store in just a couple of years. Sadly, I think all of these technologies will come too late to store all the bits and pixels.
Right now, two key ingredients for weathering the zettabyte apocalypse are tape — with Barium Ferrite coatings, LTO has already been demonstrated by IBM and Fujifilm to have a capacity with current technology of 220TB per cartridge uncompressed — and cognitive data management — which involves the automation of things like intelligent tiering and data management keyed to data value and compliance requirements.
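What “intelligent tiering keyed to data value and compliance requirements” might look like can be sketched in a few lines — a minimal illustration of my own, not any vendor’s product, with thresholds and tier names chosen purely for the example:

```python
from datetime import datetime, timedelta

def choose_tier(last_access, under_retention_mandate, now=None):
    """Toy cognitive-data-management policy: place hot data on flash,
    warm data on disk, and cold data on tape. Data under a compliance
    retention mandate is pinned no deeper than disk so it stays quickly
    retrievable (an assumption for illustration, not a universal rule)."""
    now = now or datetime.now()
    age = now - last_access
    if age < timedelta(days=30):
        return "flash"
    if age < timedelta(days=365) or under_retention_mandate:
        return "disk"
    return "tape"
```

In a real deployment the policy inputs would come from metadata catalogs and compliance classifications rather than a single timestamp, but the principle is the same: the tiering decision is automated and keyed to data value, with tape as the economical deep tier.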
I have a workshop scheduled for January with Virtualization Review on the latter topic and I will be talking tape (again) at CA World in a couple of weeks. Hope to see some of you there.
Readers of this blog may recall past entries in which we argued that mainframes were, by all coherent definitions of the term, “clouds” in and of themselves. This message seemed to be getting more play while we were at IBM Edge 2016 at MGM Grand a month or so back…especially from Steven P. Dickens, Senior Offering Manager for IBM Cloud for z Systems and LinuxONE.
Here is an edited version of a rather extensive chat I had with Mr. Dickens regarding IBM’s plans to deliver “Mainframe as a Service” in a cloud. As always, his commentary was on point and very Brit.
I suppose I am something of a mainframe bigot, sensing as I often do that the “latest and shiniest new technologies” are more often than not just a rediscovery of things we have done on mainframes for years. Virtualization? Been there. Software-defined storage? Done that. Multi-tenancy? Yawn. Cloud? Did it 30 years ago. Happy that the newbs are discovering all of this stuff again…for the first time!
Thank you to Steven for participating in this interview. And thank you to IBM for providing me with transport, lodging and free registration to the outstanding Edge 2016 event. While IBM funded these vBlogs, the responsibility for and ownership of edited content is entirely my own.
Following up on the previous post, we were delighted to travel to Berlin in late May to interview members of ASCDI, an organization for secondary IT equipment sellers that convened its annual worldwide meeting against that historic backdrop. ASCDI is doing the proverbial “Lord’s work” by setting a high bar for vendors in the secondary […]
This past Summer, I toured Europe and visited with long-time friends at ASCDI at their world conference in Berlin. ASCDI is an association for secondary market vendors who trade mostly in off-lease server, network and storage gear. Some vendors work on financing purchases while others provide equipment and/or service and support. Overall, the organization […]
In two days, I will be jetting up to Chicago (Rosemont, actually) for the PRISM International and Data Protection Association’s 2016 International Data Management Conference. You can learn about it HERE. I am speaking on Storage Trends. Hope to make it a bit entertaining so that these nice folks will invite me back another time. […]
This Summer has been the busiest I have had in nearly 10 years. I was all over Europe in May, after IBM Interconnect, then working on numerous projects, ranging from a supercast on Hyper-Converged Infrastructure appliances, to a day-long training course on Continuity Planning in the Cloud Era. That webinar, which you can replay HERE, […]
Before anyone gets the wrong idea, no, I have not changed my policy of not allowing advertising on my blog. I have agreed, however, to pass along some information for Rosewill, maker of great docking stations, drive enclosures and HDD docks that I use a lot in my own testing work. They are having a […]
If you are like me, you are still trying to get your head around the whole software-defined storage thing. Some folks insist that it is an entirely new architecture, but I remember running into one of the folks who manages IBM’s System Managed Storage at their IBM Edge show last year who shook her head […]