Watch Out, INTEL. Computation Defined Storage Has Arrived.

by Administrator on February 23, 2017

In a few hours, there will be a crescendo of noise around, of all things, a hardware platform. Yup, in these days of disdain for all commodity hardware and widespread embrace of software-defined everything, a major hardware event is about to happen.

The evangelists for the new tech are three faces that have been around the storage industry for about 30 years: Brian Ignomirello, CEO and Founder of Symbolic IO, Rob Peglar, Symbolic’s Senior VP and Chief Technology Officer, and Steve Sicola, Adviser and Board Member of the company. Together, they are introducing an extraordinary advance in server and storage technology that could well change everything in the fields of high performance computing, silicon storage and hyper-converged infrastructure. They call their innovation “Iris.”

Iris™ stands for INTENSIFIED RAM INTELLIGENT SERVER and it is trademarked for good reason.

Under the hood, there is so much intellectual property that I had to sign a pile of NDAs just to get an advance look when I flew to Symbolic IO headquarters, in what used to be Bell Labs, last week. Fortunately, you don’t need the non-disclosure because, as of midnight tonight, Iris is going to get a lot of exposure from the usual news outlets and analyst houses.  (It goes to general availability next month.)


Simply put, Iris changes the game on so much of what we take for granted today in computer design, server architecture and storage operations. Collectively, the innovations in Iris, which have been in development since before the company’s formal founding in 2012, stick a hot poker in the eye of INTEL, NVMe, and the whole HCI crowd.

With the introduction of Iris, it is as though server and storage technology just went through what Gail Sheehy called a “passage” or what Erikson, Piaget and Kohlberg termed a stage of psychosocial development. Just as healthy humans move through stages in life, usually signaled by a crisis, in which they reconsider past assumptions, discarding those acquired from parents, peers and society that no longer seem relevant and embracing new truths and directions for the future, so it is with Iris and the tech industry.

The crisis is real. Things are in disarray for tech consumers and vendors alike. We are creating data much faster than we can create the capacity to store it with current technology. We want to be able to share and collaborate using data, but the latencies of reads, writes and copies are getting in the way, and hypervisor virtualization has stressed out the IO bus. We grasp at straws, allowing INTEL to define NVMe as a de facto standard: vendors want to push silicon into data centers tomorrow, and relying on each flash storage maker to define its own device drivers and controller logic was delaying adoption, compromising vendor profitability and exposing the whole silicon storage market to rampant balkanization.

Iris is what happens when the crisis above forces good engineers to question old assumptions and to discard those that no longer apply. For example…

  • Why are we using simplistic common binary to store (logically and physically) bits on storage media? Why not use a more elastic and robust algorithm, using fractals for example, to store more data in the same amount of space? That is analogous to the way data is stored using DNA, which packs far more content into a much smaller space.
  • Why are we pushing folks to deploy flash memory on a PCIe bus and calling that a “huge improvement” over installing flash behind a PCIe bus-attached SAS/SATA controller? While doing so yields a performance improvement, isn’t that the same dog, with different fleas? Why not put storage directly in the memory channel instead?
  • Why do we continue to use cumbersome and self-destructive file systems that overwrite the last valid copy of data with every new save, a reflection of a time when storage cost several hundred thousand dollars per gigabyte? Why not use a richer recording algorithm that expedites first write, then records change data for subsequent versions in a space optimized manner?
  • And in these days of virtual servers and hypervisor computing, why don’t we abandon silos of compute and storage created by proprietary hypervisors and containers in favor of a universal, open workload virtualization platform that will run any virtual machine and store any data?
  • And finally, why pretend that flash is as good or as cheap as DRAM for writing data? Why not deliver write performance at DDR4 speeds (around 68 GB/second) instead of PCIe G3 throughput speeds of 4.8 GB/second?
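To make the third question above concrete, here is a minimal sketch of an append-only store that writes a full copy of an object once and then records only change data for later versions, so no save ever overwrites the last valid copy. This is purely illustrative: the class name and the naive byte-level delta format are my own invention, not Symbolic IO’s actual recording algorithm.

```python
# Illustrative append-only versioned store: the first save records the full
# object; later saves of the same size append only a compact change record.
# No write ever destroys the previous valid copy. (A sketch, not Iris's design.)

class VersionedStore:
    def __init__(self):
        self.log = []  # append-only log of ("full", bytes) or ("delta", list) records

    def save(self, data: bytes):
        if not self.log:
            self.log.append(("full", data))
            return
        prev = self.read()
        if len(prev) != len(data):
            # Naive fallback: record a full copy when the object changes size.
            self.log.append(("full", data))
            return
        # Byte-level delta: (offset, new byte) for every position that changed.
        delta = [(i, data[i:i + 1]) for i in range(len(data))
                 if data[i:i + 1] != prev[i:i + 1]]
        self.log.append(("delta", delta))

    def read(self, version: int = -1) -> bytes:
        # Replay the log up to the requested version; -1 means latest.
        records = self.log if version == -1 else self.log[:version + 1]
        current = b""
        for kind, payload in records:
            if kind == "full":
                current = payload
            else:
                buf = bytearray(current)
                for offset, byte in payload:
                    buf[offset:offset + 1] = byte
                current = bytes(buf)
        return current
```

Because every record is an append, any earlier version can still be reconstructed by replaying the log, which is exactly the property the overwrite-in-place file systems of the bullet above throw away.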

Ladies and gentlemen, welcome to Iris. Those who read this blog regularly know that I am as critical of proprietary hardware as the next guy and have welcomed the concept, if not always the implementation, of software-defined storage as a hedge against vendor greed. But, from where I am standing, this “Computation Defined Storage” idea from Symbolic IO has so much going for it, I can’t help but find myself enamored with the sheer computer science of it.

They had me at I/O.   But they are holding my attention for the many other innovations that they have put into the kit, including a remarkable, new DRAM-3D NAND hybrid storage target, a rich open hypervisor, an OS that changes the game with respect to data encoding and data placement, and a REALLY COOL technology for data protection via replication called BLINK. 

Watch this space for more information about Iris.


Is It 2017 Already?

by Administrator on February 23, 2017

Like the old saying goes, “Time flies when you’re having fun.”  I am not sure whether it has all been fun, but I have been extraordinarily busy for the past few months…as my absence here might suggest.

There were ups and downs as 2016 came to a close.  We held two really good day-long (or at least many-hours-long) web-based workshops with 1105 Media and Virtualization Review: one covering Data Protection and Disaster Recovery in the Cloud Era, based on the Data Management Institute’s Certified Data Protection Specialist™ (CDPS™) courseware, and another covering Cognitive Data Management, based on DMI’s Certified Data Management Professional (CDMP) certification training.  To our delight, each workshop had over 800 registrations and up to 300 attendees who stayed for the full expanse of the training program.  We are working on two more for presentation in the coming months: one in April on Data Protection and Data Security, the second on Data Archiving.  Watch this space.

We had some challenges too in these recent months.  For one thing, a UPS exploded in my office, starting a small fire and spraying walls and desks with lead acid.  Does anyone here know what a couple hundred TBs of storage weighs?  I threw out my back lifting and hauling servers and storage into another room so we could get things cleaned up, repainted, etc.  I also took the opportunity to begin redesigning my websites.  Data Management Institute and Toigo Partners International should both be up and running by end of month.

I was gifted a couple of decommissioned arrays and worked through the holidays to rebuild them, then to virtualize them with DataCore SANsymphony-V.  We are still plodding along in that endeavor, owing to the need to migrate older data and to find controllers for some of these arrays.  eBay has been a treasure trove.

Of course, the biggest setback was the loss a few weeks ago of one of my mentors and a friend of this blog, Ziya Aral.  Ziya and I have been acquainted since 1997, when he helped to form DataCore Software.  As engineers go, he was one of the best — delivering technology solutions that were ahead of their time and reviled by long-standing fixtures in the storage hardware world.  To his credit, not only did he presage the current fascination with software-defined and hyper-converged storage, but he also rocked the storage world with his innovative adaptive parallel I/O technology that made commodity kit run like a supercomputer.  Ziya was one of those guys who wasn’t just about the benjamins; he actually had the needs of consumers in mind and never forgot his roots.  I am missing him badly and wish his family, and his many friends, well as they work through their grief.



Ziya would not have wanted me to end this post on a somber note, so here are some points that are raising my spirits.  First, I like the way that many companies have been deconstructing the idea of software-defined storage.  Ziya and the gang had already done violence to the VMware/Microsoft ideas of what SDS storage was by including functionality in their SDS stack for RAW IO acceleration (their aforementioned adaptive parallel IO technology) and for virtualizing storage mount points.  These two innovations allowed their technology to be completely hardware and hypervisor agnostic.

Acronis came to market a month or so back with an announcement of their own SDS stack that also features BlockChain support.  BlockChain is one of the coolest ideas I have heard in the cloudy world in some time.  IBM is working furiously on it — which is a big part of the reason why I will be going back to IBM Interconnect in Vegas at the end of March.  Some of you should register for that event if you have time.  It is well worth the cycles.  Look for me outside the venue shooting more video interviews with IBMers.



I have also become very enamored with Strongbox Data Solutions (SDS) out of Canada.  Their cognitive data management play, StrongLINK™, is way ahead of competitors and I really like what they are doing to bring capacity utilization efficiency to storage infrastructure.  We need to get a lot better at managing bits if we are going to cope with the deluge of them — 10 to 60 Zettabytes worth — in a matter of three years.  David Cerf and his crew are doing a yeoman’s job of creating an Uber-controller for unstructured data that works across all storage types and all file and object types.  Very cool.  I will be writing a lot more about it shortly.

Finally, I have recently been exposed to a new hardware technology that I see as a potential game changer for servers and storage — quite possibly the first intelligent hyper-converged cobble in the market.  The product is called Iris™, it is from Symbolic IO, and I will post about it next.

Back to the grind.


Halloween has passed.  In its wake is an inevitable sugar crash.  If you have kids, you welcome this side effect to curb all of that frenetic energy and to provide a spell of peace and quiet.  In IT, a sugar crash often follows the acquisition of new gear or the implementation of a new application or OS.  During this brief period of calm, we settle into the new kit or process or workflow, content that we have the shiniest new thing until the next shiny new thing.

In some cases, we start to second guess the decision we just made.  Did we really need that shiny new storage box?  Could we have saved ourselves the money if we had just gone around and cleaned up all of the worthless data we were storing and bought back that capacity?  Seriously, at least 70 percent of the data we are storing is a combination of archival data that belongs on tape, not on flash or disk, and contraband, orphan and copy data that can be greatly reduced through good data hygiene.  Some would say that those kinds of thoughts are instances of second guessing, buyer’s remorse, premarital jitters even.  I say they are sensible and that we ought to heed them before we go out and buy new stuff.

I was reminded of that when I ran across my friend of decades, Ken Barth, at IBM Edge 2016.  Ken is currently honcho-ing up Catalogic Software, whose flagship product cleans up the mess left by data protection processes that generate and leave in their wake ceaseless copies of data, for protection or sharing, filling our precious storage capacity.  I was delighted when Ken agreed to do an interview with me to post here.  After listening to what he had to say, I am sure you will find him to be a treat — perhaps so sweet that his copy management software will move you from sugar crash to sugar rush!


Catalogic is an IBM Business Partner.  Their former CEO, Ed Walsh, recently moved over to IBM to assume a senior position in storage there, so Ken, a major investor and former CEO of companies like Tek Tools, stepped into the management role.  Expect great things from this energetic evangelist in the days ahead.

Thank you to IBM for having me as a guest at their soiree.  They covered my transportation and lodging at IBM Edge 2016, and remunerated me for live tweeting their general sessions.  This video blog is my own work and does not necessarily reflect the views or opinions of Big Blue.




Just in time for Halloween, long-time friend Ed Childers, who also happens to be IBM’s LTFS Lead Architect and Tape Development Manager, agreed to be interviewed at this year’s IBM Edge 2016.  Ed caught us up on all things tape, from the realization (finally) of the long-predicted Renaissance in tape technology to the latest developments in tape-augmented flash and disk storage.  Have to admit it, Ed is my brother from another mother.

Childers has been doing a Rodney Dangerfield impersonation for the last couple of years — tape just wasn’t getting the respect it deserved from the user community or the industry.  But with the “zettabyte apocalypse” around the corner, tape is suddenly very sexy.

Regular readers will recall that the zpocalypse to which we refer isn’t a Halloween novelty; it is real.  According to leading analysts, we are expecting between 10 and 60 zettabytes (1 zettabyte = 1000 exabytes) of new data to hit our combined storage infrastructures by 2020.  This has cloud farmers and large data center operators quite concerned.  Back-of-envelope math says that only about 500 exabytes of capacity per year can be manufactured by all flash chip makers collectively, while output from disk makers hovers somewhere around 780 exabytes per year.  Taken together, that annual output amounts to roughly 2 percent of the capacity required at the upper limit of projected data growth.
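The back-of-envelope arithmetic can be checked in a few lines (the capacity figures are the analyst estimates quoted above, not independent measurements):

```python
# Rough check of the zpocalypse arithmetic using the estimates quoted above.
flash_eb_per_year = 500   # estimated annual flash capacity output, exabytes
disk_eb_per_year = 780    # estimated annual disk capacity output, exabytes
demand_zb_upper = 60      # upper-bound projection of new data, zettabytes

total_eb = flash_eb_per_year + disk_eb_per_year  # 1280 EB/year combined
total_zb = total_eb / 1000                       # 1 ZB = 1000 EB
share = total_zb / demand_zb_upper * 100         # percent of demand met annually

print(f"{total_zb:.2f} ZB/year is about {share:.1f}% of a {demand_zb_upper} ZB demand")
# → 1.28 ZB/year is about 2.1% of a 60 ZB demand
```

In other words, a year of combined flash and disk output covers only a couple of percent of the high-end projection, which is the gap tape is being asked to fill.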



The only way that we will possibly meet the demand for more storage is by using tape.  With 220 TB LTO Ultrium cartridges within striking distance, smart cloud and data center operators are already exploring and deploying tape technology again.  Ed is now officially the guy who women want to meet and men want to be.  Here are some of his observations.


Thanks, Ed Childers, for taking the time to chat with us.  And thank you to IBM for inviting us to attend Edge 2016 and for making some of your best and brightest available for these video blogs.

For the record, IBM covered the costs for our attendance at IBM Edge 2016 and they gave us a small stipend for live tweeting their general sessions.  The content of these video blogs and other opinions on this post are ours exclusively.

For those who are not familiar with our take on the zpocalypse, here is a refresher, starring Barry M. Ferrite…


And here is the follow-on video…



Thanks again to Ed and to IBM. Great show, that IBM Edge!



Vblogs from the Edge: Zoginstor talks Software-Defined Storage

October 31, 2016

When you talk to someone who wears the handle “Vice President of Storage and Software Defined Infrastructure Marketing,” you would think that you are going to get an earful about SDS and hyper-converged infrastructure, and maybe some hype about how it is the shiniest of shiny new things.  However, IBM’s Eric Herzog (known on Twitter as @Zoginstor) […]

Read the full article →

Vblogs from the Edge: Stouffer Compresses Some Thoughts

October 31, 2016

Brandishing a title like Director, Storwize Offering Manager and Business Line Manager, one would expect Eric Stouffer to spend his 15 minutes of fame (about the time it takes to shoot a short interview) waxing philosophical about the benefits of data compression (what Storwize technology is all about).  The interesting thing about Stouffer is that […]

Read the full article →

Vblogs from the Edge: IBM Storage in Nordic Shops

October 31, 2016

At IBM Edge 2016, I had the good fortune to cross paths with Mathias Olander, a Software Defined Solutions Sales Representative for Big Blue in the Nordic region.  A brief discussion about the appetite for software-defined technologies in Northern Europe became a bit more philosophical, perhaps because of the frequent pauses we needed to take in […]

Read the full article →

Vblogs from the Edge: IBM Spectrum and Data Management

October 31, 2016

Those of you who are following my writing these days may have noticed that I am very interested in the topic of data management, especially when the word “cognitive” is inserted as an adjective ahead of the expression.  Like so many concepts introduced by IBM years ago, the idea of data management is coming back […]

Read the full article →

The Ephemeral Cloud

October 27, 2016

A bunch of years ago, IDC and Gartner promised that if we were all good little girls and boys and said our prayers every night and abandoned legacy infrastructure for clouds, CAPEX spending for IT would all but end.  Plus, there would be peace on earth, a chicken in every pot, a corner office for […]

Read the full article →

Getting Continuity Planning Right…

October 27, 2016

Having a bit of experience in disaster recovery planning, I have often commented on the failure of the industry to get its collective act together and to combine the discipline of security planning with the continuity practice.  For a number of reasons, an artificial distinction has settled in that, frankly, makes sense only to those […]

Read the full article →