
Actifio and DataCore Software: A Tale of Two Solutions

by Administrator on June 26, 2012

Had an introductory briefing this AM with Actifio, a company brought to my attention by IBM.  Their management team includes lots of folks I have known (or known of) over the years from EMC, Storage Networks, Dell, FalconStor, etc.  They have $57 million in venture money to chase what they describe as a $34B market opportunity.  Their goal is to do something about the 50% of disk that is being used to make copies of data stored on the other 50%.  In other words, the Lord’s work.

They showed me lots of graphs on storage growth and capacity utilization inefficiency, which typically endears a vendor to me.  (Oh look, isn’t that cute:  a storage vendor that wants us to manage our junk drawers better so they don’t have to sell us so much disk…)  And they have a sexy GUI (developed by a New York designer instead of the same folks who do GUIs for everybody in the Boston area…) that wrangles a bunch of data reduction and replication technologies into a single kit.  Beyond the obvious point — that they are trying to set up a service engine that will enable the application of data protection and reduction policies for every workload detected, managed from a common interface — the presenter wasn’t technical and couldn’t drill down into explanations of the terminology he was using, like object file system.  He suggested that their box used an object model for data management that sounded good, but he didn’t have the details.  Another call is being set up to chat with his technical team.

After his presentation, I asked him who he sees as his competition.  I got the response that always sends a red flag up the pole for me:  We have none.  Surely, he saw single-pane-of-glass data protection solutions like Continuity Software’s RecoverGuard, or Neverfail Group’s Neverfail, or CA Technologies’ Replicator (formerly XOsoft) as competitors?  It seemed he wasn’t familiar with these products.

As for aggregating all data protection functions across infrastructure, I wondered if he saw DataCore Software’s SANsymphony V as a competitor.  He said that they haven’t encountered them in competitive bids thus far.  I suspect he may be correct in the large enterprise accounts and cloud service provider accounts that the company has initially targeted for its sales, but that is about to change, IMHO.

DataCore released version 9 of SANsymphony-V today.  Their boss is making the rounds in Europe right now, talking to media and partners and making as much noise as he can.  I use SANsymphony-V today and I like what has been done in Release 9.  In some respects it seems to be ahead of Actifio’s curve and should make for an interesting use case comparison.

I dropped SANsymphony over all of the storage in my labs about two years ago.  I had a lot of heterogeneous hardware, including FC connected and iSCSI connected rigs, and after temporarily moving data off of each rig, allowed DataCore to take it over…that is, write 0’s to every disk in each array, claiming its capacity into a pool.  To do this, DataCore was hosted on a set of clustered Windows 2008 R2 servers and positioned between the app servers and the storage infrastructure.  I loaded up each server with DRAM (a lot cheaper and more resilient than Flash SSD) because it buffers data writes from app servers before managing the data onto physical disk hardware, and that has given me a 3-4x bump in I/O performance from all the rigs beneath.
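The write-buffering behavior described above — acknowledge the app’s write once it lands in DRAM, destage to physical disk later — is why the I/O numbers jump.  Here is a toy sketch of that idea (this is my illustration, not DataCore’s actual implementation; all class and method names are invented):

```python
# Toy model of a write-back DRAM cache sitting between app servers and
# slower physical disk. Writes are acknowledged from memory and destaged
# to the backing store in batches. Names here are invented for illustration.

class WriteBackCache:
    def __init__(self, backing_store, flush_threshold=4):
        self.backing_store = backing_store   # dict standing in for "disk"
        self.buffer = {}                     # dirty blocks held in DRAM
        self.flush_threshold = flush_threshold

    def write(self, block_id, data):
        # The app's write completes as soon as it lands in DRAM.
        self.buffer[block_id] = data
        if len(self.buffer) >= self.flush_threshold:
            self.flush()

    def read(self, block_id):
        # Serve from DRAM if the block is still dirty, else from disk.
        if block_id in self.buffer:
            return self.buffer[block_id]
        return self.backing_store.get(block_id)

    def flush(self):
        # Destage all dirty blocks to the physical array in one batch.
        self.backing_store.update(self.buffer)
        self.buffer.clear()

disk = {}
cache = WriteBackCache(disk)
cache.write("blk0", b"hello")            # acknowledged from DRAM
assert cache.read("blk0") == b"hello"    # read hits the cache
assert "blk0" not in disk                # not yet destaged to "disk"
cache.flush()
assert disk["blk0"] == b"hello"          # now on the backing store
```

The trade-off, of course, is that data acknowledged but not yet destaged lives only in memory, which is one reason the DataCore nodes run clustered rather than standalone.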

Anyway, the process of implementing DataCore on my existing infrastructure was methodical and user-friendly.  Only one storage rig, a Drobo, didn’t like DataCore, since it features its own on-board thin provisioning function that balks when you get past a certain percentage of its overall capacity.  DataCore’s zeroing is interpreted as data being written to the Drobo.

If I had been smart, I would have turned off all RAID on every disk array and allowed DataCore to handle replication.  But, since some customers prefer to keep their rigs RAIDed even when they are virtualized (theoretically providing even more fault protection — an illusion), I kept my RAID settings where they were for each kit.

When I was finished, I had set up a couple of pools that didn’t care a lick whose vendor logo was on the outside of the box.  My performance pool used X-IO ISE arrays, which were already speed demons but whose I/O was doubled by DataCore’s adaptive caching (remember the DRAM).  My capacity pools used a mix of Promise and other vendor FC and iSCSI mounts.

So, everything was virtualized.  Next, I directed output from each of my applications to assigned targets — virtual volumes — created from each pool and turned on services for data protection and management appropriate to the needs of each app.  In most cases, this was as simple as ticking a checkbox.  Thin provisioning, invented by DataCore, was established across all disks in the pool rather than being isolated to specific rigs with thin provisioning software in their controllers.  The list of services in Release 9 keeps getting more robust.
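The point of pool-wide thin provisioning is that a volume advertises a large logical size while physical extents are claimed from the shared pool only when blocks are actually written.  A rough sketch of the mechanism (again my own toy model with invented names, not DataCore’s code):

```python
# Toy sketch of pool-wide thin provisioning: virtual volumes report a
# large logical size, but physical extents are taken from the shared
# pool lazily, on first write. All names are invented for illustration.

class ThinPool:
    def __init__(self, physical_extents):
        self.free_extents = physical_extents

    def allocate(self):
        if self.free_extents == 0:
            raise RuntimeError("pool exhausted")
        self.free_extents -= 1

class ThinVolume:
    def __init__(self, pool, logical_extents):
        self.pool = pool
        self.logical_extents = logical_extents  # size advertised to the app
        self.mapped = set()                     # extents actually backed

    def write(self, extent_no):
        if extent_no >= self.logical_extents:
            raise IndexError("write past end of volume")
        if extent_no not in self.mapped:
            self.pool.allocate()                # claim physical space lazily
            self.mapped.add(extent_no)

pool = ThinPool(physical_extents=100)
vol_a = ThinVolume(pool, logical_extents=1000)  # 10x oversubscribed
vol_b = ThinVolume(pool, logical_extents=1000)
vol_a.write(0); vol_a.write(1); vol_b.write(0)
assert pool.free_extents == 97                  # only 3 extents consumed
```

Doing this across the whole pool, rather than inside one array’s controller, is what lets the oversubscription span every rig regardless of the logo on the box.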

So far, the platform has worked flawlessly with virtual and physical workloads.

Now, why is R9 important, aside from adding a bunch of new services?  I think that this is the unification of two solutions — SANsymphony and SANmelody — that DataCore wanted to release when they came out with V a year and a half ago.  However, all of the work hadn’t been done at release time and some functionality was still missing.  Also, I suspect that DataCore was waiting for customers to tell them if they wanted all of the features from each of the previous products included in V, or if some functions were seldom used or needed and could be dropped.  Turned out, we customers wanted all of the features and more — which is what you get with R9.

I have heard through the grapevine that DataCore attracted boatloads of new customers when V was released, but they were mostly in small to medium firms that were excited to be able to upgrade SANmelody to an “enterprise class” solution without a lot of additional cost and virtually no pain.  By contrast, large firms were slow to buy into V, in the main because their most recent hardware refresh had already occurred and they didn’t want to turn off some of the hardware services they had paid through the nose for with their “enterprise rigs.”  (I keep putting quotes around enterprise because I am not sure that the term has any meaning anymore, other than as shorthand indicating that you will be paying a lot more money for the same commodity disks, trays and boxes when we call it an “enterprise” solution.)

Also, there was some pushback because Data Deduplication was the shiny new thing a couple of years ago and DataCore didn’t list it as one of its services.  (By contrast, Actifio focuses heavily on data reduction as a service of its appliance.)  R9 doesn’t add reduction to its services menu, but I know many large firms that have lost interest in dedupe today and simply won’t see this as a gap in whatever checklist they are using to vet storage technology.

Another thing that made DataCore the sweetheart of the smaller and medium sized firm was how readily it worked in VMware and Hyper-V settings.  Smaller firms drank the Kool-Aid of server virtualization early on as a potential cost-saver (it wasn’t), and the smart ones virtualized their storage infrastructure to make it more flexible in the face of vMotioning workloads.  They didn’t need to reinvent their storage:  DataCore’s virtual volumes were accessible to workloads before and after they transitioned between kits — plus the I/O acceleration provided by SANsymphony helped to some extent to deal with the I/O chokepoint that VMware introduces.

Larger firms seemed to have lagged behind in seeing the natural advantages of virtualized storage in a virtualized server environment.  Or maybe they just hadn’t felt the painful impact of server virtualization on storage infrastructure (especially hardwired fabrics).  Or maybe they believed the woo from their hardware vendor that their latest kit’s support for VAAI (9 nonstandard SCSI commands introduced arbitrarily by VMware) was all you needed to make performance issues go away (they haven’t gone away).

Anyway, I have high hopes that DataCore will continue to gain mindshare in large shops with this SANsymphony-V on steroids release.  They are already in some impressive and large accounts, but most of the growth they have been seeing recently has been in smaller shops…oh, and in “clouds.”

In fact, press materials around DataCore SANsymphony-V R9 and Actifio are both heavy with cloudspeak, which I personally don’t care about one iota.  Private cloud is a metaphor for better infrastructure management, and I think DataCore gets this.  Dare I say that SANsymphony-V does a great job in aggregating storage services so they can be applied judiciously to the data coming from specific apps.  So does Actifio with the appliance they showed me today, albeit with somewhat different services. So, I guess I’m good with private cloud woo.  I just wish they would call it something else:  instead of a cloud strategy, why not call it a “fog strategy?” (Fog is a cloud when it comes down to the earth, get it?)

As for public clouds, these are just external outsourcing services — like ASPs/SSPs in the 90s or Service Bureau Computing vendors in the 80s.  If you want to use them, fine.  Do so at your own risk.  But remember Networking 101 — all of those inconvenient truths about distance-induced latency and shared WAN facility jitter will still come back to bite you in the tuchus.  Also, remember cloud provider SLAs are like the lane lines painted on roads in the city of Rome — they serve as a suggestion at best.
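The Networking 101 arithmetic is worth spelling out: any protocol that waits on acknowledgments can move at most one window of data per round trip, so latency alone caps throughput no matter how fat the pipe.  The numbers below are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope: round-trip latency caps the throughput of any
# request/acknowledge protocol. Window size and RTT figures below are
# illustrative assumptions, not measurements of any particular service.

def effective_throughput(window_bytes, rtt_seconds):
    # At most one window of data can be in flight per round trip.
    return window_bytes / rtt_seconds

window = 64 * 1024        # 64 KB in flight per round trip
lan_rtt = 0.0005          # 0.5 ms on the local SAN/LAN
wan_rtt = 0.040           # 40 ms to a distant cloud provider

lan = effective_throughput(window, lan_rtt)   # ~131 MB/s
wan = effective_throughput(window, wan_rtt)   # ~1.6 MB/s

# Same gear, same pipe width: the WAN path is ~80x slower purely
# because of distance-induced latency.
assert round(lan / wan) == 80
```

Jitter on shared WAN facilities makes the real picture worse, since the effective RTT varies from one round trip to the next.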

If you do want cloud storage to augment your infrastructure, both Actifio and DataCore offer some solutions.  DataCore has some alliances they have been talking about for a while with on-ramp product providers like TwinStrata that, I suppose, might be useful for connecting to an external storage cloud.

Certainly, DataCore would be de rigueur if I were building a cloud storage business myself.  In fact, if both the external storage service providers you want to use and your local infrastructure are running SANsymphony-V today, one of the larger problems of cloud storage — the incompatibilities of different service providers in a standards-free world of cloud computing — would be effectively nipped in the bud.  In a sense, SANsymphony-V could provide the Rosetta Stone that enables data interchange and storage integration from multiple sources.

Anyway, that’s how I see these products.  I am awaiting more information from Actifio regarding their object storage model, and frankly I would like to see both products deliver a greater integration with tape — particularly TapeNAS.

One last thing:  data management means more than capacity management.  While Actifio seems to use rhetoric suggesting that they are chasing the data management dragon — mainly by consolidating copy data and exposing it to data reduction techniques — this is not data management.  Dedupe is a capacity allocation efficiency play, not a capacity utilization efficiency play.  I think that DataCore is moving more toward adding true data management services to its virtual controller, at least insofar as it supports seamless tiering between performance and capacity pools.

That’s it for now.

5 comments

Pq65 June 27, 2012 at 9:41 am

Jon,

The title says “…A tale of two solutions” but the post ended up becoming a DataCore Ad.

C’mon, some of us know you like DC, but at least try to compare and contrast. Make it interesting for the rest of us who keep reading the blog.

That said, Actifio, to me is an extension of IBM, not only from a mgmt viewpoint but also from a technological one, given that they leverage IBM HW, leverage the SVC support matrix, and license some IBM IP which is used with Actifio IP.

My first reaction when I saw PAS was to ask why this is something “new,” and what it can deliver that the DataCores and the FalconStors of the world can’t. What separates Actifio and these guys is a very blurry line…

Administrator June 27, 2012 at 11:00 am

Thanks for the feedback. I agree with just about everything you said. Didn’t mean to put on a DC commercial, but I was covering two announcements in one post. More info is needed on the Actifio play, which I am scheduling. As noted, the evangelist didn’t offer much technical detail. By contrast, the DataCore announcement was important to readers and given my intimate familiarity with their wares as a user, I figured I would spell out the context for their announcement — otherwise, it is just another upgrade story. I will try to do better in the next post.

avandewerdt June 28, 2012 at 7:12 pm

Hi Jon.

I work for Actifio out in Australia. It would be great to see a follow up article after you get a more detailed technical briefing from one of our many tech-heads. Our product has a unique combination of capabilities and it is that combination (that bringing together of point tools) which sets us apart from our many competitors (including IBM).

I would also love to see a clear list of what new features are in this latest release of DC and how that compares to other Storage Virtualization platforms like IBM SVC, EMCs new capabilities and HDS USP-V.

Mox55 December 13, 2012 at 5:43 pm

All you need to know about Actifio’s architecture is explained in the 13.4MB IBM SVC Redbook implementation guide. What a franken-freak! Why would I add little appliances into my ENTERPRISE to move data? Seriously, go look at some of the diagrams in that Redbook. Plus extents? Extents? Plus FC based. It’s a hairball. Support will be a nightmare. It’s primarily IBM hardware and software, with an APPLE-derived interface. I agree it’s a large market space, but not for such an old design. Nice trojan horse attempt, IBM!

pipik1199 January 10, 2013 at 2:36 pm

Where are the follow-up posts on this? I am very curious to find out more about Actifio’s dependence on, and IP legacy from, IBM.
