CA ARCserve 12.5

by Administrator on July 13, 2009

Shortly, CA will begin distributing a report we completed in our labs last week after testing ARCserve 12.5. One thing I liked about this release was its integrated de-dupe functionality. Here are our performance results, for anyone who is thinking about de-duplicating backups.

Figure: ca-dedupe-performance (ARCserve 12.5 de-dupe performance test results)

Extrapolating the results linearly:

  • Regular backup: 1TB x 4 full backups per month = 4TB of required space
  • De-dup’d backup: 1TB x 4 full backups per month, assuming each subsequent full de-dupes down to 5% of its size = 1.15TB of required space (1st full = 1TB, then 3 subsequent fulls at 5% = 0.15TB; see the sketch below)
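
For anyone who wants to redo the math with their own numbers, here is a minimal Python sketch of the arithmetic above; the function name and the ratios are illustrative assumptions, not output from ARCserve.

    # Back-of-the-envelope space math for a month of full backups.
    def space_required_tb(full_size_tb, fulls_per_month,
                          first_ratio=1.0, subsequent_ratio=1.0):
        # first_ratio: fraction of the first full actually stored (1.0 = no de-dupe)
        # subsequent_ratio: fraction of each later full actually stored
        first = full_size_tb * first_ratio
        rest = full_size_tb * subsequent_ratio * (fulls_per_month - 1)
        return first + rest

    # Regular backups: 4 x 1TB fulls
    print(space_required_tb(1.0, 4))                         # 4.0 TB

    # De-dup'd: first full stored whole, later fulls shrink to 5% of their size
    print(space_required_tb(1.0, 4, subsequent_ratio=0.05))  # ~1.15 TB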

Why is EMC spending billions on Data Domain to do this stuff in an appliance if we get the functionality for free as part of ARCserve (and in the future products from Symantec, VMware, IBM Tivoli, and eieio)?


nitroxer July 14, 2009 at 1:34 am

It would be nice to see a better comparison between hardware (DDUP, QTM, etc.) and software (Symantec, CA, etc.). I haven’t seen anything that gives me a reasonable basis for comparing cost, aggregate performance, and stream performance, either in absolute terms or on a cost-adjusted basis.

Your thoughts if EMC repositioned DDUP as a VMware virtual machine offering?

Perhaps with integration into their cloud backup offerings; that would seem likely to steal some TCO thunder from CA and Symantec.

josephmartins July 15, 2009 at 2:52 am

Either the math is wrong or ARCserve failed to de-dupe the initial data.

The first full backup of 1TB using any decent de-dupe should not consume 1TB. Of course it really depends on the data composition in the 1TB seed data set. Still, I would expect to see some de-duplication as opposed to none at all.

For example, backup 1TB x 4 dedup’d backups per month (assume, say, 50% initial and 5% subsequent compression) = 0.65TB of required space. (1st full = 0.50TB, then 3 subsequent fulls at 5% = .15 TB).

You might want to check with the folks at ARCserve. Something is not right.

Administrator July 15, 2009 at 7:40 pm

Point taken, Joseph. My extrapolation was wrong, and the result should be even better than shown. We did see some reduction on the first full in our 30GB test, though not 50%, and that was not factored into the extrapolation. Thanks for catching that.
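
For anyone following along, here is Joseph’s corrected version of the arithmetic as a quick sketch; the 50% and 5% ratios are assumptions for illustration, not measured figures.

    # Assumed ratios: first full de-dupes to 50% of its size, each later full to 5%.
    first_full_tb = 1.0 * 0.50            # 0.50 TB
    subsequent_tb = 1.0 * 0.05 * 3        # 0.15 TB
    print(first_full_tb + subsequent_tb)  # ~0.65 TB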

Administrator July 15, 2009 at 7:42 pm

Nitroxer, I am not following your questions. Would a DD workload work well in VMware? I don’t know. Would that solve the management and parsing problems referenced by users of DD silos? I don’t know. Why would you want to position DD as a cloud service offering? I really don’t know.
