
Testing Foibles

by Administrator on January 10, 2006

Fair and open performance testing is a big goal of mine. We have posted about it here several times, and it was the subject of my recent article on Byte & Switch. Here is an interesting note just received from a reader. I thought the story was worth sharing, though I have withheld the sender's identity for his own sake…

Hi,

I just read your article “Wanted: Trustworthy Test Data” and want to offer my experiences from a recently completed bake-off. My client is a very well known company (hint: an analyst just made an exaggerated claim about its shares going to $600) and the evaluation focused on Microsoft Exchange. The vendors involved were EMC, HP, and NetApp, using the Clariion CX700, EVA8000, and FAS3050c respectively.

All of the vendors were given the same requirements and a copy of the test plan for review. The test plan was finalized only after receiving feedback and additional input from all 3 vendors.

The first challenge was deciding on the granularity of the requirements. My team and I decided to keep the requirements somewhat high-level, since the underlying architectures of all three varied slightly (e.g. dictating specific RAID group layouts would not have been relevant to the EVA’s “virtualization”). Also, due to the high profile of my client, we were supposedly given each vendor's top resources and support, so it was assumed they would be able to tune their systems optimally for all of our tests. What followed was a mess.

None of the vendors recorded and presented the test results properly. The Perfmon counters were incomplete, logs were missing, and so on. All three vendors ran out of time and could not complete the full suite of tests. Test requirements were interpreted differently even though they had been discussed before final submission. Adequate hardware and related testing equipment were not always provided (e.g. a latency simulator for the DR tests). Throughout the testing, my team was inundated with marketure and irrelevant competitive-analysis statements and slides. False accusations flew: Vendor A changed the configuration just for that one test, Vendor B provided an excessive amount of hardware, Vendor C is not Microsoft compliant, etc., etc., etc. I wrote down several of their claims and statements before and during the test and realized that all three of them eventually said something contradictory later on.

So, who did I pick from a technical perspective? HP. They were the most straightforward and their hardware/software won out on several (not all) of the tests. Did it matter? Absolutely not.

My client never nailed down the requirements, even after repeated requests to do so. More importantly, the mentality of the IT staff would always favor the incumbent (NetApp). The final justification for the “winner”? They already had spare parts and knew how to use them. So, tens of thousands of consulting dollars were wasted on a bake-off that did not really matter in the end.

Anyway, I was going to publish the results using generic vendor names in a case study for my company. However, paranoia took over and I decided against publishing ANY numbers, even though they would not be directly correlated with the vendors. The vendors know the numbers that were recorded and would therefore know which numbers refer to whom. BTW, you mentioned that there are clauses under which publishing performance numbers would violate the software license and/or warranty. Do you have anything that you can cut and paste and send to me concerning this? I find all of that quite amusing.

We are not amused by gag orders on publication of product test data — especially in a case like this one where the vendors perform the tests themselves…

For info on gag orders, just keep an eye on this blog. Some gag order clauses have already been cut and pasted here. And next time, give us your numbers: we’ll run them.
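For readers wondering what “recording the results properly” might look like in practice, here is a minimal sketch of the kind of sanity check a test team could run against each vendor's Perfmon log before accepting a submission. The counter names and the relog-to-CSV export step are our own illustrative assumptions, not details from the reader's letter:

    import csv
    import sys

    # Illustrative (assumed) list of counters a test plan might require for an
    # Exchange storage bake-off; adjust to whatever the plan actually specifies.
    REQUIRED_COUNTERS = [
        r"\PhysicalDisk(_Total)\Avg. Disk sec/Read",
        r"\PhysicalDisk(_Total)\Avg. Disk sec/Write",
        r"\MSExchangeIS\RPC Averaged Latency",
    ]

    def missing_counters(perfmon_csv_path):
        """Return the required counters absent from a Perfmon log that has been
        exported to CSV (for example with: relog input.blg -f CSV -o output.csv)."""
        with open(perfmon_csv_path, newline="") as f:
            header = next(csv.reader(f))
        # Exported columns look like "\\HOST\PhysicalDisk(_Total)\Avg. Disk sec/Read";
        # strip the leading machine name so they compare against REQUIRED_COUNTERS.
        captured = set()
        for col in header[1:]:              # header[0] is the timestamp column
            if col.startswith("\\\\"):
                col = "\\" + col.split("\\", 3)[-1]
            captured.add(col)
        return [c for c in REQUIRED_COUNTERS if c not in captured]

    if __name__ == "__main__":
        gaps = missing_counters(sys.argv[1])
        print("Missing counters:", ", ".join(gaps) if gaps else "none")

A check along these lines at the end of each test run would flag incomplete counters and missing logs long before the final write-up, rather than after the vendors have packed up and gone home.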
