
3PAR: Innovative or Par for the Course?

by Administrator on November 4, 2006

David Scott and I sat down at SNW to review 3PAR’s product pitch. Apparently, I had dissed the poor company in an article or two I had written somewhere — comparing it to an EMC mini-me play (software joined at the hip to a controller that made commodity hardware more expensive), but someone smart told me I needed to give it another look. So, I scheduled the meet.

Interesting dynamic, that: sitting across the table from a guy who would probably prefer to clean my clock than talk shop, while trying to frame my questions in a way that wouldn’t have either of us reaching for our .45s. David seemed to be correcting my assertions at every turn.

3PAR is standards compliant, he noted, having embraced SMI-S — which I eschew. We are the standards advocates, he insisted, not you!

Okay. If you accept SMI-S as a standard, which I don’t, and if the provider is implemented to afford pure visibility into the goings-on inside the 3PAR array, which I am not sure it is, you might be able to give 3PAR the nod on standards compliance.

3PAR is a box. It has disks and software. The software seems to be doing some remapping of the disks, which are provisioned as 256 MB chunklets of 16 KB blocks. Scott said that this approach yields greater “utilization efficiency” (I believe he means capacity allocation efficiency, since utilization efficiency goes to whether the platters are being filled with data that has some sort of value and therefore belongs on the expensive disk, but let’s not quibble): up from the paltry 8 to 15% in most disk arrays to 40 to 80% in a 3PAR environment, by Scott’s argument. So that explains their assertion: “You need a third less capacity to meet the same storage needs.”
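
As a sanity check, here is the arithmetic behind that claim, sketched with my own illustrative numbers (the efficiency ranges are the ones quoted above; the 100 TB working set is hypothetical). Note that how much capacity you “save” depends heavily on which ends of those ranges you pick:

```python
# Illustrative arithmetic only (my numbers, not 3PAR's): how the quoted
# efficiency ranges translate into raw capacity required for a fixed
# amount of stored data.
def raw_capacity_needed(data_tb, efficiency):
    """Raw TB required to hold data_tb at a given utilization efficiency."""
    return data_tb / efficiency

data_tb = 100  # hypothetical working set
typical = raw_capacity_needed(data_tb, 0.15)   # top of the 8-15% range
threepar = raw_capacity_needed(data_tb, 0.45)  # low end of the 40-80% range

print(f"typical array:  {typical:.0f} TB raw")
print(f"3PAR (claimed): {threepar:.0f} TB raw")
print(f"capacity saved: {1 - threepar / typical:.0%}")
```

At these particular endpoints the saving comes out well above “a third less,” so the marketing line is, if anything, the conservative reading of Scott’s own figures.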

Apparently the product sells less on any sort of performance or scalability case than on the value proposition of reduced complexity. Since so much functionality is handled inside the box, you need very few storage admins.

MySpace, he noted, apparently uses it and has ZERO admins for a petabyte of storage. I recall MySpace getting booed on stage a couple of SNWs back because of its lackadaisical attitude toward its responsibility for storing user data reliably. Their spokesperson seemed to be saying, “Oh well, it’s just a bunch of crap that all those teens with their angst are posting there, so why spend a lot of time or money keeping the data safe…” They probably wouldn’t employ a lot of admins in any case.

Reference case two was more mainstream. A Chicago firm is using about 400 TB of 3PAR and has two admins who spend about an eighth of their time on storage administration tasks. Scott concludes that, with 3PAR technology, each storage admin can manage about 2 PB.
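
A quick back-of-the-envelope check of that conclusion, using the figures as quoted (400 TB, two admins, one-eighth of their time):

```python
# Back-of-the-envelope check of the admin-ratio claim from the Chicago
# reference case: managed capacity per full-time-equivalent admin.
def tb_per_full_time_admin(managed_tb, admins, fraction_of_time):
    """Managed TB per full-time-equivalent storage admin."""
    return managed_tb / (admins * fraction_of_time)

ratio = tb_per_full_time_admin(400, admins=2, fraction_of_time=1 / 8)
print(f"{ratio:.0f} TB per FTE")  # 1600 TB, i.e. ~1.6 PB
```

That works out to roughly 1.6 PB per full-time admin, a shade under Scott’s “about 2 PB,” so his round-up is in the right ballpark.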

Other notes from the meeting:

  • 40 disks per 4U shelf @ 500 GB per disk = 20 TB raw, which behaves like 60 TB because of 3PAR’s thin provisioning technique.
  • A new paradigm that moves us away from the traditional conceptualization of “storage arrays as error-handling machines.”
  • All disks must be bought from 3PAR, at whatever markup they apply to commodity disk pricing. But nobody cares: simplicity matters more than cost in the market Scott serves.
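
The shelf arithmetic in the first bullet, sketched out (the 60 TB figure is the quoted claim, not something the math derives; it implies a 3x oversubscription ratio):

```python
# Shelf arithmetic from the notes above: 40 x 500 GB drives per 4U shelf,
# and the implied thin-provisioning oversubscription ratio.
disks_per_shelf = 40
disk_tb = 0.5                        # 500 GB per disk
raw_tb = disks_per_shelf * disk_tb   # raw capacity per shelf
virtual_tb = 60                      # the claimed "behaves like 60 TB"

print(f"raw: {raw_tb:.0f} TB, oversubscription: {virtual_tb / raw_tb:.1f}x")
```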

We agreed to follow up with a series of questions a la the NetApp questionnaire a few posts back. I would be happy to get input from folks who have some questions they would like to have answered.

Anyone? Anyone?

6 comments:

Richard November 5, 2006 at 4:53 am

Jon,

It is a reasonably ‘innovative’ hardware architecture in that it’s a full-mesh interconnect, with up to eight controller nodes, each supporting its own cache.

To maintain system-wide cache coherency, I expect that each node needs to replicate its writes to the remaining seven caches, one copy per controller node.

Perhaps someone can state the performance for a fully coherent 64-port system, say eight FC hosts and one RAID 5 back-end disk loop per controller?

zax November 5, 2006 at 5:03 pm

I talked with 3PAR a while back. From memory, I remember the following points:

3PAR does have significantly better density than some of its competitors. But because of the sled configuration, 3PAR MUST replace all drives. This was a turnoff for me, because we like to perform basic maintenance on our storage ourselves.

I seem to remember a backplane somewhere that was a single point of failure. Similar to EMC’s argument for the Clariion, the claim is that this backplane is a passive circuit that never fails (not true).

Overall, 3PAR’s offerings appear powerful only as a pure play. 3PAR has built-in switches and available SRM software that work only with 3PAR storage. These features do not map well to environments that already have EMC, HP, HDS, etc. installed.

David@3PAR November 6, 2006 at 4:43 pm

Enjoyed the chance to spar with Jon and we will happily follow up with answers to his forthcoming questionnaire. One clarification to Jon’s comments:

Capacity allocation efficiency is not the correct term for what we do differently. Most traditional vendors point to the use of SANs and, more recently, SAN Virtualization products as increasing capacity allocation efficiency — and we do it too.

However, the huge elephant sitting in the middle of the room is that most capacity that is allocated (with that “high” efficiency) has a depressingly small amount of real data written to it — especially in database environments — i.e. it has poor “data” utilization efficiency. Actual written data divided by total physical capacity can be as low as 8% even in very sophisticated data centers. In these same data centers, capacity allocation efficiency (allocated capacity divided by physical capacity) can look very high – up to 70%+. Using 3PAR Thin Provisioning, however, the “data” utilization efficiency can be driven up to 80%+.

A pleasant side effect of 3PAR Thin Provisioning is that capacity allocation efficiency can actually be greater than 100%. When we last surveyed our installed base, we could assess that they were allocating 30 PB out of systems that contained only 12 PB of physical capacity. The other 18 PB represents the extra capacity (and systems) that our customers would have had to purchase if they had bought from traditional monolithic or modular array vendors. And they saved electricity costs as well, having had to purchase fewer spinning disks to meet the same business need.
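
[Ed.: David’s two metrics, written out as formulas as he defines them, with his installed-base figures plugged in:]

```python
# The two metrics distinguished above. With thin provisioning, allocation
# efficiency can exceed 100% because allocated capacity is virtual, not
# backed one-for-one by physical disk.
def data_utilization(written, physical):
    """Actual written data divided by total physical capacity."""
    return written / physical

def allocation_efficiency(allocated, physical):
    """Allocated capacity divided by physical capacity."""
    return allocated / physical

# Installed-base figures quoted above: 30 PB allocated on 12 PB physical.
print(f"{allocation_efficiency(30, 12):.0%}")  # 250%
```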

I’ll also take the opportunity to clarify a few of the additional comments made by “zax”:

We don’t replace all disk drives in a magazine (sled) when a single drive fails, just the failed drive. The system takes care of safely logging all IOs as the 4-drive magazine is pulled out to replace the single drive. This occurs in just a couple of minutes.

The backplane is entirely passive, and we have never had a single backplane fail in our customer base in the entire life of the product. I can’t comment on the reliability of Clariion “passive” backplanes.

We don’t have any switches inside the system. The cache coherency for the n-way active controllers occurs over our own low latency, full mesh backplane.

The vast majority of our 3PAR Utility Storage platforms have gone into environments where we are coexisting side by side with installed EMC, HP, and HDS systems. Most customers are relieved that our extremely simple manageability spares administrators the multi-week training courses needed to keep up to date with those complex aforementioned traditional environments.

Dimitris Krekoukias November 8, 2006 at 2:37 pm

Pillar has a similar model (though not as dense). They tout ease-of-use as the most important thing, plus some weak QoS quasi-guarantees.

If you want true density, look at MAID from Copan, though I believe the way all ultra-dense solutions deal with drive replacements is misguided. I’m not gonna remove 8 disks to replace 1…

Anyone that’s ever administered EMC or NetApp boxes and has enough neurons will agree that they’re not all that hard to manage.

I do like QoS guarantees a lot, and I think they are one of the most important features missing from today’s storage – a certain 3-letter vendor (pedants might say 4) will be providing that for their midrange line soon, and in great fashion.

Thin provisioning also has a place, but I’d rather have a choice in how my data is laid out, especially for very intensive loads. The way it’s implemented now, it’s a bit non-deterministic and needs tons of disks in order to perform.

My $.02

D

zax November 9, 2006 at 9:53 pm

David,

Thanks for the clarification. My point on drive replacement may not have been written clearly. I was trying to point out that you do not allow customers to perform their own drive replacements (at least, that’s what the sales team told me).

I am glad that you have never lost a backplane. If you were to lose one, what would happen to the system? Is the backplane a single point of failure?

Gerry Bragg May 2, 2007 at 9:41 pm

A very late follow-up to this thread. I looked at just about anything in the mid-market place and ended up with a 3PAR E200. Very, very easy to administer. It is nice being out of the business of managing disks. They are light-years ahead of EMC’s midrange products in this regard, due to the storage virtualization. It is also light-years faster from an I/O standpoint than what we get out of our EMC CX400 with 15K FC disks (our E200 is outfitted with 10K FATA drives, our choice). This is simply because the I/O is spread evenly across all of the disks in the array. There are no hot spots.

Many good new products out there in the SAN marketplace.
