I agree with some of your recent comments regarding the efficiencies of mainframes and the mess that open systems has become in trying to mimic that functionality. My question to you is: how viable would it be for a traditional open systems VAR to provide mainframes for its customers, and how receptive would IBM be?
From IBM's perspective, 20 years ago they had a few thousand targets to sell to, and it was clearly a direct business. But now I would assume the targets number many times that, and a channel is the only way that IBM can effectively grow mainframe market share.
From my view, a traditional open systems VAR understands the complexities and weaknesses of current solutions, has the relationships in medium and large businesses, and has unique credibility in pushing a customer toward a mainframe.
I appreciate even the briefest of answers.

Me, brief? No way. First, Spencer, thanks for writing in with the question, though you can always post it in the comment section of the blog. On your first point (how, realistically, can open systems integrators and VARs participate in mainframe sales and services, and how receptive would IBM be?), my answer is two-fold.

First, I haven't a clue what IBM's channel operations are like when it comes to mainframes. That's a question for Tony Pearson. I suspect that they will take money wherever they can get it, and if a VAR wants to participate in a mainframe sale, they would be all in favor of it. But that's just me. I will bleg Tony to give some input, either here or on his blog at IBM.

Second, I want to correct a misimpression that I might be communicating. There is nothing about mainframes that is magical. A lot of folks seem to think we are talking about a big honkin' box whose inner workings are known only to a few privileged practitioners. Not so. The zSeries mainframe runs a Linux variant OS and, while architecturally more complex than your typical distributed systems server OS, occupies about as much space as a rack of servers. It does virtualization very well, using logical partitioning that is supported on the box in both hardware and software. Frankly, and IBM doesn't much like this characterization, I see it as a kind of super server, but without the foibles of virtualized environments running on extant x86 code.

I also want to assert that just as there is no one-size-fits-all storage box, there is also no one-size-fits-all server box. Mainframes do some things better than x86 platforms. By better, I mean more efficiently, in terms of economics, performance and resource utilization. X86 architectures do other things acceptably well, provided you manage them in a disciplined way.
Consumers did throw out the baby (data and resource management tools) with the bathwater (the high cost of siloed mainframes) when they moved to "open systems" (that name is a bit of tongue-in-cheek humor, since it originally equated openness to not-IBM). There is no reason why similar facilities could not be added to x86 environments, provided this occurs in a standards-based manner. That is why I prefer W3C web services standards to SNIA stuff on the storage side.

The question is whether it makes sense to use something like VMware to try to achieve resource efficiency. VMware is a proprietary bit of code for which there are many competitors. I believe it has no sustainable differentiator and will ultimately be usurped by something else, maybe a de facto standard like Hyper-V (de facto because Microsoft OSes are on 80-odd percent of all servers) or something like Xen. What makes all of these solutions flimsy, however, is that they mask the underlying disconnects in infrastructure raised by the proprietariness of storage systems vendors on the one hand, and, on the other, they cannot intercept every conceivable "illegal resource call" that an application riding in a VM might make to the underlying OS of the server, something that causes the Jenga tower to topple almost every time.

I think we need to set our sights a bit lower at first. Let's fix the problems of heterogeneous infrastructure management with good server, network and storage resource management. Let's do our virtualization at the protocol level, not at the software layer. Whatever you may think of them, Zetera's UDP interconnect has shown one way to create virtual storage volumes just by using IP multicasting and subscriber groups. No need for Invista or SAN Volume Controller: disk aggregation can be done simply and effectively at the protocol layer. What I like about Xiotech is that their Lego building block of storage, strangely reminiscent as it is of Digital's old datapac designs, is completely W3C web services managed.
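To give a flavor of what "aggregation at the protocol layer" means, here is a minimal sketch in the spirit of the Zetera approach: each physical disk node subscribes to a multicast group representing a virtual volume, and a deterministic rule maps each block address to exactly one subscriber, so no in-band aggregation appliance is needed. The group address, datagram layout, and striping rule below are illustrative assumptions of mine, not Zetera's actual wire format.

```python
# Hypothetical sketch: protocol-level volume aggregation over UDP multicast.
# A virtual volume is a multicast group; each subscriber brick owns a
# deterministic slice of the block address space. (Addresses and message
# layout are invented for illustration.)
import struct

MCAST_GROUP = "239.1.2.3"   # assumed multicast group for one virtual volume
BLOCK_SIZE = 512

def owner_of(block_addr, subscribers):
    """Map a virtual block address to the subscriber brick that stores it.

    Simple striping; the real protocol could use any deterministic rule,
    since every subscriber sees every datagram and self-selects.
    """
    return subscribers[block_addr % len(subscribers)]

def build_write(block_addr, payload):
    """Pack a write-request datagram: opcode (1), 64-bit block address, data."""
    assert len(payload) <= BLOCK_SIZE
    return struct.pack("!BQH", 1, block_addr, len(payload)) + payload

def parse_write(datagram):
    """Unpack a write-request datagram back into (block address, data)."""
    op, addr, n = struct.unpack("!BQH", datagram[:11])
    assert op == 1
    return addr, datagram[11:11 + n]

bricks = ["disk-a", "disk-b", "disk-c"]      # three bricks in the group
print(owner_of(1000, bricks))                # -> disk-b (1000 % 3 == 1)
addr, data = parse_write(build_write(1000, b"hello"))
print(addr, data)
```

The point of the sketch is the division of labor: the "controller" logic is just a shared deterministic rule, so adding a brick means adding a subscriber, not reconfiguring an Invista or SAN Volume Controller in the data path.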
This simplifies the administration of each brick, its configuration into clusters, the aggregation and allocation of its resources, and so forth. A W3C standard hook is also outward looking: applications, most of which already sport W3C standard requestors, are asking for a resource; Xiotech's brick is announcing (or may shortly), "I'm here. I'm storage. I have this much capacity, these speeds and feeds, and this kind of waiting line (queue). You can use me to store your data if you want to."

Imagine if all infrastructure components were enabled with W3C hooks. We could manage everything more efficiently, and we could establish routing patterns through a network that would expose application and user file output to policy-based provisioning and services automatically. Gee, that kind of sounds like what a SAN was supposed to do before the greedy bastards in the Fibre Channel Industry Association joined it at the hip to that brain-dead protocol to sell us overpriced crap.

That, I think, is the mainframe writ large. That is what we need to get to. I am not saying rip and replace all distributed solutions with mainframes. I am saying that the disciplined management of a service-based infrastructure is the only thing that delivers real return on investment to the consumer. As the honest broker between vendors and consumers, it is time for the integrator to shine.
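To make the self-describing brick idea concrete, here is a minimal sketch of the kind of XML announcement a storage node might emit and an application requestor might parse. The element names, units, and schema are invented for illustration; this is not Xiotech's actual management interface, just the "I'm here, I'm storage" handshake described above rendered as code.

```python
# Hypothetical sketch of a W3C web-services-style storage announcement.
# The XML vocabulary below is an assumption for illustration only.
import xml.etree.ElementTree as ET

def build_announcement(node_id, capacity_gb, throughput_mbs, queue_depth):
    """Serialize a brick's self-description: identity, capacity,
    speeds and feeds, and its waiting line (queue depth)."""
    root = ET.Element("StorageResource")
    ET.SubElement(root, "Id").text = node_id
    ET.SubElement(root, "CapacityGB").text = str(capacity_gb)
    ET.SubElement(root, "ThroughputMBs").text = str(throughput_mbs)
    ET.SubElement(root, "QueueDepth").text = str(queue_depth)
    return ET.tostring(root, encoding="unicode")

def parse_announcement(xml_text):
    """What an application-side requestor would do with the announcement."""
    root = ET.fromstring(xml_text)
    return {
        "id": root.findtext("Id"),
        "capacity_gb": int(root.findtext("CapacityGB")),
        "throughput_mbs": int(root.findtext("ThroughputMBs")),
        "queue_depth": int(root.findtext("QueueDepth")),
    }

msg = build_announcement("brick-01", 2048, 400, 32)
info = parse_announcement(msg)
print(info["id"], info["capacity_gb"])   # -> brick-01 2048
```

Because both sides speak a standards-based vocabulary rather than a vendor API, a policy engine could compare announcements from any vendor's bricks when deciding where to provision data, which is the heterogeneous-management payoff argued for above.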