Low Price, Technically Acceptable

by Administrator on March 8, 2013

Got an email from an old editor of mine, Nick Wakeman of Washington Technology, inviting me to a webcast on Low Price Technically Acceptable contracts, presumably aimed at Fed contractors.  Here is the invite, which I post as a courtesy to Nick…

Surviving Lowest Price Technically Acceptable IT Projects:
Maximize your Returns and Customer Satisfaction Ratings

Dear Jon,

Register Now for this exclusive roundtable webcast with Nick Wakeman, Editor of Washington Technology, to learn how you can maximize your customer satisfaction ratings, as well as returns on your Lowest Price Technically Acceptable IT projects.

Webcast: Surviving LPTA IT Projects
When: Monday, March 25th at 2pm EST
Location: Your Desktop
Cost: FREE

Join us to learn:

  • Recommendations to maximize customer satisfaction ratings
  • The importance of selecting the right tools to solve government technology challenges, while working within tight budget constraints
  • New approaches for putting together competitive, winning IT solutions for Cybersecurity, Continuous Monitoring and the Cloud
  • Strategies to maximize your returns on Lowest Price Technically Acceptable IT projects

Now, I am not saying that this would be of interest to everyone.  Heck, I may not even have time to attend it myself.  But there was just something about the title that made me think.  If there is a meme to be derived from the current state of affairs in IT at just about every company I visit, it is the tendency of business decision-makers to prefer the least expensive approach that still provides what seems to be a technically acceptable outcome.

You could probably argue that it has always been this way, that no one wants to spend more money than they need to on a service or technology in IT.  Yet, paradoxical as it sounds, the opposite is what we actually do, all the time.

In storage, where is the sensitivity to cost in the purchasing that we do?  I am talking about strategic costs, not tactical savings.

It wasn’t bad enough that we were paying name-brand prices for kit that has exactly the same components as we could buy from no-name vendors.  Now, we are repeating this tactical cost-cutting silliness with public “storage clouds” — which appear to drive down labor costs via outsourcing, but significantly impact the availability of data, thanks to issues with WANs and hosting services, and may drive storage costs much higher over time.

Another example:  We keep throwing storage I/O boosters at VMware workloads — PCIe flash cards, flash SSDs or all-flash arrays — to boost throughput.  But the choke point in the server is not a hardware choke point; it is VMware itself — something that no one seems to want to acknowledge — and we are just throwing more hardware at the problem, like a brute force attack, in an effort to cope with a symptom (low performance).
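To make the point concrete, here is a back-of-the-envelope sketch in Python.  The numbers are mine and purely illustrative — the 50 microsecond per-I/O software overhead and the device latencies are assumptions for the sake of argument, not measurements of any particular hypervisor or product — but they show why faster media alone runs into diminishing returns once a fixed software cost sits in the I/O path.

```python
# Illustrative sketch (assumed numbers, not measurements): if every I/O pays a
# fixed software overhead on top of the device service time, swapping in faster
# media stops helping once that overhead dominates the total latency.

def effective_iops(sw_overhead_us: float, device_latency_us: float) -> float:
    """Single-stream IOPS when each I/O serializes through a software layer."""
    total_us = sw_overhead_us + device_latency_us
    return 1_000_000 / total_us

SW_OVERHEAD_US = 50.0  # assumed per-I/O cost in the virtualization I/O path

for media, latency_us in [("15K disk", 5000.0), ("SAS SSD", 100.0), ("PCIe flash", 20.0)]:
    print(f"{media:>10}: ~{effective_iops(SW_OVERHEAD_US, latency_us):,.0f} IOPS per stream")
```

With these made-up figures, going from disk to SSD is a huge win, but going from SSD to PCIe flash only roughly doubles the per-stream rate instead of quintupling it, because the fixed software cost — not the hardware — is now the gate.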

We spend all kinds of money to fix problems created originally by a “low price, technically acceptable” technology project (server virtualization a la VMware).

Look, I realize that VMware has its fanboys.  I imagine that there were a lot of shops where servers were inefficiently utilized at the resource level.  That was entirely the fault of server administrators and application developers, IMHO.  I say that with confidence because my servers were highly utilized, whether we are talking about a zSeries mainframe or lots of no-name Linux boxes; Windows servers were a little less efficient, mainly because we liked to keep spare capacity available for irregular or unpredictable workload shifts.

I think one part of the problem I am beginning to see in my old age is the lack of elegance in programs I see today…in the coding sense.  When I first started my career, in mainframe data centers, memory was a scarce resource and adding disk space required a new building.  These parameters forced us to be stingy about using machine resources.

When I got my first PC, an Osborne 1, then later an IBM PC running Windows X.X, processors were slow, memory was expensive and hard disks were non-existent.  Again, the situation forced elegance in how we coded programs and used resources.

Today, with storage capacities on all media growing ever larger, with processors dwarfing the speeds of early x86 chips, and with all of the other advances that have made kit bigger, faster and presumably cheaper, all we really seem to have done is discourage anyone from coding elegantly and encourage everyone to treat machine resources as though they were limitless.  That is hardly the way to encourage resource utilization efficiency.
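For a trivial illustration of what I mean by elegance — a contrived Python sketch, with a hypothetical file path, not anyone’s production code — both functions below give the same answer, but one assumes memory is free and the other does not.

```python
# Two ways to count lines in a large log file. Same result, very different
# attitude toward machine resources.

def count_lines_wasteful(path: str) -> int:
    # "Resources are limitless" style: slurp the entire file into memory first.
    with open(path, "r", encoding="utf-8", errors="replace") as f:
        return len(f.read().splitlines())

def count_lines_frugal(path: str) -> int:
    # Stream one line at a time; memory use stays flat no matter the file size.
    with open(path, "r", encoding="utf-8", errors="replace") as f:
        return sum(1 for _ in f)
```

On a small file nobody notices the difference.  On a few hundred gigabytes of logs, the first version falls over or forces you to buy more memory — which is exactly the habit I am complaining about.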

I don’t hate on poor VMware, but I do take exception to the narrative that surrounds it, the one that portrays it as the second coming that was needed to drive order and discipline back into computing — in short, its cult appeal.

The fact is that server hypervisors don’t fix inefficiency.  Bad programs are still written.  We just stick them in VMs now.  Resources are still mishandled.  We just ignore that the hypervisor is what is creating the logjam and use the opportunity to sell more memory components into the IT shop, which is desperate to make its VM-hosted apps perform in even a marginally acceptable way.  We continue to instantiate apps on servers, where they consume electricity waiting for someone to use them — though, I suppose, stacking several in one box does marginally reduce the power consumption compared to a bunch of smaller servers each hosting their own apps.

Bottom line:  I am thinking a lot about what has become of my chosen profession, and how its core effort — Prometheus bringing fire down to man — has been bent and twisted.  Partly it is vendor marketecture that has created the current infrastruggle, but it is also idiotic consumerism — the idea that you somehow look more attractive on date night if you bask in the blue neon glow of an EMC VMAX or if you can brag about how you used 1900 spindles to get to 450K IOPS.

Yesterday, I had an online chat with a fellow from Tegile, a hybrid storage vendor that also has a pretty interesting blog.  The fellow was ex-3PAR, and I asked him how receptive folks were to his kit, which uses far fewer hard disks, augmented with flash SSDs, to achieve throughput comparable to an HP 3PAR array with its very large complement of disk.  He said simply that these are two different approaches to solving the same problem and that marketing execution would determine which platform wins.  True enough, but this idea actually irritates me.  Why would you buy and power a huge array of disk if you could reduce the amount of spinning rust pulling power in your facility and derive the same value?
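Run the rough arithmetic yourself.  This is a sketch with assumed, round numbers — roughly 8 watts per spinning drive, 6 watts and 20,000 random IOPS per SSD are my placeholders, not vendor specs, and real capacity planning would also have to account for the raw capacity those spindles provide — but the lopsidedness of the comparison is the point.

```python
# Rough arithmetic behind the "1900 spindles for 450K IOPS" brag above.
# Power and IOPS figures are assumptions for illustration, not vendor specs.

SPINDLES = 1900
TARGET_IOPS = 450_000
DISK_WATTS, SSD_WATTS = 8.0, 6.0     # assumed per-device power draw
SSD_IOPS = 20_000                    # assumed random IOPS per SSD

iops_per_disk = TARGET_IOPS / SPINDLES      # ~237 IOPS squeezed from each drive
disk_farm_watts = SPINDLES * DISK_WATTS     # ~15.2 kW of spinning rust
ssds_needed = TARGET_IOPS / SSD_IOPS        # ~23 SSDs to hit the same IOPS

print(f"Per-spindle IOPS:  {iops_per_disk:.0f}")
print(f"Disk farm power:   {disk_farm_watts / 1000:.1f} kW")
print(f"SSDs needed:       {ssds_needed:.0f}, drawing ~{ssds_needed * SSD_WATTS:.0f} W")
```

Again, the exact figures are debatable; the gap between kilowatts of disk and a couple hundred watts of flash for the same IOPS number is not, which is why the “two approaches, let marketing decide” answer rubs me the wrong way.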

Bottom line for now:  after years of bent computing, we are having our feet held to the fire to go for the lowest-price-but-still-technically-acceptable solution to every problem.  But this doesn’t seem to be driving elegance and excellence, only short-term fixes to longer-term problems — tactical measures that deliver immediate cost savings but no lasting improvement in cost containment, risk reduction or productivity.  What’s the sense in that?

My two centavos.

