Anger Management

by Administrator on October 27, 2008

I have been holding back on writing here for a week or so because nearly everything I read makes me angry these days.  And I didn’t want to take it out on those who landed here.

I won’t talk about politics or the economy, both of which have me shaking my head in amazement at mankind’s propensities for greed, doubletalk and flat-out stupidity, but I will focus on some storage and tech foo that I have been reading of late.

For example, on Friday I received an email blast from SearchStorage promising me a resource for download that would “Balance the Budget and Save the Environment.”  A bit further down it promised me the latest word on Green IT. 

I fell for it.  I opened the email to see a couple of links to a podcast by Dot Hill’s Tom Sheffield describing how his company’s latest wares “green” the infrastructure by tiering storage inside the box.  Oh, and they use energy-efficient components, too.  Okay.  And so…

This podcast added to the litany of vendor pitches I heard in the exhibitor hall in Chicago last Thursday at Data Center Decisions.  Vendor booths at the show were about evenly split between virtual data center management software/services and rack cooling wares. 

Interestingly, the former want to sell me cool new software to manage the virtualized server infrastructure.  I found myself asking: wasn’t the hypervisor supposed to drive complexity out of the infrastructure by providing the means to consolidate servers?  Why is third-party software required to manage the hypervisors?

Chatting with a fellow speaker at the show, I learned that a show of hands during his session (on server virtualization) suggested that as many as 93% of attendees were deploying virtual server technology in their data centers.  Apparently, they didn’t get the message from the vendors in the display area: offsetting the supposed cost benefits of consolidating file servers and low-traffic web servers into VMs is a huge subsequent investment in third-party software, required to manage the Jenga towers and to deal with the illegal application calls, hardware glitches and other issues that take down the entire stack.

Guys, aren’t there simpler means of consolidation that don’t require a hypervisor?  I mean, can’t file servers be consolidated using a global namespace?  Heck, when I consolidate file servers, I just create new folders for the files that I am absorbing from the decommissioned unit.  I can share this directory out to whoever needs access to it.  Easy-peasy.
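For the skeptics, here is a minimal sketch of that folder-merge approach in Python.  Everything in it is hypothetical: point OLD_SERVER_MOUNT at a mount of the share on the box being decommissioned and NEW_SHARE at the new folder on the surviving server.

#!/usr/bin/env python3
"""Minimal sketch of the folder-merge consolidation described above.
Both paths are hypothetical placeholders, not real mounts."""
import os
import shutil

OLD_SERVER_MOUNT = "/mnt/decommissioned-fs01"   # hypothetical mount of the old box
NEW_SHARE = "/srv/shares/consolidated/fs01"     # new folder on the surviving server

for dirpath, dirnames, filenames in os.walk(OLD_SERVER_MOUNT):
    rel = os.path.relpath(dirpath, OLD_SERVER_MOUNT)
    dest_dir = os.path.join(NEW_SHARE, rel)
    os.makedirs(dest_dir, exist_ok=True)
    for name in filenames:
        src = os.path.join(dirpath, name)
        dst = os.path.join(dest_dir, name)
        if os.path.exists(dst):
            print(f"collision, skipping: {dst}")  # resolve by hand
            continue
        shutil.copy2(src, dst)                    # copy2 preserves timestamps

print(f"done; share {NEW_SHARE} out to whoever needs it")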

That’s assuming, by the way, that you actually need all of those files online anyway.  Remember the University of California finding presented at USENIX last August?  Looking at tens of thousands of NetApp filers, the researchers found that 80-90 percent of the data stored on them had not been accessed in over a year.  Virtualize to consolidate this data, my ass.  Can you say tape?
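If you want to size the problem in your own shop, a rough scan along these lines will do it.  This is a sketch, not a product: it assumes access-time (atime) updates have not been disabled on the mount, and the root path is hypothetical.

#!/usr/bin/env python3
"""Rough sketch: what fraction of the data under a tree has gone a
year without being read?  Assumes atime is enabled on the mount."""
import os
import time

ROOT = "/mnt/filer"            # hypothetical mount to audit
YEAR = 365 * 24 * 3600
now = time.time()
total = stale = 0

for dirpath, _, filenames in os.walk(ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            st = os.stat(path)
        except OSError:
            continue
        total += st.st_size
        if now - st.st_atime > YEAR:   # untouched for over a year
            stale += st.st_size

if total:
    print(f"{stale / total:.0%} of {total / 1e9:.1f} GB is tape/archive bait")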

As for web server consolidation, I really like Plesk, which we use to run dozens of low-traffic websites on common hardware and without virtualization.  As I see it, common management is what you need to consolidate HTML and PHP stuff onto fewer boxes.
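Plesk is a full product, of course, but the underlying idea is nothing more exotic than dispatching requests by Host header so many sites can share one box.  A toy illustration in Python follows (hypothetical docroots, and emphatically not how Plesk does it; Python 3.7+ assumed).

#!/usr/bin/env python3
"""Toy illustration only: many low-traffic sites on one box,
dispatched by Host header to per-site docroots."""
import os
from http.server import HTTPServer, SimpleHTTPRequestHandler

SITE_ROOT = "/srv/www"  # hypothetical: /srv/www/example.com/, /srv/www/foo.org/, ...

class HostDispatcher(SimpleHTTPRequestHandler):
    def translate_path(self, path):
        # choose a docroot per site based on the Host header (no hardening here)
        host = (self.headers.get("Host") or "default").split(":")[0]
        self.directory = os.path.join(SITE_ROOT, host)
        return super().translate_path(path)

if __name__ == "__main__":
    HTTPServer(("", 8080), HostDispatcher).serve_forever()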

As I sat in the exhibition hall for my “Ask the Experts” session (how, some of you may wonder, is Toigo cast as an expert?  Search me!), I took time to read two more annoying documents, Virtualization for Dummies and Availability for Dummies, which were being disseminated by Stratus.  I assumed that these were good for me to read, since I don’t get the entire V phenom.

V for D opens with a comparison between Paris Hilton and virtualization.  Her celebrity will fade, the author argues, while V is here to stay.  The definition of V offered by the author: “The use of an additional software layer that enables multiple operating systems and software applications to interact with a piece of hardware as though each had complete control of the hardware.”  Hmm.

To the author’s credit, he does allocate half of a page in the booklet to tipping a hat to IBM for inventing virtualization as early as the 1960s.  He goes on to say that V is driven by:

  1. The Data Explosion:  more data, more servers
  2. Moore’s Law:  server capacities have grown, but they are underutilized — V will increase resource utilization
  3. Energy costs:  Huge growth in power requirements because of lots of underutilized servers
  4. Disaster recovery:  We suddenly recognize the need for availability and recovery, and only V can do the job of failing over with application and data states consistent

My responses:

  1. The data explosion is a function of data mismanagement and has nothing whatsoever to do with the presence or absence of virtualization.
  2. Moore’s law gives us more horsepower.  That assets are underutilized suggests that we aren’t hosting applications in a way that leverages this horsepower.  Stacking apps does not require stacking multiple OSes in the same box (see my previous note about file systems/global namespace and Plesk for web stuff).  The author does not mention, by the way, that comparatively few shops are hosting any sort of compute- or resource-intensive app in a VM.
  3. Energy costs:  Servers are no longer the main culprits.  Storage is.  Storage is consuming electricity and generating heat because we are storing more bits online all the time and doing little or nothing to segregate the bits that are actually being accessed from those that aren’t, which could be shipped to green media like tape or optical, or deleted altogether.
  4. Disaster recovery:  This one is a real reach.  I do not believe that x86 V does anything to improve the resiliency of always-on apps.  While CA XOsoft provides examples of how “wrapper” software (call it geoclustering, etc.) can enable failover to unlike hardware hosting platforms (including VMs), this has nothing, intrinsically speaking, to do with any resiliency afforded by V itself.  In fact, V increases the scope and pain of a disaster when one occurs.  The second tome from Stratus, Availability for Dummies, makes this case using the metaphor of eggs in a basket.  In an x86 V server, you lose all VMs when the stack fails: not just one app, but all of the apps that are stacked up on the hardware.  And, as previously mentioned, it doesn’t take a failure in the hypervisor to make the disaster happen.  An illegal and un-intercepted resource call by an application will do the trick.  As will any number of issues in your favorite thin provisioning storage system.  (A bit of back-of-the-envelope math on this follows the list.)
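Here is that back-of-the-envelope eggs-in-one-basket math, with made-up numbers.  Note that the expected count of application outages per year does not change as you consolidate; what grows is the blast radius of each individual failure.

#!/usr/bin/env python3
"""Eggs-in-one-basket arithmetic with hypothetical figures: the same
per-host failure rate takes out N apps at once instead of one."""
apps = 40        # total applications (made-up number)
p_fail = 0.05    # annual failure chance per host (made-up number)

for per_host in (1, 5, 10, 20):
    hosts = apps // per_host
    expected = hosts * p_fail * per_host   # equals apps * p_fail regardless
    print(f"{per_host:>2} apps/host: {hosts:>2} hosts, "
          f"~{expected:.0f} expected app outages/yr, "
          f"one bad host takes down {per_host} app(s) at once")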

V for Dummies goes on to say that you shouldn’t think that V solves all problems.  True enough.  It says that smart folk “don’t forget their data,” which needs to be replicated properly.  No duh.  Oh, and don’t forget to buy lots of third-party virtualization management software to contain VM sprawl.  They conclude with the recommendation that you not forget your end-of-virtualization-project party!  Celebrating what, exactly?  That you have now spent hard-to-come-by budget in a way that might well have placed more of your precious applications at risk?

Suicide, they say, is painless.

On to the other things that have me in a twist today.

Take a read of this column by Ephraim Schwartz in InfoWorld, which asks the question “Does IT have a strategic stake in business?”, or this “slide show” in Network World that declares the data center dead because Nick Carr says it is.

Of course IT is strategic.  Business doesn’t get done without it.  There must be a place at the table for the top IT dog to help define what is within the realm of the possible and to provide a vision of where IT needs to go to better align itself with the business.  It was rich that Schwartz picked GM as a focus for part of his discussion.  Not only is the company betting the ranch on cars best distinguished from competitors’ vehicles by their MP3 players, it also hopelessly mismanages data, as reflected in plans to grow one of its 380 TB SANs — which today contains only 30 TB of useful data, less than 8 percent utilization — because of space constraints.

As for the demise of data centers and their replacement by clouds and SaaS, I remember the same prognosis being issued by IDC in the late 1990s, but using the moniker ASP.  Didn’t happen then, so what has changed today?  Every cloud service provider I talk to wants information about how to get consumers past their concerns over hosting precious data at a third party facility.  They cannot survive, business-wise, if they have to build one-off infrastructure for every customer.  The business model requires economies of scale from sharing infrastructure.

I would ask why these kinds of articles continue to take up my email resources and time, but I already know the answer.  It’s the same reason why, out of three hundred million Americans, the best candidates we can come up with to lead this country through its worst economic season since the Great Depression are the four we have competing now.  Did I say that?


psteege October 29, 2008 at 12:33 pm

On the Clouds/SaaS topic, note i365’s addition of a local appliance to their cloud offering. Pragmatic realization that you’ve got to give the customer a sense of control over their destiny. I posted on this today: http://tinyurl.com/48t

RC October 29, 2008 at 3:54 pm

The processor arms race is driven by the CPU companies’ marketing, not any specific need for more horsepower.

Running so many instances of an OS on one box is only necessary because the “worse is better” OS vendor can’t get two different applications to run on the same box.

Storage networks grow because one vendor wants to “own the datacenter” and capture as much high-profit margin business as they can.

The stock market really only exists to serve itself.

When you wake up in the middle of the night laughing for a reason you can’t quite make sense of, remember that all this stupid sounding stuff is some other person’s business plan.

LeRoy Budnik November 1, 2008 at 12:55 am

I agree – bigger drives do not green make

I want to see real green, not “natural evolution green”. Real green is the result of an engineering change beyond the normal curve. For disk drives, the normal curve is smaller, faster, etc. In the case of a hard disk, real green could be:

– better heat sinks, reducing cooling cost without shortening MTBF
– better components in the drive that allow it to run at a higher temperature ($1 more per drive), again reducing cooling cost without shortening MTBF

I agree – hypervisors increase complexity, but not completely

However, they address the problem that the structure of the O/S does not lend itself to applications playing well together. Consider the registry. Put two apps on the same box, and you have double the trouble. Use a virtual instance, and each remains independent, taking away a complexity that most could not manage otherwise – the registry is a nightmare. So a VM is a different complexity: new, maybe worse, maybe the same – the jury needs better evidence. Hyper-V handles it a bit better, taking more of a “view” approach where each instance need only adjust its view of the base (and might not need to adjust at all).

Maybe we need to go to the root cause – the O/S – and:
– control interaction between applications
– isolate parameters, independent of the registry

Of course, this is the way we did things in ____.
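A minimal sketch of what “isolate parameters” might look like in practice: each application gets its own settings namespace on disk, so there is no shared hive for two apps to corrupt. The config root and application names below are hypothetical.

#!/usr/bin/env python3
"""Sketch of per-application parameter isolation: every app reads
and writes only its own settings file, never a shared registry."""
import json
import os

CONFIG_ROOT = "/etc/app-params"        # hypothetical per-app config root

def load_params(app_name):
    """Each app reads only its own file; nothing shared to trample."""
    path = os.path.join(CONFIG_ROOT, f"{app_name}.json")
    with open(path) as f:
        return json.load(f)

def save_params(app_name, params):
    os.makedirs(CONFIG_ROOT, exist_ok=True)
    path = os.path.join(CONFIG_ROOT, f"{app_name}.json")
    with open(path, "w") as f:
        json.dump(params, f, indent=2)

# two apps on the same box, parameters fully independent
save_params("payroll", {"db_host": "db1", "threads": 4})
save_params("crm", {"db_host": "db2", "threads": 8})
print(load_params("payroll"))   # {'db_host': 'db1', 'threads': 4}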

Disagree – the cloud is a goal, although you still need a place to put the stuff. Data centers breathe; if parts of them could sleep and workloads could juggle themselves seamlessly, there would be great energy-savings potential, in addition to the ability to create a standard, commodity infrastructure that could be turned to any purpose. But to get there, you need to understand the service-level requirements of data, CPU, etc. There are emerging ways to record these requirements, but nothing standard. You also need a non-centralized management strategy. Yet, even without these tools, many shops are standardizing (a precursor, though).

Election – Vote for Phil (one of the guys I work with)

Slogan: Common Sense, Practical and Cheap

In addition, he would find a way to incorporate a go-kart track into the White House rose garden, increasing revenue by selling seats to watch the President race, and by using the ATF to accept bets on who would win.

The main thing we need is common sense.

Jered November 19, 2008 at 10:13 am

LeRoy,

I agree with your general point — GE does the same thing with their “ecomagination” marketing effort, and claims the 3% evolutionary improvement in locomotive design as a specifically green initiative. That’s a bit disingenuous.

On your specifics, however:
– better heat sinks, reducing cooling cost without shortening MTBF
– better components in the drive that allow it to run at a higher temperature ($1 more per drive), again reducing cooling cost without shortening MTBF

Neither of these reduces cooling cost, in terms of BTUs that eventually have to be removed from the data center. (They may reduce costs slightly in terms of needing fewer fans within a single chassis.) A better heat sink will move heat from the drive to the environment without the need for a fan, but the heat still needs to be removed from the environment. Similarly, drives that can run at a higher temperature only raise the point at which you have to remove the heat, but the heat still has to go.

The only thing that can be done to make drives greener is to have them consume less energy. If a drive consumes 8W, all that energy turns into heat that has to come out of the data center. (OK, some of it turns into noise or motion, which eventually turns into heat.) The only thing that can be done to reduce heat (and thus cooling) is to have the drive draw fewer watts.

With mechanical drives, the improvements will always be evolutionary. It turns out capacity increase has been the biggest win here! It’s been much easier to double the capacity of a single drive, thus halving the power consumption per GB, than to halve the power consumption of a drive at the same capacity point. This will likely continue to be the case for a few more years.
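To put rough numbers on both points, a quick sketch with illustrative figures (the 8 W draw is from the example above; the capacity points are hypothetical generations): every watt of draw is about 3.412 BTU/hr of heat that must be removed, and doubling capacity at constant draw halves the watts per GB.

#!/usr/bin/env python3
"""Quick arithmetic behind the paragraphs above, illustrative numbers only."""
watts = 8.0                               # drive draw from the example above
btu_per_hr = watts * 3.412                # 1 W = 3.412 BTU/hr of heat to remove
kwh_per_year = watts * 24 * 365 / 1000    # energy drawn, before cooling overhead

print(f"{watts:.0f} W drive -> {btu_per_hr:.1f} BTU/hr of heat, {kwh_per_year:.0f} kWh/yr")

for capacity_gb in (500, 1000, 2000):     # hypothetical successive generations
    print(f"{capacity_gb:>5} GB at {watts:.0f} W -> {watts / capacity_gb * 1000:.0f} mW/GB")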

If and when we run out of technology for shrinking our magnetic domains, the next frontier will be new storage technologies. Solid state disks offer the opportunity for great additional savings due to near-zero idle power draw, and there are many likely paths forward for reducing their active power consumption. Spinning disk is slowly reaching the end of its road (15K RPM disks will get there within the next two years), but there are many other roads that lead on.

In the meantime, if you’re unhappy with the rate at which disk power consumption is falling, your other option is to reduce the amount of spinning disk needed in the first place. That’s why we see deduplication as an inherently green technology: it reduces the amount of disk necessary. This, in addition to an architecture that can properly utilize multi-terabyte drives, allows us to provide low operational costs without the reliability risks associated with things like drive spin-down.

Regards,
Jered Floyd
CTO, Permabit Technology Corp.

LeRoy Budnik January 19, 2009 at 11:12 pm

Jered,

Check out the commentary from the Rocky Mountain Institute. Many industry types participated in the research. Changing the heat capabilities and using better-grade components adds a small amount to the cost and enables other techniques, some of them very traditional. Let’s say we move to DC power, open racks and convection cooling. We won’t need fans, or at least not the same number of fans. Fans are a big contributor to power consumption; in fact, they add heat, their power supplies add heat, etc.

Your comments are well taken; however, I disagree. We will have to talk at Symposium, to bring you to the right rather than the party.
