
Are These Really The Key Issues?

by Administrator on June 22, 2009

I just received this email announcing a webcast featuring ESG and Isilon Systems claiming to address the key issues of storage today:

Storage issues are mounting, and so are the critical business and technology decisions you need to make. There are many “issues of the day” that you should be following closely; here are just a few:

  • Adapting storage to virtualized compute resources
  • Creating a scale-out infrastructure to support growing data capacity needs
  • Lowering storage operating expenses
  • Deploying data de-duplication capabilities
  • Reducing costs with a tiered storage approach
  • Archiving to disk versus tape

You need more than raw data: you need insight and perspective, and the opportunity to ask your business-critical questions. The Enterprise Strategy Group (ESG), Isilon Systems, and Ziff-Davis Enterprise have teamed up in this one hour free webinar to provide you the latest market research insights and an open forum to raise your toughest storage questions with Isilon’s scale-out NAS experts.

Come prepared with your toughest storage questions: the answers will be here. Register and attend, and you’ll be eligible to win a Kodak HD video camera.

I am wondering what readers here think about the priority assigned to the issues above.  Are they really key?  My view:

  • Adapting storage to virtualized compute resources:  If you are doing VMware, Hyper-V, etc., this is probably an issue of importance.  I don’t know what, if anything, can be said about it currently other than to deploy Virtual Instruments to get an honest read on I/O so you can route it more effectively.  Of greater importance is understanding what the I/O is, so that intelligent decisions can be made about the resources and services that should be made available to it.  That requires an understanding of the business context around the applications that are being used.  Plumbing issues are secondary.
  • Creating a scale-out infrastructure to support growing data capacity needs:  Prioritizing scale-out is appropriate assuming 1) that you have done everything you can to take junk data off spindles and have implemented a green archive (tape or optical) to store non-re-referenced data that nonetheless needs to be retained for business reasons, and 2) that you have done your best to classify data assets so they can be managed effectively over their useful life.  If you haven’t done either of these things, you are doing nothing but playing into storage vendors’ hands by working to build more capacity to store junk.
  • Lowering storage operating expenses:  These days, cost containment is job 1.  You lower expenses by eliminating stovepipe arrays and by establishing a common management scheme for storage (perhaps a specific management software package or a web services-based approach), then telling your vendor that you won’t buy their overpriced spindles unless they conform to the management approach you have selected.  That would really help rein in infrastructure management costs so that fewer storage admins can do more.  Again, intelligent archive and better data hygiene will help trim the junk drawer so capacity management isn’t a game of putting out daily fires.  In other words, operating costs are a function of common platform management and data management, not of throwing more crappy value-add software into an array controller.
  • Deploying de-duplication capabilities:  This is certainly a technology on a lot of people’s minds.  I will be doing a webcast on de-dupe AT THE EXACT SAME TIME (see below) as the ESG/Isilon cast.  Mine dwells on the business ramifications of de-dupe: whether the technology constitutes a product or just a feature, whether it is best done in software or in hardware, and whether we need to worry about the impact of de-dupe on things like the regulatory or legal acceptability of data.  I suspect that a hardware vendor might simply encourage the application of value-add de-dupe technology on his array controller.  What do you think?  Perhaps you can listen to one webcast or the other, then listen to the playback of the other cast the next day to see which one addresses your concerns about this issue.
  • Reducing costs with a tiered storage approach:  I am hearing vendors elevate this to an issue all over the place.  On-array tiering is being manufactured as a cutting-edge new feature of brand X arrays.  I have yet to find a consumer who is really buying into it.  Putting shelves of FC/SAS and LCFC/SATA into the same array is not intelligent tiering — it is just a way to jack up the price of the disk drives.  Migrating data from shelf to shelf based on simplistic watermarking or date-last-accessed-plus-FIFO algorithms is not intelligent tiering — it is simplistic HSM (see the sketch after this list).  I could use Novell Storage Manager, Crossroads Systems File Migrator, QStar Technologies wares, or Digital Reef to realize the same functionality (or more granular data movements based on data class) in a much less expensive way across Xiotech ISE platforms or generic JBODs.  Tiered storage is fine, as long as you recognize that, unlike the mainframe world, there are only two tiers of storage in open systems:  capture storage (rated to the speeds and feeds of the app writing the data) and retention storage (everything else).  Archival data belongs on tape or optical.  Junk data belongs in the waste basket.
  • Archiving to disk versus tape:  Bizarre that we are even discussing this.  Have we collectively forgotten everything we ever learned about the vulnerability of disk, its inappropriate and costly application to long-term archive, and the energy costs of spindles versus other media?  If ESG is recommending archive to disk, I have to wonder about the intelligence of some of its other recommendations.  (Well, I guess I do anyway…)
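To make the “simplistic HSM” point above concrete, here is a minimal sketch, in Python, of the kind of watermark-plus-last-access migration logic being repackaged as intelligent tiering.  The mount points and thresholds are hypothetical, invented purely for illustration; the point is that a policy like this knows nothing about what the data is or what the business needs from it:

```python
import os
import shutil
import time

# Hypothetical mount points and thresholds, for illustration only.
FAST_TIER = "/mnt/tier1_fc"     # "capture" shelf: fast FC/SAS spindles
SLOW_TIER = "/mnt/tier2_sata"   # "retention" shelf: cheap SATA spindles
HIGH_WATERMARK = 0.80           # demote when the fast tier is 80% full
MAX_AGE_DAYS = 30               # a file is "cold" after 30 days untouched

def fast_tier_utilization():
    """Fraction of the fast tier's capacity currently in use."""
    usage = shutil.disk_usage(FAST_TIER)
    return usage.used / usage.total

def cold_files_oldest_first():
    """Files not accessed in MAX_AGE_DAYS, oldest first (the FIFO part)."""
    cutoff = time.time() - MAX_AGE_DAYS * 86400
    cold = []
    for dirpath, _, filenames in os.walk(FAST_TIER):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.stat(path).st_atime < cutoff:
                cold.append((os.stat(path).st_atime, path))
    return [path for _, path in sorted(cold)]

def demote_until_below_watermark():
    """Move cold files down a tier until we drop below the watermark."""
    for path in cold_files_oldest_first():
        if fast_tier_utilization() < HIGH_WATERMARK:
            break
        dest = os.path.join(SLOW_TIER, os.path.relpath(path, FAST_TIER))
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        # Note what is missing here: any notion of data class, business
        # context, or retention requirement.  Capacity and a date are
        # all this "tiering" ever looks at.
        shutil.move(path, dest)

if __name__ == "__main__":
    demote_until_below_watermark()
```

A data-class-aware mover, the sort of functionality the products named above provide, would route files based on what they are, not merely on how full a shelf is and when a file was last touched.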

Okay.  I’ve said what I think.  What are the real issues on everyone’s mind when it comes to storage?  How about these?

  • When is my management going to let me purpose-build my storage infrastructure to meet the needs of my applications and business processes?  When will they realize that one-size-fits-most, brand-name arrays don’t fit anyone’s needs very well?
  • When is the industry going to cooperate in a universal storage management scheme that makes it easy to unplug vendor A’s wares and replace them with vendor B’s?  Better yet, when will the industry stop preventing efficient management by adding a lot of crap to their array controllers that doesn’t need to be done on the array and probably shouldn’t be?
  • When is the industry going to quit equating storage management with capacity management?  Management of storage is about providing the appropriate hosting services — conceived as a well-defined and highly manageable mix of hardware resources and software functions — to data over its useful life.  It isn’t just waste management:  creating an ever-expanding landfill in which to deposit junk data.
  • Why scale out;  why not scale back?
  • How about abandoning terms like SAN, networked storage, tiered storage, de-duplication, archive, etc. that have become so much abused by marketeers that they mean whatever the vendor wants them to mean in his brochure?  Instead, let’s talk about what really matters:  I/O performance, data management via classification and routing policies, driving costs out of boxes of commodity spindles, using the right storage technology for the job instead of what the vendor wants us to use…
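On that last point, here is a minimal sketch, in Python, of what classification-and-routing policy could look like in code.  The classes, attributes, and targets are hypothetical and grossly simplified; a real scheme would be driven by the business context of each application:

```python
# Hypothetical data classes, attributes, and targets, for illustration only.
# The two open-systems tiers discussed above, plus archive and the trash.
TARGETS = {
    "capture":   "fast spindles rated to the app's speeds and feeds",
    "retention": "commodity disk (everything else)",
    "archive":   "tape or optical (green, long-term)",
    "junk":      "the waste basket",
}

def classify(has_business_value, must_retain, likely_re_referenced):
    """Toy classifier: map business attributes of data to a storage class."""
    if not has_business_value and not must_retain:
        return "junk"
    if likely_re_referenced:
        return "capture"
    return "archive" if must_retain else "retention"

def route(path, **attributes):
    """Return the storage target a file should be hosted on."""
    storage_class = classify(**attributes)
    return storage_class, TARGETS[storage_class]

# A closed project file: must be kept for business reasons, rarely re-read.
print(route("/data/project_x/final_report.doc",
            has_business_value=True,
            must_retain=True,
            likely_re_referenced=False))
# -> ('archive', 'tape or optical (green, long-term)')
```

Trivial as it is, even this toy makes the routing decision on attributes of the data, not on the brand of the box the data happens to land on.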

Okay.  Back to earth.  Competing with the ESG/Isilon webcast, at exactly the same hour, is my webcast for Redmond Magazine on De-Dupe.  Here are the details as they were just sent to me.

De-duplication is a hot topic in storage management, but its primary application is typically in the realm of data protection. Jon Toigo offers his thoughts on the de-duplication craze and notes a great option that will enable you to leverage de-duplication capabilities in a sensible and business-savvy way. Join us for this free webcast!

  • Date: Wednesday, June 24, 2009 at 2:00pm ET
  • Webcast: The De-duplicated Backup: Straight Talk about the Capabilities and Limitations of a Sexy New Technology
  • Speaker: Jon Toigo, Enterprise Systems Contributing Editor

The idea is to write backup data sets to an array or virtual tape library, then to apply an algorithm that squeezes the number of bits used to describe the data. In theory, this lets you build a dense backup file repository – storing more data in less space – from which individual backup files can be recovered quickly in the event of accidental erasure of, or damage to, originals.
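For readers new to the mechanics, here is a minimal sketch, in Python, of content-hash de-duplication.  It uses toy fixed-size chunking; shipping products differ widely (many use variable-size, content-defined chunking and far more elaborate indexes), so treat this as the concept only:

```python
import hashlib

CHUNK_SIZE = 4096  # toy fixed-size chunking; real products often chunk on
                   # content boundaries so insertions don't shift everything

def dedupe(stream, store):
    """Split a backup stream into chunks, storing each unique chunk once.

    Returns a "recipe" (a list of chunk digests) from which the
    original stream can be reassembled.
    """
    recipe = []
    for i in range(0, len(stream), CHUNK_SIZE):
        chunk = stream[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:
            store[digest] = chunk   # new data: keep the bits
        recipe.append(digest)       # repeated data: keep only a pointer
    return recipe

def rehydrate(recipe, store):
    """Reassemble the original stream from its chunk recipe."""
    return b"".join(store[digest] for digest in recipe)

# Two nightly backups that are mostly identical share most of their chunks,
# so the second night consumes almost no new space in the store.
store = {}
night1 = b"A" * 8192 + b"changed on night one"
night2 = b"A" * 8192 + b"changed on night two"
recipe1 = dedupe(night1, store)
recipe2 = dedupe(night2, store)
assert rehydrate(recipe1, store) == night1
assert rehydrate(recipe2, store) == night2
print(len(store))  # 3 unique chunks stored, not the 6 that were written
```

The recipes are the catch: lose or corrupt the chunk index and every backup that referenced it is gone, which is one reason the recovery and compliance questions below matter.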

The question is, how do you best deploy de-duplication technology? Hardware vendors want to sell an appliance or gateway. However, these products can be prohibitively expensive to purchase and costly to operate.

Does the strategy eliminate tape backup, as some vendors claim? Are there any potential compliance issues associated with the strategy that should lead you to exclude certain data from de-duplication?

Registration is here, or you can watch the ESG/Isilon cast and visit the replay of my webcast the next day.  Or you can just go to the beach and enjoy some summertime with your kids.

Your choice.

UPDATE —

I am now advised that the ESG/Isilon cast is at 4 PM ET (I misspoke, using the PST time).  So, you can blow off your workday and join both casts, or blow off the webcasts and hang out with your kids, who are likely bouncing off the walls given that there is no money for special camps and staycations have replaced vacations in many homes.


johng_isilon June 22, 2009 at 9:43 pm

Good news, Jon. Your webinar is at 2pm ET, the Isilon/ESG one is at 4pm ET, so we’d encourage people to attend both if they can. We’d also like to invite you to register and attend ours, because the idea behind this forum is to bring up the sort of questions you’ve raised (why hasn’t the industry….). If you can’t make it I’m assuming your readers will ask similarly tough questions. You titled this post spot-on…we don’t think this list of “key issues” is exhaustive by any means. On Wednesday we have allocated more than 2/3rds of the hour to audience Q&A, since “toughest question” or “key issue” is in the eye of the beholder. By bringing other (even tougher) questions to the table and engaging in unscripted discussion around them, we’re hoping this webinar brings a lot more focus to genuine real-life issues than sometimes happens in fully scripted events.

Administrator June 23, 2009 at 10:57 am

Thanks, John. I hope everyone will attend both.
