Tape Capacity Leaps Ahead…Again!

by Administrator on April 10, 2015

Imagine how amused I was when, in researching the presentation I will be delivering in London this Tuesday on Business Continuity in a Non-Stop World, I ran across a guest blog by a DRaaS vendor dissing tape.  Mr. Ledbetter of Zetta argued that you need to use a cloud-based DRaaS provider because cloud backup beats tape backup handily.  His key points:

  • Tape has a 30% failure rate.  (Simply not true.)
  • Random CRC errors lead to corrupted tape backups.  (CRC errors, last I checked, also corrupt disk mirrors.)
  • Tape backup is prone to human error. (Also true of both tape backup and other data protection processes, cloud or not.)
  • Returning the right tapes to a customer having a disaster can take days, while failing over to a cloud DRaaS service can be accomplished within 14 seconds.  (Too many assumptions to unpack here.  But, again, a logic error: generalizing from a single cited event.)

I wouldn’t waste my time, but you can read all of his rant here.

A better use of your time: read the information below, hot off the presses.

FUJIFILM ACHIEVES HIGH CAPACITY STORAGE MEDIA MILESTONE WITH ADVANCED PROTOTYPE TAPE

Fujifilm demonstrates a new tape areal density record of 123 billion bits per square inch

VALHALLA, N.Y., April 9, 2015 – FUJIFILM Recording Media U.S.A., Inc., a subsidiary of FUJIFILM Corporation, the leading global manufacturer of data storage media, today announced that, in conjunction with IBM, it has achieved a new record in areal data density of 123 billion bits per square inch on linear magnetic particulate tape. For the fourth time in less than 10 years, Fujifilm and IBM have achieved record-breaking storage capacities on tape, today announcing the highest-capacity storage medium ever demonstrated, surpassing HDD, BD and NAND flash solid-state memory technologies. This breakthrough in data density equates to a single tape cartridge capable of storing up to 220 terabytes of uncompressed data, more than 88 times the storage capacity of the current LTO Ultrium 6 tape. A cartridge of this capacity could preserve the human genomes of 220 people.
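The 220 TB figure passes a back-of-the-envelope check. Tape width below is the LTO-standard 12.65 mm; the ~1,000 m tape length and the usable-area interpretation are my assumptions, not figures from the announcement:

```python
# Rough sanity check of the 220 TB claim from the quoted 123 Gbit/in^2
# areal density. 12.65 mm is the LTO-standard tape width; the 1,000 m
# length is an assumption (LTO-6 cartridges hold somewhat less tape).
AREAL_DENSITY_BITS_PER_SQ_IN = 123e9
TAPE_WIDTH_MM = 12.65
TAPE_LENGTH_M = 1000          # assumed, for illustration
MM_PER_INCH = 25.4

width_in = TAPE_WIDTH_MM / MM_PER_INCH
length_in = TAPE_LENGTH_M * 1000 / MM_PER_INCH
area_sq_in = width_in * length_in

raw_tb = AREAL_DENSITY_BITS_PER_SQ_IN * area_sq_in / 8 / 1e12

print(f"geometric capacity: {raw_tb:.0f} TB")          # roughly 300 TB
print(f"220 TB uses ~{220 / raw_tb:.0%} of the raw area")
print(f"220 TB / 2.5 TB (LTO-6 native) = {220 / 2.5:.0f}x")
```

On these assumed dimensions the purely geometric capacity comes out around 300 TB, so the announced 220 TB plausibly reflects the area lost to servo tracks, guard bands and formatting overhead, and the 88x multiple follows directly from LTO-6's 2.5 TB native capacity.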

“With high performance computing and cloud storage services on the rise, this data density achievement is significant,” said Peter Faulhaber, president, FUJIFILM Recording Media USA, Inc. “Fujifilm and IBM are leading the technological development of advanced tape innovation that meets the market’s growing data requirements and delivers tape as the medium of choice for archival storage.”

This record breaking demonstration was achieved using an advanced prototype tape incorporating NANOCUBIC technology developed by Fujifilm, with advanced tape-drive technologies developed by IBM.

To learn more about Fujifilm and IBM’s collaboration, go to: https://youtu.be/bF07LZeCVhk

Fujifilm Technology Enhancements
Fujifilm’s NANOCUBIC technology has been enhanced to increase recording density by decreasing the size of the magnetic particles, which is essential for high recording density. Fujifilm’s original BaFe synthesis method increases the uniformity of BaFe particle size and decreases the switching field distribution (SFD), an important magnetic parameter for high-density recording, by 25%. The lower SFD leads to a higher-quality signal output due to the uniform magnetic property of each recorded bit. To ensure the stability of the ultra-fine BaFe particles, Fujifilm improved the magnetic coercivity, yielding an archival life of over 30 years.

A highly controlled dispersion process and newly developed chemical compound allows the BaFe particles to separate and disperse more uniformly and increases the perpendicular oriented ratio. Perpendicular orientation technology with BaFe produces a high signal to noise ratio and better frequency response. Enhanced NANO coating technology with a very smooth non-magnetic layer controls the tape surface roughness, providing a smooth magnetic layer for higher signal output. Fujifilm’s advanced servo writing technology decreases high frequency vibration of the servo tracks and enables a higher track density due to more precisely placed servo tracks.

IBM Technology Enhancements

  • A set of advanced servo control technologies that enable more accurate head positioning and increased track density.
  • An enhanced write field head technology that enables the use of much finer barium ferrite particles.
  • Innovative signal-processing algorithms for the data channel that enable reliable operation with an ultra-narrow 90nm wide giant magnetoresistive (GMR) reader.
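The servo and reader improvements above combine into track density, since areal density is simply linear density (bits along the tape) times track density (tracks across it). A minimal sketch, where the 680 kbpi linear density is my illustrative assumption rather than a figure from the announcement:

```python
# Areal density = linear density x track density. The linear-density
# figure here is an illustrative assumption, not from the announcement;
# it shows the track pitch implied by the 123 Gbit/in^2 demo.
AREAL = 123e9            # bits per square inch (from the demo)
LINEAR_BPI = 680e3       # assumed linear density, bits per inch

tracks_per_inch = AREAL / LINEAR_BPI
track_pitch_nm = 25.4e6 / tracks_per_inch   # 1 inch = 25.4e6 nm

print(f"implied track density: {tracks_per_inch:,.0f} tracks/inch")
print(f"implied track pitch: ~{track_pitch_nm:.0f} nm")
```

On these assumed numbers the implied track pitch works out to roughly 140 nm, which would comfortably accommodate the 90 nm-wide GMR reader mentioned in the last bullet.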

Fujifilm will continue to lead the development of large capacity data storage media with BaFe technology to provide a cost-effective archival solution to preserve digital data.

More information about the future of big data storage can be found at www.FujifilmUSA.com/storage.

Okay, I know everybody needs to peddle their goods, but given the ongoing capacity and resiliency improvements we have been seeing from tape for the past decade, courtesy mainly of Fujifilm and IBM, Ledbetter’s claims just seem silly to me.  Congrats to the tape mavens for yet another home run.

I look forward to getting IBM’s perspective on the latest tape technology advances when I attend Edge 2015 in Las Vegas in a couple of weeks.  Hope to see some of you there too!  Registration is here.

I will be delivering five sessions at #IBMEdge 2015 covering DR for the #Mainframe #IBMz and hyper-converged, software-defined, hybrid-cloud-enabled infrastructure and other topics.

Also, for the record, I am being compensated by IBM for delivering five sessions at TechEdge and for participating in their Social Media/Blogger program.  This, in no way, colors my views as expressed in my blog posts, presentations or social media posts — but it does get me travel, room and board at the event.

 


Software-Defined Storage Meets LTFS Tape

by Administrator on April 1, 2015

Continuing my interview from IBM InterConnect 2015 with Clod Barrera, Distinguished Engineer and Chief Technical Strategist for IBM System Storage, our conversation took an unconventional direction.  Given the propensity of SDS to “flatten” storage infrastructure (eliminating tiers of storage by replacing them with simple direct-attached storage nodes), I was curious to learn how Big Blue would bring hyper-converged infrastructure together with technologies such as LTFS tape to support (at least) data archiving and disaster recovery protection.  The need for a protected archival repository does not go away just because a company elects to go back to direct-attached storage (virtual SAN or not).

Clod proved to be extremely knowledgeable about LTFS tape and something of an advocate of a hybrid virtual SAN/tape architecture…

We concluded our interview with Clod by getting his take on how software-defined storage will ultimately roll out.  He notes that, as was the case with SANs, there will likely be many reference model architectures to meet multiple workloads and use cases.  But he believes that these will ultimately collapse inward to just a few well-defined reference models.  Getting there will be…well…interesting (and likely expensive!).

Again, we thank Clod Barrera for spending time with us for this informative interview.  I am looking forward to chatting with him again, perhaps in April at the IBM Edge 2015 event.  For anyone interested in Edge, where I will be speaking and hanging out in the Social Media Lounge, there is an early bird registration deal right now:

Register on or before April 5 & save on z Systems Education Sessions at #IBMEdge 2015  #Mainframe #IBMz

Also, for the record, I am being compensated by IBM for delivering five sessions at TechEdge and for participating in their Social Media/Blogger program.  This, in no way, colors my views as expressed in my blog posts, presentations or social media posts — but it does get me travel, room and board at the event.

 


Getting Ready for Another IT-SENSE Brown Bag Webinar

by Administrator on April 1, 2015

Mark your calendars for High Noon (Eastern Time) on Wednesday, 8 April, one week from today, for the next Brown Bag Webinar from IT-SENSE.  This time we are going to tackle the topic of Avoiding Snake Oil in Software-Defined Storage.  My guest will be Anatoly Vilchinsky of StarWind Software, and together we will try to separate the hype from the hyper-converged, the marketecture from the architecture, and the actual benefits from the promises of the software-defined storage patent medicines that seem to be popping up everywhere.

To register, please click on this link.

While you are at the site, check out some of our other Brown Bag webinars, which are still timely and, some say, useful.  And stand by for the next (and somewhat delayed) issue of IT-SENSE which will focus on Infrastructure and Data Management.

We hope to see you on Wednesday for what promises to be an irreverent, humorous and useful analysis of the state of software-defined storage.


At IBM InterConnect 2015, I had the opportunity to sit down with Clod Barrera, Distinguished Engineer and Chief Technical Strategist for IBM System Storage, for an ad hoc discussion of software-defined storage, hyper-converged infrastructure, and IBM’s evolving strategy for both.  We were supposed to focus on XIV, which IBM has returned to a pure software state (what it originally was when IBM bought the company) and rebranded as part of Big Blue’s Spectrum Storage portfolio.  But when I get together with Clod, it is impossible to stick to just one topic.

The interview went on and on, consuming nearly an entire tape (yes, I said tape).  We cut it down to create this multi-segment interview that is well worth the time to watch in its entirety if you want to get inside the head of IBM with respect to software-defined storage and the future of monolithic storage arrays and other legacy devices and topologies.

We start with the basic question about IBM’s decision to take XIV back to its software layer roots…


 

Clod goes on to answer our questions about the juxtaposition of Spectrum Accelerate (XIV) and the z13 mainframe’s capability to host 8,000 virtual machines.  Is there any intention to create a software-defined storage architecture for use with a mainframe?  More interesting to me is his answer to the idea that software-defined storage includes only services management, not capacity management.  I disagree with this definition, which seems to be coming from a certain hypervisor vendor that is 80 percent owned by a storage array maker…

 

To my delight, IBM’s announcements around Spectrum Accelerate (XIV) and software-defined storage were devoid of promises that SDS would make virtualized apps run faster, a bogus but oft-repeated claim of other SDS vendors.  Clod’s response is very interesting…

 

Midway through the clip, we told him of our concern that a lot of software-defined storage implementations, so-called hyper-converged storage, flatten out the storage infrastructure, thereby negating the benefits of tiering.  We wanted IBM’s take on this, and got a coherent response.

I will post the final two parts of the interview tomorrow.  I hope you will find them as interesting as I do and will join me in thanking Clod for sharing so much in this interview.

I also look forward to seeing Clod and other IBM Systems Storage experts at Edge 2015 this April.  I will take this opportunity again to pass along info about early bird registration as I hope to see many of my readers at the show.

Register on or before April 5 & save on z Systems Education Sessions at #IBMEdge 2015  #Mainframe #IBMz

Also, for the record, I am being compensated by IBM for delivering five sessions at TechEdge and for participating in their Social Media/Blogger program.  This, in no way, colors my views as expressed in my blog posts, presentations or social media posts — but it does get me travel, room and board at the event.

More Clod Barrera shortly.

 


Thank you…Thank you…Thank you!

March 31, 2015

Muchas gracias!  Molto grazie! Merci beaucoup!  Danke schoen!  Arigato gozaimasu!  Большое спасибо!  And so forth… To everyone who made today’s webcast, A Hype-Free Guide to Hyper-Converged Infrastructure, a big success.  I am told that attendance was four times what was expected and most everyone stayed to the end.  Sponsor DataCore Software was delighted (they called me a […]

Read the full article →

From InterConnect to Edge 2015

March 31, 2015

Seems like I was just in Las Vegas covering the IBM InterConnect event.  It was a great show and a remarkable cobble of new school and old school types working together to address the needs of organizations in the face of the M-Commerce revolution.  Now that I have had some time to think about everything […]

Read the full article →

Why…Of Course I will Present on Hyper-Converged Infrastructure

March 31, 2015

I returned from Paris a week or so ago, where I got to present about Hype and Hyper-Converged Infrastructure to about 250 DataCore Software partners and customers from a perch on the first stage of the Eiffel Tower.  It was a mind blowing experience and seemed to be well received by the attendees.  So much […]

Read the full article →

Getting My Presos…and Luggage…Together

March 9, 2015

Looks like a crazy travel schedule coming up.  Tomorrow:  I am visiting Ft. Lauderdale to conduct some video interviews with George Teixeira and Ziya Aral, my friends of years and the bosses of DataCore Software.  I am looking forward to getting their take on software-defined storage and virtual SAN and to catching up with all […]

Read the full article →

Software-Defined…well…Everything

March 9, 2015

 It seems like most of the storage companies I am encountering these days are falling in line with the software-defined aka hyper-converged aka virtual SAN aka direct-attached-to-clustered-servers trope.  Not that all of these products or technologies are all the same, mind you, but everyone seems on board with the idea of pulling value-add functionality off […]

Read the full article →

Gearing Up for IBM Interconnect Day 2

February 24, 2015

I have been reviewing sessions today at the IBM Interconnect show in Las Vegas and have ID’d one called The New z13: Redefining Digital Business through the Integration of Cloud, Mobile and Analytics that I will attend in about an hour.  This might be a bit of a rehash of the presentations I saw at the […]

Read the full article →