
Day 3 at IBM Interconnect

by Administrator on March 22, 2017

On this, the third and final day of my work at IBM Interconnect, I have enjoyed several intelligent discussions and sessions that have my grey cells firing.  This morning, I attended a session conducted by Steven Dickens of IBM (video interview with Dickens to follow shortly) and Guy Shone of Explain the Market (a UK technology economist) that threatened to explain how a CFO might decide to authorize the migration of core applications to the cloud.

I spent about an hour before the session getting to know Shone, who strikes me as a bright and hype-free analyst.  He writes a column in a London paper and is a frequent commentator on the British airwaves regarding the business value of technology — one of my favorite subjects.

What he said got me thinking, which is what IBM says we all should do.  There are signs all over the IBM Interconnect venue advising us to THINK, and others to EAT, ASK, SLAY (???), etc.  For a moment, I thought they were doing an homage to John Carpenter’s classic film, They Live, and I kept expecting to see signs like OBEY, or MARRY & REPRODUCE. 

But I digress.

Shone spent much of his time noting that a great deal of change is underway, perhaps too much to neatly define, categorize or model in order to predict outcomes.  CFOs tend to be conservative and slow to jump on new tech, preferring to play it safe and take advantage of tech that has demonstrated mainstream adoption and generally accepted value.  Except, in these days of disruptive micro businesses and unbridled mobile commerce, waiting is not always the smart move.  There may be some first-mover advantage.  A conundrum.

He was critical, as am I, of what substitutes these days for sober (economic) analysis.  There is a major (and deserved) trust gap among CFOs: in reports from industry analysts (who too often sell their opinions to vendor clients), in stilted social media endorsements (vendors can deliberately inflate how many likes a product or service receives), and even in use cases (which rarely provide replicable outcomes).  What’s the poor CFO to do?

The amount of information to digest to get at a kernel of truth in analysis is huge and growing.  The signal-to-noise ratio is getting pretty awful.  Vendors toss around statistics that were created under contract to other vendors, which they privately disparage as so much BS.  It’s hard work to be a CFO who is constantly under pressure to let loose the hounds of technology but always cautious not to let the dog bite the hand that feeds it.

So that’s one thing that has me thinking.  Is there a model with dependable and replicable outcomes that a CFO can leverage to guide decisions like moving core apps to the cloud?  That question was never answered in the session, by the way.  At best, Steven Dickens offered to make consultations available to CFOs who were contemplating a move to IBM Clouds.

As I listened to Dickens extol the virtues of mainframe cloud technology, I was hearing something a bit more subtle.  Last year and the year before, Interconnect presenters showed “hybrid cloud” slides that depicted traditional organizations running most core apps (legacy apps, systems of record, etc.) on premise.  To the extent that public cloud services were leveraged at all, it was for ancillary uses — like storage of archival data or for DR as a Service, that sort of thing.  IBM was fond of saying that larger firms were not ready to move wholesale to the cloud.  Hybrid was a way for them to dip their toe in the water before diving in.

Not now, apparently.  This year, IBM seems to have redefined hybrid cloud to mean the placement of some core apps in the public cloud (an IBM one) and some on premise.  They have spun the discussions of the sessions at Interconnect to include lots of references to “cloud native” apps and why companies should be shifting their development efforts toward this model.

Yet there is no model to encourage CFOs to embrace such an endeavor.  There are, theoretically, some short-term gains.  If attracting skilled workers to wherever your firm has put down stakes is a hassle, that problem may go away in the cloud, where communities of developers and operators already live.  So there is that OPEX value, as there is with any outsourcing arrangement.

And of course, if you no longer want to pay the price for equipment upgrades, that cost will go away with the cloud…or with any outsourcing arrangement…too.  So there is some CAPEX value there.
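For what it is worth, here is the kind of back-of-the-envelope arithmetic a CFO might start with: a minimal sketch with invented numbers, not anything Dickens or Shone presented.  The discount rate, the CAPEX figure and the subscription fee below are all assumptions to be replaced with real ones.

```python
# Toy five-year cost comparison: on-premise vs. cloud.
# Every figure is an invented placeholder, not a vendor quote.

def npv(cash_flows, discount_rate=0.08):
    """Net present value of annual cash flows, year 0 first."""
    return sum(cf / (1 + discount_rate) ** year
               for year, cf in enumerate(cash_flows))

# On-premise: big CAPEX up front, then annual OPEX (staff, power, maintenance).
on_prem = npv([1_500_000] + [400_000] * 5)

# Cloud: migration cost up front, then a higher annual subscription.
cloud = npv([300_000] + [650_000] * 5)

print(f"On-premise, 5-year NPV of costs: ${on_prem:,.0f}")
print(f"Cloud,      5-year NPV of costs: ${cloud:,.0f}")
```

Run it with a modestly higher subscription fee or a different discount rate and the answer flips, which is precisely Shone’s point: without dependable, replicable inputs, the model outputs whatever the person who picked the numbers wanted it to.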

But these are older use cases, and the gains are not always guaranteed to offset the costs and risks of cloudification, including exposure to denial-of-service attacks, outages, security breaches, etc.  They certainly do not speak to the granular issues raised by the overwhelming diversity of cloud technology initiatives, which force you to hitch your wagon to an industrial cloud service provider’s star.  Is joining an IBM Cloud the equivalent of lock-in?  Do we need to be concerned about that, or is it the price for doing things in a way that delivers predictable outcomes?  That has been a question on my mind, honestly, since the Open Technology Summit that I attended at the start of this show.

Still, Dickens has a nice Midlands voice — a true cloud whisperer.  And while he might not have a rigorous economic model for cloud adoption yet, he might just sweet-talk some CFOs into increasing the 17% of revenues that IBM currently derives from its cloud services.

BTW, I am attending IBM Interconnect as a guest of IBM, who has paid for event registration, transportation and lodging.  My words here, however, are my own.

 


Day 2 at IBM Interconnect

by Administrator on March 21, 2017

Actually, this post begins yesterday, at the end of the day.  Weary from jet lag, belly full from a visit to the Mandalay Bay Noodle Shop and a bowl of their very good Tom Yum Kong soup, I was dragging my lug-a-long back to the Delano through the garage shortcut when I noticed a limo pulling up.  Out popped a gaggle of dark-suited security guys and a young woman.  I recognized her immediately as the CEO of IBM, Ginni Rometty.

“Hey, Ginni,” I said, in a voice that I hoped would not excite her entourage, “You put on a great show!”

She looked confused at first, then seemed to realize that I was referring to the conference that I was attending and she was to speak at the next morning:  IBM Interconnect.  If I had just flown in from doing a deal with the Chinese to make IBM one of the largest cloud service providers in Asia, I probably would have been a bit slow to understand the words of a total stranger.

Anyway, having made such a grand new friend, I looked forward to hearing the Chairman’s Address first thing this AM.  My friend, Ginni, took the stage to riotous applause from something like 20,000 attendees.  She seemed to have recovered well from her jet lag.

Her opening was well received too.  She spoke about the integral connection between cognitive technology and clouds, two terms that seem to be more from the domain of marketecture than architecture.  I decided to hang with it and hear her out.

She seemed intent on making the case that enterprise-grade clouds from IBM were different from everyone else’s wannabe clouds.  Moreover, she dwelled heavily on the productization of cloud and cognitive.  IBM’s job, she said, was to help companies extract value from their data — and to keep it to themselves for competitive advantage.  True enough, but I found myself longing for a bit more socially aware content.  Something about making the world a better place through computing, cloud, cognitive, whatever. 

I remembered past events where the agenda seemed to be written by California Avocado Dip Creamy Smooth Liberals.  I remembered long sessions on “green computing.”  Or a talk covering advances that were going to unwrap the enigma of the DNA chain so that illness could be eradicated.  Or something about finding water in Saharan Africa.  IBM with a conscience.

But the CEO instead focused on capitalism.  You invest in technology to gain advantage over your competitor in the market.  If your success puts them out of business, so be it.  Survival of the fittest is the name of the game.

Her first guest seemed to reinforce this message, which, while it makes perfect sense, did not exactly ring the bell of my inner 60s counter-culturalist.  The AT&T CEO seemed likable enough, but together he and Ginni came across like two Caesars, divvying up the world between them.

I worried that this was one of those situations in which the times dictated the message.  Just as current politics seems hell-bent on undoing the social advances of, say, the past 100 years, maybe tech companies felt a need to frame their value in strictly zero-sum terms.  Forget society.  Profit rules.

Then Ginni was joined by the CEO of Salesforce.com.  I had never heard him speak before, but he struck me as a very likable fellow.  I am not sure what kind of a business guy he is, having taken 11 years to build his brand to where it is today, but he said things that quelled a lot of my concerns.  He had a good grasp of capitalism, but he also vocalized concern for the worker bees of the world: whether they would have the right skills going forward to land the kinds of work that would eliminate “income uncertainty.”  He even used the alphabet-soup term LGBTQ and gave a shout-out to equal pay for women.  This was something I longed to hear, given that the preponderance of my children are female and/or gay.

A pleasant fellow from H&R Block took the stage next to extol the value of IBM technology and its assistance in applying Watson to tax preparation, a promise made in a last-minute Super Bowl ad, apparently before the product was ready to go.  It was a good story; IBM was the hero of a just-in-time delivery.

Then, finally, Ginni chatted with the Royal Bank of Canada, whose representative said his company had become an IT shop with a banking sign on the outside door.  Fair enough.

Things were running long and the business humor was getting a bit thin.  I started to eye the exit, but I remembered that Ginni’s final guest would be the founder of Girls Who Code.

The entry of the founder of the group, then some of her graduates, provided the chance for IBM to show that it was not only a capitalist cognitive-and-cloud tool company, but also a community of people with real (not virtual) hearts who actually cared about the challenges faced by the next generation.

We got to hear first from the founder, who noted that more than 40,000 girls had been taught to code by her organization with the backing of Big Blue, and that she had set a goal of training 1 million girls to code by 2020.  Big applause.

Then the miraculous happened.  Three outstanding young women took the stage and told their stories, with pictures on the big screen of them as the children they were when they entered the program.  They were given paid internships by IBM at the end of the talk.  Tears welled.

So, in the space of about 97 minutes, Ginni had gone from the Iron Maiden of technology to the Patron Saint of all the girl programmers of the future.  Her closing remarks were filled with love and joy.

A pretty good opener for day 2.

Again, IBM has paid for my attendance at this event, but they do not control my words.  I thank them for the opportunity and hope that my girls will take an interest in Girls Who Code and other worthwhile IBM-sponsored efforts. 

 


Day 1 at IBM Interconnect 2017

by Administrator on March 20, 2017

After a long flight, I decided to attend the pre-IBM Interconnect event, the Open Technology Summit (#IBMOTS) at Mandalay Bay Resort in Las Vegas.  It was a good choice.

I admit to being somewhat remiss in keeping up on all the open initiatives out there — there are so many, with little to guide me to the ones that are most important or significant.  I understand that a lot of work is being done on platforms, on API integration, on cloud-native apps and operating systems, and so forth.  But what always seems to be missing is any discussion of data, which is the whole reason for computing in the first place.

In the old days (when I was a data center newbie), we started development by looking at the process we were automating, considering all of the inputs and all of the outputs (both of which were data) and trying to document the kinds of transactions that would occur, frequency of data accesses and updates, useful life of the data, that sort of thing.  Then — and only then — we would start designing the infrastructure to support the data and its performance requirements, scaling requirements, etc.  I learned to do this, as did most folks I knew, from Big Blue:  systems development lifecycle methodology.  Oh, and we called it “data processing” — not “information technology” — because data processing was the sine qua non of corporate computing. 

Okay, so those were the old days.  Now we are on to cloud computing, with its many nuances and integration challenges.  I get that.  But the reason for doing it remains:  it’s all about the data.  Disappointingly, there was virtually no reference to this in the Open Technology Summit, just a lot of lip service to openness and communities and collaborative development by vendors who appear to be committed to openness only on the surface.  (An engineer told me quietly that OpenStack was of interest to IBM only insofar as the OpenStack model could be ported to mainframe hosting: “You can create a cloud infrastructure out of commodity components, to be sure.  But then you just have a commodity cloud, not a mainframe cloud with all of the dependability, resiliency, scalability or performance that mainframes can deliver.”)

So there’s that.  Open doesn’t really mean open.  Collaboration doesn’t mean marriage.  I get it. 

Still, the lack of discussion about data grated on me, especially since every large cloud vendor I have been talking with is wringing his or her hands at the prospect of the 10-60 ZB data deluge on the horizon.  Heck, clouds need to do things cheaper, with less risk, and with better performance than what companies can do in their own data centers if they are really going to take off.  I wanted to hear somebody — anybody — tell me how much better the management of data was going to be in a cloud.  I didn’t hear it.

Until Monday morning, that is.  Bless his heart, IBM’s opening keynote at Interconnect was all about data first.  Once Jeff Moody, GM for Twitter, vacated the stage, Arvind Krishna took over and gave IBM’s viewpoint.  The first word on his opening slide was “data.”  Integrating data, cleansing data so it could be analyzed, and storing data were, he said, the key challenges of cloud.

Amen, brother.  Great presentations followed, each exposing another dimension of the need for data management — to protect data, to secure it, to preserve it, etc.  Heavy emphasis was placed on Watson and cognitive computing to help automate the data management process and there was even a mention of a new technology offering, a Cloud Object Storage something or other, that Big Blue was pushing into the market.  Little discussion of the internals, only that it was the next big thing in storing mass quantities in the cloud — or in several clouds — or on premise and in the cloud — with a common management model.  Sounds good.  I want details — especially how it will leverage tape, which it must if it is to handle the Zettabyte Apocalypse.

Anyway, good show thus far.  Looking forward to more sessions and to interviewing the movers and shakers at IBM.

BTW, I had my registration, travel and lodging comped by IBM in exchange for live tweeting and doing some blogging around the event.  However, all opinions here are my own.


Watch Out, INTEL. Computation Defined Storage Has Arrived.

by Administrator on February 23, 2017

In a few hours, there will be a crescendo of noise around, of all things, a hardware platform. Yup, in these days of disdain for all commodity hardware and widespread embrace of software-defined everything, a major hardware event is about to happen.

The evangelists for the new tech are three faces that have been around the storage industry for about 30 years: Brian Ignomirello, CEO and Founder of Symbolic IO, Rob Peglar, Symbolic’s Senior VP and Chief Technology Officer, and Steve Sicola, Adviser and Board Member of the company. Together, they are introducing an extraordinary advance in server and storage technology that could well change everything in the fields of high performance computing, silicon storage and hyper-converged infrastructure. They call their innovation “Iris.”

Iris™ stands for INTENSIFIED RAM INTELLIGENT SERVER and it is trademarked for good reason.

Under the hood, there is so much intellectual property that I had to sign a pile of NDAs just to get an advance look when I flew to Symbolic IO headquarters, in what used to be Bell Labs, last week.  Fortunately, you don’t need a non-disclosure because, as of midnight tonight, Iris is going to get a lot of exposure from the usual news outlets and analyst houses.  (It reaches general availability next month.)

Why?

Simply put, Iris changes the game on so much of what we take for granted today in computer design, server architecture and storage operations. Collectively, the innovations in Iris, which have been in development since before the company’s formal founding in 2012, stick a hot poker in the eye of INTEL, NVMe, and the whole HCI crowd.

With the introduction of Iris, it is as though server and storage technology just went through what Gail Sheehy called a “passage,” or what Erikson, Piaget and Kohlberg termed a stage of psychosocial development.  Just as healthy humans move through stages in life, usually signaled by a crisis in which they reconsider past assumptions, discarding those acquired from parents, peers and society that no longer seem relevant and embracing new truths and directions for the future, so it is with Iris and the tech industry.

The crisis is real.  Things are in disarray for tech consumers and vendors alike.  We are creating data much faster than we can create the capacity to store it with current technology.  We want to be able to share and collaborate using data, but the latencies of reads, writes and copies are getting in the way, and hypervisor virtualization has stressed out the I/O bus.  We grasp at straws, allowing INTEL to define NVMe as a de facto standard because vendors want to push silicon into data centers tomorrow, and because relying on each flash storage maker to define its own device drivers and controller logic was delaying adoption, compromising vendor profitability and exposing the whole silicon storage market to rampant balkanization.

Iris is what happens when the crisis above forces good engineers to question old assumptions and to discard those that no longer apply. For example…

  • Why are we using simplistic common binary to store (logically and physically) bits on storage media? Why not use a more elastic and robust algorithm, using fractals for example, to store more data in the same amount of space? That is analogous to the way data is stored using DNA, which packs far more content into a much smaller space.
  • Why are we pushing folks to deploy flash memory on a PCIe bus and calling that a “huge improvement” over installing flash behind a PCIe bus-attached SAS/SATA controller? While doing so yields a performance improvement, isn’t that the same dog, with different fleas? Why not put storage directly in the memory channel instead?
  • Why do we continue to use cumbersome and self-destructive file systems that overwrite the last valid copy of data with every new save, a reflection of a time when storage cost several hundred thousand dollars per gigabyte? Why not use a richer recording algorithm that expedites the first write, then records change data for subsequent versions in a space-optimized manner?
  • And in these days of virtual servers and hypervisor computing, why don’t we abandon silos of compute and storage created by proprietary hypervisors and containers in favor of a universal, open workload virtualization platform that will run any virtual machine and store any data?
  • And finally, why pretend that flash is as good or as cheap as DRAM for writing data? Why not deliver write performance at DDR4 speeds (around 68 GB/second) instead of PCIe G3 throughput speeds (4.8 GB/second)?
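The arithmetic behind that last bullet is easy to check.  Below is a minimal sketch, assuming a quad-channel DDR4-2133 memory controller moving 8 bytes per channel per transfer; real sustained throughput will be lower than these peak figures.

```python
# Peak-bandwidth arithmetic behind the memory-channel vs. PCIe comparison.
# Assumes quad-channel DDR4-2133; sustained throughput will be lower.

transfers_per_sec = 2_133_000_000  # DDR4-2133: 2133 mega-transfers per second
bytes_per_transfer = 8             # 64-bit-wide channel = 8 bytes per transfer
channels = 4                       # quad-channel memory controller

ddr4_gb_s = transfers_per_sec * bytes_per_transfer * channels / 1e9
pcie_gb_s = 4.8                    # PCIe G3 figure quoted above

print(f"DDR4 peak bandwidth: {ddr4_gb_s:.1f} GB/s")   # ~68.3 GB/s
print(f"PCIe G3 (as quoted): {pcie_gb_s:.1f} GB/s")
print(f"Advantage:           {ddr4_gb_s / pcie_gb_s:.0f}x")
```

That fourteen-fold gap is the whole argument for putting persistent storage directly in the memory channel rather than behind the PCIe bus.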

Ladies and gentlemen, welcome to Iris. Those who read this blog regularly know that I am as critical of proprietary hardware as the next guy and have welcomed the concept, if not always the implementation, of software-defined storage as a hedge against vendor greed. But, from where I am standing, this “Computation Defined Storage” idea from Symbolic IO has so much going for it, I can’t help but find myself enamored with the sheer computer science of it.

They had me at I/O.  But they are holding my attention with the many other innovations they have put into the kit, including a remarkable new DRAM/3D NAND hybrid storage target, a rich open hypervisor, an OS that changes the game with respect to data encoding and data placement, and a REALLY COOL technology for data protection via replication called BLINK.
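On the data-encoding point: Symbolic IO has not published its algorithm, so the sketch below is strictly a toy illustrating the general principle, namely that data full of recurring bit patterns can be represented in less space by a table of short symbols than by raw binary.  It is simple dictionary substitution, nothing like whatever fractal mathematics Iris actually uses, and every name in it is hypothetical.

```python
# Toy "bit marker" style encoding: map recurring byte patterns to short symbols.
# Purely illustrative; Symbolic IO's actual encoding is unpublished.

from collections import Counter

def build_marker_table(data: bytes, pattern_len: int = 4, table_size: int = 16):
    """Pick the most common fixed-length patterns as dictionary entries."""
    chunks = [data[i:i + pattern_len]
              for i in range(0, len(data) - pattern_len + 1, pattern_len)]
    return [pattern for pattern, _ in Counter(chunks).most_common(table_size)]

def encode(data: bytes, table, pattern_len: int = 4):
    """Replace known patterns with two-byte markers; escape everything else."""
    out = bytearray()
    for i in range(0, len(data), pattern_len):
        chunk = data[i:i + pattern_len]
        if chunk in table:
            out += bytes([0x01, table.index(chunk)])  # marker reference
        else:
            out += bytes([0x00]) + chunk              # literal escape
    return bytes(out)

data = b"ABCDABCDABCDWXYZABCDABCDEFGHABCD"
table = build_marker_table(data)
packed = encode(data, table)
print(len(data), "->", len(packed), "bytes")  # repetitive data shrinks by half
```

A real implementation would need a decoder and a far cleverer way of choosing markers, but even this toy halves the repetitive sample, which is the intuition behind packing more data into the same physical space.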

Watch this space for more information about Iris.


Is It 2017 Already?

February 23, 2017

Like the old saying goes, “Time flies when you’re having fun.”  I am not sure whether it has all been fun, but I have been extraordinarily busy for the past few months…as my absence here might suggest. There were ups and downs as 2016 came to a close.  We have had two really good day-long […]


Cure for the Post-Halloween Sugar Crash: Ken Barth of Catalogic

November 1, 2016

Halloween has passed.  In its wake is an inevitable sugar crash.  If you have kids, you welcome this side effect to curb all of that frenetic energy and to provide a spate of peace and quiet.  In IT, a sugar crash often follows the acquisition of new gear or the implementation of a new application or OS.  […]


Vblogs from the Edge: The Only Solution to the Zpocalypse – Tape

October 31, 2016

Just in time for Halloween, long time friend Ed Childers, who also happens to be IBM’s LTFS Lead Architect and Tape Development Manager, agreed to be interviewed at this year’s IBM Edge 2016.  Ed caught us up on all things tape, from the realization (finally) of the long predicted Renaissance in tape technology to the […]


Vblogs from the Edge: Zoginstor talks Software-Defined Storage

October 31, 2016

When you talk to someone who wears the handle “Vice President of Storage and Software Defined Infrastructure Marketing,” you would think that you are going to get an earful about SDS and hyper-converged infrastructure, and maybe some hype about how it is the shiniest of shiny new things.  However, IBM’s Eric Herzog (known on Twitter as @Zoginstor) […]


Vblogs from the Edge: Stouffer Compresses Some Thoughts

October 31, 2016

Brandishing a title like Director, Storwize Offering Manager and Business Line Manager, one would expect Eric Stouffer to spend his 15 minutes of fame (about the time it takes to shoot a short interview) waxing philosophical about the benefits of data compression (what Storwize technology is all about).  The interesting thing about Stouffer is that […]


Vblogs from the Edge: IBM Storage in Nordic Shops

October 31, 2016

At IBM Edge 2016, I had the good fortune to cross paths with Mathias Olander, a Software Defined Solutions Sales Representative for Big Blue in the Nordic region.  A brief discussion about the appetite for software-defined technologies in Northern Europe became a bit more philosophical, perhaps because of the frequent pauses we needed to take in […]
