Wednesday, September 30, 2009

Experts on cost/benefit analysis: "Huh?"

I'm looking at a recently published deck from an organization called CIO Executive Board. (Despite its high-falutin' name, it's really just another consultancy.) I thought the study's title, "Rethinking IT Funding Models," would be of interest to readers of this blog.

But on the slide headlined, "Decision rules for enhancing cost/benefit transparency," they lost me. The whole presentation became, to me, an excuse to rail against all the fuzzy thinking that sidetracks any quantitative approach to understanding IT as a business. Basically, CIO Executive Board suggests you ask four questions:

  1. Do we understand the costs of the IT services we deliver?
  2. To what extent is the cost of a specific IT service variable in the near term (within one year)?
  3. Where can cost-transparency [sic] change behavior?
  4. What is the most efficient means of providing cost transparency?

Now, what's wrong with this picture? A lot.

First, it's a sad state of affairs if a CIO doesn't understand his or her service delivery costs. OK, so it's a sad state of affairs. But am I the only one bothered that this question even has to be asked? I don't accept the premise that this is a useful yes/no question. What might be more helpful would be, "What is it about the costs of the IT services we deliver that we don't understand?" Maybe you understand servers and storage in intricate detail, but your network costs leave you stymied. Maybe you know to the penny how much you're spending in each domain, but you'd be hard-pressed to map it to the production and sales functions in your overall enterprise.

As for the second question, I like to assume that anyone I talk to whose title begins with the letter C knows the definition of "near term". I also like to think they know what "variable" means, but I'm not sure whoever wrote this slide ever had a job starting with C (unless maybe custodian, and not in the financial sense). Examples of variability drivers given in sub-bullets include tiering labor, rationalizing procurement, outsourcing and stretching refresh cycles. No, Charlie. That's not what "variable" means. These are things you can do to drive down your current year's costs, true. And I'm not saying we shouldn't be thinking about these things. But "variable" to me (and most other Accounting 101 veterans) means the amount of money added or shed as the unit of workload changes. If you have to add another database instance, how much more will that cost you? Alternatively, if you can drive down the number of database instances by one, how much does that save you? There can be some stickiness -- it's possible that the incremental costs of adding workload would be realized this year, but it might take until next year to realize the savings if that unit of workload went away. The beauty of cloud is that it helps move hitherto fixed costs into the variable realm, so you can control costs by controlling your workload.
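The fixed-versus-variable distinction can be sketched in a few lines. Every dollar figure below is invented purely for illustration; nothing is a real price or anyone's actual cost structure:

```python
# A minimal sketch of fixed vs. variable cost. All numbers are made up.

FIXED_COSTS = 120_000          # annual overhead: the same at any workload level
COST_PER_DB_INSTANCE = 8_500   # incremental annual cost of one database instance

def annual_cost(instances: int) -> int:
    """Total yearly cost for a given number of database instances."""
    return FIXED_COSTS + COST_PER_DB_INSTANCE * instances

# The "variable" question: what changes when workload moves by one unit?
cost_of_adding_one = annual_cost(11) - annual_cost(10)
savings_from_dropping_one = annual_cost(10) - annual_cost(9)

print(cost_of_adding_one, savings_from_dropping_one)  # 8500 8500
```

The point is that the answer to "what is variable?" is the per-unit delta, not whatever cost-cutting program happens to be underway this year.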

But that leads to another problem with the CIO Executive Board report: There's nothing in there about quantifying benefits. Nothing. At all. And you can't explore "cost/benefit transparency" without some visibility into the benefit side of the vinculum. (By the way, and this is just a nit: I hyphenate rather than slash "cost-benefit" because, if we were to be mathematically pure, we'd be calling it "benefit/cost". That's the ratio we're really trying to identify.)
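For what it's worth, the arithmetic of that ratio is trivial; the hard part is producing a defensible numerator. A toy example, with figures invented for illustration:

```python
# Hypothetical project numbers -- the point is the ratio, not the values.
benefits = 450_000   # quantified annual benefit (labor saved, revenue enabled)
costs = 300_000      # annual cost of delivering the service

bc_ratio = benefits / costs   # benefit/cost: the ratio we're really after
print(bc_ratio)               # 1.5 -- every dollar spent returns $1.50
```

A ratio above 1.0 says the service pays for itself; a report that never quantifies `benefits` can't tell you that.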

The slide's last two questions, though, are spot on: How do we use our knowledge of costs to drive behavior, and how do we make sure these costs are comprehensible to the users? These are going to be huge issues as we move to the cloud. The provider that can meter services best -- and by "best" I mean translated into service catalog items to which line-of-business executives can relate -- will be the provider that succeeds.

To that end, I look forward to the CIO Executive Board's October 29 presentation on Business-Focused Metrics of IT Value. I trust it'll be more informative.

Have a better day,


PS: If you haven't seen it yet, please check out the IBM Global CIO Study. I'll be blogging on that later this week.

Tuesday, September 22, 2009

Bringing cloud metrics down to earth

Hmmm, I don't know how many of these numbers in front of me I'm allowed to share. A lot of it is IBM-confidential. But I'll try to walk the thin line.

It's public knowledge that IBM is a player in the cloud computing space. It's also public knowledge that IBM is not a huge player. And it's an easy guess that IBM wishes it had a bigger slice of the pie. Given all that, you could infer -- correctly -- that IBM is cooking up some ideas that it expects to vault it over the competition.

And for any more about that, you'll have to wait for the announcement.

Still ...

I can tell you to expect a cascade of new offerings through 2010 and 2011. Early days will focus on middleware, open source, support, security & compliance, storage and server virtualization. There's more, of course, but those are the first-quarter highlights.

I can also tell you that IBM expects to be able to compete on price in this arena, which is a departure for a company that's convinced it's the Tiffany's of computers. I like IBM hardware myself, but I'll be honest with you: The most appealing thing about this ThinkPad T60 I'm pounding away at right now is that Big Blue gives it to me for free.

So what kinds of costs is IBM keeping an eye on? In the platform-as-a-service world, IBM is focused on price per hour for a standard computing unit -- processing, memory and storage -- and Internet transfer costs per gigabyte, which will differ for inbound and outbound traffic.
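To make those two metrics concrete, here's a sketch of the side-by-side comparison a buyer might run. The vendor names and every rate below are invented; only the shape of the calculation -- unit-hours plus directional per-gigabyte transfer -- comes from the metrics named above:

```python
# Monthly PaaS bill: hours of a standard compute unit plus data transfer,
# with transfer priced differently inbound vs. outbound. All rates invented.

def monthly_bill(unit_hours, gb_in, gb_out, rates):
    """Apply one vendor's rate card to one month of workload."""
    return (unit_hours * rates["unit_hour"]
            + gb_in * rates["gb_in"]
            + gb_out * rates["gb_out"])

vendor_a = {"unit_hour": 0.12, "gb_in": 0.00, "gb_out": 0.15}
vendor_b = {"unit_hour": 0.10, "gb_in": 0.05, "gb_out": 0.12}

# The same workload through both rate cards -- the side-by-side a buyer needs.
workload = {"unit_hours": 720, "gb_in": 200, "gb_out": 500}
print(round(monthly_bill(rates=vendor_a, **workload), 2))  # 161.4
print(round(monthly_bill(rates=vendor_b, **workload), 2))  # 142.0
```

Notice that which vendor wins depends on the workload mix: a transfer-heavy shop and a compute-heavy shop can reach opposite conclusions from the same rate cards, which is exactly why these are the numbers worth tracking.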

IBM expects to capture the lead on these costs by:
  1. carefully selecting the right processors, not necessarily the most powerful;

  2. optimizing storage alternatives, for which customers' workloads will determine selection;

  3. improving energy efficiency; and

  4. paying close attention to network architecture.

Big Blue may be at a competitive disadvantage now, but not for organic reasons. IBM has the people, the hardware, the software, the network bandwidth and the facilities to make a major splash in cloud computing. The only disadvantages IBM has are that it is a) big and b) old. Laser focus and nimble thinking aren't net exports from Armonk. IBM was late to the game. But it's here now.

Expect new hardware to be committed and, if necessary, invented. Expect IBM to eat its own cooking when it comes to data center solutions. Expect Tivoli and the rest of IBM Software Group to fuel the catch-up drive. And expect IBM to do what it does better than anyone else, and always has: Keep throwing more and more people at a problem until the right skills in the right quantities are found to solve it.

If you've ever met me personally, you know I'm not a rah-rah IBMer. I like working there. I like the processes we use to come up with solutions. But I'm not some human resources flack who refuses to believe the company is capable of being short-sighted, confused or just plain wrong. Still, I'm a happy shareholder, and I have all the confidence in the world that, as far as the cloud computing market is concerned, IBM is going to spend 2010 in the passing lane.


Now I'm sure I've shared too much. I've made sure that I haven't shared any actual unit prices or costs with you, but still some functionary in a blue suit is probably going to give me a stern lecture. I'll nod along until he's out of breath. Ultimately, I won't get into any serious trouble due to this post for one reason:

I'm doing IBM a favor.

By telling you the metrics that IBM is tracking, I'm also suggesting to you that these are the measures you ought to track as well -- the ones that you should be comparing side-by-side as you sort through prospective vendors.

And I wouldn't be giving you a metric if IBM weren't poised to beat everyone else in the game at it.

Have a better day,


PS: Sorry for the radio silence recently. I had to take some family leave, then came a very busy holiday season. I'll do better going forward.