Wednesday, September 30, 2009

Experts on cost/benefit analysis: "Huh?"

I'm looking at a recently published deck from an organization called CIO Executive Board. (Despite its high-falutin' name, it's really just another consultancy.) I thought the study's title, "Rethinking IT Funding Models," would be of interest to readers of this blog.

But on the slide headlined "Decision rules for enhancing cost/benefit transparency," they lost me. The whole presentation became, to me, an excuse to rail against all the fuzzy thinking that sidetracks any quantitative approach to understanding IT as a business. Basically, CIO Executive Board suggests you ask four questions:

  1. Do we understand the costs of the IT services we deliver?
  2. To what extent is the cost of a specific IT service variable in the near term (within one year)?
  3. Where can cost-transparency [sic] change behavior?
  4. What is the most efficient means of providing cost transparency?

Now, what's wrong with this picture? A lot.

First, it's a sad state of affairs if a CIO doesn't understand his or her service delivery costs. OK, so it's a sad state of affairs. But am I the only one bothered that this question even has to be asked? I don't accept the premise that this is a useful yes/no question. What might be more helpful would be, "What is it about the costs of the IT services we deliver that we don't understand?" Maybe you understand servers and storage in intricate detail, but your network costs leave you stymied. Maybe you know to the penny how much you're spending in each domain, but you'd be hard-pressed to map it to the production and sales functions in your overall enterprise.

As for the second question, I like to assume that anyone I talk to whose title begins with the letter C knows the definition of "near term". I also like to think they know what "variable" means, but I'm not sure whoever wrote this slide ever had a job starting with C (unless maybe custodian, and not in the financial sense). Examples of variability drivers given in sub-bullets include tiering labor, rationalizing procurement, outsourcing and stretching refresh cycles. No, Charlie. That's not what "variable" means. These are things you can do to drive down your current year's costs, true. And I'm not saying we shouldn't be thinking about these things. But "variable" to me (and most other Accounting 101 veterans) suggests the amount of money added or shed when the unit of workload changes. If you have to add another database instance, how much more will that cost you? Alternatively, if you can drive down the number of database instances by one, how much does that save you? There can be some stickiness -- it's possible that the incremental costs of adding workload would be realized this year, but it might take until next year to realize the savings if that unit of workload went away. The beauty of the cloud is that it helps move hitherto fixed costs into the variable realm, so you can then control costs by controlling your workload.
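If it helps, here's the Accounting 101 version on the back of an envelope. Every dollar figure below is invented for illustration; plug in your own:

```python
# The Accounting 101 sense of "variable": cost that moves with workload.
# All figures are hypothetical, for illustration only.

FIXED_COST = 500_000        # annual costs that don't move with workload
COST_PER_INSTANCE = 12_000  # annual incremental cost of one database instance

def annual_cost(instances: int) -> int:
    """Total yearly cost as a function of workload units (DB instances)."""
    return FIXED_COST + COST_PER_INSTANCE * instances

# Adding one instance costs you exactly the variable rate...
delta_add = annual_cost(11) - annual_cost(10)    # 12000
# ...and retiring one saves the same amount (stickiness aside, eventually).
delta_remove = annual_cost(10) - annual_cost(9)  # 12000
print(delta_add, delta_remove)
```

Tiering labor and stretching refresh cycles change `FIXED_COST`. Only the second number is "variable" in the sense I mean.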

But that leads to another problem with the CIO Executive Board report: There's nothing in there about quantifying benefits. Nothing. At all. And you can't explore "cost/benefit transparency" unless you can suggest some visibility into the benefit side of the vinculum. (By the way, and this is just a nit: I hyphenate rather than slash "cost-benefit" because, if we were to be mathematically pure, we'd be calling it "benefit/cost". That's the ratio we're really trying to identify.)
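And since I brought up mathematical purity: here's the ratio itself, with made-up numbers standing in for a real business case:

```python
# Benefit/cost -- the ratio we're actually after when we say
# "cost-benefit analysis". Figures are invented for illustration.

def benefit_cost_ratio(annual_benefit: float, annual_cost: float) -> float:
    """Anything above 1.0 clears the bar; the benefit side is the
    numerator, which is exactly the side the report never quantifies."""
    return annual_benefit / annual_cost

# A project returning $1.5M of benefit on $1M of cost:
print(f"{benefit_cost_ratio(1_500_000, 1_000_000):.2f}")  # 1.50
```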

The slide's last two questions, though, are spot on: How do we use our knowledge of costs to drive behavior, and how do we make sure these costs are comprehensible to the users? These are going to be huge issues as we move to the cloud. The provider that can meter services best -- and by "best" I mean translated into service catalog items to which line-of-business executives can relate -- will be the provider that succeeds.

To that end, I look forward to the CIO Executive Board's October 29 presentation on Business-Focused Metrics of IT Value. I trust it'll be more informative.

Have a better day,


PS: If you haven't seen it yet, please check out the IBM Global CIO Study. I'll be blogging on that later this week.

Tuesday, September 22, 2009

Bringing cloud metrics down to earth

Hmmm, I don't know how much of these numbers in front of me I'm allowed to share. A lot of it is IBM-confidential. But I'll try to walk the thin line.

It's public knowledge that IBM is a player in the cloud computing space. It's also public knowledge that IBM is not a huge player. And it's an easy guess that IBM wishes it had a bigger slice of the pie. Given all that, you could infer -- correctly -- that IBM is cooking up some ideas that it expects to vault it over the competition.

And for any more about that, you'll have to wait for the announcement.

Still ...

I can tell you to expect a cascade of new offerings through 2010 and 2011. Early days will focus on middleware, open source, support, security & compliance, storage and server virtualization. There's more, of course, but those are the first-quarter highlights.

I can also tell you that IBM expects to be able to compete on price in this arena, which is a departure for a company that's convinced it's the Tiffany's of computers. I like IBM hardware myself, but I'll be honest with you: The most appealing thing about this ThinkPad T60 I'm pounding away at right now is that Big Blue gives it to me for free.

So what kinds of costs is IBM keeping an eye on? In the platform-as-a-service world, IBM is focused on price per hour for a standard computing unit -- processing, memory and storage -- and internet costs per gigabyte, which will be different inbound or outbound.
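To make that metering model concrete, here's a sketch with placeholder rates I made up on the spot; the real prices are exactly the part I can't share:

```python
# Sketch of the PaaS metering model described above: an hourly price for a
# standard compute unit plus asymmetric per-gigabyte transfer charges.
# All rates are hypothetical placeholders, not anyone's actual prices.

RATE_PER_UNIT_HOUR = 0.10  # $/hour for one standard compute unit
RATE_GB_IN = 0.10          # $/GB inbound
RATE_GB_OUT = 0.15         # $/GB outbound (typically priced higher)

def monthly_bill(unit_hours: float, gb_in: float, gb_out: float) -> float:
    return (unit_hours * RATE_PER_UNIT_HOUR
            + gb_in * RATE_GB_IN
            + gb_out * RATE_GB_OUT)

# One compute unit running all month (720 hours), moving 100 GB in
# and 300 GB out:
print(round(monthly_bill(720, 100, 300), 2))  # 127.0
```

Three line items a line-of-business executive can actually compare across vendors -- which is the whole point.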

IBM expects to capture the lead on these costs by:
  1. carefully selecting the right processors, not necessarily the most powerful;

  2. optimizing storage alternatives, for which customers' workloads will determine selection;

  3. improving energy efficiency; and

  4. paying close attention to network architecture.

Big Blue may be at a competitive disadvantage now, but not for organic reasons. IBM has the people, the hardware, the software, the network bandwidth and the facilities to make a major splash in cloud computing. The only disadvantages IBM has are that it is a) big and b) old. Laser focus and nimble thinking aren't net exports from Armonk. IBM was late to the game. But it's here now.

Expect new hardware to be committed and, if necessary, invented. Expect IBM to eat its own cooking when it comes to data center solutions. Expect Tivoli and the rest of IBM Software Group to fuel the catch-up drive. And expect IBM to do what it does better than anyone else, and always has: Keep throwing more and more people at a problem until the right skills in the right quantities are found to solve it.

If you've ever met me personally, you know I'm not a rah-rah IBMer. I like working there. I like the processes we use to come up with solutions. But I'm not some human resources flack who doesn't believe the company is capable of being short-sighted, confused or just plain wrong. Still, I'm a happy shareholder, and I have all the confidence in the world that, as far as the cloud computing market is concerned, IBM is going to spend 2010 in the passing lane.


Now I'm sure I've shared too much. I've made sure that I haven't shared any actual unit prices or costs with you, but still some functionary in a blue suit is probably going to give me a stern lecture. I'll nod along until he's out of breath. Ultimately, I won't get into any serious trouble due to this post for one reason:

I'm doing IBM a favor.

By telling you the metrics that IBM is tracking, I'm also suggesting to you that these are the measures you ought to track as well -- the ones that you should be comparing side-by-side as you sort through prospective vendors.

And I wouldn't be giving you a metric if IBM wasn't poised to beat everyone else in the game at it.

Have a better day,


PS: Sorry for the radio silence recently. I had to take some family leave, then came a very busy holiday season. I'll do better going forward.

Wednesday, August 26, 2009

Item from Amazon VPN

I get more comments on security than any other topic on this blog. So I thought I'd pass this on to you in case you haven't seen this before:

Amazon is rolling out a virtual private network for the EC2 computing offering and plans to extend it to the S3 storage offering.

Have a better day,


Tuesday, August 25, 2009

What's the difference between "cloud" and "clod"? (Answer: "U")

As part of my daily research, I subscribe to LinkedIn's Cloud Hosting & Service Providers Forum, where I saw an interesting post yesterday.

It was about the now-defunct, though highly successful in the short run, Car Allowance Rebate System. Let's be honest: Cash for Clunkers was at best pure New Deal "pump priming". It didn't create a lot of demand, it didn't take a lot of gas guzzlers off the road, it didn't save the planet, it didn't even save people a lot of money on a new car purchase, but it did burn through inventory and turn the most rock-ribbed, flag-waving, staunch-Republican car dealers into grateful supplicants of a Democratic administration (less than a year after the GOP nationalized the banks, but I digress, still shaking my head in utter incomprehension).

But forget about Cash for Clunkers as public policy. What does it mean to IT? Turns out, it actually does mean something, but you had to be looking for it. Rich Bruklis, a Hewlett-Packard storage product manager out of Houston, was. Putting aside the HP-IBM rivalry, I thought he had an excellent point about the difficulty car dealers have had actually getting the Washington cash to go with the Detroit clunkers:

"Would Cloud Computing help prevent the frustration of auto dealers and their delayed claims from Uncle Sam? ... [url address] ... Tuesday morning, I predict the summary from the media will highlight the US Gov't's inability to keep the web site up and process the last minute claims."

Well, we'll find out together tomorrow whether Mr. Bruklis's prediction is correct. I'll take this as a gentleman's bet, though; as a former journalist, I have a sense the media will largely miss this story. Reporters can appreciate the coolness of a new iPhone but, like everyone else, they are scared to death of explaining anything as complex and un-sexy as e-commerce infrastructure. (Don't get this MBA started on how fast they run away from a story that involves dollar signs and math.)

But what I got a huge chuckle out of was the link, in the middle of Mr. Bruklis's post, to a story in Cloud Computing Journal: Imagine my surprise when I clicked that link and went straight to a 404 screen. I tried again, and got the site's frame but no text. One more time, and I finally got through to the article, which Mr. Bruklis wrote.

Despite the irony of an article lambasting the Transportation Department for IT bumbling being hosted on a site that takes more time to load than a freight train, the piece is really worth the read.

Have a better day,


Wednesday, August 19, 2009

Back-to-school shopping

If you haven't read the UC Berkeley RAD Lab paper, "Above the Clouds," it's well worth the effort.

Written in a conversational rather than academic tone, it discusses the technical, line-of-business, financial and historical drivers of cloud computing. It also authoritatively defines cloud computing: software-as-a-service plus utility computing. SaaS providers can be utility computing customers. The other Whatever-aaS models are not included, nor are private clouds.

It also defines three economic engines of the cloud phenomenon. I list the first two just to be thorough. We'll be discussing the third:
  1. Pay-as-you-go. The authors use the term "fine-grained" to describe the micro level at which capex is moved into opex.
  2. Hardware deflation. Processing, storage and network horsepower all constantly decline in unit cost, but at different rates; cloud providers can benefit from the "float" and maybe even pass it on to you, the consumer.
  3. Elasticity of average and peak utilization. This old conundrum -- how to provision enough computing power for crunch time without drastically overpaying for 300 days out of the year -- is a step closer to a solution in the cloud.
This third point is, of course, most crucial to startups and to web-based businesses that might have to dial back down considerably after the novelty wears off. But I venture to say there isn't a CIO in the world who isn't concerned about capacity management.
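Back of the envelope again, with invented numbers, just to show why the peak-versus-average conundrum is worth real money:

```python
# Owning capacity means paying for the peak all year; pay-as-you-go means
# paying only for what each day actually uses. All numbers are hypothetical.

PEAK_UNITS = 100      # servers needed on the worst (best?) day
AVG_UNITS = 30        # typical daily demand
UNIT_DAY_COST = 10.0  # $/server/day, same rate on-premise or in the cloud

# Provision for the peak and carry it 365 days:
owned = PEAK_UNITS * UNIT_DAY_COST * 365

# Say demand hits the peak 10 days a year and runs at average the rest:
cloud = (PEAK_UNITS * 10 + AVG_UNITS * 355) * UNIT_DAY_COST

print(owned, cloud)  # 365000.0 116500.0
```

Even at an identical unit rate, elasticity alone cuts the bill by more than two-thirds in this toy case. That's the economic engine.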

The Berkeley authors provide an interesting example:

"Target, the nation's second largest retailer, uses [Amazon Web Services] for the website. While other retailers had sever performance problems and intermittent unavailability on 'Black Friday' (November 28), Target's and Amazon's sites were just slower by about 50%."

This is all the more amazing because Amazon's EC2 offering is essentially just virtualization in the cloud, according to this same paper. "An EC2 instance looks much like physical hardware, and users can control nearly the entire software stack, from the kernel upwards. This low level makes it inherently difficult for Amazon to offer automatic scalability and failover." They suggest Google's AppEngine platform for that purpose. One wonders what Target's results would have been had its site resided at Google.

But what I'm wondering is, what will the results be on this coming Black Friday, and the Black Friday after that? There's a first-mover advantage here: The capacity was indeed available, albeit at a degraded level. We've all been hearing about the slow back-to-school retail season, and how stores are dreading an equally dismal holiday season. They're going to cut back on expenses, and that's going to make the cloud particularly attractive to them. Nobody has bigger seasonal capacity requirements than retailers. That means there will be more direct competitors in the mix. Cloud providers are also struggling with elasticity. Will they be willing to buy all the capacity they need for 30 Shopping Days 'Til Christmas, even if most of it lies fallow for the rest of the year?

I don't think so. Expect outages this year, overcapacity the following year, and outages again the year after that.

Think I'll actually get reacquainted with the mall.

Either that or start shopping now.

Have a better day,


Tuesday, August 18, 2009

In case you missed the JVC story

Thanks to Jeff Schneider ...

JVC to Move to the Cloud: Will spend roughly $27.4 million (2.6 billion yen) so IBM can lift it into an outsourced cloud

I'm not just a happy IBM employee. I'm a happy IBM shareholder.

Have a better day,


(more substantive post coming soon)

Friday, August 14, 2009

Who pays? How much?

Bernard Golden had another interesting point in his "Skinny Straw" article, which I referenced this past week: Some applications are going to require more bandwidth than others due to the amount of data transfer required. I/O-intensive apps -- or I/O intensive sections of a given app -- may determine what you can put in a cloud and what has to be colocated close to home. As noted in an earlier post, bandwidth is the key bottleneck to cloud computing, as opposed to the memory constraints that typify traditional processing.

I think what you'll find is that the same apps that would be "problem children" for the cloud are precisely those that are costing you too much in the first place. That's right, the bane of IT delivery organizations everywhere: legacy systems.

This would be a good time to talk about chargeback. The cloud provides another way of showing how much cheaper it would be if all business units used the standards that IT prescribes.

Imagine the impact on the enterprise if every business unit that takes advantage of cloud computing received a monthly invoice showing the number of dedicated ports, then a cost per port, then a single line for each passthrough charge, and finally an allocation of headquarters tax. The business unit executive, or one of his direct reports, would be able to understand it in a minute. And it probably wouldn't change much month to month.
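That invoice is simple enough to sketch. The rates below are invented; the shape of the bill is the point:

```python
# A cloud chargeback invoice a business executive can read in a minute.
# All rates are hypothetical, for illustration only.

def cloud_invoice(ports: int, rate_per_port: float,
                  passthrough: float, hq_tax_rate: float) -> float:
    """Ports x rate, one passthrough line, plus a headquarters allocation."""
    subtotal = ports * rate_per_port + passthrough
    return subtotal * (1 + hq_tax_rate)

# 40 dedicated ports at $250 each, $3,000 of passthrough, 5% HQ tax:
print(round(cloud_invoice(40, 250.0, 3000.0, 0.05), 2))  # 13650.0
```

Four inputs, one number. No budget analyst required.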

Contrast that with the business unit that persists in using legacy systems that do essentially the same thing. Their invoices show an unhealthy level of detail: hardware, software, labor, floorspace, power and network consumption. Depending on workload requirements, these costs might well swing drastically up and down from one time period to the next. The exec might even have to hire a budget analyst to manage this invoice; that would certainly eat into any perceived savings from staying on a legacy system for "organizational" reasons (i.e., his people are too change-resistant to stay current).

We will absolutely have more detail on how the cloud simplifies chargeback. I know for a fact that some of my IBM colleagues are working on this as we speak. I'll pick their brains and give you a preview.

Have a better weekend,