Wednesday, September 30, 2009

Experts on cost/benefit analysis: "Huh?"

I'm looking at a recently published deck from an organization called CIO Executive Board. (Despite its high-falutin' name, it's really just another consultancy.) I thought the study's title, "Rethinking IT Funding Models" would be of interest to readers of this blog.

But on the slide headlined, "Decision rules for enhancing cost/benefit transparency," they lost me. The whole presentation became, to me, an excuse to rail against all the fuzzy thinking that sidetracks any quantitative approach to understanding IT as a business. Basically, CIO Executive Board suggests you ask four questions:

  1. Do we understand the costs of the IT services we deliver?
  2. To what extent is the cost of a specific IT service variable in the near term (within one year)?
  3. Where can cost-transparency [sic] change behavior?
  4. What is the most efficient means of providing cost transparency?

Now, what's wrong with this picture? A lot.

First, it's a sad state of affairs if a CIO doesn't understand his or her service delivery costs. OK, so it's a sad state of affairs. But am I the only one bothered that this question even has to be asked? I don't accept the premise that this is a useful yes/no question. What might be more helpful would be, "What is it about the costs of the IT services we deliver that we don't understand?" Maybe you understand servers and storage in intricate detail, but your network costs leave you stymied. Maybe you know to the penny how much you're spending in each domain, but you'd be hard-pressed to map it to the production and sales functions in your overall enterprise.

As for the second question, I like to assume that anyone I talk to whose title begins with the letter C knows the definition of "short term". I also like to think they know what "variable" means, but I'm not sure whoever wrote this slide ever had a job starting with C (unless maybe custodian, and not in the financial sense). Examples of variability drivers given in sub-bullets include tiering labor, rationalizing procurement, outsourcing and stretching refresh cycles. No, Charlie. That's not what "variable" means. These are things you can do to drive down your current year's costs, true. And I'm not saying we shouldn't be thinking about these things. But "variable" to me (and most other Accounting 101 veterans) suggests the amount of money accreted or shed if the unit of workload changes. If you have to add another database instance, how much more will that cost you? Alternatively, if you can drive down the number of database instances by 1, how much does that save you? There can be some stickiness -- it's possible that incremental costs of adding workload would be realized this year, but it might take until next year to realize the savings if that unit of workload went away. The beauty of cloud is that it helps move hitherto fixed costs into the variable realm, so you can then control costs by controlling your workload.
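If the distinction is still fuzzy, here it is in a few lines of Python, with numbers I'm making up on the spot. The variable cost is the delta you see when the unit of workload -- a database instance, say -- moves up or down; the cost-cutting programs on that slide don't change this rate at all:

```python
# Illustrative only: a stylized annual cost model for a database estate.
FIXED_COST = 500_000           # costs that don't move with instance count
COST_PER_DB_INSTANCE = 12_000  # assumed incremental cost of one instance

def annual_cost(instances: int) -> int:
    """Total annual cost for a given number of database instances."""
    return FIXED_COST + COST_PER_DB_INSTANCE * instances

# Adding an instance costs you exactly the variable rate ...
print(annual_cost(41) - annual_cost(40))  # 12000
# ... and shedding one saves the same amount (eventually, given stickiness).
print(annual_cost(40) - annual_cost(39))  # 12000
```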

But that leads to another problem with the CIO Executive Board report: There's nothing in there about quantifying benefits. Nothing. At all. And you can't explore "cost/benefit transparency" unless you can suggest some visibility into the benefit side of the vinculum. (By the way, and this is just a nit: I hyphenate rather than slash "cost-benefit" because, if we were to be mathematically pure, we'd be calling it "benefit/cost". That's the ratio we're really trying to identify.)
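To make the nit concrete -- with numbers I'm inventing purely for illustration -- here's the ratio we should actually be ranking projects by:

```python
# Toy benefit/cost comparison; every figure here is hypothetical.
def benefit_cost_ratio(benefit: float, cost: float) -> float:
    """A ratio above 1.0 means the benefits outweigh the costs."""
    return benefit / cost

projects = {
    "migrate to cloud": (3_600_000, 2_000_000),
    "stay on legacy":   (500_000, 1_000_000),
}
for name, (benefit, cost) in projects.items():
    print(f"{name}: {benefit_cost_ratio(benefit, cost):.2f}")
```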

The slide's last two questions, though, are spot on: How do we use our knowledge of costs to drive behavior, and how do we make sure these costs are comprehensible to the users? These are going to be huge issues as we move to the cloud. The provider that can meter services best -- and by "best" I mean translated into service catalog items to which line-of-business executives can relate -- will be the provider that succeeds.

To that end, I look forward to the CIO Executive Board's October 29 presentation on Business-Focused Metrics of IT Value. I trust it'll be more informative.

Have a better day,


PS: If you haven't seen it yet, please check out the IBM Global CIO Study. I'll be blogging on that later this week.

Tuesday, September 22, 2009

Bringing cloud metrics down to earth

Hmmm, I don't know how much of these numbers in front of me I'm allowed to share. A lot of it is IBM-confidential. But I'll try to walk the thin line.

It's public knowledge that IBM is a player in the cloud computing space. It's also public knowledge that IBM is not a huge player. And it's an easy guess that IBM wishes it had a bigger slice of the pie. Given all that, you could infer -- correctly -- that IBM is cooking up some ideas that it expects to vault it over the competition.

And for any more about that, you'll have to wait for the announcement.

Still ...

I can tell you to expect a cascade of new offerings through 2010 and 2011. Early days will focus on middleware, open source, support, security & compliance, storage and server virtualization. There's more, of course, but those are the first-quarter highlights.

I can also tell you that IBM expects to be able to compete on price in this arena, which is a departure for a company that's convinced it's the Tiffany's of computers. I like IBM hardware myself, but I'll be honest with you: The most appealing thing about this ThinkPad T60 I'm pounding away at right now is that Big Blue gives it to me for free.

So what kinds of costs is IBM keeping an eye on? In the platform-as-a-service world, IBM is focused on price per hour for a standard computing unit -- processing, memory and storage -- and internet costs per gigabyte, which will be different inbound or outbound.
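To give you a feel for how those two metrics combine into a bill -- and I'm making these rates up; they're nobody's actual prices, least of all IBM's -- the arithmetic looks like this:

```python
# Hypothetical rates for illustration; not IBM's (or anyone's) real prices.
RATE_PER_UNIT_HOUR = 0.10  # $ per standard computing unit per hour
RATE_GB_IN = 0.10          # $ per gigabyte inbound
RATE_GB_OUT = 0.17         # $ per gigabyte outbound (priced differently)

def monthly_bill(units: int, hours: float, gb_in: float, gb_out: float) -> float:
    """Compute charges plus asymmetric data-transfer charges."""
    compute = units * hours * RATE_PER_UNIT_HOUR
    transfer = gb_in * RATE_GB_IN + gb_out * RATE_GB_OUT
    return round(compute + transfer, 2)

print(monthly_bill(units=10, hours=720, gb_in=500, gb_out=200))  # 804.0
```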

IBM expects to capture the lead on these costs by:
  1. carefully selecting the right processors, not necessarily the most powerful;

  2. optimizing storage alternatives, for which customers' workloads will determine selection;

  3. improving energy efficiency; and

  4. paying close attention to network architecture.

Big Blue may be at a competitive disadvantage now, and not for organic reasons. IBM has the people, the hardware, the software, the network bandwidth and the facilities to make a major splash in cloud computing. The only disadvantages IBM has are that it is a) big and b) old. Laser focus and nimble thinking aren't net exports from Armonk. IBM was late to the game. But it's here now.

Expect new hardware to be committed and, if necessary, invented. Expect IBM to eat its own cooking when it comes to data center solutions. Expect Tivoli and the rest of IBM Software Group to fuel the catch-up drive. And expect IBM to do what it does better than anyone else, and always has: Keep throwing more and more people at a problem until the right skills in the right quantities are found to solve it.

If you've ever met me personally, you know I'm not a rah-rah IBMer. I like working there. I like the processes we use to come up with solutions. But I'm not some human resources flack who doesn't believe the company is capable of being short-sighted, confused or just plain wrong. Still, I'm a happy shareholder, and I have all the confidence in the world that, as far as the cloud computing market is concerned, IBM is going to spend 2010 in the passing lane.


Now I'm sure I've shared too much. I've made sure that I haven't shared any actual unit prices or costs with you, but still some functionary in a blue suit is probably going to give me a stern lecture. I'll nod along until he's out of breath. Ultimately, I won't get into any serious trouble due to this post for one reason:

I'm doing IBM a favor.

By telling you the metrics that IBM is tracking, I'm also suggesting to you that these are the measures you ought to track as well -- the ones that you should be comparing side-by-side as you sort through prospective vendors.

And I wouldn't be giving you a metric if IBM wasn't poised to beat everyone else in the game at it.

Have a better day,


PS: Sorry for the radio silence recently. I had to take some family leave, then came a very busy holiday season. I'll do better going forward.

Wednesday, August 26, 2009

Item from Amazon VPN

I get more comments on security than any other topic on this blog. So I thought I'd pass this on in case you haven't seen it before:

Amazon is rolling out a virtual private network for the EC2 computing offering and plans to extend it to the S3 storage offering.

Have a better day,


Tuesday, August 25, 2009

What's the difference between "cloud" and "clod"? (Answer: "U")

As part of my daily research, I subscribe to LinkedIn's Cloud Hosting & Service Providers Forum, where I saw an interesting post yesterday.

It was about the now-defunct, though highly successful in the short run, Car Allowance Rebate System. Let's be honest: Cash for Clunkers was at best pure, New Deal "pump priming". It didn't create a lot of demand, it didn't take a lot of gas guzzlers off the road, it didn't save the planet, it didn't even save people a lot of money on a new car purchase, but it did burn through inventory and turn the most rock-ribbed, flag-waving, staunch-Republican car dealers into grateful supplicants of a Democratic administration (less than a year after the GOP nationalized the banks, but I digress, still shaking my head in utter incomprehension.)

But forget about Cash for Clunkers as public policy. What does it mean to IT? Turns out, it actually does mean something, but you had to be looking for it. Rich Bruklis, a Hewlett-Packard storage product manager out of Houston, was. Putting aside the HP-IBM rivalry, I thought he had an excellent point about the difficulty car dealers have had actually getting the Washington cash to go with the Detroit clunkers:

"Would Cloud Computing help prevent the frustration of auto dealers and their delayed claims from Uncle Sam? ... [url address] ... Tuesday morning, I predict the summary from the media will highlight the US Gov't's inability to keep the web site up and process the last minute claims."

Well, we'll find out together tomorrow whether Mr. Bruklis's prediction is correct. I'll take this as a gentleman's bet, though; as a former journalist, I have a sense the media will largely miss this story. Reporters can appreciate the coolness of a new iPhone but, like everyone else, they are scared to death of explaining anything as complex and un-sexy as e-commerce infrastructure. (Don't get this MBA started on how fast they run away from a story that involves dollar signs and math.)

But what I got a huge chuckle out of was the link, in the middle of Mr. Bruklis's post, to a story in Cloud Computing Journal: Imagine my surprise when I clicked that link and went straight to a 404 screen. Tried again, and I got the site's frame but no text. One more time, and I finally got through to the article, which Mr. Bruklis wrote.

Despite the irony of an article lambasting the Transportation Department for IT bumbling being hosted on a site that takes longer to load than a freight train, the piece is really worth the read.

Have a better day,


Wednesday, August 19, 2009

Back-to-school shopping

If you haven't read the UC Berkeley RAD Systems Lab paper, "Above the Clouds," it's well worth the effort.

Written in a conversational rather than academic tone, it discusses the technical, line-of-business, financial and historical drivers of cloud computing. It also authoritatively defines cloud computing: software-as-a-service plus utility computing. SaaS providers can be utility computing customers. The other Whatever-aaS models are not included, nor are private clouds.

It also defines three economic engines of the cloud phenomenon. I list the first two just to be thorough. We'll be discussing the third:
  1. Pay-as-you-go. The authors use the term "fine-grained" to describe the micro level at which capex is moved into opex.
  2. Hardware deflation. Processing, storage and network horsepower all constantly decline in unit cost, but at different rates; cloud providers can benefit from the "float" and maybe even pass it on to you, the consumer.
  3. Elasticity of average and peak utilization. This old conundrum -- how to provision enough computing power for crunch time without drastically overpaying for 300 days out of the year -- is a step closer to solution in the cloud.
This third point is, of course, most crucial to startups and to web-based businesses that might have to dial back down considerably after the novelty wears off. But I venture to say there isn't a CIO in the world who isn't concerned about capacity management.
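The conundrum is easy to put in numbers. Provision in-house for peak and you own capacity that idles most of the year; the figures below are invented, but the shape of the problem isn't:

```python
# Illustrative capacity figures -- the point is the shape, not the numbers.
PEAK_UNITS = 1_000   # capacity you must have at crunch time
AVG_UNITS = 300      # what you actually use on a typical day
COST_PER_UNIT = 50   # annual cost of owning one unit of capacity

owned_cost = PEAK_UNITS * COST_PER_UNIT
used_fraction = AVG_UNITS / PEAK_UNITS
idle_spend = owned_cost * (1 - used_fraction)
print(f"utilization: {used_fraction:.0%}")            # 30%
print(f"spend on idle capacity: ${idle_spend:,.0f}")  # $35,000 of $50,000
```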

The Berkeley authors provide an interesting example:

"Target, the nation's second largest retailer, uses [Amazon Web Services] for the website. While other retailers had severe performance problems and intermittent unavailability on 'Black Friday' (November 28), Target's and Amazon's sites were just slower by about 50%."

This is all the more amazing because Amazon's EC2 offering is essentially just virtualization in the cloud, according to this same paper. "An EC2 instance looks much like physical hardware, and users can control nearly the entire software stack, from the kernel upwards. This low level makes it inherently difficult for Amazon to offer automatic scalability and failover." They suggest Google's AppEngine platform for that purpose. One wonders what the results would have been had Target's site resided at Google.

But what I'm wondering is, what will the results be on this coming Black Friday, and the Black Friday after that? There's a first-mover advantage here: The capacity was indeed available, albeit at a degraded level. We've all been hearing about the slow back-to-school retail season, and how stores are dreading an equally dismal holiday season. They're going to cut back on expenses, and that's going to make the cloud particularly attractive to them. Nobody has bigger seasonal capacity requirements than retailers. That means there will be more direct competitors in the mix. Cloud providers are also struggling with elasticity. Will they be willing to buy all the capacity they need for 30 Shopping Days 'Til Christmas, even if most of it lies fallow for the rest of the year?

I don't think so. Expect outages this year, overcapacity the following year, and outages again the year after that.

Think I'll actually get reacquainted with the mall.

Either that or start shopping now.

Have a better day,


Tuesday, August 18, 2009

In case you missed the JVC story

Thanks to Jeff Schneider ...

JVC to Move to the Cloud: Will spend $27.4 million more or less (2.6 billion yen) so IBM can lift it into an outsourced cloud

I'm not just a happy IBM employee. I'm a happy IBM shareholder.

Have a better day,


(more substantive post coming soon)

Friday, August 14, 2009

Who pays? How much?

Bernard Golden had another interesting point in his "Skinny Straw" article, which I referenced this past week: Some applications are going to require more bandwidth than others due to the amount of data transfer required. I/O-intensive apps -- or I/O intensive sections of a given app -- may determine what you can put in a cloud and what has to be colocated close to home. As noted in an earlier post, bandwidth is the key bottleneck to cloud computing, as opposed to the memory constraints that typify traditional processing.

I think what you'll find is that the same apps that would be "problem children" for the cloud are precisely those that are costing you too much in the first place. That's right, the bane of IT delivery organizations everywhere: legacy systems.

This would be a good time to talk about chargeback. The cloud provides another way of showing how much cheaper it would be if all business units used the standards that IT prescribes.

Imagine the impact on the enterprise when all the business units that take advantage of cloud computing receive monthly invoices that show the number of dedicated ports, then a cost per port, then a single line for every passthrough charge, and finally an allocation of headquarters tax. The business unit executive or one of his direct reports would be able to understand it in a minute. And it probably wouldn't change much month-to-month.
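Here's a sketch of that invoice in Python, with line items and rates I'm inventing purely for illustration:

```python
# Hypothetical chargeback invoice: ports at a unit rate, one line per
# passthrough charge, then a headquarters-tax allocation on top.
def cloud_invoice(ports, rate_per_port, passthroughs, hq_tax_rate):
    lines = [("Dedicated ports", ports * rate_per_port)]
    lines += list(passthroughs.items())
    subtotal = sum(amount for _, amount in lines)
    lines.append(("HQ allocation", round(subtotal * hq_tax_rate, 2)))
    return lines, round(subtotal * (1 + hq_tax_rate), 2)

lines, total = cloud_invoice(
    ports=40, rate_per_port=250,
    passthroughs={"Disaster recovery": 1_200, "Help desk": 800},
    hq_tax_rate=0.08)
for item, amount in lines:
    print(f"{item:<20} ${amount:>10,.2f}")
print(f"{'TOTAL':<20} ${total:>10,.2f}")
```

Four line items and a total -- that's the whole invoice.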

Contrast that with the business unit that persists in using legacy systems that do essentially the same thing. Their invoices show unhealthy detail of hardware, software, labor, floorspace, power and network consumption. Depending on workload requirements, these costs might well swing drastically up and down from one time period to the next. The exec might even have to hire a budget analyst to manage this invoice; that would certainly eat into any perceived savings from staying on a legacy system for "organizational" reasons (i.e., his people are too change-resistant to stay current).

We will absolutely have more detail on how the cloud simplifies chargeback. I know for a fact that some of my IBM colleagues are working on this as we speak. I'll pick their brains and give you a preview.

Have a better weekend,


Thursday, August 13, 2009

Quick reminder ...

If you haven't already, could you please take a quick look at the 31 July entry, "Scattered Clouds"?

I'd like to see how close we can get to mapping the entire cloud infrastructure, then continuously updating it.

Your comments will be crucial.

I expect to have a more substantive post tomorrow.

Have a better evening,


Tuesday, August 11, 2009

The cloud's "skinny straw": How to not be the sucker

My IBM colleague Mark May flagged this interesting article for me, by open source guru Bernard Golden for CIO magazine online:

The crux of Golden's argument is that cloud computing moves the IT bottleneck from memory to network bandwidth. That is, as you migrate to a cloud solution, it becomes someone else's problem to refresh the hardware, so you'll always have up-to-date hardware that can effectively run all your applications (or else, presumably, you've got someone to sue). The bottleneck, then, moves to the network connecting your enterprise to the cloud infrastructure. Golden calls that network bottleneck "the skinny straw".

I agree with Golden, but I see the implications a little differently. Here's my take on it (for his, follow the link above):

Assuming that the cloud provider's LAN is sufficient, you're facing two potential skinny straws. The first is the point-to-point line charges. You need to make sure that you've got all the bandwidth you need, and then some to allow for growth. Then you have to negotiate those line charges for all you can squeeze.

I once wrote a white paper showing that there is no linear function relating how big a pipe you've got, how far it goes from one city to another, and how much you're paying. I wrote that paper in 2002, right after the collapse of Global Crossing, when the communications industry was overbuilt and in a state of panicky retreat. But you know what? I bet I'd reach the same conclusion if I did the same research today.

You should also make sure that the terms you negotiate are for the same length of time as your agreement with the cloud provider.

The other network cost to consider is the last-mile charge from the local hub to the cloud provider's back wall. If you can negotiate this directly with the network service provider, great. But more likely you'll have to negotiate with the cloud company. Do not let them lump this in with the rent; make sure that this cost is broken out. Do not pay for such sunk costs as actually digging the trenches and laying the cable. Pay only for the real cost of the ping for the month.

And here's an important rule of thumb: It should cost less -- much less -- to pay for one mile's worth of fiber than for the thousand miles of fiber you're riding between cities. Does that sound too simplistic? Do me a favor: Call the budget analyst responsible for your WAN and see what the ratio is now.
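Here's the rule of thumb as arithmetic, using monthly charges I'm pulling out of thin air:

```python
# Hypothetical monthly circuit charges, for illustration only.
long_haul_monthly = 9_000  # 1,000-mile inter-city circuit
last_mile_monthly = 1_500  # local hub to the cloud provider's back wall

ratio = last_mile_monthly / long_haul_monthly
print(f"last mile is {ratio:.0%} of the long-haul charge")  # 17%
# If your ratio comes back near -- or above -- 100%, that last mile
# is your skinny straw, and the place to start renegotiating.
```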

Then get back to me. I might be the only one in the world interested in the answer -- but I am that.

Have a better day,


Friday, August 7, 2009

What gets asked, and what gets done

I came across a fascinating blog today ...

Srini Kumar asks, "Do you know what your CEO wants?"

Don't assume you do. Kumar enumerates five different points of departure between what CEOs are after and what their technology lieutenants pursue. Maybe that's why CIO is rumored to stand for "Career Is Over". (I know a guy who was offered a chance to combine the CIO role with the CTO role. The new position would be called Chief Information and Applications Officer. He declined to take it after I pointed out that his job title would be "CIAO".)

At least two of Kumar's points resonate here:

1) "Taking years building frameworks or standards which gets outdated ... ." I recently did a business case for a customer who wanted to compare a buy-versus-build-versus-outsource decision. But they wanted to focus on the annual operating cost deltas, to the exclusion of analyzing the time-to-go-live. That's what they wanted so that's what I delivered, but it would have been my preference to make some educated guesses about how soon each of these options could be up and running. The more you capture this, the better cloud computing looks.

2) "Jump onto bleeding-edge solutions where there is no need and no expertise." Another argument in favor of concentrating knowledge in dedicated cloud facilities. Most companies simply can't keep up and waste their time, talent and strategic focus if they try.

Kumar's day job is as the Java chief at offshorer Satyam. His solutions are simple. Kumar is a strong proponent of Software as a Service, off-the-shelf apps, going outside for expertise, and using technology to simplify rather than complicate.

Of course, if this approach was always the least expensive and least risky option, everyone would be doing it. Still, I'm in broad agreement here and wanted to share these thoughts with you.

Have a better day,


Friday, July 31, 2009

Scattered clouds

So where is this "cloud" physically ... in meat space?

I went to find out. First, I started with a very helpful grid courtesy of John L. Willis's IT Management and Cloud Blog. That provided the names of the known vendors and what they provide.

The next question was, Where were they providing these services? That is, where were the actual data centers located? I got that from a combination of company web sites, regulatory filings, my own industry experience and a fantastic online resource called Data Center Knowledge.

Here's what I came up with ...

Now, I don't guarantee that this is 100% correct. It's just a first pass. I'd appreciate any comments or corrections.
Have a better day,

Thursday, July 30, 2009

Brain cloud

In a comment on an earlier entry, a reader who wishes to be known only as Alec referred to the "knowledge concentration" that a third-party provider brings. He was talking about the ASP model, but acknowledges this holds for cloud providers as well.

This knowledge concentration means that deployment "requires about 10% less effort than a customer-hosted one of the same size and complexity," according to Alec, whose statement also suggests this might hold for incident management.

But I think there's even more to it than that.

If your labor is 10% more efficient, that's time your people can spend on value-add projects rather than keeping the lights on. Most business cases I've seen would grant that this is a 10% savings, but that would only be true if your cloud provider's cost per person-hour were the same as your badged employees'. But you're not fishing from the same pool. If your data center is in a rural or semi-rural area where skilled workers are hard to find, then you're paying a premium for them. If you're in a major metro, you're paying a premium just on the burden rate -- and then you can add higher salaries on top. But cloud infrastructure is in places with affordable labor markets where there's a concentration of IT skills. So that adds to the bargain.

And we're not just talking about the tape monkeys and board jockeys here. The higher up the skills ladder you go, the better the payoff is likely to be. Unix admins? LAN admins? Hypervisor gurus?

Also, if your cloud's team is 10% more efficient than your home-grown team, that means that your systems are back up and running 10% faster. What's that worth in terms of productivity? Customer satisfaction? Revenue? (By the way, don't count revenue in a business case. It's misleading as all get-out. But I think it's fair to include EBIT or EBITDA, depending on whether you're presenting a cash- or accrual-basis case, respectively.)

One last point about cloud labor costs versus in-house: Growth. What are you projecting for wage inflation next year -- 3%? 3.5%? Whatever it is, it's just a projection. You really don't know. You'd be remiss not to add a risk factor to that. Depending on the size of your shop, three or four longstanding employees who know where all the bodies are buried could blow that estimate straight out of the water.
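One way to handle that, sketched with invented numbers: carry the projection, but price the risk factor alongside it.

```python
# Illustrative payroll projection with an explicit risk factor.
base_payroll = 2_000_000
projected_inflation = 0.035   # the "3.5%" point estimate
key_person_premium = 0.04     # risk: outsized raises for the few who
                              # know where the bodies are buried

expected = base_payroll * (1 + projected_inflation)
with_risk = base_payroll * (1 + projected_inflation + key_person_premium)
print(f"projected: ${expected:,.0f}  risk-adjusted: ${with_risk:,.0f}")
```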

And then what are you paying them to do? You're paying them to deliver an adequate standard of service. Adequacy is defined by a service level agreement between you and ... uh ... you. There's no real penalty for violating it, except that you get an earful from the end users.

But with a cloud provider, you have a contract. You know exactly how much it'll cost next year to maintain a specific service level. If that SLA is violated, you get an agreed-to rebate. Otherwise, you know what you're paying and know what you're getting.

Wednesday, July 22, 2009

Lost on the Op-Ed page

First, sorry for the disappearing act. I was given the brief to start this blog in my spare time, then they took away my spare time. I've now got the work-life balance to post at least every few days going forward.

But I'm not going to blog just to blog. One thing that prompted me out of remission was an op-ed piece in the New York Times by Harvard Law's Jonathan Zittrain. As with many Ivy types, he makes some remarks that are obvious given a moment's thought, are expressed very well, and aren't quantified worth a damn.

Zittrain's thesis is that the cloud is a dangerous place. It's great to have all your data backed up offsite but -- and this is why we need Harvard on the job, to think of these things for us -- What If Something Goes Wrong?

The infrastructure could fail. The keepers of that infrastructure might even betray your confidential information. Right to privacy -- a dubious enough concept in the real world -- is practically non-existent online. Under the Patriot Act, the government can grab your data without a warrant just as easily as it could tap your phone. And then heaven help you if you're actually sending packets outside U.S. borders!

All good points, Professor Zittrain. The op-ed piece was directed at a general readership (although most Times subscribers would probably bristle at that characterization) and was focused more on personal computing. So it's not surprising that his points are nothing that any decent CIO hasn't already thought of.

The questions, then, are how real are these risks? How can they be mitigated? And most important: How much could they cost you?

Real? Sure they're real. System failure is definitely real. Industrial espionage is a possibility. Beyond that, maybe we're just descending into paranoia.

Zittrain suggests some public policy solutions to mitigate: Fair practices law could compel cloud providers to send your data back to you upon one-click request and delete it from their own devices. Other privacy protection statutes could be enacted. And of course cloud customers can take matters into their own hands by improving their encryption and deploying other security options.

At what cost, then? Legislation is expensive, but doesn't tend to hit the CIO's p&l statement. Industry groups have lobbying firms on retainer; it may be time for industry groups to put Zittrain's public policy initiatives on the front burner. Security can be costly; I've had clients whose firewall servers consisted of $50,000/year of software stacked on $5,000 (one-time) worth of hardware. But that just reminds me of what's been written on bumper stickers about school district taxes: "If you think education is expensive, try ignorance."

Zittrain hits on one critical hidden cost of the cloud, and on this point I think he's quite right and actually displays the kind of foresight that Harvard people are supposed to display on a regular basis: The cloud could shackle innovation.

"But the most difficult challenge -- both to grasp and to solve -- of the cloud is its effect on our freedom to innovate," Zittrain writes. "The crucial legacy of the personal computer is that anyone can write code for it and give or sell that code to you -- and the vendors of the PC and its operating system have no more say about it than your phone company does about which answering machine you decide to buy."

(Answering machine? They still sell those?)

The point, again directed at the personal computing public, is well taken in the corporate world. If you have people on your team who love to tinker and are good at it, the cloud will put opportunities out of their reach.

They won't be able to write spaghetti code. They won't be able to forget to tell anyone about it and never enter their changes into the CMDB. They won't be able to cause outages just by going on vacation. They won't be able to negotiate outrageous raises because they're the only ones who understand the "improvements" they made. They won't be able to retire at 39 and come back as $400/hour consultants at 40.

Instead, such monkeying around can only be done by people who do the same system administration and operation tasks day in and day out for a variety of customers with similar requirements, applying their professionalism and knowledge concentration seamlessly and invisibly.

Hmm, maybe the standardization benefit outweighs the innovation cost.

Wednesday, June 24, 2009

Could cloud take out one-third of your processing costs?

I make you two promises.

First, this blog will have more dollar signs and numbers with commas in them than any other blog about cloud computing. This space is all about the business end of cloud. I leave the technological discussions to others who can do them more justice.

Second, even as we discuss the numbers, we won't get bogged down in them. My goal is to help you frame the justification for earnings-accretive cloud projects, and maybe even find the flaws in the business cases propping up ill-conceived proposals. So we'll show the numbers, we'll discuss the numbers, but we'll keep it at the strategic level. This blog does nobody any good if it's too dense.

So how do I come up with cloud being able to take out more than a third of your processing costs?

As an industry average, let's say that 40% of your infrastructure directly supports your test, development, and other pre-production environments. (We can quibble about the precision of this number; one survey of self-reporting companies might report higher, another lower. But 30%-50% is the range I've seen in print and, after taking a few minutes to go through my old customer files, I can validate that based on my own experience.)

Next, let's say that these servers are 10% utilized. Here, I think we're being generous. This is a key business driver for cloud: that you only need to pay for the horsepower you're using, so you're by definition 100% utilized. Even in a virtualized domain, you're always going to have that "white space" or "headroom" requirement. You'll also have load balancing issues and a hypervisor layer adding extra complexity (read: labor costs) to your software stack. In the cloud, these latency and complexity issues are spread around to the point of being negligible to the individual firm.

Back to the math: The difference between 10% and 100% utilization is 90%. And 90% of 40% is 36%, or more than one-third.

To put it in dollar terms, if you're spending $10 million/year on server depreciation, server maintenance, operating system support, middleware support, sys admin labor, floorspace and the 3Ps (power, pipe and ping), then $4 million of that supports your pre-production. Of that, 90% is essentially wasted due to inefficiency. With the efficiency promised by cloud, you'd only have to spend 10% of that $4 million, or $400,000. That means you'd save $3.6 million -- or 36% of your $10 million budget.
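If you'd rather see the back-of-envelope math as code, here's a toy model. The 40% pre-production share and 10% utilization are this post's illustrative figures, not benchmarks:

```python
# A toy model of the savings math above. The default 40% pre-production
# share and 10% utilization are illustrative figures, not benchmarks.

def cloud_savings(annual_budget, preprod_share=0.40, utilization=0.10):
    """Estimate savings from moving pre-production to pay-per-use cloud."""
    preprod_cost = annual_budget * preprod_share   # spend tied to pre-production
    efficient_cost = preprod_cost * utilization    # cost if you paid only for what you use
    return preprod_cost - efficient_cost           # the waste you could take out

print(round(cloud_savings(10_000_000)))  # 3600000 -- 36% of the $10M budget
```

Change the two default parameters to match your own shop and see whether the one-third claim survives.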

And that's enough math for now.

Of course, we're assuming a perfect-world solution. No cloud will be perfectly efficient. And let's remember that this is a business, and your provider is going to want to negotiate: "Hey, we can save you $3.6 million -- we'll charge you $2 million for the service and you'll still come out ahead."

And some costs aren't going away. The data center isn't going to shrink just because you got rid of a few servers, so the rent or depreciation on the building -- not to mention the taxes, insurance, contractors and critical systems maintenance that are part and parcel -- aren't going anywhere. (You should save some on utilities.) Some machines, due to regulation or intramural politics or whatever other reasons, will need to be kept in-house. And the developers and DBAs on the application labor side are just as much a fixture of the data center as the front door. At some point, IT will become such a commodity that your whole ERP system could fit on a machine the size of the ThinkPad I'm writing on now. When that day comes, your entire apps team will still be showing up Monday morning and expecting to be paid on Friday.

We haven't even considered the cost to implement. True, there's no capital expenditure, but the providers might come up with some "initiation fee" that could drop on you like a six-digit quantity of bricks. There's the depreciation writeoff. And there are the costs associated with resource actions should you, after an organizational assessment, determine that enough of the sys admin and machine operator workload has evaporated into the cloud. The cost part of the cost-benefit analysis needs to be well understood before greenlighting any project -- cloud plays being no exception.

Still, the benefits are there to be had. To be more granular, IBM's cloud CTO Kristof Kloeckner estimates that cloud can save you:

- 73% on utilities for end-user computing,
- 40% on support for end-user computing,
- 50%-75% on hardware capex and software licensing, and
- 30%-50% on labor (largely by reducing re-work stemming from config and modeling errors).
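To see what ranges like those mean in dollars, here's a hypothetical worked example. Only the percentages come from the list above; every dollar figure in the budget is invented for illustration:

```python
# A hypothetical worked example: applying the quoted savings ranges to an
# invented budget. Only the percentages come from the list above; every
# dollar figure here is made up for illustration.

budget = {
    "end_user_utilities":     500_000,
    "end_user_support":       1_200_000,
    "hw_capex_and_licensing": 3_000_000,
    "labor":                  4_000_000,
}
savings_range = {  # (low, high) fractions from the list above
    "end_user_utilities":     (0.73, 0.73),
    "end_user_support":       (0.40, 0.40),
    "hw_capex_and_licensing": (0.50, 0.75),
    "labor":                  (0.30, 0.50),
}

low = sum(budget[k] * savings_range[k][0] for k in budget)
high = sum(budget[k] * savings_range[k][1] for k in budget)
print(f"${low:,.0f} to ${high:,.0f} of a ${sum(budget.values()):,.0f} budget")
```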

For more on IBM's value proposition, click:

Have a better day,


Wednesday, June 17, 2009

Define quote-unquote success

So you just signed a statement of work hiring contractors to help you transform from the parochial, siloed organization you are to the lean, plugged-in, "cloud" organization you want to be. How will you know when the job's done? And once it is, how will you know if you succeeded or failed?

There are three steps you need to take to verify that your company has benefited from implementing a cloud solution.

First, identify key performance indicators. KPIs can include such puddle-deep thoughts as "Reduce IT infrastructure costs," "Improve operating costs," "Improve business process efficiency," or "Improve customer service and satisfaction." The trick is to get away from the warm-and-fuzzy and into hard numbers.

The second step is to capture metrics that support these KPIs. The surest ways to "Reduce IT infrastructure costs" are, of course, to reduce the number of servers and the number of people. Each box and each belly-button has an incremental cost. Do you know what those costs are? That's where it all falls apart, you see. Not a lot of the IT departments I've worked with excel at determining how much they'll save by taking one server off the floor. They do tend to understand the savings of taking back a system administrator's badge, but at some point you run out of people who actually know something about computers. A person who gets the same amount of money deposited in her checking account twice a month is easy to understand from a cost perspective. But what about that incremental server? How much does that standard hardware build cost? How much does that standard software stack cost? Oh, you don't have hardware or software standards? Or you do, but you don't understand how to burden the network or storage components? Or you're not sure how to distinguish between physical and logical servers? Or you're not sure how the software is licensed? If you have any of these problems, I'd recommend gaining a clearer understanding of your costs before you proceed with cloud computing projects or any other supposed cost savers, or you'll never know for sure if you've made good decisions.
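To make the "incremental server" question concrete, here's a hypothetical unit-cost model. Every figure is invented; the point is that each category needs a real number before you can say what one less server is worth:

```python
# A hypothetical unit-cost model for one standard server build. Every figure
# is invented; the point is that each category needs a real number before
# you can price what taking one server off the floor actually saves.

STANDARD_SERVER_ANNUAL_COSTS = {
    "hardware_depreciation": 2_500,  # annual slice of the standard build
    "os_and_middleware":     1_800,  # the licensed software stack
    "network_burden":          600,  # allocated share of pipe and ping
    "storage_burden":          900,  # allocated share of storage capacity
    "floorspace_and_power":    700,
    "sysadmin_labor":        1_500,  # fraction of a loaded admin salary
}

def incremental_server_cost(costs=STANDARD_SERVER_ANNUAL_COSTS):
    """What one box on (or off) the floor is worth per year."""
    return sum(costs.values())

print(incremental_server_cost())  # 8000
```

If you can't fill in every line of that dictionary for your own standard build, that's the gap to close before the cloud business case can be trusted.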

The last step is the investment analysis. I don't like the term "ROI". I've got an MBA with a finance specialization and I'm not sure what it means, so I'm here to tell you, the sales rep from your vendor doesn't have a clue. A colleague of mine from IBM's IT Business Management community of practice tells me that there are at least 23 accepted formulas for "return on investment". (The one I mean when I use the term is also called "return on invested capital," and applies more to corporate financial reporting than to anything I ever found in a data center.) So what do your decision makers mean by ROI? Net present value? Internal rate of return? Payback period? What's your discount rate? What's your hurdle rate? Again, if you don't have a handle on these, good luck getting anything -- cloud or otherwise -- greenlighted. I assure you, the number-crunchers who work for the lines of business know this stuff inside-out and will have a much easier time justifying their pet projects to the CFO than you will.
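For the curious, here's a minimal sketch of two of those measures -- net present value and payback period -- applied to a hypothetical cloud project. The $2 million initiation fee, $900K annual savings and 10% discount rate are all invented for illustration:

```python
# A minimal sketch of two of the measures named above: net present value
# and payback period. The cash flows and 10% rate are invented.

def npv(rate, cashflows):
    """Net present value; cashflows[0] is the year-0 (usually negative) outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def payback_period(cashflows):
    """Years until cumulative, undiscounted cash flow turns non-negative."""
    total = 0
    for year, cf in enumerate(cashflows):
        total += cf
        if total >= 0:
            return year
    return None  # never pays back

flows = [-2_000_000, 900_000, 900_000, 900_000]  # initiation fee, then savings
print(round(npv(0.10, flows)))  # positive, so the project clears a 10% hurdle
print(payback_period(flows))    # 3 -- breaks even in year three
```

Notice that the same cash flows can look great on payback period and lousy on NPV (or vice versa), which is exactly why "what do you mean by ROI?" matters.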

Whether you use ITIL or CoBIT or PRM-IT or whatever process mapping system you choose, there's a big block that deals with how well you understand your costs. If your capabilities in this area are not as mature as they should be, this could be a huge barrier to your ambitions to grow into the cloud.

For more on this, click

Have a better day,

Bill Freedman

Kicking off

Welcome, and thanks for joining the discussion.

To briefly introduce myself, I work for IBM as a business management consultant. I handle a lot of the number-crunching, CFO-facing stuff. Basically, I'm called upon to do three things:

1. work directly with clients to develop business cases, chargeback models, and other stuff that involves working with spreadsheets (i.e., the junk that everyone else in IT spends their careers avoiding);
2. develop intellectual capital as part of IBM's Global Deployment Center; and
3. lead the global core team for the roughly 1,000 IBMers worldwide who comprise the IT Business Management community of practice.

Because of my misspent past -- my first career was in journalism -- the PWGPMTM (people who get paid more than me) asked me to start a blog about where business management processes intersect with the new "cloud" approach. Since they're not actually paying me to do it, though, and this is being done on my own time, they're going to have to live with the risk of Freedman being a loose cannon. I will make the bosses this one assurance: I won't write anything that will reflect negatively on IBM customers; if I have to cite a real-world example, I'll do everything I can to mask who it is I'm talking about.

So let's take a look at this intersection.

As for business management, I know exactly what that is: governance, alignment, costing, charging. I don't know too many companies that are doing it correctly, but I know what it is. (My experience is skewed, of course. The companies that do have a handle on it wouldn't be hiring consultants to help them, right?)

As for cloud, I don't know anything about it. Here's what one IBM document has to say about it:

Cloud is a synergistic fusion which accelerates business value across a wide variety of domains.


Here's what I think it is: turning fixed costs into variable costs.

This has been around forever. "Cloud" is just a way of marketing it. I hope it works this time.

We used to call it "on-demand". We used to call it "the utility model". We still call some of it "application service provider". But it's all the same thing. Rather than buy all the components you need for your hardware, then construct a building that can provide all the power, pipe and ping you need to run those machines, then roll your own applications, you pay someone else to handle it for you.

Metaphorically: You don't own the utility grid anymore. Just the light switch.

This is not a new idea. Don't get me wrong, I'm not dismissive of it at all. I think it's great if we can get there, and I dedicate this blog to those of you who, through your comments and links, collaborate with me in making this space the home for everyone building business cases to support the cloud model at their own companies.

For an overview of what IBM is doing, click here:

If you have any comments on what you see there, or here, please click the "Comment" button and let's talk it out.

I look forward to a fascinating conversation with you.

Warmest regards,

Bill Freedman