Wednesday, August 26, 2009

Item from Amazon VPN

I get more comments on security than any other topic on this blog. So I thought I'd pass this on to you in case you haven't seen this before:

Amazon is rolling out a virtual private network for the EC2 computing offering and plans to extend it to the S3 storage offering.

Have a better day,


Tuesday, August 25, 2009

What's the difference between "cloud" and "clod"? (Answer: "U")

As part of my daily research, I subscribe to LinkedIn's Cloud Hosting & Service Providers Forum, where I saw an interesting post yesterday.

It was about the now-defunct, though highly successful in the short run, Car Allowance Rebate System. Let's be honest: Cash for Clunkers was at best pure, New Deal "pump priming". It didn't create a lot of demand, it didn't take a lot of gas guzzlers off the road, it didn't save the planet, it didn't even save people a lot of money on a new car purchase, but it did burn through inventory and turn the most rock-ribbed, flag-waving, staunch-Republican car dealers into grateful supplicants of a Democratic administration (less than a year after the GOP nationalized the banks, but I digress, still shaking my head in utter incomprehension.)

But forget about Cash for Clunkers as public policy. What does it mean to IT? Turns out, it actually does mean something, but you had to be looking for it. Rich Bruklis, a Hewlett-Packard storage product manager out of Houston, was. Putting aside the HP-IBM rivalry, I thought he had an excellent point about the difficulty car dealers have had actually getting the Washington cash to go with the Detroit clunkers:

"Would Cloud Computing help prevent the frustration of auto dealers and their delayed claims from Uncle Sam? ... [url address] ... Tuesday morning, I predict the summary from the media will highlight the US Gov't's inability to keep the web site up and process the last minute claims."

Well, we'll find out together tomorrow whether Mr. Bruklis's prediction is correct. I'll take this as a gentleman's bet, though; as a former journalist, I have a sense the media will largely miss this story. Reporters can appreciate the coolness of a new iPhone but, like everyone else, they are scared to death of explaining anything as complex and un-sexy as e-commerce infrastructure. (Don't get this MBA started on how fast they run away from a story that involves dollar signs and math.)

But what gave me a huge chuckle was that, in the middle of Mr. Bruklis's post, there was a link to a story in Cloud Computing Journal. Imagine my surprise when I clicked that link and went straight to a 404 screen. Tried again, and I got the site's frame but no text. One more time, and I finally got through to the article, which Mr. Bruklis wrote.

Despite the irony of an article lambasting the Transportation Department for IT bumbling being hosted on a site that takes longer to load than a freight train, the piece is really worth the read.

Have a better day,


Wednesday, August 19, 2009

Back-to-school shopping

If you haven't read the UC Berkeley RAD Systems Lab paper, "Above the Clouds," it's well worth the effort.

Written in a conversational rather than academic tone, it discusses the technical, line-of-business, financial and historical drivers of cloud computing. It also authoritatively defines cloud computing: software-as-a-service plus utility computing. SaaS providers can be utility computing customers. The other Whatever-aaS models are not included, nor are private clouds.

It also defines three economic engines of the cloud phenomenon. I list the first two just to be thorough. We'll be discussing the third:
  1. Pay-as-you-go. The authors use the term "fine-grained" to describe the micro level at which capex is moved into opex.
  2. Hardware deflation. Processing, storage and network horsepower all constantly decline in unit cost, but at different rates; cloud providers can benefit from the "float" and maybe even pass the savings on to you, the consumer.
  3. Elasticity of average and peak utilization. This old conundrum -- how to provision enough computing power for crunch time without drastically overpaying for 300 days out of the year -- is a step closer to solution in the cloud.
This third point is, of course, most crucial to startups and to web-based businesses that might have to dial back down considerably after the novelty wears off. But I venture to say there isn't a CIO in the world who isn't concerned about capacity management.
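To put a rough number on that average-versus-peak conundrum, here's a minimal sketch in Python. Every figure in it (server counts, the per-server-hour rate) is invented for illustration, not a real cloud price:

```python
# Toy comparison: owning peak capacity year-round vs. paying as you go.
# All figures below are illustrative assumptions, not real prices.

HOURS_PER_YEAR = 365 * 24
COST_PER_SERVER_HOUR = 0.10   # assumed utility rate, in dollars per server-hour

peak_servers = 100            # capacity needed for the busiest days of the year
avg_servers = 25              # average utilization across the whole year

# Owning: you pay for peak capacity every hour of the year, busy or not.
own_cost = peak_servers * COST_PER_SERVER_HOUR * HOURS_PER_YEAR

# Utility: you pay only for the servers you actually use, on average.
utility_cost = avg_servers * COST_PER_SERVER_HOUR * HOURS_PER_YEAR

print(f"Own peak capacity: ${own_cost:,.0f}")
print(f"Pay as you go:     ${utility_cost:,.0f}")
print(f"Overprovisioning premium: {own_cost / utility_cost:.1f}x")
```

With these made-up numbers, owning peak capacity costs four times what pay-as-you-go would. Your own ratio depends entirely on how spiky your workload is, which is exactly why retailers and startups feel this hardest.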

The Berkeley authors provide an interesting example:

"Target, the nation's second largest retailer, uses [Amazon Web Services] for the website. While other retailers had severe performance problems and intermittent unavailability on 'Black Friday' (November 28), Target's and Amazon's sites were just slower by about 50%."

This is all the more amazing because Amazon's EC2 offering is essentially just virtualization in the cloud, according to this same paper. "An EC2 instance looks much like physical hardware, and users can control nearly the entire software stack, from the kernel upwards. This low level makes it inherently difficult for Amazon to offer automatic scalability and failover." They suggest Google's AppEngine platform for that purpose. One wonders what the results would have been had the site resided at Google.

But what I'm wondering is, what will the results be on this coming Black Friday, and the Black Friday after that? There's a first-mover advantage here: The capacity was indeed available, albeit at a degraded level. We've all been hearing about the slow back-to-school retail season, and how stores are dreading an equally dismal holiday season. They're going to cut back on expenses, and that's going to make the cloud particularly attractive to them. Nobody has bigger seasonal capacity requirements than retailers. That means there will be more direct competitors in the mix. Cloud providers are also struggling with elasticity. Will they be willing to buy all the capacity they need for 30 Shopping Days 'Til Christmas, even if most of it lies fallow for the rest of the year?

I don't think so. Expect outages this year, overcapacity the following year, and outages again the year after that.

Think I'll actually get reacquainted with the mall.

Either that or start shopping now.

Have a better day,


Tuesday, August 18, 2009

In case you missed the JVC story

Thanks to Jeff Schneider ...

JVC to Move to the Cloud: Will spend roughly $27.4 million (2.6 billion yen) so IBM can lift it into an outsourced cloud

I'm not just a happy IBM employee. I'm a happy IBM shareholder.

Have a better day,


(more substantive post coming soon)

Friday, August 14, 2009

Who pays? How much?

Bernard Golden had another interesting point in his "Skinny Straw" article, which I referenced this past week: Some applications are going to require more bandwidth than others due to the amount of data transfer required. I/O-intensive apps -- or I/O-intensive sections of a given app -- may determine what you can put in a cloud and what has to be colocated close to home. As noted in an earlier post, bandwidth is the key bottleneck to cloud computing, as opposed to the memory constraints that typify traditional processing.
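To see why data transfer can dominate the cloud-versus-colocation decision, here's a quick back-of-the-envelope sketch in Python. The dataset size and WAN link speed are assumptions picked for illustration; plug in your own:

```python
# Rough estimate of how long it takes to move a dataset over a WAN link.
# The 10 TB dataset and 20 Mbit/s link below are illustrative assumptions.

def transfer_days(dataset_tb: float, wan_mbps: float) -> float:
    """Days to move `dataset_tb` terabytes over a `wan_mbps` megabit/s link."""
    bits = dataset_tb * 1e12 * 8          # terabytes -> bits
    seconds = bits / (wan_mbps * 1e6)     # bits / (bits per second)
    return seconds / 86400                # seconds -> days

# A 10 TB dataset over a 20 Mbit/s WAN link:
print(f"{transfer_days(10, 20):.0f} days")   # roughly 46 days
```

A month and a half just to get the data into the cloud is the "skinny straw" in its purest form, and it's why an I/O-heavy legacy app may have to stay close to home no matter how cheap the remote compute is.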

I think what you'll find is that the same apps that would be "problem children" for the cloud are precisely those that are costing you too much in the first place. That's right, the bane of IT delivery organizations everywhere: legacy systems.

This would be a good time to talk about chargeback. The cloud provides another way of showing how much cheaper it would be if all business units used the standards that IT prescribes.

Imagine the impact on the enterprise when all the business units that take advantage of cloud computing receive monthly invoices showing the number of dedicated ports, then a cost per port, then a single line for every passthrough charge, and finally an allocation of headquarters tax. The business unit executive or one of his direct reports would be able to understand it in a minute. And it probably wouldn't change much month-to-month.

Contrast that with the business unit that persists in using legacy systems that do essentially the same thing. Their invoices show unhealthy detail of hardware, software, labor, floorspace, power and network consumption. Depending on workload requirements, these costs might well swing drastically up and down from one time period to the next. The exec might even have to hire a budget analyst to manage this invoice; that would certainly eat into any perceived savings from staying on a legacy system because of "organizational" reasons (i.e., his people are too change-resistant to stay current).
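To make the contrast concrete, here's a minimal Python sketch of the two invoice models described above. Every rate and line item is invented for illustration:

```python
# Sketch of the two chargeback models: a flat cloud-style invoice vs. a
# legacy invoice built from many volatile line items. All numbers invented.

def cloud_invoice(ports, port_rate, passthrough, hq_tax_pct):
    """Flat model: ports x rate, one passthrough line, one HQ allocation."""
    subtotal = ports * port_rate + passthrough
    return subtotal * (1 + hq_tax_pct)

def legacy_invoice(line_items):
    """Legacy model: sum a long list of detailed, period-to-period charges."""
    return sum(line_items.values())

# A cloud business unit: three lines, easy to read, stable month-to-month.
print(cloud_invoice(ports=40, port_rate=500.0,
                    passthrough=2_000.0, hq_tax_pct=0.05))

# A legacy business unit: six categories, each of which can swing wildly.
print(legacy_invoice({
    "hardware": 9_000.0, "software": 4_500.0, "labor": 7_200.0,
    "floorspace": 1_100.0, "power": 800.0, "network": 1_400.0,
}))
```

The totals may land in the same ballpark; the point is the shape of the bill. The first invoice an executive can sanity-check in a minute, while the second practically demands its own budget analyst.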

We will absolutely have more detail on how the cloud simplifies chargeback. I know for a fact that some of my IBM colleagues are working on this as we speak. I'll pick their brains and give you a preview.

Have a better weekend,


Thursday, August 13, 2009

Quick reminder ...

If you haven't already, could you please take a quick look at the 31 July entry, "Scattered Clouds"?

I'd like to see how close we can get to mapping the entire cloud infrastructure, then continuously updating it.

Your comments will be crucial.

I expect to have a more substantive post tomorrow.

Have a better evening,


Tuesday, August 11, 2009

The cloud's "skinny straw": How to not be the sucker

My IBM colleague Mark May flagged this interesting article for me, by open source guru Bernard Golden for CIO magazine online:

The crux of Golden's argument is that cloud computing moves the IT bottleneck from memory to network bandwidth. That is, as you migrate to a cloud solution, it becomes someone else's problem to refresh the hardware, so you'll always have up-to-date hardware that can effectively run all your applications (or else, presumably, you've got someone to sue). The bottleneck then moves to the network connecting your enterprise to the cloud infrastructure. Golden calls that network bottleneck "the skinny straw".

I agree with Golden, but I see the implications a little differently. Here's my take on it (for his, follow the link above):

Assuming that the cloud provider's LAN is sufficient, you're facing two potential skinny straws. The first is the point-to-point line charges. You need to make sure that you've got all the bandwidth you need, and then some to allow for growth. Then you have to negotiate those line charges for all you can squeeze.

I once wrote a white paper showing that there is no linear function relating how big a pipe you've got, how far it runs from one city to another, and how much you're paying. I wrote that paper in 2002, right after the collapse of Global Crossing, when the communications industry was overbuilt and in a state of panicky retreat. But you know what? I bet I'd reach the same conclusion if I did the same research today.

You should also make sure that the terms you negotiate are for the same length of time as your agreement with the cloud provider.

The other network cost to consider is the last-mile charge from the local hub to the cloud provider's back wall. If you can negotiate this directly with the network service provider, great. But more likely you'll have to negotiate with the cloud company. Do not let them lump this in with the rent; make sure this cost is broken out. Do not pay for such sunk costs as actually digging the trenches and laying the cable. Pay only for the real cost of the ping for the month.

And here's an important rule of thumb: It should cost less -- much less -- to pay for one mile's worth of fiber than for the thousand miles of fiber you're riding between cities. Does that sound too simplistic? Do me a favor: Call the budget analyst responsible for your WAN and see what the ratio is now.
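If you want to run that ratio check yourself, here's a toy calculation in Python. All the charges below are hypothetical; substitute the numbers from your own WAN invoice:

```python
# Back-of-the-envelope check of the last-mile vs. long-haul cost ratio.
# All charges here are hypothetical; plug in your own WAN invoice numbers.

long_haul_monthly = 12_000.0   # assumed charge for a 1,000-mile intercity link
long_haul_miles = 1_000
last_mile_monthly = 1_500.0    # assumed charge for the final mile to the provider

long_haul_per_mile = long_haul_monthly / long_haul_miles   # dollars per mile
last_mile_per_mile = last_mile_monthly / 1                 # one mile of fiber

ratio = last_mile_per_mile / long_haul_per_mile
print(f"Last mile costs {ratio:.0f}x more per mile than the long haul")
```

Note that with these hypothetical numbers the rule of thumb holds in absolute terms (the one mile costs less in total than the thousand), yet the per-mile premium is enormous. That's exactly the distortion worth asking your budget analyst about.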

Then get back to me. I might be the only one in the world interested in the answer -- but I am that.

Have a better day,


Friday, August 7, 2009

What gets asked, and what gets done

I came across a fascinating blog today ...

Srini Kumar asks, "Do you know what your CEO wants?"

Don't assume you do. Kumar enumerates five different points of departure between what CEOs are after and what their technology lieutenants pursue. Maybe that's why CIO is rumored to stand for "Career Is Over". (I know a guy who was offered a chance to combine the CIO role with the CTO role. The new position would be called Chief Information and Applications Officer. He declined to take it after I pointed out that his job title would be "CIAO".)

At least two of Kumar's points resonate here:

1) "Taking years building frameworks or standards which gets outdated ... ." I recently did a business case for a customer who wanted to evaluate a buy-versus-build-versus-outsource decision. But they wanted to focus on the annual operating cost deltas, to the exclusion of analyzing time-to-go-live. That's what they wanted, so that's what I delivered, but my preference would have been to make some educated guesses about how soon each option could be up and running. The more you capture time-to-go-live, the better cloud computing looks.

2) "Jump onto bleeding-edge solutions where there is no need and no expertise." Another argument in favor of concentrating knowledge in dedicated cloud facilities. Most companies simply can't keep up and waste their time, talent and strategic focus if they try.

Kumar's day job is Java chief at the offshore outsourcer Satyam, and his prescriptions are simple: he is a strong proponent of Software as a Service, off-the-shelf apps, going outside for expertise, and using technology to simplify rather than complicate.

Of course, if this approach were always the least expensive and least risky option, everyone would be doing it. Still, I'm in broad agreement here and wanted to share these thoughts with you.

Have a better day,