Sunday, October 25, 2009
Economics always guide adoption

In her recent article, "Study Says Economics Not A Driving Factor in Cloud Computing Adoption", Lori MacVittie reports on several studies that show that the current macroeconomic climate is not having a big impact on cloud computing adoption. In particular, the cost-savings purported to be offered by the cloud do not seem to be the major selling point. And yet, companies are adopting cloud computing technologies left and right, so it must make business sense: in other words, economics are driving adoption.

Many analysts look solely at virtualization and elastic computing services like Amazon's EC2 as the major technologies at play in the infrastructure-as-a-service market. While it certainly takes less time to spin up a virtual machine image in the cloud than it does to rack and install actual hardware, this is probably not the major contributor to cost savings; at even a modest enterprise size, it's still cheaper to run the servers yourself. Indeed, even with virtual hosts, you must still manage capacity, allocate applications to servers, and monitor application health, meaning your technical operations staff is still very much needed.

Overlooked in many ways here are what I would call application building blocks available in the cloud, like Amazon's Simple Storage Service (S3), Simple Queue Service (SQS), and SimpleDB. Google App Engine has its BigTable-based datastore, which with a modicum of effort can be exposed as a GData-style data web service. In all of these cases, these application building blocks offer:

  1. very simple programming interfaces;
  2. automatic distribution of data across data centers;
  3. easy scalability; and
  4. understandable cost models.

These capabilities would require a significant amount of effort for an enterprise to develop for itself. For example, I estimate developing an internal, enterprise-class S3 service might require a half-million-dollar investment in labor and servers, assuming you already had multiple data centers in place and could leverage existing open source software like Cassandra, Project Voldemort, or Dynomite.
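
To give a sense of just how simple these programming interfaces are, here is a rough sketch of storing and retrieving an object in S3 using the Python boto library; the bucket name, key, and credentials are placeholders, and error handling is omitted:

    from boto.s3.connection import S3Connection
    from boto.s3.key import Key

    # Connect with your AWS credentials (placeholders here).
    conn = S3Connection('ACCESS_KEY_ID', 'SECRET_ACCESS_KEY')

    # Buckets and keys are essentially the whole data model: no schemas,
    # no capacity planning, no servers to provision.
    bucket = conn.create_bucket('example-reports-bucket')
    key = Key(bucket, 'reports/2009-10.csv')
    key.set_contents_from_string('date,revenue\n2009-10-01,1234\n')

    # Reading it back is just as terse.
    print(key.get_contents_as_string())

That handful of lines buys you storage that is automatically distributed across data centers, with a pay-per-use bill attached.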

Instead, time-to-market pressures are the real forces behind cloud computing adoption in the enterprise. These application building block services allow for rapid prototyping of new functionality, where the production architecture and the development architecture are the same. Where business agility to react to new opportunities is crucial, especially in the ever-changing Internet landscape, the ability to rapidly bring features to market, test their adoption, and then evolve is the key to long-term competitiveness. Companies that cannot roll out new consumer value quickly will find themselves left behind by more agile competitors. As the competition adopts cloud computing and gains time-to-market advantages, companies are forced to follow suit.

Cloud computing adoption is always about economics.

Monday, July 6, 2009
Cloud Confusion Amongst IT Professionals
[Editor's note: This is a guest article by Liz Ebbrell from Version One, Ltd., a provider of electronic document management software.]

The findings of a survey by document management software company Version One have revealed that 41% of senior IT professionals admit that they "don't know" what cloud computing is. Version One carried out the research with 60 senior IT professionals (IT directors and managers) across a range of UK public and private sector organisations. This research follows on from a similar survey carried out by Version One, which highlighted that two-thirds of UK senior finance professionals (finance directors and managers) are confused about cloud computing.

Of the remaining 59% of IT professionals who profess to know what cloud computing is, 17% understand cloud computing to be internet-based computing while 11% believe it is a combination of internet-based computing, software as a service (SaaS), software on demand, an outsourced or managed service and a hosted software service. The remaining respondents understand cloud computing to be a mixture of the above.

Despite cloud computing being in the media spotlight, only a minority of respondents (5%) say that they use it "a lot", and less than a quarter of those surveyed (19%) reveal that they use cloud computing only sparingly. Almost half of respondents (47%) admit that their company doesn't use cloud computing, with the remaining 29% conceding that they "don't know" whether their organisation uses it or not.

Julian Buck, General Manager of Version One, says, "Although this is only a small survey of IT professionals, the results are nonetheless very alarming, especially as IT professionals are the very people that need to understand cloud computing so that they can explain its benefits to management."

Buck continues, "It is clear from the survey results that there are a number of contrasting views as to what cloud computing really is, which is hardly surprising in light of the many different cloud computing definitions in the public arena. For instance, Wikipedia defines it as 'Internet-based computing' while Gartner refers to it 'as a service' using Internet technologies. IT expert, John Willis, writing in his cloud blog says that 'virtualisation is the secret sauce of a cloud' and provides different levels of cloud computing. With so many definitions circulating, clarity is urgently needed."

Only 2% of respondents say that their company is "definitely" going to invest in cloud computing within the next twelve months, whilst 30% state that their organisations "may" invest in this technology. 45% admit that they "don't know" whether their organisations will be investing in it, with the remaining 23% stating that they currently have no investment plans. Among those who definitely or possibly plan to invest in cloud computing, the key business drivers cited include reduced overheads and paper, ease of use, cost savings and the ability to provide collaborative tools for teaching and learning.

Buck adds, "If organisations are going to embrace cloud computing in the future it's essential that a single, simplified explanation is adopted by everyone. Failure to cut through the confusion could result in organisations rejecting this technology and missing out on the benefits it provides."

--Liz Ebbrell, Version One, Ltd.

Tuesday, February 10, 2009
Operational Cost Transparency

Cloud computing providers have gained a lot of attention based on their ability to provide massive economies of scale in server deployment; however, their pay-as-you-go billing methods (e.g. Amazon EC2 and S3, Google App Engine) actually provide something of far more strategic value to a business: operational cost transparency.

This works best for web sites and web services built according to RESTful design principles, namely "everything is a resource" (i.e. everything gets a URL) and "use the standard HTTP methods". In practice, this means your HTTP access logs contain pretty much all the information needed to understand what's going on under the hood of your application in production. If you are clever, you can give individual features or products their own sets of URLs--this lets you vertically partition your services according to URL.

Normally, vertical partitioning is a way to provide scalability for your application, so that you can easily allocate hardware resources to different aspects of the application independently. Doing this at the URL level by lumping related functionality under a common URL prefix lets you do layer 7 load balancing across a set of hardware. If you have a sufficiently virtualized data center (perhaps rented from a cloud computing vendor like Amazon Web Services), then you can actually allocate servers on a per feature basis.

If you couple this with a well-provisioned web analytics system like Omniture or Google Analytics, you can map individual feature usage to actual resource usage (in terms of HTTP request traffic). Couple that, in turn, with the usage-based billing from your cloud computing vendor, and you get a very realistic ROI picture of your application's features. This not only provides valuable feedback for your revenue forecasting on new development, but also lets you identify losing features that can be successfully cut (particularly if they add significantly more to system complexity or operational cost than they do to overall ROI).
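
As a concrete (and deliberately minimal) illustration, here is a sketch in Python that tallies request counts per feature by URL prefix from a common-format HTTP access log; the log file name, log format, and feature prefixes are all assumptions for the example:

    from collections import Counter

    # Hypothetical mapping of URL prefixes to product features.
    FEATURES = {
        '/search/': 'search',
        '/recommendations/': 'recommendations',
        '/checkout/': 'checkout',
    }

    counts = Counter()
    with open('access.log') as log:
        for line in log:
            # Common Log Format puts the request line in quotes,
            # e.g. "GET /search/widgets?q=foo HTTP/1.1"
            try:
                request = line.split('"')[1]
                path = request.split(' ')[1]
            except IndexError:
                continue
            for prefix, feature in FEATURES.items():
                if path.startswith(prefix):
                    counts[feature] += 1
                    break

    for feature, n in counts.most_common():
        print(feature, n)

Join those per-feature request counts with the vendor's usage-based bill and your analytics data, and you have the per-feature cost and revenue picture described above.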

For example, consider these two features:

Feature    % revenue contributed    % operating costs
A          90%                      50%
B          10%                      50%

If you simply removed feature B, your overall profitability would nearly double (up roughly 80%, assuming operating costs run at about 70% of revenue, as the quick calculation below shows). You'll take a small (10%) hit to revenue, but you'll free up 50% of your operating budget for a new venture. In an economic recession where new or additional funding is scarce, this may be the major avenue you have available for funding new products. At the very least, the cost transparency gives you enough information to make an informed strategic decision.
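
To make the arithmetic concrete, suppose (purely for illustration) that the application brings in $100 of revenue against $70 of operating costs:

    revenue, costs = 100.0, 70.0            # illustrative totals
    profit_before = revenue - costs         # 30.0

    # Drop feature B: lose 10% of revenue, shed 50% of operating costs.
    profit_after = revenue * 0.9 - costs * 0.5   # 90 - 35 = 55.0

    print(profit_after / profit_before)     # ~1.83, i.e. roughly an 80% improvement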

As we mentioned in a previous post, new development can also make use of operational cost transparency, whether by prototyping and using a usage cost profiler, or by contemplating a cloud deployment using a basic architecture, traffic estimates, and some quantitative analysis. Even if you don't ultimately deploy to a cloud vendor due to internal security or operational control concerns, cloud pricing is a convenient way to become cognizant of operational costs, which are too often ignored (as in development methodologies where development effort is the primary measure of cost, like many agile methods) or left undifferentiated on a per-feature basis (i.e. we may know the overall operational cost of an application from the staffing and hardware it needs, but we may not be able to break that down into more granular information).

Friday, January 30, 2009
Cloud Computing Appliances

A couple of weeks ago, the New York Times reported on a Cisco announcement that it would start manufacturing what could best be described as "cloud computing appliances": commodity servers with virtualization software pre-installed. I believe the notion here is that you just rack up enough identical boxes to meet your total computational needs, then virtualize all your applications onto that substrate.

This is entirely achievable, by the way, as this (mind-blowing, at least for me) demo of 3Tera's AppLogic virtual data center product shows--literal drag-and-drop, plug-and-play, connect-the-dots configuration magic. Virtualization software from both VMware and the open-source Xen hypervisor project supports live migration (i.e. moving a running virtual machine from one physical machine to another), so as long as your hardware is beefy enough that your largest application component can fit on it, you can swap hardware in and out all day long if you want.
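
Live migration is exposed through management APIs as well as the vendors' GUIs; as a rough sketch (the hostnames, domain name, and connection URIs below are placeholders, and the exact flags you need depend on your setup), the libvirt Python bindings against Xen hosts look something like this:

    import libvirt

    # Connect to the source and destination hypervisors (placeholder hosts).
    src = libvirt.open('xen+ssh://host-a/')
    dst = libvirt.open('xen+ssh://host-b/')

    # Move a running guest to the other physical machine without shutting it down.
    dom = src.lookupByName('app-server-01')
    dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)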

Now, of course, this does assume you want to or have to run your own data centers; economic theory states you won't be able to do it as cheaply as Amazon does with AWS until you're deploying within an order of magnitude as many servers as they have (and they have a lot), and the enterprise market is large enough that there will be economic motivation for the cloud vendors to address current worries about risk management, data security, and operational visibility and control. So interestingly enough, Cisco may find that its largest addressable market consists mainly of the large cloud vendors!

Now, though, if you follow this logic through, it suggests that if you do decide to host a data center in the cloud, you should actually layer your own virtualization on top of it, purely for management purposes, so that you can make efficient use of your EC2 instances, just as virtualization now lets you efficiently use physical hardware. If you can just rack up stacks of identical hardware in your own data center, and have the software in place to manage the virtual infrastructure laid down on them, you also have the wherewithal to just "rack up" more EC2 instances in the same way.

I suspect that with commodity hardware (take your pick of vendors) and commodity virtualization software and operating systems (mature open-source projects like Xen and Linux count as commodity software), Cisco will have a hard time making the case for its bundled product--riding the price points on both commodity software and hardware will be difficult, to say the least, and if it can't, it may find that its main competitor in the space is essentially a well-written HOWTO (Linux + Xen + cheap x86) document given away for free, which is never a good competitor to have.

Thursday, January 29, 2009
Cloud Computing Cost Profiler

On Tuesday, William Louth posted an article describing what he calls Activity-Based Costing (ABC)--essentially a profiler for testing a cloud computing implementation. He describes hooks that can be applied around API calls to services like a Google App Engine platform or an Amazon Web Services S3-style service. In turn, when you run a test implementation, you can gather a recording of the API calls you make and get an understanding of how much your implementation will cost in terms of cloud computing charges.
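
This is not Mr. Louth's implementation, but the idea can be sketched in a few lines of Python: wrap your cloud API calls so that every invocation is tallied against an assumed per-call price list (the prices and operation names below are made up for the example), then total up the estimated bill after a test run.

    import functools
    from collections import defaultdict

    # Illustrative per-call prices; real cloud pricing is typically per request,
    # per GB transferred, per GB-month stored, and so on.
    PRICE_PER_CALL = {'s3_put': 0.00001, 's3_get': 0.000001, 'sqs_send': 0.000001}

    charges = defaultdict(float)

    def metered(operation):
        """Decorator that records an estimated charge each time the call runs."""
        def wrap(fn):
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                charges[operation] += PRICE_PER_CALL[operation]
                return fn(*args, **kwargs)
            return wrapper
        return wrap

    @metered('s3_put')
    def store_report(bucket, key, data):
        pass  # the real call to the storage service would go here

    # After exercising the test implementation, sum up the estimated bill.
    store_report('reports', '2009-01.csv', 'data')
    print(sum(charges.values()), dict(charges))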

This in itself is a very useful tool, especially when used in conjunction with a rigorous product development process. One such method is the Incremental Funding Method (IFM), which is an ROI-based prioritization scheme for agile software development. This method requires assigning a (projected) revenue impact to each feature; it also requires a cost estimate for the feature. Unfortunately, I think far too often development teams provide an estimate simply in terms of level of effort (e.g. story points), without including a formal analysis of operational cost. While this provides one measure of the cost involved in developing the feature, it does not provide a complete picture, often leaving product owners with incomplete information (they assume the feature will simply be "deployed to production").

If you are considering deployments or development on cloud computing infrastructure, you are almost forced to consider operational costs up front. This can be done at two levels: once with traffic estimates, as an architectural proof-of-concept, and then again, using a profiler like Mr. Louth's, to validate the cost models once the implementation is complete. This can save you from a feature that looked profitable (because it was easy to implement) but is actually revenue negative (because it is costly to operate).

I think more development tools like Mr. Louth's are needed (his works for Java, against some mock cloud services), and will no doubt be implemented by the development community in due time. Kudos to Mr. Louth for a great idea.

Wednesday, January 28, 2009
Zimory Public Cloud

On Monday, Deutsche Telekom announced a spinoff startup company called Zimory which aims to create an online marketplace--the Zimory Public Cloud--for elastic cloud computing resources. Companies with spare computing resources can install an agent, offer a certain level of SLA, and then begin selling their excess capacity. Buyers of resources can follow an online provisioning process similar to that found on Amazon Web Services EC2: select a virtual machine image, select a level of service, provide your credit card, and off you go. Zimory presumably takes a cut of the whole transaction.

I won't speculate as to the likelihood of success here; the value of an online marketplace is heavily dependent on network effects, so if they can manage to attract enough willing sellers, they may have an interesting business. However, as indicated in our last post about experience curve effects, economics would tend to dictate that it's probably cheaper to offload more of your data center onto existing cloud providers than it is to host your own machines, so it doesn't seem likely that companies are going to have spare cycles sitting around to sell in the long run.

Rather, I'd like to focus on some of the technology impacts the existence of this company reveals. According to this CNET news article covering the launch, a great deal of the software involved is open source software, from the Xen server virtualization project to the various canned server images available ("blank" Linux systems, LAMP stacks, etc.). Mature open source projects are generally the sign of commoditized software.

To me, this suggests that if you do run a data center (which you will for certain critical services that cannot be cloud hosted), then you should be virtualizing your servers as a matter of course. Xen and VMware support the concept of instance migration, so you have the flexibility to move your applications around or off machines that need maintenance, and can make efficient use of your hardware resources while sticking to a "one machine for one function" logical administration model.

It's worth noting that while it is possible to spin new instances up and down in a virtualized data center, you must still provide your own high availability (HA) through standard means like load balancers, Linux-HA or VMware HA, RAID disk arrays, etc. You must, of course, make sure that instances of highly available services are actually spread across distinct physical infrastructure; this gets more complicated in a virtualized setting. Amazon EC2 offers the notion of "availability zones", which are located in separate data centers; in a single enterprise data center you can approximate this by dividing your servers into two or more virtualized pools and then striping your services across them.

Zimory's fate in the marketplace remains to be seen, but its existence means that server virtualization has reached a level of maturity where it must be considered de rigueur for data center operators.

Tuesday, January 27, 2009
Experience Curves for Data Center Hosting

Are you familiar with the economic theory of "experience curves" (also known as "learning curves")? In the case of cloud computing, it explains why it makes sense not only to outsource new data center costs to cloud providers like Amazon Web Services (AWS), but in fact why it may make sense for you to stop operating a data center at all.

Experience curves were formalized by the Boston Consulting Group (BCG) and describe how production costs tend to fall in a predictable fashion as cumulative production increases. Namely, the more you produce, the better/quicker/cheaper you get at it. These are usually expressed as a percentage: for example, a 75% experience curve means that with each doubling of production, the marginal cost of producing the last unit drops by 25%. So, for example, one unit might cost $100, but the second costs only $75. The fourth costs $56, the eighth $42, etc. Experience curves show a diminishing rate of return:

(Fig. 1: Experience curves at 75%, 85%, and 90%, illustrating why you won't be able to beat Amazon's cloud computing prices.)

In the cloud computing case, we want to know the marginal cost of deploying and maintaining servers. While we don't know the actual experience curve for that, typical experience curves fall in the 75%-90% range. So let's assume data center server deployment follows a curve in that range as well.

Now by one estimate, Amazon is running around 30,000 servers. Current Elastic Compute Cloud (EC2) pricing shows that a "small" server instance can be rented for $0.10 per hour, or around $72 per month. Now, since Amazon is charging that price retail, we can assume their actual cost is lower; their most recent income statement shows a revenue/cost of revenue ratio of around 1.3 (or a 30% markup). So let's back that out, and say that the small instance costs Amazon $72 / 1.3 = $55 per month.

If you work backwards, at a 75% experience curve their first server would have cost $3,967 per month to deploy; on a 90% experience curve, it would have cost $263 per month. (Recall that these figures include the cost of hardware, power, staffing, etc.) The range depends on which experience curve you choose: a 75% experience curve means you learn faster than a 90% curve, but it also means Amazon has gotten that much farther ahead of you.

Let's say you are contemplating a data center with 100 servers. Working the curves forward again, you might expect a marginal cost of between $130 and $587 per server per month. Even in the best case, that is still nearly double what Amazon would charge you retail.

To beat Amazon's retail price, you'd have to be deploying somewhere between 5,000 and 15,000 servers before your marginal cost drops under the $72 per month Amazon will charge you. Very few companies need data centers that big; are you one of them? Are your engineers and operations staff as good as those at Amazon (i.e. can you actually learn and improve as quickly as they do)? Even if they are, how long will it take you to realize those savings, and what business opportunities are you missing out on while you are making this infrastructure investment?
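
For readers who want to check the arithmetic: the marginal cost of the n-th unit on an experience curve is first_unit_cost * n ** log2(curve), where "curve" is the fraction of cost retained with each doubling (0.75 for a 75% curve). A small sketch in Python, using the rough $55/month and 30,000-server estimates above:

    import math

    def unit_cost(first_unit_cost, n, curve):
        """Marginal cost of the n-th unit on an experience curve."""
        return first_unit_cost * n ** math.log(curve, 2)

    for curve in (0.75, 0.90):
        # Back out the implied cost of Amazon's first server from ~$55/month
        # at roughly 30,000 servers, then work the curve forward again.
        first = 55.0 / (30000 ** math.log(curve, 2))
        hundredth = unit_cost(first, 100, curve)
        # Smallest fleet whose marginal cost beats the $72/month retail price.
        breakeven = (first / 72.0) ** (-1 / math.log(curve, 2))
        print(curve, round(first), round(hundredth), round(breakeven))
        # Prints figures close to those quoted above: the implied first-server
        # cost, the marginal cost at 100 servers, and the break-even fleet size.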

Now, granted, not all applications are suitable for cloud deployments; some customer data requires levels of security that cannot (yet) be provided by a cloud vendor, although you can expect that vendors will be trying to gain security certifications from well-known parties so they can start hosting more sensitive data. At any rate, for the time being, it is possible to host your more sensitive applications and services yourself, while outsourcing the more cloud-appropriate ones to a vendor.

Sunday, January 25, 2009
Welcome to Cloud Computing Economics

New "cloud computing" vendors like Amazon Web Services, Google App Engine, and others are changing the game for businesses needing to host Internet applications and services. This site will keep readers up-to-date on new developments in the field, while providing the economic and technical background and analysis needed to make critical business decisions.

Welcome, and enjoy.