Monday, February 23, 2009

Cloud Computing: Cost structure and pricing strategies


Pricing Cloud Services – Cost Structure

The following section provides an overview of the cost structure of providing utility computing services.

Bandwidth: A service like cloud storage has only recently become feasible due to falling bandwidth costs.  As these costs have fallen, it has become increasingly practical for firms to move data into a cloud rather than store it locally.  According to GigaOM, wholesale bandwidth prices fell by 20% to 40% in the past year, continuing a trend of rapid decline.

Compute power and storage: The capacity decision facing utility computing service managers is discussed in the following section.  It is important to point out that a number of factors in this area have made utility computing services more feasible and affordable.  Utility computing providers who are innovative about how they configure server farms and get more out of their boxes will gain a cost advantage.  This is not a simple task.  Recent advancements in technology, most importantly virtualization, have increased silicon utilization significantly.  However, virtualization is complex to manage and configure.  Innovation generated by a cloud services provider can quickly be deployed across its network without disclosure to competitors.  Outsourcing server farms to experts at virtualization who are fully incentivized to maximize silicon utilization creates value.

Operational costs: Electricity and labor have been and will continue to be major costs of running a server farm.  Processors use a lot of energy to do their work, and cooling is another major issue.  In many ways, utility computing providers can offer advantages, both in cost and in environmental impact.  They have purpose-built server farms in locations with renewable energy and lower labor costs.  These savings must be weighed against building costs and other administrative overhead when determining where to locate a data center.

When we interviewed Elastra CEO Kirill Sheynkman, he said that costing for enterprise computing has been a poorly understood area for a long time.  For instance, historically in the enterprise, electricity costs have been thrown into a bucket with overhead and distributed across the organization in some arbitrary way.  That is, the COO gets the electricity bill for the entire company and it is allocated arbitrarily.  This does not make sense, since for processing-intensive businesses these costs can be quite high.  A typical blade server might come with a 2,000-watt power supply.  At 6 to 10 cents per kilowatt-hour in the mass market, this can clearly add up to a substantial cost.  Since these costs have not been allocated properly, managers have had little incentive to keep them low by deploying more energy-efficient solutions.
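
As a back-of-the-envelope sketch using the figures above (and assuming, for simplicity, that the server draws close to its rated load around the clock, which is generous for an idle box but roughly right once cooling overhead is counted):

```python
# Back-of-the-envelope electricity cost for a single blade server,
# using the 2,000 W power supply and 6-10 cent/kWh figures above.
# Assumes the server draws near its rated load 24x7 (an assumption,
# not a measurement), which roughly stands in for cooling overhead.

rated_kw = 2.0                 # 2,000 W power supply
hours_per_year = 24 * 365
kwh_per_year = rated_kw * hours_per_year    # ~17,520 kWh

for rate in (0.06, 0.10):      # $/kWh, mass-market range cited above
    annual_cost = kwh_per_year * rate
    print(f"At ${rate:.2f}/kWh: ~${annual_cost:,.0f} per server per year")
# Roughly $1,050 to $1,750 per server per year -- real money once it
# is multiplied across a rack or an entire data center.
```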

Vertically Integrated Players / SaaS Players: Vertically integrated cloud players like Google and Salesforce who provide enterprise applications look at the cost of computing power and bandwidth as part of their overall cost structure.  For these players, developing software that meets user needs and uses bandwidth and compute power efficiently is essential.  Initially, one of the concerns about this business model was the requirement to build out large server farms in order to deploy these services to many users.  Many potential SaaS players initially struggled to balance having sufficient capacity against overbuilding.  As these players got better at deploying services cost-effectively, users grew to appreciate not having to worry about disaster recovery, backup and replacing hardware, and SaaS as a model took off.

Technical support and services: One component that must not be forgotten is support and services.  While simple offerings are primarily API-based, many users will still require support.  In the enterprise, when something fails, the hardware administrator is often heavily involved in the troubleshooting.  When critical systems crash or have performance issues, customers will call a utility computing provider as part of their troubleshooting process.  A recent Gartner report on the metrics that utility computing pricing is based on illustrates how costs are typically structured and passed through, and includes incidents as a cost/charge basis:

 
1. Systems
- # of servers per month
- # of VMs
- # of CPUs
- Per rack installed
- Per application instance

2. Incidents
- Tickets/requests
- # of support calls
- Per installation, move or change

3. Per-user basis
- Per user per month
- # of mailboxes

4. Data / processing
- GB of data transferred / stored
- # of instructions
- CPU hours/minutes
- # of transactions
- Terabytes/month of traffic
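
To make the pass-through concrete, the sketch below assembles a hypothetical monthly bill from a handful of these metric categories.  Every line item and rate is an illustrative placeholder, not a figure from the Gartner report.

```python
# Hypothetical monthly bill assembled from utility-style metrics.
# The metric names echo the categories above; every rate here is an
# illustrative placeholder, not a published price.

usage = {
    "vm_months":        12,     # of VMs run for the month
    "support_tickets":   3,     # incidents opened
    "users":            50,     # per-user / per-mailbox basis
    "gb_transferred":  800,     # data transferred
    "cpu_hours":      2400,     # metered compute
}

rates = {                       # $ per unit -- placeholders only
    "vm_months":       75.00,
    "support_tickets": 40.00,
    "users":            6.00,
    "gb_transferred":   0.15,
    "cpu_hours":        0.10,
}

line_items = {k: usage[k] * rates[k] for k in usage}
for metric, charge in line_items.items():
    print(f"{metric:>16}: ${charge:,.2f}")
print(f"{'total':>16}: ${sum(line_items.values()):,.2f}")
```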
 


The Value Delivered by Cloud Services

In order to price utility computing services, it is essential to understand the value delivered.

Value is created in a number of ways, including:

- Scale in disaster recovery and backup
- Lower cost and faster speed of deployment and scaling up (additional/initial capacity is available on demand)
- Shared expertise and scale in virtualization and server optimization techniques
- The ability for enterprises to deploy services with little or no upfront cost and no major projects to execute (SaaS is a major example)

There are also additional advantages in scale and aggregation of demand.  The next section discusses how demand aggregation and variation in customer needs generate smoothing and create value for customers.

Demand aggregation: Key to the value proposition offered by utility computing is the ability to share resources across multiple users.  If the peak computing requirements of customers are not correlated, then the total peak demand will smooth out.  In his paper, G.A. Paleologo illustrates the phenomenon with a simple example.  Imagine a customer whose demand oscillates between 5 and 10 service units per day, with an average of 7.5.  If the customer were to build their own computing system, they would need capacity for 10 service units per day to meet the peak demand.  The average utilization of the system is 7.5/10, or 75%.  If there are 8 customers running the same system with the same demand profile, a utility can aggregate them.  The total demand would be smoothed, and the capacity required would be 66 units, to serve an average demand of 60.  The average utilization is 60/66, or 91%, a 16-percentage-point gain.  (Paleologo, 2004)
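
A minimal simulation of this smoothing effect is sketched below.  It assumes each customer's daily demand is drawn uniformly and independently between 5 and 10 units (Paleologo's paper may use a different demand process), so the exact aggregated capacity will not match his 66 units, but the utilization gain from aggregation shows up the same way.

```python
import random

random.seed(0)
CUSTOMERS, DAYS = 8, 10_000

# Assumption: each customer's daily demand is uniform on [5, 10]
# service units and independent across customers.
demand = [[random.uniform(5, 10) for _ in range(CUSTOMERS)]
          for _ in range(DAYS)]

# Stand-alone: each customer builds for its own peak of 10 units.
standalone_capacity = CUSTOMERS * 10
avg_total_demand = sum(sum(day) for day in demand) / DAYS

# Aggregated: size shared capacity to cover (say) 99% of days.
daily_totals = sorted(sum(day) for day in demand)
aggregated_capacity = daily_totals[int(0.99 * DAYS)]

print(f"average total demand:    {avg_total_demand:5.1f}")
print(f"stand-alone utilization: {avg_total_demand / standalone_capacity:.0%}")
print(f"aggregated capacity:     {aggregated_capacity:5.1f}")
print(f"aggregated utilization:  {avg_total_demand / aggregated_capacity:.0%}")
```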

In modeling businesses that do have demand variation, there will be a gamut of profiles.  Obviously, a relatively established business will differ from a new business (for example, a venture-backed startup).  Stochastic modeling must take into account the potentially large variation in demand that goes hand-in-hand with some of the dynamics of the Internet.  Established businesses, as well as overall aggregate demand, will have some cyclical profile to their demand curves.

Joe Weinman suggests a few other customer profiles on his ComplexModels.com website that can help smooth out demand.  One is worker versus gamer demand.  Weinman suggests that gamers tend to have day jobs and are more active on nights and weekends.  This means it makes sense for providers building high capacity to support enterprise processing, which happens primarily during the workday, to recruit recreationally oriented customers to help smooth demand and increase utilization during non-working hours.

Another profile is event demand.  Compute power providers with extra capacity can target event demand.  Examples are a concert, a one-time promotion or an annual event.  These users should have a relatively high willingness to pay, since it makes little sense to deploy a large-scale hardware solution for a single event.

Constant Demand Profiles: A substantial portion of the demand for computing services will have constant demand profiles without the peaks and valleys of typical business users.  These users include those running bioinformatics processing and other scientific simulations around the clock.  Sellers of utility computing services would do well to segment these customers out and offer them lower prices.  First off, since their demand is easily forecast and stable, the value proposition of smoothing out demand peaks does not apply to them, so their willingness to pay per unit of computing power should be lower.  They will still benefit from shorter time to deployment, disaster recovery and other benefits of utility computing.  These customers offer a significant potential advantage for utility computing vendors: they provide consistent utilization at lower-demand times of the day and year.  In many cases, they may be able to shift their usage to maximize capacity utilization (for example, some customers may need X units per day performed and are indifferent to whether one processing node runs all day or several run in parallel during off-peak hours).

Other major areas where utility computing adds value are disaster recovery, lower cost of deployment, and faster time to deploy.

 
Pricing for Utility Computing

Pricing for utility computing services will be challenging in a number of ways, particularly as the service matures.  In traditional pricing models, demand is forecast, the cost of meeting that demand at an optimal level is gathered, and a certain markup is applied.  (Hall and Hitch, 1939; Paleologo, 2004)  Demand will initially be difficult to forecast, and it will take time for economies of scale and demand smoothing from having a large customer base to be realized.  

G.A. Paleologo suggests a price-at-risk methodology.  This model leverages stochastic modeling of the uncertain parameters involved in forecasting demand, utilization and adoption.  Such a model allows for best- and worst-case scenarios and runs optimization models for the scenarios in between.  The result is a probability curve.  Varying the price as the independent variable and using net present value as the dependent variable gives a picture of how various scenarios would play out.  The tricky part is that the elasticity of demand is not well understood.  Paleologo's model also assumes a monopoly, and we know that will not be the case in the utility computing space.
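
A rough sketch of the mechanics (not Paleologo's actual model) is below: for each candidate price, uncertain demand is sampled many times, and the resulting distribution of NPV, rather than a single point estimate, is what gets compared across prices.  The demand curve, costs and discount rate are all illustrative placeholders.

```python
import random

random.seed(1)

# Illustrative price-at-risk style sweep: for each candidate price,
# sample uncertain demand many times and examine the NPV distribution
# instead of a single forecast. Every parameter is a placeholder.

UNIT_COST = 0.04          # variable cost per unit of service, $
FIXED_COST = 500_000      # annual fixed cost, $
DISCOUNT, YEARS, TRIALS = 0.12, 5, 5_000

def npv_sample(price):
    # Uncertain linear demand: higher price -> fewer units, plus noise.
    base = random.gauss(30_000_000, 8_000_000)
    units = max(base * (1 - price / 0.20), 0)
    annual_profit = units * (price - UNIT_COST) - FIXED_COST
    return sum(annual_profit / (1 + DISCOUNT) ** t
               for t in range(1, YEARS + 1))

for price in (0.08, 0.10, 0.12, 0.15):
    samples = sorted(npv_sample(price) for _ in range(TRIALS))
    mean = sum(samples) / TRIALS
    p05 = samples[int(0.05 * TRIALS)]     # 5th-percentile ("at risk") NPV
    print(f"price ${price:.2f}: mean NPV ${mean/1e6:5.2f}M, "
          f"5th pct ${p05/1e6:5.2f}M")
```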

Monte Carlo Simulation to Test Capacity and Pricing Assumptions

To take it one step further, we suggest a Monte Carlo simulation.  A Monte Carlo simulation will help a manager balance risk and return in determining capacity.  A simulation could help answer questions such as:

- If I build this capacity, what % of the time will it successfully meet demand?
- What additional cost will it take to increase the % of the time capacity will meet demand?
- What is the impact of possible processing node failures?
- Based on our firm's risk profile, what is the optimal amount of capacity to build?
- How sensitive are our profitability and pricing decisions to variation?
- How does adding a customer or customers affect the risk/return profile?

These key questions can be answered using stochastic modeling and Monte Carlo simulation.  In the electricity utility universe, Monte Carlo analysis is suggested by a number of experts for integrated resource planning.  (Spinney and Watkins, 1995)  This strategy makes sense in both cases as it permits a manager to quantify capacity investments in a risky universe with multiple uncertain variables.  Using this type of simulation, a manager can model different scenarios and pricing strategies under those different constraints.
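
As a minimal sketch of what such a simulation might look like, the code below models aggregate daily demand as normally distributed and each processing node as failing independently on any given day; the demand distribution, node size and failure rate are all illustrative assumptions rather than measured values.

```python
import random

random.seed(2)

# Minimal capacity Monte Carlo: how often does a given number of
# nodes meet demand once random node failures are included?
# Demand distribution, node size and failure rate are assumptions.

DEMAND_MEAN, DEMAND_STD = 60.0, 8.0    # service units per day
NODE_CAPACITY = 10.0                   # service units per node per day
NODE_FAILURE_RATE = 0.02               # chance a node is down on a day
TRIALS = 20_000

def service_level(nodes):
    met = 0
    for _ in range(TRIALS):
        demand = max(random.gauss(DEMAND_MEAN, DEMAND_STD), 0)
        up_nodes = sum(random.random() > NODE_FAILURE_RATE
                       for _ in range(nodes))
        if up_nodes * NODE_CAPACITY >= demand:
            met += 1
    return met / TRIALS

for nodes in range(6, 11):
    print(f"{nodes} nodes -> demand met on {service_level(nodes):.1%} of days")
```

Sweeping the number of nodes (or adding a prospective customer's demand to the mix) and attaching a cost to each configuration turns the resulting service-level table into the risk/return comparison described above.
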
Capacity and Pricing Decisions in a Competitive World

Most utility computing services will not be offered by monopolies but by a number of large players.  This means the market for these services will be competitive, especially early in its lifecycle.  Once solutions built on a given platform grow larger and more complex, switching costs will rise.  The general sense is that players are seeking to price in a range that allows customers to capitalize on lower cost of deployment, disaster recovery and other scale benefits, rather than competing on price with one another.  A price war in this space is not wise for anyone.  However, through economies of scale, superior management of their server farms, virtualization and capacity-related decision making, some players may be able to lower costs and pass those savings on to customers.  Those who cannot will be forced to exit the business.

An analysis of Amazon's current strategy shows some price discrimination (e.g. the standard and the high-CPU instance) plus additional tariffs on usage for data transfer, IP addresses and storage.  Costs decrease for customers who bring scale to the system.  Amazon EC2 starts with a set of choices for a base instance: Windows vs. Linux and high-CPU vs. standard.  Windows is 25% more expensive than Linux, reflecting the higher cost of Microsoft's software versus the open-source stack.  High-CPU instances are more expensive as well.  This segments out processing-intensive customers, like gaming developers who need more powerful machines.  This type of approach has gained a lot of traction in SaaS and other services offered out of the cloud, where users are segmented by user type, organization size and feature-set requirements.
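
The sketch below shows how such a multi-part tariff composes into a monthly bill.  The rates are placeholders chosen only to mirror the structure described above (per-hour instance pricing with a Windows premium and a high-CPU premium, plus separate data transfer and storage charges), not Amazon's actual price list.

```python
# Sketch of a multi-part utility tariff in the style described above.
# Rates are placeholders, not Amazon's actual prices; the structure is
# the point: hourly instance charges with OS and high-CPU premiums,
# plus separate usage tariffs for data transfer and storage.

HOURLY_RATES = {                      # $/instance-hour, illustrative
    ("linux",   "standard"): 0.10,
    ("windows", "standard"): 0.125,   # ~25% premium over Linux
    ("linux",   "high-cpu"): 0.20,
    ("windows", "high-cpu"): 0.25,
}
DATA_TRANSFER_RATE = 0.15             # $/GB transferred, illustrative
STORAGE_RATE = 0.12                   # $/GB-month, illustrative

def monthly_bill(os, size, instance_hours, gb_out, gb_stored):
    compute = HOURLY_RATES[(os, size)] * instance_hours
    return compute + DATA_TRANSFER_RATE * gb_out + STORAGE_RATE * gb_stored

# Example: one standard Linux instance running all month (720 hours),
# 200 GB of outbound transfer and 100 GB stored.
print(f"${monthly_bill('linux', 'standard', 720, 200, 100):,.2f}")
```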

Sources

“Price-at-Risk: A Methodology for Pricing Utility Computing Services”, G.A. Paleologo, IBM Systems Journal, Volume 43, No. 1, 2004.

“Servers: Why Thrifty Isn’t Nifty”, Kenneth G. Brill, Forbes Magazine, August 11, 2008.

Joe Weinman, published at ComplexModels.com.

“Wholesale Internet Bandwidth Prices Keep Falling”, Om Malik, GigaOM, October 7, 2008, published online at http://gigaom.com/2008/10/07/wholesale-internet-bandwidth-prices-keep-falling/

“Monte Carlo simulation techniques and electric utility resource decisions”, Peter Spinney and G. Campbell Watkins, Charles River Associates, 1995.

OASIS Reference Model for Service Oriented Architecture 1.0 (http://docs.oasis-open.org/soa-rm/v1.0/soa-rm.pdf)
Gartner Studies Reviewed:
"Market Trends: IT Infrastructure Utility, Worldwide, 2008"
"Vendor Survey Analysis: Investments in Alternative Delivery Models, Global, 2008"
"Alternative Delivery and Acquisition Models, 2008: What's Hot, What's Not"
"Alternative Delivery Models: A Sea of New Opportunities and Threats"
"Infrastructure Utility in Practice: Offerings Description"
"Pricing Poses a Major Challenge for Infrastructure Utility"
"Infrastructure Utility in Practice: Offerings, Data Centers and Clients"
"Data Center Outsourcing References Report on Utility Pricing and Virtualization"
"Gartner on Outsourcing, 2007 to 2008: Utility Delivery Models"
"Advance Your Sourcing Strategy With Gartner's Self-Assessment Tool"
"Best-Practice Process for Creating an IT Services Sourcing Strategy"
"Understand the Challenges and Opportunities of the Market Life Cycle"


Adapted from a section written for MIT Sloan 15.567 - Economics of Information with Professor Erik Brynjolfsson.  This section was written by Ted Chan, with the help of Andreas Ruggie, Frederic Kerrest and Alex Leary.
