Wednesday, February 25, 2009

Managing cash flow in higher ed and why it will be easier to get in

Reason #1) Colleges will reserve more slots for full-pay students.

Reason #2) Colleges will enroll more students, as the marginal cost of adding a few students is small relative to fixed overhead.  For instance, Swarthmore is adding 16 students:

Third, the Board of Managers set a target to enroll 16 extra students on campus, increasing the size of the student body from 1390 to 1406 students. This would mean a first-year class of 390 and 27 transfer students. Welsh explained that “our plan is to increase by 34 students in the next four years, so we’re doing half of that next year… it helps with the budget because there aren’t a lot of marginal costs associated with this kind of small increase, and it’s consistent with the college’s history. We’ve always had a gradual increase in enrollment to support a larger curricular program.”(Swarthmore Daily Gazette)
I think you will see this at many schools to help improve cash flow.  This seems smart in bad economic times, especially since endowments loaded with private equity, real asset and venture capital investments are naturally going to have problems with liquidity.  It's not just a drop in the bucket, though: 16 more paying students, at roughly $50,000 each at Swarthmore's all-in rates, is about $800,000 in cash per year.  That's just about enough to cover new Swarthmore President Rebecca Chopp's salary.  

Tuesday, February 24, 2009

Moca Mobile 5 minute video - Telehealth for the developing world

These guys did an awesome job putting this final video together.  I didn't post it earlier because getting Vimeo to load on Tanzanian broadband (aka 2400 baud) is just a little bit tough.  Here's a video demo of our mobile tele-health solution.


Final Video from Elliot Higger on Vimeo.

Monday, February 23, 2009

Cloud Computing: Cost structure and pricing strategies


Pricing Cloud Services – Cost Structure

The following section provides an overview of the cost structure of providing utility computing services.

Bandwidth: A service like cloud storage has only recently become feasible thanks to falling bandwidth costs.  As these costs have fallen, it has become increasingly practical for firms to move data into a cloud rather than store it locally.  According to GigaOM, wholesale bandwidth costs fell 20% to 40% in the past year, continuing a trend of rapid decline.  

Compute power and storage: We discuss the utility computing manager's capacity decision in the following section.  It is important to point out that a number of factors in this area have made utility computing services more feasible and affordable.  Utility computing providers who are innovative about how they configure server farms and get more out of their boxes will gain a cost advantage.  This is not a simple task.  Recent advances in technology have increased silicon utilization significantly, most importantly virtualization.  However, virtualization is complex to manage and configure.  Innovation generated by a cloud services provider can quickly be deployed across its network without disclosure to competitors.  Outsourcing server farms to experts at virtualization who are fully incentivized to maximize silicon utilization creates value. 

Operational costs: Electricity and labor have been and will continue to be major costs of running a server farm.  Processors use a lot of energy to do the work, and cooling is another major draw.  In many ways, utility compute providers can offer advantages here, both in cost and in environmental impact: they have purpose-built server farms in locations with renewable energy and lower labor costs.  These savings must be weighed against building costs and other administrative overhead when determining where to locate a data center. 

When we interviewed Elastra CEO Kirill Sheynkman, he said that costing for enterprise computing has been a poorly understood area for a long time.  For instance, historically in the enterprise, electricity costs have been thrown into a bucket with overhead and distributed across the organization in some arbitrary way.  That is, the COO gets the electricity bill for the entire company and it is allocated arbitrarily.  This does not make any sense, since for processing-intensive businesses these costs can be quite high.  A typical blade server might come with a 2000-watt power supply.  At 6 to 10 cents per kilowatt-hour in the mass market, this can clearly add up to quite a substantial cost.  Since these costs have not been allocated properly, managers have had no incentive to keep them low by deploying more energy-efficient solutions.  
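
To make the scale concrete, here is a back-of-the-envelope calculation.  It is a rough sketch: the 2 kW rating and the 6 to 10 cent per kilowatt-hour range come from the paragraph above, while the assumption of continuous full-load operation is ours and overstates real-world draw.

    # Rough annual electricity cost for one blade server running at
    # its full 2 kW rating around the clock (an upper bound on draw).
    POWER_KW = 2.0                 # 2000-watt power supply
    HOURS_PER_YEAR = 24 * 365      # 8,760 hours
    for rate in (0.06, 0.10):      # $/kWh, the mass-market range
        annual_cost = POWER_KW * HOURS_PER_YEAR * rate
        print(f"At ${rate:.2f}/kWh: ${annual_cost:,.0f} per year")
    # At $0.06/kWh: $1,051 per year
    # At $0.10/kWh: $1,752 per year

Across a farm of thousands of servers, this single line item runs into the millions of dollars per year, which is why burying it in corporate overhead distorts decisions.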

Vertically Integrated Players / SaaS Players: Vertically integrated cloud players like Google and Salesforce, who provide enterprise applications, look at the cost of computing power and bandwidth as part of their overall cost structure.  For these players, developing software that meets user needs while using bandwidth and compute power efficiently is essential.  Initially, one of the concerns about this business model was the requirement to build out large server farms to deploy these services to many users, and many early SaaS players struggled to balance having sufficient capacity against overbuilding.  As these players got better at deploying services cost-effectively, users grew to appreciate not having to worry about disaster recovery, backup and replacing hardware, and SaaS as a model took off.  

Technical support and services: One component that must not be forgotten is support and services.  While simple offerings are primarily API-based, many users will still require support.  In the enterprise, when something fails, the hardware administrator is often heavily involved in the troubleshooting, and when critical systems crash or have performance issues, customers will call their utility computing provider as part of that process.  A recent Gartner report on the metrics utility computing is billed against illustrates how costs are typically structured and passed through, and includes incidents as a cost/charge basis:

1. Systems
- # Servers per month
- # of VMs
- # CPUs
- Per rack installed
- Per application instance

2. Incidents
- Tickets/requests
- # of support calls
- Per installation, move or change

3. Per user basis
- Per user per month
- # Mailboxes

4. Data / processing
- GB data transferred / stored
- # instructions
- CPU hours/minutes
- # Transactions
- Terabytes/mo of traffic


The Value Delivered by Cloud Services

In order to price utility computing services, it is essential to understand the value delivered.

Value is created in a number of ways, including:

Scale in disaster recovery and backup
Lower cost and faster speed of deployment and scaling up (additional/initial capacity is available on-demand)
Shared expertise and scale in virtualization and server optimization techniques
Enabling enterprises to deploy services with little or no upfront cost and no major projects to execute (SaaS is a major example)

There are also additional advantages in scale and aggregation of demand.  The next section discusses how demand aggregation and variation in needs generate smoothing and create value for customers.  

Demand aggregation: Key to the value proposition offered by utility computing is the ability to share resources across multiple users.  If the peak computing requirements of customers are not correlated, then total peak demand will smooth out.  G.A. Paleologo illustrates the phenomenon with a simple example.  Imagine a customer whose demand oscillates between 5 and 10 service units per day, with an average of 7.5.  If the customer were to build their own computing system, they would need capacity of 10 service units per day to meet peak demand, for an average utilization of 7.5/10, or 75%.  If 8 customers with the same demand profile are aggregated by a utility, total demand is smoothed: capacity of 66 units serves an average demand of 60, for an average utilization of 60/66, or 91%, a gain of 16 percentage points.  (Paleologo, 2004)
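
A quick simulation makes the smoothing effect concrete.  This is a minimal sketch: the 5-or-10-unit demand profile is Paleologo's, but the independent coin-flip timing of each customer's peaks and the 99th-percentile capacity sizing rule are our assumptions, so utilization lands near 80% rather than the paper's 91%; the direction of the gain is what matters.

    import random

    random.seed(0)
    N_CUSTOMERS = 8
    N_DAYS = 10_000

    # Each customer independently needs 5 or 10 units per day (mean 7.5).
    # Built standalone, each needs capacity 10: utilization 7.5/10 = 75%.
    daily = [
        sum(random.choice((5, 10)) for _ in range(N_CUSTOMERS))
        for _ in range(N_DAYS)
    ]
    daily.sort()

    avg = sum(daily) / N_DAYS               # ~60 units on average
    capacity = daily[int(0.99 * N_DAYS)]    # size to the 99th-percentile day

    print(f"average aggregate demand: {avg:.1f}")
    print(f"capacity (99th pct day):  {capacity}")
    print(f"aggregated utilization:   {avg / capacity:.0%}")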

In modeling businesses that do have demand variation, there will be a gamut of profiles.  Obviously, a relatively established business will differ from a new one (for example, a venture-backed startup).  Stochastic models must account for the potentially large swings in demand that go hand-in-hand with the dynamics of the Internet.  Established businesses, as well as overall aggregate demand, will have some cyclical profile to their demand curves.

Joe Weinman suggests a few other customer profiles on his ComplexModels.com website that can help smooth out demand.  One is worker versus gamer demand: Weinman suggests gamers tend to have day jobs and are more active on nights and weekends.  For companies building high capacity to support enterprise processing that happens primarily during the work day, it therefore makes sense to recruit recreationally oriented customers to smooth capacity and increase utilization during non-working hours.

Another profile is event demand.  Compute power providers with extra capacity can single out event demand: a concert, a one-time promotion, or perhaps an annual event.  These users should have a relatively high willingness to pay, since it makes little sense to deploy a large-scale hardware solution for a single event.

Constant Demand Profiles: A substantial portion of the demand for computing services will be constant, without the peaks and valleys of typical business users.  These users include bioinformatics processing and other scientific simulations that run around the clock.  Sellers of utility computing services would do well to segment these customers out and offer them lower prices.  First, since their demand is stable and easily forecast, the value proposition of smoothing out demand peaks does not apply to them, so their willingness to pay per unit of computing power should be lower.  They still benefit from shorter time to deployment, disaster recovery and the other advantages of utility computing.  These customers offer a significant potential advantage for utility computing vendors: they provide consistent utilization at lower-demand times of the day and year.  In many cases, they may even be able to shift their usage to maximize capacity utilization (for example, some customers may need X units of processing per day and be indifferent to whether one node runs all day or several run in parallel during off-peak hours). 

Other major areas where utility computing adds value are disaster recovery, lower deployment cost, and faster time to deploy.

Pricing for Utility Computing

Pricing for utility computing services will be challenging in a number of ways, particularly while the market matures.  In traditional pricing models, demand is forecast, the cost of meeting that demand at an optimal level is estimated, and a certain markup is applied.  (Hall and Hitch, 1939; Paleologo, 2004)  Demand will initially be difficult to forecast, and it will take time to realize economies of scale and the demand smoothing that comes with a large customer base.  

G.A. Paleologo suggests a pricing-at-risk methodology.  This model applies stochastic modeling to the uncertain parameters involved in forecasting demand, utilization and adoption.  Such a model allows for best and worst case scenarios and runs optimization models for the scenarios in between, yielding a probability curve.  Varying price as the independent variable and using net present value as the dependent variable gives a picture of how various scenarios would play out.  The tricky part is that the elasticity of demand is not well understood, and Paleologo's model assumes a monopoly, which we know will not be the case in the utility computing space.  

Monte Carlo Simulation to Test Capacity and Pricing Assumptions

To take it one step further, we suggest a Monte Carlo simulation, which can help a manager balance risk and return in determining capacity.  A simulation could help answer questions such as:

If I build this capacity, what % of the time will it successfully meet demand?
What additional cost will it take to increase the % of the time capacity will meet demand?
What is the impact of the possibility of processing node failures?
Based on our firm’s risk profile, what is the optimal amount of capacity to be built?
How sensitive are our profitability and pricing decisions to variation?
How does adding a customer or customers affect the risk/return profile?

These key questions can be answered using stochastic modeling and Monte Carlo simulation.  In the electric utility world, Monte Carlo analysis is recommended by a number of experts for integrated resource planning.  (Spinney and Watkins, 1995)  The strategy makes sense in both industries because it lets a manager quantify capacity investments in a risky environment with multiple uncertain variables.  Using this type of simulation, a manager can model different scenarios and pricing strategies under different constraints.
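
A minimal sketch of what such a simulation might look like follows.  Every parameter here (the lognormal demand distribution, the node-failure rate, the prices and costs) is an illustrative assumption of ours rather than a figure from the sources; the point is the shape of the output, a risk/return profile for each capacity choice.

    import random

    random.seed(1)

    def simulate_year(capacity, price=0.12, unit_cost=0.04,
                      fixed_cost_per_unit=18.0, fail_rate=0.02):
        """One simulated year: returns (profit, share of days demand was met)."""
        met = 0
        profit = -fixed_cost_per_unit * capacity   # annualized capacity cost
        for _ in range(365):
            demand = random.lognormvariate(6.68, 0.35)  # median ~800 units/day
            up = capacity * (0.95 if random.random() < fail_rate else 1.0)
            served = min(demand, up)                    # a failure cuts 5% of capacity
            met += served >= demand
            profit += served * (price - unit_cost)
        return profit, met / 365

    for capacity in (900, 1100, 1300):
        runs = [simulate_year(capacity) for _ in range(2000)]
        profits = sorted(p for p, _ in runs)
        mean_profit = sum(profits) / len(profits)
        worst_5pct = profits[len(profits) // 20]        # 5th-percentile outcome
        met = sum(m for _, m in runs) / len(runs)
        print(f"capacity {capacity}: mean profit {mean_profit:,.0f}, "
              f"5th-pct profit {worst_5pct:,.0f}, demand met {met:.0%}")

Reading the three output lines side by side shows the trade-off directly: higher capacity raises the share of days demand is met but eats into profit, and the 5th-percentile column is the risk measure a manager would weigh against the firm's risk profile.
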
Capacity and Pricing Decisions in a Competitive World

Most utility computing services will not be offered by monopolies but by a number of larger players, which means the market will be competitive, especially early in its lifecycle.  Once solutions built on a given platform grow larger and more complex, switching costs will rise.  The general sense is that players are seeking to price in a range that lets customers capitalize on lower deployment cost, disaster recovery and other scale benefits, rather than competing on price with one another; a price war in this space is not wise for anyone.  However, through economies of scale and superior management of their server farms, virtualization and capacity decisions, some players may be able to lower costs and pass those savings on to customers.  Those who cannot will be forced out of the business.  

Amazon's current strategy utilizes some price discrimination (e.g. the standard and the high-CPU instance) plus additional tariffs on usage for data transfer, IP addresses and storage.  Costs decrease for customers who bring scale to the system.  Amazon EC2 starts with a set of choices for a base instance: Windows vs. Linux and high-CPU vs. standard.  Windows is 25% more expensive than Linux, reflecting the higher cost of Microsoft's software versus the open-source stack.  High-CPU instances are more expensive as well; this segments out processing-intensive customers like game developers who need more powerful machines.  This type of approach has gained a lot of traction in SaaS and other services offered out of the cloud, where users are segmented by user type, organization size and feature-set requirements.  
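
Here is a sketch of how this kind of menu pricing composes into a monthly bill.  The rate card below is made up to show the structure; these are not Amazon's actual prices.

    # Illustrative EC2-style bill: an hourly instance rate keyed by OS and
    # instance class, plus usage tariffs. All rates are placeholders.
    HOURLY_RATE = {
        ("linux", "standard"):   0.10,
        ("linux", "high-cpu"):   0.20,
        ("windows", "standard"): 0.125,   # ~25% premium over Linux
        ("windows", "high-cpu"): 0.25,
    }
    PER_GB_TRANSFER = 0.15        # $/GB transferred
    PER_GB_STORAGE_MONTH = 0.10   # $/GB-month stored

    def monthly_bill(os, klass, hours, gb_transferred, gb_stored):
        base = HOURLY_RATE[(os, klass)] * hours
        return (base + PER_GB_TRANSFER * gb_transferred
                     + PER_GB_STORAGE_MONTH * gb_stored)

    # A high-CPU Linux instance running all month, e.g. a game developer:
    print(f"${monthly_bill('linux', 'high-cpu', 720, 200, 50):,.2f}")
    # 0.20*720 + 0.15*200 + 0.10*50 = 144 + 30 + 5 = $179.00

The segmentation works because each dimension of the menu (OS, instance class, and each usage meter) maps to a different driver of willingness to pay.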

Sources

Paleologo, G.A. “Price at Risk: A Methodology for Pricing Utility Computing Services.” IBM Systems Journal, Vol. 43, No. 1, 2004.

Brill, Kenneth G. “Servers: Why Thrifty Isn’t Nifty.” Forbes Magazine, August 11, 2008.

Weinman, Joe. Models published at ComplexModels.com.

Malik, Om. “Wholesale Internet Bandwidth Prices Keep Falling.” GigaOM, October 7, 2008. http://gigaom.com/2008/10/07/wholesale-internet-bandwidth-prices-keep-falling/

Spinney, Peter, and G. Campbell Watkins. “Monte Carlo Simulation Techniques and Electric Utility Resource Decisions.” Charles River Associates, 1995.

OASIS Reference Model for Service Oriented Architecture 1.0. http://docs.oasis-open.org/soa-rm/v1.0/soa-rm.pdf

Gartner Studies Reviewed:
"Market Trends: IT Infrastructure Utility, Worldwide, 2008"
"Vendor Survey Analysis: Investments in Alternative Delivery Models, Global, 2008"
"Alternative Delivery and Acquisition Models, 2008: What's Hot, What's Not"
"Alternative Delivery Models: A Sea of New Opportunities and Threats"
"Infrastructure Utility in Practice: Offerings Description"
"Pricing Poses a Major Challenge for Infrastructure Utility"
"Infrastructure Utility in Practice: Offerings, Data Centers and Clients"
"Data Center Outsourcing References Report on Utility Pricing and Virtualization"
"Gartner on Outsourcing, 2007 to 2008: Utility Delivery Models"
"Advance Your Sourcing Strategy With Gartner's Self-Assessment Tool"
"Best-Practice Process for Creating an IT Services Sourcing Strategy"
"Understand the Challenges and Opportunities of the Market Life Cycle"


Adapted from a section written for MIT Sloan 15.567 - Economics of Information with Professor Erik Brynjolfsson.  This section was written by Ted Chan, with the help of Andreas Ruggie, Frederic Kerrest and Alex Leary.

Saturday, February 21, 2009

Rapid Reaction: Is Rebecca Chopp the right choice for Swarthmore?

This arrived in my Inbox this morning:

Dear Swarthmoreans,
As Chair of the Board of Managers, I am delighted to announce that at today’s meeting, the Board accepted the Presidential Search Committee’s recommendation to name Rebecca S. Chopp, distinguished scholar and author and current president of Colgate University, as Swarthmore’s 14th president. She is a first-rate scholar and a seasoned and effective leader who has successfully and imaginatively managed Colgate during a time of great accomplishment. Her collaborative style will be a good fit for Swarthmore and I am very pleased that we were able to persuade her to come.

I also extend my thanks and appreciation to the members of the search committee for their dedication and commitment to this process.
Please visit www.swarthmore.edu/newpresident/ for more information. 

Barbara W. Mather ’65
Chair, Board of Managers
I am not quite sure what to make of this.  She seems like an extremely qualified academic and a nice person.  But Swarthmore's prestige and ability to offer an elite education will rise and fall with its endowment and fundraising performance, not with high-caliber feminist leadership.  That's just a fact.  Al Bloom did some very unpopular things at Swarthmore (to say the least), but he was an effective fundraiser, and that has kept us among the elite colleges and best values in college education.

Let's take a look at how she did at Colgate - this comes directly from Colgate's Board of Trustees.
Colgate's alumni participation rate, the percentage of each class that donates, has fallen over the last five years from 55% to 49% and trails peer colleges.  Moreover, in the last year, only 7 of the 76 classes hit their alumni participation rate goals.  
Fiscal Year    Endowment Value    Alumni Participation
1998-1999      $435,500,000       50%
1999-2000      $451,000,000       52%
2000-2001      $464,535,000       49%
2001-2002      $439,437,000       47%
2002-2003      $423,406,000       56%
2003-2004      $463,436,000       55%
2004-2005      $508,665,000       53%
2005-2006      $558,000,000       50%
2006-2007      $704,600,000       42%

Source: Colgate University 2007 Report to Donors, Summary of Class Giving 
So participation has gotten substantially worse over the past few years.  I don't know how that happens.  56% is very good, in the top 10 among peer-group schools; 42% is pretty darn bad.  Williams and Amherst, the two schools Swarthmore alums compare us to most, are at 61%.

There's noise in those numbers - I look at them as one of the most important metrics of a college president's ability to manage, mobilize and enthuse alumni.  But, there's data on Chopp's side which probably helped her get hired.  For example:
"Colgate's endowment jumped up nine spots in the rankings for 2007 after one of its best years ever, which included almost $44 million in gifts and an investment return of 22.2 percent."  Source
But college presidents don't manage endowments; they just fundraise for them.  The $44 million in gifts is a good sign.  Rather than the student dial-a-thon call I get, I'm sure the college president calls our big alumni donors like Jerome Kohlberg and Eugene Lang.  Those two are on the Board of Managers, so hopefully they rubber-stamped this deal.
I'm hopeful for the Rebecca Chopp era.  At the end of the day, I have confidence in Swarthmore's Board of Managers to make the right decision.  Having witnessed the recent dean transition at MIT Sloan from Richard Schmalensee to David Schmittlein, I can recall how my initial reaction was to feel a bit insecure about the new leader.  I'll judge Rebecca Chopp on how she takes the reins and begins building a strong rapport with the Swarthmore community.  There are strong relationships that will need to be cemented, and I hope she will reach out and rebuild bridges that Al Bloom burned (I don't know if Neil Austrian will take her call though).  

I don't doubt that Swarthmore will continue to be a bright candle of private higher education - but we're all self-motivated and competitive people who want the best for the College.  The best words I saw in Barbara Mather's description of Rebecca Chopp were "imaginative" and "collaborative".  These are two very Swarthmorean characteristics, and I hope they serve her well in her leadership.  Best of luck, Rebecca.

Friday, February 20, 2009

Ridiculous salaries for MSPCA staff while animal shelters close

The MSPCA's flagrant overpaying of its internal staff was revealed in the Boston Globe today.  A VP of Human Resources at a non-profit being paid over $200,000 a year?  That's absurd.  I hope there's a deeper dive into how these salaries got the way they are.  That's an unreasonable salary for an executive at a non-profit unless they are creating A LOT of value, and there is no freaking way that a non-profit lifer at the MSPCA, an animal welfare group that operates in Massachusetts, can possibly create that much value.  It may be an institutional problem: no person is really going to say no to a high salary if that's what the organization has historically paid.  But I'd like to find out how it happened.  The worst part to me is that they were shutting down shelters while these execs were paid these sums.


Here are two telling quotes from the Boston Globe's article today.
"In 2007, MSPCA's chief executive officer, Carter Luke, received a salary and benefit package worth $340,595. The vice president of human resources received $215,723, the chief medical officer received $246,337, and the vice president of development received $202,880."
"On Feb. 5, the MSPCA announced the closure of shelters in Brockton, Martha's Vineyard, and Springfield by the end of September and will downsize several programs and departments at its Boston office. MSPCA spokesman Brian Adams said a total of 46 positions would be eliminated. The three shelters slated to close cared for 11,000 animals last year. Four other MSPCA shelters, located in Boston, Methuen, on Cape Cod, and Nantucket, will remain open. Adams declined to reveal the annual operating budgets of any of the shelters."
Unless this changes, I think people who do give to the MSPCA (which I wouldn't in a thousand years, unless 2 billion people escape poverty first) should withhold their donations and give them to other organizations doing the same work.  


Thursday, February 19, 2009

The needs of donors: helping a Tanzanian CBO understand how to market to funders

When I was in Tanzania, I did a stakeholder needs exercise with MAdeA, the local CBO I was working with.  I asked them to create a list of what they thought potential funders wanted to see in an organization before giving it money.  They admitted they had never really thought about it in this manner.  As we delved in, they began to understand why it was important: none of their fundraising processes were geared to meet these needs.  They started to understand why they weren't having much success, and many of the tasks we'd set about helping them with (building a website, re-writing their pitch, summarizing projects and track record, etc.) started to make sense.

The best part was that the realization was really self-generated.  After some facilitation from me, these are the lists they came up with on their own for the needs of two potential donor groups: a Tanzanian business person and an international well-wisher.  

Tanzanian Business Person Needs

1) Influence

2) Credibility – Organization

3) Publicity

4) Need to be persuaded

5) People like to provide basic needs

6) Attention

7) Like to fund work that runs parallel to their interest

8) Profit

9) Gain trust in the community

10) Get linked to the government

11) Personal beliefs

12) Feel good about themselves

13)  A face, a specific person they give to

14) Good match – needs to be a cause they are interested in

15) Easy to give – if not cash, then goods that they have easy access to.

16) Track record of success


International Well-wishers

1) Have to explain the real situation of what is happening on the ground

2) Make people understand QUICKLY what you are saying and doing

3) Better understanding of organization QUICKLY

4) Truthfulness and accountability – need to trust you

5) Be specific on what you are doing

6) Know the culture of your funders

7) Get in touch with other international organizations – increases credibility and awareness

8) People like to contribute to things that are visible

9) Make them feel connected at all times even when you’re not asking them for money

10) Most international donors have specific interests.

11) Illustrate that your dollar goes a long way compared to the US!


Monday, February 16, 2009

Breaking up is hard to do – but it is the right thing for Citi

By Ted Chan, Pooja Goradia and Sharon Cong Geng
Written for our class with Lester Thurow

Risk management in financial services firms is a major challenge even when a company is focused and tightly integrated.  For a firm like Citigroup, broad in scope with less-than-stellar integration between its operational arms, the financial crisis posed a risk management challenge too great.  While a financial services supermarket strategy has major advantages in good times, Citi's focus must now lie in restoring confidence in its core operations.  This translates to focusing on essential businesses, wiping toxic assets off the balance sheet and using those funds to recapitalize its core business.  

Advantages and Disadvantages of a Financial Services Supermarket

In 1999, the Gramm-Leach-Bliley Financial Services Modernization Act repealed the part of the Glass-Steagall Act of 1933 that prohibited a bank from offering investment, commercial banking and insurance services.  (Wikipedia)  This act paved the way for Sandy Weill and Citibank to create a financial services supermarket.  

The benefits to being a financial services supermarket include economies of scale, cross-selling opportunities, elimination of double marginalization problems and more robust integrated reporting.  On the consumer end, customers like the idea of looking to one provider for a range of services and products.  The ability to utilize bundled pricing strategies is also an advantage. 
For instance, in the institutional fund space, many services are sold as a bundle (recordkeeping, custody, asset management, brokerage, securities lending, performance reporting).  Firms with greater economies of scale can offer lower fees, since the software to batch-process transactions is expensive to develop or buy.  Furthermore, it is easier to sell many of these services as loss leaders and make the profits back on other services.  These benefits can only be captured by providing the entire spectrum of services.  

However, things do not always dovetail across lines of business.  For instance, the services retail banking offers customers and the services investment banks offer corporations do not overlap or benefit from integration.  For Citibank, decoupling these services will allow more management visibility into the risk profile and more agility in specific business segments.  Unlike organizations such as Wells Fargo and BNY Mellon, Citi expanded for reasons other than strategy (e.g. Sandy Weill's desire to conquer the world).

There are two major downsides to the supermarket strategy.  The first is that enterprise risk management is much more difficult in a large firm with many different operations.  In many cases, it is simply too much for a single management team to oversee.  Secondly, when one part of a financial services supermarket suffers, contagion to other business lines is easy.  Many of Citi’s losses were caused by non-core businesses such as consumer lending.  These losses have affected the reputation of the entire bank, and thus made it difficult for customers to do business with any operation within the Citi umbrella.  

Structure of Citi Breakup

Citi has already sold a 51% stake in its Smith Barney brokerage group to Morgan Stanley for $2.7 billion.  Morgan Stanley will run the brokerage while Citi retains a minority 49% interest.  In preparation for a larger breakup, the company has reorganized into two groups.  Citicorp will hold the core businesses, including the private bank, the investment bank, credit cards and the retail banking franchises, and will manage approximately $1.1 trillion in assets.  (Citi Website)

The remainder of Citi's assets will be allocated to Citi Holdings, which will have approximately $850 billion in assets under management, including the 49% stake in the Morgan Stanley/Smith Barney joint venture.  It will also hold many of the troubled assets and riskier operations Citi considers non-core, including Citigroup Asset Management and consumer finance groups like CitiMortgage and CitiFinancial.  The so-called “toxic assets” comprise approximately $300 billion of the $850 billion.  One of Citi Holdings' major goals will be to sell off these assets, removing the uncertainty from the balance sheet while re-capitalizing Citicorp's core operations.  (The Street, Citi Website)

Political Backdrop and Effect

Although the government has guaranteed $300 billion of Citi's toxic assets in an unprecedented intervention, that did not prevent the Citi breakup in January 2009.  Regulatory agencies including the FDIC, the SEC, the Fed and the Treasury played an instrumental role in Citi's rescue because they have a strong interest in keeping the financial supermarket stable.  At the same time, it is unrealistic for regulators to rule out the supermarket model by re-imposing the enforced separation of commercial and investment banking that dates back to the Glass-Steagall Act; modern banking is too complicated for such simplistic separations.  

In 1998, Citibank merged with Travelers Group; the Gramm-Leach-Bliley Act was passed the following year to legalize mergers involving commercial and investment banks.  Looking back, politicians argued that the Act was passed to cater to bankers.  Today, in the midst of the global financial crisis, regulators have little choice but to modify the laws once again, effectively changing the law back.  It is imperative to re-evaluate the regulatory structure and create a system that avoids a repeat of the current financial meltdown by increasing oversight, managing risk, and creating checks and balances.  Massive reforms need to be instituted, such as establishing a capital adequacy regime and changing the mark-to-market accounting rule. 

Analysis

Selling the brokerage helps prevent contagion from spreading to a profitable unit that caused none of Citi's problems.  However, many question how wise a move this is.  For instance, Ladenburg Thalmann analyst Richard Bove said "from a pure business standpoint, this deal makes no sense for Citigroup since Smith Barney did not contribute to Citi's losses, and Citi will be contributing 60% of its profits to the new group, but getting just 49% of the earnings.”
In many ways, the structure of the breakup is designed to help regain customer confidence in Citibank’s core operations.  The first step towards this is removing the toxic assets from the balance sheet, removing riskier business lines, and recapitalizing Citicorp, thus making it look like a “safe” bank.  From this standpoint, the formation of Citi Holdings seems like a good idea.  

However, a major problem will be selling off the bad assets and risky operations at a fair value in this environment.  For instance, few firms are looking to purchase consumer lending operations at this point.  Furthermore, it is difficult to imagine that this structure can put Citi on equal competitive footing with JP Morgan and Wells Fargo any time soon, as both have demonstrated far better banking management before and during the financial meltdown.  Other competitors like Bank of America will have superior regional reach and scope of services after Citi divests. (The Street, Naked Capitalism)

Conclusion:  Supermarkets Can Work, but Not the Way Citi was Structured

Wells Fargo, BNY Mellon and JPMorgan are all large, diversified financial services firms with supermarket characteristics.  Each firm's focus on core businesses helped it manage operational and asset risk appropriately across the enterprise.  Citi's history is different: due partly to corporate strategy and partly to former CEO Sandy Weill's desire to build the biggest mousetrap, Citi's purview is much broader in scope than other big banks'.  Meanwhile, integration across the various units was spotty, which made risk management far more difficult.  

Sources

Bradway, Bill. “Citigroup: Will Breaking Up Be Hard to Do (Well)?” Gerson Lehrman Group, 15 January 2009. Accessed 11 February 2009.

Brewster, Mike, and Amey Stone. The King of Capital: Sandy Weill and the Making of Citigroup. New York: John Wiley & Sons, 2002.

“Citi to Reorganize into Two Operating Units to Maximize Value of Core Franchise.” Citi corporate website, 16 January 2009. Accessed 12 February 2009.

“Gramm-Leach-Bliley.” Wikipedia, The Free Encyclopedia, revision of 14 December 2005. Accessed 11 February 2009.

LaCapra, Lauren Tara. “Citi’s Breakup Leaves Street Skeptical.” TheStreet.com, 16 January 2009. Accessed 11 February 2009. http://www.thestreet.com/story/10458265/1/citigroup-reports-83-billion-loss-will-realign-business.html

Smith, Yves. “The Case Against a Citi Breakup.” Naked Capitalism, 13 May 2008. Accessed 11 February 2009.



I won the 100K Executive Summary Competition!

An Executive Summary I wrote won the products and services track of the MIT $100K Executive Summary Contest.  This is a skill the MIT entrepreneurship ecosystem really develops in you.  I entered this competition last year, and what I entered this year is light years better.


This felt pretty good.  It's a pretty good Exec Summary I put together, and we've used it to drum up a fair amount of interest for the company.  It's been an interesting project working with great people from Canada, so I hope we can see this through to the end for them.

Congrats to all the other winners as well!

Congratulations to the $100K Executive Summary Contest Winners!

DEVELOPMENT
WaveWater

ENERGY
FastCAP

LIFE SCIENCES
Viral Optics


MOBILE
MeterLive

PRODUCTS & SERVICES
Great White

WEB/IT
Ksplice

Photo: the $100K check

AUDIENCE CHOICE AWARD => Ksplice


Thursday, February 12, 2009

When microfinance is impossible, and charity is the only solution

Governments and NGOs alike are selling microfinance as if it can cure all the ills of the developing world.  I have some real concerns about this.  First off, there are some people microfinance simply cannot help: very few microfinance models can serve those who do not already have access to capital or an existing business, since the most successful models manage risk precisely by screening out borrowers without a going concern.

Local NGOs like some of the ones we worked with in Tanzania lack the skills in risk management and financial administration, and the ability to provide business training.  Yet they see microfinance as a way to help their people and pay their own bills.  Unfortunately, this is a dangerous thought.  These organizations lack the ability required to run a lending program and do not have appropriate financing.  Furthermore, they want to help the poorest of the poor: people with HIV who can work only intermittently, and people in areas where there is simply no local economy in which a viable business could survive long enough to pay back a loan.  To offer these people a loan from a revolving fund, interest rates would have to be in the hundreds of percent.  Remember, it is not the case that any credit is better than no credit; give someone a really high-interest loan and you are hurting them.  Local NGOs in Africa are in the strange position of having to deal with almost everything.  For instance, MAdeA in Dar es Salaam focuses on women victims of domestic violence and sexual assault, but working in a poor area of Dar means they also have to deal with AIDS, lack of education, lack of jobs, tuberculosis, malaria and drug/alcohol abuse.  All of these can be linked to one overarching problem, poverty.  Thus the microfinance thrust.

One of the most exciting things I saw in Tanzania was SEBA, a microloan program (3% per month loans, collateralized initially with home furniture).  The initial stages of the program offer training at a cost of approximately $1 per session.  The skin in the game is important: it means women show up and want to get as much out of the training as possible.  Generally, giving anyone something for free, in any country, means they will take it for granted; Africa is really no different.  Those who are reliable about training can get a loan for a capital investment, such as a sewing machine or even a cow.  They must put 20% down and get two guarantors for the loan.  The big carrot is that at the end of the repayment period, they keep the asset; until then, it is treated as a lease.  SEBA can first take back the asset, and if its value doesn't cover the value of the loan, they can go after the guarantors' money.  Fortunately, this rarely happens, as repayment rates are 99%.  
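
To put 3% per month in perspective, here is a rough repayment sketch.  The loan size, the 12-month term and the declining-balance interest convention are hypothetical assumptions of mine, not SEBA's published terms.

    # Hypothetical SEBA-style loan: 3% per month charged on the
    # declining balance, principal repaid in 12 equal installments.
    principal = 100_000        # e.g. Tanzanian shillings
    months = 12
    monthly_rate = 0.03
    balance = principal
    total_interest = 0.0
    for _ in range(months):
        total_interest += balance * monthly_rate
        balance -= principal / months
    print(f"total interest: {total_interest:,.0f} "
          f"({total_interest / principal:.0%} of principal)")
    # ~19,500, i.e. ~20% of principal over the year; on a non-declining
    # balance, 3% compounded monthly works out to about 43% per year.

That is steep by rich-country standards, but far below the triple-digit rates a revolving fund would need to charge the very poorest borrowers, which is exactly why SEBA screens for reliability and collateral first.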

In short, the developing world is a tough place.  Those who can scrape and save and have an entrepreneurial spark deserve an opportunity, and can get it through innovative financing and training models.  The rest are charity cases; let's not call them anything else.  The right model for them is to give their children a fighting chance to be upwardly mobile.  

Thursday, February 5, 2009

The Future of Mobile Network Operators - Strategies to Avoid Becoming a Dumb Pipe

Lots of people have been asking me for this deck that I did on strategies for Mobile Network Operators (MNOs) to avoid becoming a dumb pipe.  This PDF version should show up better than the .pptx I posted before.  Please attribute any usage to Ted Chan - Powerpoint version available upon request.

Monday, February 2, 2009

MIT E&I in the blogosphere

Two pieces worth reading:

An editorial by my friend and MIT E&I colleague Ryan Buckley in the excellent Mass High Tech blog.