Last week, Forbes and ZDNet posted articles discussing the cost of various clouds (the 451 source material is behind a paywall), full of dollar-per-hour cost analysis. Their analysis argues that private infrastructure is an order of magnitude cheaper (yes, cheaper) to own than public cloud; however, the open source price advantage offered by OpenStack is swallowed by the added cost of finding skilled operators and by its lack of maturity.
At the end of the day, operational concerns are the differential factor.
These articles get bogged down trying to normalize clouds into a $/VM/hour analysis and bury the lede: operational decisions are what actually drive cloud operating costs. I explored this a while back in my “magic 8 cube” series about six added management variations between public and private clouds.
In most cases, operations decisions are not just about cost – they factor in flexibility, stability, and organizational readiness. From that perspective, the additional costs of public clouds and well-known stacks (VMware) are easily justified for smaller operations. Using alternatives means paying higher salaries and finding talent that requires larger scale to justify.
Operational complexity is a material cost that strongly detracts from new platforms (yes, OpenStack – we need to address this!).
Unfortunately, it’s hard for people building platforms to perceive the complexity experienced by people outside their community. We need to make stability and operability top-line features, because complexity comes directly back as a very real cost of operation.
In my thinking, the winners will be solutions that reduce BOTH cost and complexity. I’ve talked about that in the past and see the trend accelerating as more and more companies invest in ops automation.