Notes from 2011 Cloud Connect Event Day 2 (#ccevent)

With the OpenStack launch behind me, I have some time to attend the Cloud Connect event. I missed all the DevOps sessions, but got to geek out in the NoSQL & Big Data sessions. I jumped to the private cloud track (based on Twitter traffic) and was rewarded for the shift.

I’m surprised at how much of this cloud conference is dedicated to private cloud. At other cloud conferences I’ve attended, the focus has been on learning how to use the cloud (specifically the public cloud). This is the first cloud show I’ve attended with so much emphasis, dialog, and vendor feeding frenzy around private cloud. This was a suits & slacks show with few jeans, t-shirts, and ponytails. Perhaps private cloud is where the $$$ is being spent now?

It definitely feels like using cloud has become assumed, but the best practices and tools are just emerging.

The Twitter #ccevent stream is interesting but ephemeral. I’m posting my raw (spelling optional) notes (below the more tag) because there is a lot of great content from the show to support and extend the Twitter stream. I’ll try to italicize some of the better lines.


Why cloud compute will be free

Today at Dell, I was presenting to our storage teams about cloud storage (aka the “storage banana”), and Dave “Data Gravity” McCrory reminded me that I had not yet posted my epiphany explaining “why cloud compute will be free.” This realization derives from other topics that he and I have blogged about but not stated so simply.

Overlooking the fact that compute is already free at Google and Amazon, you must understand that it’s a cloud-eat-cloud world out there where losing a customer places your cloud in jeopardy. Speaking of Jeopardy…

Answer: Something sought by cloud hosts to make profits (and further the agenda of our AI overlords).

Question: What is lock-in?

Hopefully, it’s already obvious to you that clouds are all about data.  Cloud data takes three primary forms:

  1. Data in transformation (compute)
  2. Data in motion (network)
  3. Data at rest (storage)

These three forms combine to create cloud-architected applications (service-oriented, with externalized state).

The challenge is to find a compelling charge model that both:

  1. Makes it hard to leave your cloud AND
  2. Encourages customers to use your resources effectively (see #1 in Azure Top 20 post)

While compute demand is relatively elastic, storage demand is very consistent, predictable, and constantly growing. Data is easily measured and difficult to move. In this way, data represents the perfect anchor for cloud customers (model rule #1). A host with a growing data consumption footprint will have a long-term predictable revenue base.

However, storage consumption alone does not encourage model rule #2. Since storage is the foundation for the cloud, hosts can fairly judge resource use by measuring data egress, ingress, and sidegress (attrib @mccrory 2/20/11). This means tracking not only data in and out of the cloud, but also data transacted between the provider’s own cloud services. For example, Azure charges for both data at rest ($0.15/GB/mo) and data in motion ($0.01/10K).
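To make the charge model concrete, here’s a minimal back-of-envelope sketch using the Azure-style rates quoted above. It assumes the $0.01/10K figure is charged per 10,000 transactions, and the customer workload numbers are invented for illustration:

```python
# Back-of-envelope cloud storage bill using the rates quoted above.
# Assumption: the $0.01/10K rate is charged per 10,000 transactions.

STORAGE_RATE = 0.15      # $ per GB per month (data at rest)
TRANSACTION_RATE = 0.01  # $ per 10,000 transactions (data in motion)

def monthly_bill(stored_gb, transactions):
    """Monthly charge: data at rest plus data in motion."""
    at_rest = stored_gb * STORAGE_RATE
    in_motion = (transactions / 10_000) * TRANSACTION_RATE
    return at_rest + in_motion

# Hypothetical customer: 500 GB stored, 50 million transactions/month.
print(monthly_bill(500, 50_000_000))  # 75.00 + 50.00 = 125.0
```

Note how the at-rest charge grows steadily with the data footprint (rule #1) while the per-transaction charge meters how the customer actually uses the resources (rule #2).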

Consequently, the financially healthiest providers are the ones with most customer data.

If hosting success is all about building a larger, persistent storage footprint, then service providers will give away services that drive data at rest and/or in motion. Giving away compute means eliminating the barrier for customers to set up web sites, develop applications, and build their businesses. As these accounts grow, they will deposit data in the cloud’s data bank and ultimately deposit dollars in the host’s piggy bank.

However, there is a no-free-lunch caveat: free compute will not have a meaningful service level agreement (SLA). The host will continue to charge customers who need their applications to operate consistently. I expect that we’ll see free compute (or “spare compute” from the cloud provider’s perspective) heavily used for early life-cycle (development, test, proof-of-concept) and background analytic applications.

The market is starting to wake up to the idea that cloud is not about IaaS – it’s about who has the data and the networks.

Oh, dem golden spindles!  Oh, dem golden spindles!

32nd rule to measure complexity + 6 hyperscale network design rules

If you’ve studied computer science then you know there are algorithms that calculate “complexity.” Unfortunately, these have little practical use for data center operators.  My complexity rule does not require a PhD:

The 32nd rule: If it takes more than 30 seconds to pick out what would be impacted by a device failure then your design is too complex.
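The rule is about what an operator can see at a glance, but to make the “what would be impacted” question concrete, here’s a toy sketch (the dependency map and all device names are invented):

```python
# Toy illustration of the 32nd rule: model the data center as a map
# from each device to its direct dependents, then compute the blast
# radius of a failure. All device names here are invented.
from collections import deque

DEPENDENTS = {
    "switch-a": ["web-01", "web-02"],
    "switch-b": ["san-01"],
    "san-01": ["db-01"],
}

def blast_radius(failed):
    """Return everything transitively impacted by a device failure."""
    impacted, queue = set(), deque([failed])
    while queue:
        for dependent in DEPENDENTS.get(queue.popleft(), []):
            if dependent not in impacted:
                impacted.add(dependent)
                queue.append(dependent)
    return impacted

print(blast_radius("switch-b"))  # {'san-01', 'db-01'}
```

If that traversal takes an operator more than 30 seconds to do in their head, the design is too complex.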

6 Hyperscale Network Design Rules

  1. Cost Matters
  2. Keep Networks Flat
  3. Filter at the Edge
  4. Design Fault Zones
  5. Plan for Local Traffic
  6. Offer Load Balancers (to your users)

Sorry for the teaser… I’ll be able to release more substance behind this list soon. Until then, comments are (as always) welcome!


Jevons Paradox

I’ve been finding it necessary to quote Jevons paradox several times lately and realized that I have NOT referenced it here. Quite simply, understanding Jevons paradox is essential to understanding cloud.

The concept of the paradox is that when we make something more efficient (for example, gasoline use in cars), demand for that resource goes up (we move further into the exurbs because driving is cheaper). Notably, as Moore’s law drives computer efficiency up, we are using more and more computers. Specifically, I have more computers in my house every year even though just one smartphone far (far) exceeds the power of my son’s Sinclair 1000.
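Here’s a minimal numeric sketch of the paradox (the demand curve and elasticity value are invented for illustration): when demand is elastic enough, an efficiency gain that halves the effective price more than doubles total consumption.

```python
# Toy Jevons paradox model: an efficiency gain lowers the effective
# price of a resource; with price elasticity of demand > 1, total
# consumption (and even total spend) rises. Numbers are invented.

def demand(price, base_demand=100.0, base_price=1.0, elasticity=1.5):
    """Constant-elasticity demand curve: Q = Q0 * (P/P0) ** -e."""
    return base_demand * (price / base_price) ** -elasticity

before = demand(1.00)   # 100.0 units at the old effective price
after = demand(0.50)    # efficiency gain halves the effective price
print(after / before)                    # ~2.83x more units consumed
print((0.50 * after) / (1.00 * before))  # ~1.41x more total spend
```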

In cloud computing, Jevons paradox points us to the expectation that the rush of applications and activity in the cloud will continue to accelerate. Since I expect competition and Moore’s law to drive increasing gains in cloud efficiency (and therefore customer-advantageous price signals), the market will happily convert these utilization improvements into more and more interesting capabilities.

The cloud expansion means that we can sustain more providers entering the market. In fact, Jevons would tell us that more providers will likely INCREASE demand for cloud as competition and capacity put downward pressure on prices. [Q: Where will they make up the margin? A: Adjacent services.]

The losers in cloud’s exploration of Jevons paradox are non-cloud deployments (Dell strategists, are you listening?). These systems suffer because their ability to improve their efficiency is limited.

As I look down the road on cloud, I can see many opportunities for current applications to take advantage of cheaper cloud resources to provide even more value. For example, adding map-reduce analytics to scan a customer’s data can provide tremendous insights. Today, it’s a luxury like flying on the Concorde. Tomorrow, it will be like hopping the Nerd Bird from San Jose to Austin – just a normal part of our daily lives.


Note: A shout-out to Dave McCrory, who introduced me to Jevons Paradox.

Cloud Culture Clash Creates Opportunities

In my opinion, one of the biggest challenges facing companies like Dell, my employer, is how to help package and deliver this thing called cloud into the market.  I recently had the opportunity to watch and listen to customers try to digest the concept of PaaS.

While not surprising, the technology professionals in the room split across four major cultural camps: enterprise vs. start-up and dev vs. ops. Because I have a passing infatuation with pastel cloud-shaped quadrant graphs, I was able to analyze the camps for some interesting insights.

The camps are:

  1. Imperialists: These enterprise-type developers are responsible for adapting their existing business to meet the market. They prefer process-oriented tools like Microsoft .NET and Java that have proven scale and supportability.
  2. MacGyvers: These start-up-type developers are under the gun to create marketable solutions before their cash runs out. They prefer tools that adapt quickly, minimize development time, and have community extensions.
  3. Crown Jewels: These enterprise-type IT workers have to keep email and critical systems humming. When they screw up, everyone notices. They prefer systems where they can maintain control, visibility, or (better) both.
  4. Legos: These start-up-type operations jugglers are required to be nimble and responsive on shoestring budgets. They prefer systems that they can change and adapt quickly, and they welcome automation as long as they can maintain control, visibility, or (better) both.

This graph is deceptive because it underplays the psychological break caused by willingness to take risks. This break creates a cloud culture chasm.

On one side, the reliable Imperialists will mount a Royal Navy flotilla to protect the Crown Jewels in a massive show of strength. They are concerned about the security and reliability of cloud technologies.

On the other side, the MacGyvers are working against a ticking time bomb to build a stealth helicopter from Legos they recovered from Happy Meals™.  They are concerned about getting out of their current jam to compile another day.

Normally Imperialists simply ignore the MacGyvers or run down the slow ones like yesterday’s flotsam.  The cloud is changing that dynamic because it’s proving to be a dramatic force multiplier in several ways:

  1. Lower cost of entry – the latest cloud options (e.g., GAE) do not charge anything unless you generate traffic. The only barrier to entry is an idea and time.
  2. Rapid scale – companies can fund growth incrementally based on success while also being able to grow dramatically with minimal advance planning.
  3. Faster pace of innovation – new platforms, architectures, and community development have accelerated development. Shared infrastructure means less work on the back office and more time on revenue-focused innovation.
  4. Easier access to customers – social media and piggybacking on huge SaaS companies like Facebook, Google, or Salesforce bring customers to new companies’ front doors. This means less work on marketing and sales and more time on revenue-focused innovation.

The bottom line is that the cloud is allowing the MacGyvers to be faster, stronger, and more innovative than ever before.  And we can expect them to be spending even less time polishing the brass in the back office because current SaaS companies are working hard to help make them faster and more innovative.

For example, Facebook is highly incented for 3rd-party applications to be innovative and popular, not only because it gets a piece of the take, but because it increases the market strength of its own SaaS application.

So the opportunity for Imperialists is to find a way to employ and empower the MacGyvers. This is not just a matter of buying a box of Legos: the strategy requires tolerating, then enabling, and ultimately embracing a culture of revenue-focused innovation that eliminates process drag. My vision does not suggest a full replacement, because the Imperialists are process specialists. The goal is to incubate and encapsulate cloud technologies and cultures.

So our challenge is more than picking up cloud technologies, it’s understanding the cloud communities and cultures that we are enabling.

Cloud Gravity & Shards

This is the final post in a series rethinking how we view user and buyer motivations for public and private clouds.

In part 1, I laid out the “magic cube” that shows a more discrete technological breakdown of cloud deployments (see that post for the MSH, MDH, MDO, UDO key). In part 2, I piled higher and deeper business vectors onto the cube, showing that the cost value of the vertices was not linear. The costs were so unequal that they pulled our nice isometric cube into a cone.

The Cloud Gravity Well

To help make sense of cloud gravity, I’m adding a qualitative measure of friction.

Friction represents the cloud consumer’s willingness to adopt the requirements of our cloud vertices.  I commonly hear people say they are not willing to put sensitive data “in the cloud” or they are worried about a “lack of security.”  These practical concerns create significant friction against cloud adoption; meanwhile, being able to just “throw up” servers (yuck!) and avoiding IT restrictions make it easy (low friction) to use clouds.

Historically, it was easy to plot friction vs. cost.  There was a nice linear trend where providers simply lowered cost to overcome friction.  This has been fueling the current cloud boom.

The magic cube analysis shows another dynamic emerging because of competing drivers from management and isolation. The dramatic savings from outsourced management are inhibited by the high friction of giving up data protection, isolation, control, and performance minimums. I believe that my figure, while pretty, dramatically understates the friction gap between dedicated and shared hosting. This tension creates a non-linear trend in which substantial customer traction will follow the more expensive offerings. In fact, it may be impossible to overcome this friction with pricing pressure.

I believe this analysis shows that there’s a significant market opportunity for clouds that have dedicated resources yet are still managed and hosted by a trusted 3rd party.  On the other hand, this gravity well could turn out to be a black hole money pit.  Like all cloud revolutions, the timid need not apply.

Post Script: Like any marketing trend, there must be a name.  These clouds are not “private” in the conventional sense and I cringe at using “hybrid” for anything anymore.  If current public clouds are like hotels (or hostels) then these clouds are more like condos or managed community McMansions.  I think calling them “cloud shards” is very crisp, but my marketing crystal ball says “try again.”  Suggestions?

Cloud Business Vectors

In part 1 of this series, I laid out the “magic cube” that describes 8 combinations for cloud deployment. The cube provides a finer-grained understanding than “public” vs. “private” clouds because we can now clearly see that there are multiple technology axes that create “private IT” and that can be differentiated.

The axes are: (detailed explanation)

  • X. Location: Hosted vs. On-site
  • Y. Isolation: Shared vs. Dedicated
  • Z. Management: Managed vs. Unmanaged
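As a quick sketch, the three binary axes enumerate the cube’s 8 vertices. The letter coding below (e.g. MSH = Managed/Shared/Hosted) is my reading of the key from part 1, so treat it as an assumption:

```python
# Enumerate the 8 vertices of the "magic cube" from the three axes
# above. The letter coding (MSH, MDH, MDO, UDO, ...) is assumed to
# abbreviate Management / Isolation / Location.
from itertools import product

MANAGEMENT = ("Managed", "Unmanaged")
ISOLATION = ("Shared", "Dedicated")
LOCATION = ("Hosted", "On-site")

for mgmt, iso, loc in product(MANAGEMENT, ISOLATION, LOCATION):
    code = mgmt[0] + iso[0] + loc[0]  # e.g. "MSH", "UDO"
    print(f"{code}: {mgmt} / {iso} / {loc}")
```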

Cloud Cost Model

In this section, we take off our technologist pocket protectors and pick up our finance abacus. The next level of understanding for the magic cube is to translate the technology axes into business vectors. The business vectors are:

X. Capitalization: OpEx vs. CapEx. On the surface, this determines who has ownership of the resource, but the deeper issue is whether the resource is an investment (capital) or a consumable (operations). Unless you’re talking about a co-lo cage, hosting models will be consumable because the host is leveraging (in the financial sense) its capital into operating revenue.

Y. Commitment: PayGo vs. Fixed. Like a cell phone plan, you can pay for what you use (pay-as-you-go) or lock in to a fixed contract. Fixed generally pays a premium based on volume even though the per-unit cost may be lower. In my thinking, the fixed contract may include dedicated resource guarantees and additional privacy.

Z. Management: Insource vs. Outsource. Don’t overthink this vector, but remember I’m not talking about offshoring! If you are directly paying people to manage your cloud, then you’ve insourced management. If the host provides services, process, or automation that reduces hiring requirements, then you’re outsourcing IT. It’s critical to realize that you can’t employ fractional people. There are fundamental cloud skill sets and tools that must be provided to operate a cloud (including, but not limited to, DevOps).
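To make the commitment vector concrete, here’s a tiny break-even sketch (all rates are invented for illustration): PayGo wins at low utilization, while fixed contracts win once utilization climbs.

```python
# Toy break-even model for the Commitment vector (PayGo vs. Fixed).
# All rates here are invented for illustration.

PAYGO_RATE = 0.10      # $ per server-hour, pay-as-you-go
FIXED_MONTHLY = 50.0   # $ per server per month, fixed contract
HOURS_PER_MONTH = 730

def cheaper_plan(utilization):
    """Pick the cheaper plan for a server busy `utilization` of the time."""
    paygo_cost = PAYGO_RATE * HOURS_PER_MONTH * utilization
    return "PayGo" if paygo_cost < FIXED_MONTHLY else "Fixed"

print(cheaper_plan(0.25))  # PayGo: 0.10 * 730 * 0.25 = $18.25 < $50
print(cheaper_plan(0.90))  # Fixed: 0.10 * 730 * 0.90 = $65.70 > $50
```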

THE 3 VECTORS ARE NOT EQUAL!

If you were willing to do some cerebral calisthenics about these vectors, then you realized that they do not carry equal cost weights. Let’s look at them from least to most.

  1. The commitment vector is very easy to traverse in either direction. It’s well-established human behavior that we’ll pay more for predictability, especially if that means we get more control or privacy. If I had a dollar for everyone who swoons over cloud bursting, I’d go buy that personal jet pack.
  2. The capitalization vector is part of the driver to cloud as companies (and individuals) seek to avoid buying servers up front. It also helps that clouds let you buy fractional servers and “throw away” servers that you don’t need. While these OpEx aspects of cloud are nice, servers are really not that expensive to lease or idle. Frankly, it’s the deployment and management of those assets that drives the TCO way up for CapEx clouds, but that’s not this vector, so move along.
  3. The management vector is the silverback gorilla standing in the corner of our magic cube. Acquiring and maintaining operations expertise is a significant ongoing expense. In many cases, companies simply cannot afford to adequately cover the needed skills, and this severely limits their ability to use technology competitively. Hosts are much better positioned to manage cloud infrastructure because they enjoy economies of scale distributed across multiple customers. This vector is heavily one-directional – once you fly without that critical Ops employee in favor of a host doing the work, it is unlikely you’ll hire that role back.

The unequal cost weights pull our cube out of shape.  They create a strong customer pull away from the self-managed & CapEx vertices and towards outsourced & OpEx.  I think of this distortion as a cloud gravity well that pulls customers down from private into public clouds. 

That’s enough for today.  You’ll have to wait for the gravity well analysis in part 3.