Cloud Culture Clash Creates Opportunities

In my opinion, one of the biggest challenges facing companies like Dell, my employer, is how to help package and deliver this thing called cloud into the market.  I recently had the opportunity to watch and listen to customers try to digest the concept of PaaS.

While not surprising, the technology professionals in the room split into four major cultural camps along two axes: enterprise vs. start-up and dev vs. ops.  Because I have a passing infatuation with pastel cloud-shaped quadrant graphs, I was able to analyze the camps for some interesting insights.

The camps are:

  1. Imperialists:  These enterprise type developers are responsible for adapting their existing business to meet the market.  They prefer process oriented tools like Microsoft .Net and Java that have proven scale and supportability.
  2. MacGyvers: These startup type developers are under the gun to create marketable solutions before their cash runs out.  They prefer tools that adapt quickly, minimize development time, and offer community extensions.
  3. Crown Jewels: These enterprise type IT workers have to keep the email and critical systems humming.  When they screw up everyone notices.  They prefer systems where they can maintain control, visibility, or (better) both.
  4. Legos: These start-up type operations jugglers are required to be nimble and responsive with shoestring budgets.   They prefer systems that they can change and adapt quickly.  They welcome automation as long as they can maintain control, visibility, or (better) both.

This graph is deceiving because it underplays the psychological break caused by willingness to take risks.  This break creates a cloud culture chasm. 

On one side, the reliable Imperialists will mount a Royal Navy flotilla to protect the Crown Jewels in a massive show of strength.  They are concerned about the security and reliability of cloud technologies.

On the other side, the MacGyvers are working against a ticking time bomb to build a stealth helicopter from Legos they recovered from Happy Meals™.  They are concerned about getting out of their current jam to compile another day.

Normally Imperialists simply ignore the MacGyvers or run down the slow ones like yesterday’s flotsam.  The cloud is changing that dynamic because it’s proving to be a dramatic force multiplier in several ways:

  1. Lower cost of entry – the latest cloud options (e.g. GAE) do not charge anything unless you generate traffic.  The only barrier to entry is an idea and time.
  2. Rapid scale – companies can fund growth incrementally based on success while also being able to grow dramatically with minimal advanced planning.
  3. Faster pace of innovation – new platforms, architectures and community development have accelerated development.  Shared infrastructure means less work on back office and more time on revenue focused innovation.
  4. Easier access to customers – social media and piggybacking on huge SaaS companies like Facebook, Google or Salesforce bring customers to new companies’ front doors.  This means less work on marketing and sales and more time on revenue focused innovation.

The bottom line is that the cloud is allowing the MacGyvers to be faster, stronger, and more innovative than ever before.  And we can expect them to be spending even less time polishing the brass in the back office because current SaaS companies are working hard to help make them faster and more innovative.

For example, Facebook is highly incented to see 3rd party applications become innovative and popular, not only because they get a part of the take, but because it increases the market strength of their own SaaS application.

So the opportunity for Imperialists is to find a way to employ and empower the MacGyvers.  This is not just a matter of buying a box of Legos: the strategy requires tolerating, enabling, and ultimately embracing a culture of revenue focused innovation that eliminates process drag.  My vision does not suggest a full replacement because the Imperialists are process specialists.  The goal is to incubate and encapsulate cloud technologies and cultures.

So our challenge is more than picking up cloud technologies, it’s understanding the cloud communities and cultures that we are enabling.

Cloud Gravity & Shards

This is the final post in a series rethinking how we view user and buyer motivations for public and private clouds.

In part 1, I laid out the “magic cube” that showed a more discrete technological breakdown of cloud deployments (see that for the MSH, MDH, MDO, UDO key).  In part 2, I piled higher and deeper business vectors onto the cube showing that the cost value of the vertices was not linear.  The costs were so unequal that they pulled our nice isometric cube into a cone.

The Cloud Gravity Well

To help make sense of cloud gravity, I’m adding a qualitative measure of friction.

Friction represents the cloud consumer’s willingness to adopt the requirements of our cloud vertices.  I commonly hear people say they are not willing to put sensitive data “in the cloud” or they are worried about a “lack of security.”  These practical concerns create significant friction against cloud adoption; meanwhile, being able to just “throw up” servers (yuck!) and avoiding IT restrictions make it easy (low friction) to use clouds.

Historically, it was easy to plot friction vs. cost.  There was a nice linear trend where providers simply lowered cost to overcome friction.  This has been fueling the current cloud boom.

The magic cube analysis shows another dynamic emerging because of competing drivers from management and isolation.  The dramatic savings from outsourced management are inhibited by the high friction of giving up data protection, isolation, control, and performance minimums.  I believe that my figure, while pretty, dramatically understates the friction gap between dedicated and shared hosting.  This tension creates a non-linear trend in which substantial customer traction will follow the more expensive offerings.  In fact, it may be impossible to overcome this friction with pricing pressure.

I believe this analysis shows that there’s a significant market opportunity for clouds that have dedicated resources yet are still managed and hosted by a trusted 3rd party.  On the other hand, this gravity well could turn out to be a black hole money pit.  Like all cloud revolutions, the timid need not apply.

Post Script: Like any marketing trend, there must be a name.  These clouds are not “private” in the conventional sense and I cringe at using “hybrid” for anything anymore.  If current public clouds are like hotels (or hostels) then these clouds are more like condos or managed community McMansions.  I think calling them “cloud shards” is very crisp, but my marketing crystal ball says “try again.”  Suggestions?

Cloud Business Vectors

In part 1 of this series, I laid out the “magic cube” that describes 8 combinations for cloud deployment.  The cube provides a finer grained understanding than “public” vs. “private” clouds because we can now clearly see that there are multiple technology axes that create “private IT” that can be differentiated. 

 The axes are:

  • X. Location: Hosted vs. On-site
  • Y. Isolation: Shared vs. Dedicated
  • Z. Management: Managed vs. Unmanaged

Cloud Cost Model

In this section, we take off our technologist pocket protectors and pick up our finance abacus.  The next level of understanding for the magic cube is to translate the technology axes into business vectors.  The business vectors are:

X. Capitalization:  OpEx vs. CapEx.  On the surface, this determines who has ownership of the resource, but the deeper issue is whether the resource is an investment (capital) or a consumable (operations).   Unless you’re talking about a co-lo cage, hosting models will be consumable because the host is leveraging (in the financial sense) their capital into operating revenue.

Y. Commitment: PayGo vs. Fixed.  Like a cell phone plan, you can pay for what you use (pay-as-you-go) or lock in to a fixed contract.  Fixed generally pays a premium based on volume even though the per unit cost may be lower.  In my thinking, the fixed contract may include dedicated resource guarantees and additional privacy.

Z. Management: Insource vs. Outsource.  Don’t over think this vector, but remember I’m not talking about offshoring!  If you are directly paying people to manage your cloud then you’ve insourced management.  If the host provides services, process or automation that reduces hiring requirements then you’re outsourcing your IT.  It’s critical to realize that you can’t employ fractional people.  There are fundamental cloud skill sets and tools that must be provided to operate a cloud (including, but not limited to, DevOps).
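To make the commitment vector concrete, here is a back-of-the-napkin cost comparison sketch.  All rates are invented for illustration; real pricing varies widely by host.

```python
# Toy cost model for the Commitment vector (PayGo vs. Fixed).
# The rates below are made-up illustration values, not real prices.

def paygo_cost(hours, rate_per_hour=0.25):
    """Pay-as-you-go: you pay only for the hours you consume."""
    return hours * rate_per_hour

def fixed_cost(months, monthly_fee=120.0):
    """Fixed contract: a flat fee regardless of actual usage."""
    return months * monthly_fee

# A server busy ~700 hours in a month: the fixed contract wins...
print(paygo_cost(700), fixed_cost(1))  # 175.0 120.0
# ...but a bursty workload (100 hours) favors PayGo.
print(paygo_cost(100), fixed_cost(1))  # 25.0 120.0
```

The crossover point is what makes the vector easy to traverse in either direction: steady loads justify the fixed premium, bursty loads do not.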

THE 3 VECTORS ARE NOT EQUAL!

If you were willing to do some cerebral calisthenics about these vectors, then you realized that they do not carry equal cost weights.  Let’s look at them from least to most.

  1. The commitment vector is very easy to traverse in either direction.  It’s well established human behavior that we’ll pay more to be more predictable, especially if that means we get more control or privacy.  If I had a dollar for everyone who swoons over cloud bursting, I’d go buy that personal jet pack.
  2. The capitalization vector is part of the driver to cloud as companies (and individuals) seek to avoid buying servers up front.  It also helps that clouds let you buy fractional servers and “throw away” servers that you don’t need.  While these OpEx aspects of cloud are nice, servers are really not that expensive to lease or idle.  Frankly, it’s the deployment and management of those assets that drives the TCO way up for CapEx clouds, but that’s not this vector, so move along.
  3. The management vector is the silverback gorilla standing in the corner of our magic cube.  Acquiring and maintaining the operations expertise is a significant ongoing expense.  In many cases, companies simply cannot afford to adequately cover the needed skills and this severely limits their ability to use technology competitively.  Hosts are much better positioned to manage cloud infrastructure because they enjoy economies of scale distributed between multiple customers.  This vector is heavily one directional – once you fly without that critical Ops employee in favor of a host doing the work, it is unlikely you’ll hire that role.

The unequal cost weights pull our cube out of shape.  They create a strong customer pull away from the self-managed & CapEx vertices and towards outsourced & OpEx.  I think of this distortion as a cloud gravity well that pulls customers down from private into public clouds. 

That’s enough for today.  You’ll have to wait for the gravity well analysis in part 3.

Shaken or stirred? Cloud Cocktail leads to insights

Part of my professional & personal mission is to kick over mental ant hills.  In the cloud space, I believe that people are trying way too hard to define cloud into neat little buckets.  That leads me to try to reorient around new visualizations.  The purpose of doing this is to strip away historical thought patterns that limit our ability to envision future patterns (meaning: attitude adjustment).

The Cloud Cocktail

With that overly erudite preamble, here’s a tasty potion that I mixed up for you to enjoy on your way to real libations at ACL.

The technologies underlying cloud are complex; however, the core components for cloud are simple: applications, networked services and virtualized infrastructure.  These three components in varying proportions garnished with management APIs form the basis for all cloud solutions. 

This cocktail napkin sketch of a cloud may appear sparse, but it provides the key insights that drive a vision for how to adapt and respond to clouds’ rapid metamorphosis.  It would be ideal to point to a single set of technologies and declare that it is a Cloud; unfortunately, cloud is a transformation, not an end-state. 

PaaS, much ado about network services

There’s a surprising amount of hair pulling regarding IaaS vs PaaS.  People in the industry get into shouting matches about this topic as if it mattered more than Lindsay Lohan’s journey through rehab.

The cold hard reality is that while pundits are busy writing XaaS white papers, developers are off just writing software.  We are writing software that fits within cloud environments (weak SLA, small VMs), saves money (hosted data instead of data in VMs), and changes quickly (interpreted languages).  We’re doing it using an expanding toolkit of networked components like databases, object stores, shared caches, message queues, etc.

Using network components in an application architecture is about as novel as building houses made of bricks.  So, what makes cloud architectures any better or different?

Nothing!  There is no difference if you buy VMs, install services, and wire together your application in its own little cloud bubble.  If I wanted to bait trolls, I’d call that an IaaS deployment.

However, there’s an emerging economic driver to leverage lower cost and more elastic infrastructure by using services provided by hosts rather than standing them up in a VM.  These services replace dedicated infrastructure with managed network attached services and they have become a key differentiator for all the cloud vendors:

  • At Google App Engine, they include Big Tables, Queues, MemCache, etc
  • At Microsoft Azure, they include SQL Azure, Azure Storage, AppFabric, etc
  • At Amazon AWS, they include S3, SimpleDB, RDS (MySQL), Queue & Notify, etc

Using these services allows developers to focus on the business problems we are solving instead of building out infrastructure to run our applications.  We also save money because consuming an elastic managed network service is less expensive (and more consumption based) than standing up dedicated VMs to operate the services.

Ultimately, an application can be written as stateless code (really, “externalized state” is a more accurate description) that relies on these services for persistence.  If a host were to dynamically instantiate instances of that code based on incoming requests, then my application resource requirements would become strictly consumption based.   I would describe that as true cloud architecture. 
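A minimal sketch of externalized state, assuming nothing beyond the standard library: the handler keeps no per-instance memory between requests, so a host could spin up or tear down any number of copies.  The `ExternalStore` class is a toy stand-in for a managed service like MemCache or SimpleDB; its names are illustrative, not a real API.

```python
# Sketch of "externalized state": the handler holds nothing in memory
# between requests, so any instance can serve any request.

class ExternalStore:
    """Toy stand-in for a network-attached key/value service."""
    def __init__(self):
        self._data = {}

    def get(self, key, default=None):
        return self._data.get(key, default)

    def put(self, key, value):
        self._data[key] = value

def handle_request(store, user_id):
    """Stateless handler: all persistence goes through the store."""
    visits = store.get(user_id, 0) + 1
    store.put(user_id, visits)
    return "Welcome back, visit #%d" % visits

store = ExternalStore()
print(handle_request(store, "alice"))  # Welcome back, visit #1
print(handle_request(store, "alice"))  # Welcome back, visit #2
```

Because the store, not the process, owns the state, a second instance of `handle_request` pointed at the same store gives identical answers, which is exactly what makes consumption-based instantiation possible.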

On a bold day, I would even consider an environment that offered that architecture to be a platform.  Some may even dare to describe that as a PaaS; however, I think it’s a mistake to look to the service offering for the definition when it’s driven by the application designers’ decisions to use network services.

While we argue about PaaS vs IaaS, developers are just doing what they need.  Today they may stand-up their own services and tomorrow they incorporate 3rd party managed services.  The choice is not a binary switch, a layer cake, or a holy war.

The choice is about choosing the most cost effective and scalable resource model.

McCrory on “Cloud Confusion”

or, why is everyone DaaZed and Confused?

Dave McCrory, my co-worker at Dell, posted an interesting analysis of how the different roles people have in IT jobs dramatically influence their perception of cloud services.

I think that part of the confusion is how difficult it is for each category of cloud user to see the challenges/issues of the other classes of user.

We see this in spades during internal PaaS discussions.  People with development backgrounds have a fundamentally different concept of PaaS benefits.  In many cases, those same benefits (delegation to a provider for core services like databases) are considered disadvantages by the other class of user (you want someone else to manage what!).

Ultimately, the applications are at the core of any XaaS conversation and define what “type” of cloud needs to be consumed.

DevOps: There’s a new sheriff in Cloudville

Lately there’s been a flurry of interest (and hiring demand) for DevOps gurus.  It’s obvious to me that there’s as much agreement about the job description of a DevOps tech as there is about the ethical use of ground unicorn horn.

I look at the world very simply:

  • Developers = generate revenue
  • Ops = control expenses
  • DevOps = write code, setup infrastructure, ??? IDK!

Before I risk my supply of ethically obtained unicorn powder by defining DevOps, I want to explore why DevOps is suddenly hot.  With cloud driving horizontal scale applications (see RAIN posts), there’s been a sea change in the type of expertise needed to manage an application.

Stereotypically, Ops teams get code over the transom from Dev teams.  They have the job of turning the code into a smoothly running application.  That requires rigid controls and safeguards.  Traditionally, Ops could manage most of the scale and security aspects of an application with traditional scale-up, reliability, and network security practices.  These practices naturally created some IT expense and policy rigidity; however, that’s what it takes to keep the lights on with 5 nines (or 5 nyets if you’re an IT customer).

Stereotypically, Dev teams live a carpe diem struggle to turn their latest code into deployed product with the least delay.  They have the job of capturing mercurial customer value by changing applications rapidly.  Traditionally, they have assumed that problems like scale, reliability, and security could be added after the fact or fixed as they are discovered.  These practices naturally created a need to constantly evolve.

In the go-go cloud world, Dev teams are by-passing Ops by getting infrastructure directly from an IaaS provider.  Meanwhile, IaaS does not provide Ops the tools, access, and controls that they have traditionally relied on for control and management.  Consequently, Dev teams have found themselves having to stage, manage and deploy applications with little expertise in operations.  Further, Ops teams have found themselves handed running cloud applications that they have to secure, scale and maintain without the tools they have historically relied on.

DevOps has emerged as the way to fill that gap.  The DevOps hero is comfortable flying blind on an outsourced virtualized cloud, dealing with Ops issues to tighten controls and talking shop with Dev to make needed changes to architecture.  It’s a very difficult job because of the scope of skills and the utter lack of proven best practices.

So what is a day in the life of a DevOp?   Here’s my list:

  • Design and deploy scale out architecture
  • Identify and solve performance bottlenecks
  • Interact with developers to leverage cloud services
  • Interact with operations to integrate with enterprise services
  • Audit and secure applications
  • Manage application footprint based on scale
  • Automate actions on managed infrastructure
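The last two items on that list can be sketched as code.  This is a naive reconciliation loop under stated assumptions: the capacity figure is invented, and a real DevOps script would call a provider SDK (e.g. EC2) rather than these hypothetical helpers.

```python
# Sketch of "manage application footprint based on scale" plus
# "automate actions on managed infrastructure": size the fleet from
# observed load, then emit the scaling action to take.
# capacity_per_instance is an illustrative number, not a benchmark.

def desired_instances(requests_per_sec, capacity_per_instance=100, minimum=2):
    """Compute target fleet size from load, never below a safety floor."""
    needed = -(-requests_per_sec // capacity_per_instance)  # ceiling division
    return max(minimum, needed)

def reconcile(current, target):
    """Return the action an automation loop would take to reach target."""
    if target > current:
        return ("launch", target - current)
    if target < current:
        return ("terminate", current - target)
    return ("noop", 0)

print(reconcile(2, desired_instances(450)))  # ('launch', 3)
```

Even this toy version shows why the job is hard: the DevOps hero has to pick the capacity constants, the safety floor, and the reaction speed with no proven best practices to lean on.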

This job is so difficult that I think the market cannot supply the needed experts.  That deficit is becoming a forcing function where the cloud industry is being driven to adopt technologies and architectures that reduce the dependence on DevOps skills.  Now, that’s the topic for a future post!

Rethinking the “private cloud” as revealed by the Magic 8 Cube

The Magic 8 Cube

This is the first part of 3 posts that look into the real future for “private clouds.”

This concept is something that was initially developed with Greg Althaus, my colleague at Dell, and then further refined in discussions with our broader team.  It grew from my frustration with the widely referenced predictions by the Gartner Group of a private cloud explosion.  Their prognostication did not ring true to me because the economics of “public cloud” are so compelling that going private seems like fighting your way out of a black hole.

We’ll get to the gravity well (post 3 of 3) in due time.  For now, we need to look into the all knowing magic 8 cube.

Our breakthrough was seeing cloud hosting as a 3 dimensional problem.  We realized that we could cover all the practical cloud scenarios with these 8 cases, shown in the picture (right).

Here are the axes:

  1. X: Hosted vs. On-site – where are the servers running?  On-site means that they are running at your facility or in a co-lo cage that is basically an extended extraterritorial boundary of your company.
  2. Y: Shared vs. Dedicated – are other people mixing with your solution?  Shared means that your bytes are secretly nuzzling up to someone else’s bytes because you’re using a multi-tenant infrastructure.
  3. Z: Managed vs. Unmanaged – are your Ops people (if you have any) able to access the infrastructure that runs your applications?  Unmanaged means that you’re responsible for keeping the system operating.

With 3 axes, we have an 8 point cube.

  1. MSH – a PaaS offering in which every aspect of your application is managed and controlled.  GAE or Heroku.
  2. MSO – remember when people used to buy a mainframe and then lease off-hours extra cycles back to kids like Bill Gates?  That’s pretty much what this model means.
  3. MDH – a “mini-cloud” run by a cloud provider but dedicated to just one customer.  Dr. Evil thinks this costs one milllllllllion dollars.
  4. MDO – a cloud appliance.  You install the hardware but someone else does all the management for you.
  5. USH – IaaS.  I think that Amazon EC2 is providing USH.  It may be a service, but you’ve got to do a lot of Ops work to make your application successful.
  6. USO – OpenStack or other open source cloud DIY frameworks let a provider create a shared model if they have the Ops chops to run it.
  7. UDH – Co Lo.
  8. UDO – The mythical “private cloud.”  Mine, mine, all mine.
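The eight keys fall out mechanically from the three binary axes.  A quick sketch, with the axes ordered to reproduce the key used in the list above (management, isolation, location):

```python
from itertools import product

# Enumerate the 8 vertices of the magic cube from its 3 binary axes.
axes = [("Managed", "Unmanaged"),   # Z: who operates the infrastructure
        ("Shared", "Dedicated"),    # Y: multi-tenant or single-tenant
        ("Hosted", "On-site")]      # X: where the servers live

# Each vertex key is the first letter of each chosen option.
vertices = ["".join(option[0] for option in combo) for combo in product(*axes)]
print(vertices)
# ['MSH', 'MSO', 'MDH', 'MDO', 'USH', 'USO', 'UDH', 'UDO']
```

Three binary choices give exactly 2 × 2 × 2 = 8 cases, which is why the cube covers every practical deployment scenario with no gaps or overlaps.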

In thinking this over, we realized that cloud customers were not likely to jump randomly around this cube.  If they were using MSH then they may want to consider MDH or MSO.  It seemed unlikely that they would go directly from MSH to UDO as Mr. Bittman suggests; however, the market is clearly willing to move directly from UDO to MSH.

We had a good old-fashioned mystery on our hands… the answer will have to wait until my next post.

Alert the villagers, it’s Frankencloud!

I’m growing more and more concerned about the preponderance of Frankencloud offerings that I see being foisted into the market place (no, my employer, Dell, is not guiltless).  Frankenclouds are “cloud solutions” that are created by using duct tape, twine, wishful marketing brochures, and at least 4 marginally cloud enabled products.

The official Frankencloud recipe goes like this:

  • Take 1 product that includes server virtualization (substitutions to VMware at your own risk)
  • Take 1 product that does storage virtualization (substitutions to SAN at your own risk)
  • Take 1 product that does network virtualization (substitutions to VLANs at your own risk)
  • Take 1 product that does IT orchestration (your guess is as good as any)
  • Take 1 product that does IT monitoring
  • Take 1 product that does Virtualization monitoring
  • Recommended: an unlimited Pizza budget for your IT Ops team

Combine the ingredients at high voltage in a climate conditioned environment.  Stir in seriously large amounts of consulting services, training, and Red Bull.  At the end of this process, you will have your very own Frankencloud!

Frankenclouds are notoriously difficult to maintain because each part has its own version life cycle.  More critically, they also lack a brain.

Unfortunately, there are few alternatives to the Frankencloud today.  I think that the alternatives will rewrite the rules that Ops uses to create clouds.  Here are the rules that I think help drive a wooden stake through the heart of the Frankencloud (yeah, I mixed monsters):

  • not assume that server virtualization == cloud. 
  • simple, simple and simpler than that
  • focus on applications (need to write more about DevOps)
  • start with networking, not computation
  • assume that software containers are replaced, not upgraded

What do you think we can do to defeat Frankenclouds?