Crowbar 2.0 Design Summit Notes (+ open weekly meetings starting)

I could not be happier with the results that Crowbar collaborators and my team at Dell achieved at the first Crowbar design summit. We had great discussions and even better participation.

The attendees represented major operating system vendors, configuration management companies, OpenStack hosting companies, OpenStack cloud software providers, OpenStack consultants, OpenStack private cloud users, and (of course) a major infrastructure provider. That’s a very complete cross-section of the cloud community.

I knew from the start that we had too little time and, thankfully, people were tolerant of my need to stop the discussions. In the end, we were able to cover all the planned topics. This was important because all these features are interlocked so discussions were iterative. I was impressed with the level of knowledge at the table and it drove deep discussion. Even so, there are still parts of Crowbar that are confusing (networking, late binding, orchestration, chef coupling) even to collaborators.

In typing up these notes, it becomes even more blindingly obvious that the core features for Crowbar 2 are highly interconnected. That’s no surprise technically; however, it will make the notes harder to follow because of knowledge bootstrapping. You need to take time and grok the gestalt and surf the zeitgeist.

Collaboration Invitation: I wanted to remind readers that this summit was just the kick-off for a series of open weekly design (Tuesdays 10am CDT) and coordination (Thursdays 8am CDT) meetings. Everyone is welcome to join in those meetings – information is posted, recorded, folded, spindled and mutilated on the Crowbar 2 wiki page.

These notes are my reflection of the online etherpad notes that were made live during the meeting. I’ve grouped them by design topic.


  • Contributors need to sign CLAs
  • We are refactoring Crowbar at this time because we have a collection of interconnected features that could not be decoupled
  • Some items (Database use, Rails3, documentation, process) are not for debate. They are core needs but require little design.
  • There are 5 key topics for the refactor: online mode, networking flexibility, OpenStack pull from source, heterogeneous/multiple operating systems, and being CMDB agnostic
  • Due to time limits, we have to stop discussions and continue them online.
  • We are hoping to align the Crowbar 2 beta with the OpenStack Folsom release.

Online / Connected Mode

  • Online mode is more than simply internet connectivity. It is the foundation of how Crowbar stages dependencies and components for deployment. It’s required for heterogeneous O/S and pull from source, and it depends on how we model networking so nodes can access resources.
  • We are considering caching proxies to stage resources. This would allow isolated production environments and preserve the ability to run everything from the ISO without a connection (that is still a key requirement for us).
  • SUSE’s Crowbar fork does not build an ISO; instead, it relies on RPM packages for barclamps and their dependencies.
  • Pulling packages directly from the Internet has proven unreliable, so this method cannot rely on the Internet alone.
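
To make the staging idea concrete, here is a minimal sketch (in Python, with invented names; none of this is actual Crowbar code) of how a deployment could resolve packages through a caching proxy while keeping the ISO as the offline fallback:

```python
# Illustrative sketch only; these names are not part of Crowbar.
# Online mode stages dependencies through a caching proxy so that
# isolated environments resolve everything locally, while the ISO
# mirror remains the no-connection fallback.

PROXY_CACHE = {}  # stands in for a caching proxy (e.g. Squid)
ISO_MIRROR = {"chef-client": "/mnt/iso/pkgs/chef-client.rpm"}


def stage(package, online):
    """Return a local path for a package, caching upstream fetches."""
    if package in PROXY_CACHE:  # already staged: works offline too
        return PROXY_CACHE[package]
    if online:  # stage from upstream exactly once
        PROXY_CACHE[package] = "/var/cache/crowbar/" + package
        return PROXY_CACHE[package]
    # offline and never staged: fall back to the bits on the ISO
    return ISO_MIRROR.get(package)
```

The point is the lookup order, not the data structures: cache first, upstream only when connected, ISO as the guaranteed floor.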

Install From Source

  • This feature is mainly focused on OpenStack, but it could be applied more generally. The principles we are looking at could be applied to any application where the source code is changing quickly (all of them?!). Hadoop is an obvious second candidate.
  • We spent some time reviewing the use-cases for this feature. While this appears to be very dev and pre-release focused, there are important applications for production. Specifically, we expect that scale customers will need to run ahead of or slightly adjacent to trunk due to patches or proprietary code. In both cases, it is important that users can deploy from their repository.
  • We discussed briefly our objective to pull configuration from upstream (not just OpenStack, but potentially any common cookbooks/modules). This topic is central to the CMDB agnostic discussion below.
  • The overall sentiment is that this could be a very powerful capability if we can make it work. There is a substantial challenge in tracking dependencies – current RPMs and Debs do a good job of this, along with other configuration steps beyond just the bits. Replicating that functionality is the real obstacle.

CMDB agnostic (decoupling Chef)

  • This feature is confusing because we are not eliminating the need for a configuration management database (CMDB) tool like Chef; instead, we are decoupling Crowbar from any single CMDB by moving to a pluggable model with an abstraction layer.
  • It was stressed that Crowbar does orchestration – we do not rely on convergence over multiple passes to get the configuration correct.
  • We had strong agreement that the modules should not be tightly coupled but did need a consistent way (API? Consistent namespace? Pixie dust?) to share data between each other. Our priority is to maintain loose coupling and follow integration by convention and best practices rather than rigid structures.
  • The abstraction layer needs to have both import and export functions
  • Crowbar will use attribute injection so that Cookbooks can leverage Crowbar but will not require Crowbar to operate. Crowbar’s database will provide the links between the nodes instead of having to wedge it into the CMDB.
  • In 1.x, networking was the most tightly coupled to Chef. Decoupling it is a major part of the refactor and of the data modeling for Crowbar’s database.
  • There are a lot of notes captured about this on the etherpad – I recommend reviewing them
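
As a thought experiment, the import/export abstraction these bullets call for might look something like this minimal Python sketch (all class and method names are hypothetical, not Crowbar’s actual API):

```python
# Hypothetical sketch of a pluggable CMDB abstraction layer.
# Crowbar would own the node model; an adapter imports/exports it
# to a specific tool (Chef, Puppet, ...). All names are invented.

class CmdbAdapter:
    """Base driver: every CMDB plugin provides import and export."""

    def export_node(self, node):
        """Push Crowbar-owned attributes out to the CMDB."""
        raise NotImplementedError

    def import_node(self, name):
        """Pull discovered/converged data back into Crowbar."""
        raise NotImplementedError


class InMemoryChefAdapter(CmdbAdapter):
    """Stand-in for a Chef-backed adapter (no real Chef server here)."""

    def __init__(self):
        self.store = {}  # plays the role of the Chef server

    def export_node(self, node):
        # Attribute injection: cookbooks see plain attributes and
        # never need to know that Crowbar produced them.
        self.store[node["name"]] = dict(node["attrs"])

    def import_node(self, name):
        return self.store.get(name, {})
```

The key property is directionality: export injects attributes, import harvests results, and Crowbar’s own database (not the CMDB) holds the links between nodes.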

Heterogeneous OS (bare metal provisioning and beyond)

  • This topic was the most divergent of all our topics because most of the participants were using some variant of their own bare metal provisioning project (check the etherpad for the list).
  • Since we can’t pack an unlimited set of stuff on the ISO, this feature requires online mode.
  • Most of these projects do nothing beyond OS provisioning; however, their simplicity is beneficial. Crowbar needs to consider users who just want a streamlined OS provisioning experience.
  • We discussed Crowbar’s late binding capability, but did not resolve how to reconcile that with these other projects.
  • Critical use cases to consider:
    • an API for provisioning (not sure if it needs to be more than the current one)
    • pick which Operating Systems go on which nodes (potentially with a rules engine?)
    • inventory capabilities of available nodes (like Ohai and Facter) into a database
    • inventory available operating systems
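
The inventory and OS-selection use cases above can be sketched with a toy rules engine; again, this is purely illustrative Python with invented names, not a proposed implementation:

```python
# Toy sketch of the provisioning use cases: inventory nodes into a
# "database" and pick an OS per node via first-match rules.

INVENTORY = []  # would be fed by Ohai/Facter-style discovery


def register(node):
    """Record a discovered node's facts in the inventory."""
    INVENTORY.append(node)


def pick_os(node, rules, default="ubuntu-12.04"):
    """Return the OS for a node: first matching rule wins."""
    for predicate, target_os in rules:
        if predicate(node):
            return target_os
    return default


# Example rule set: big-memory boxes get RHEL; everything else
# falls through to the default OS.
RULES = [
    (lambda n: n.get("ram_gb", 0) >= 96, "rhel-6"),
]
```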

Rackspace unveils OpenStack reference architecture & private cloud offering

Yesterday, Rackspace Cloud Builders unveiled both their open reference architecture (RA) and a private cloud offering (on GigaOM) based upon the RA.  The RA (which is well aligned with our Dell OpenStack RA) does a good job laying out the different aspects of an OpenStack deployment.  It also calls for the use of Dell C6100 servers and the open source version of Crowbar.

The Rackspace RA and Crowbar deployment barclamps share the same objective: sharing of best practices for OpenStack operations.

Over the last 12+ months, my team at Dell has had the opportunity to work with many customers on OpenStack deployment designs.  While no two of these are identical, they do share many similarities.  We are pleased to collaborate with Rackspace and others on capturing these practices as operational code (or “opscode” if you want a reference to the Chef cookbooks that are an intrinsic part of Crowbar’s architecture).

In our customer interactions, we hear clearly that Crowbar must remain flexible and ready to adapt to both customer on-site requirements and evolution within the OpenStack code base.  You are also telling us that there is a broader application space for Crowbar and we are listening to that too.

I believe that it will take some time for the community and markets to process today’s Rackspace announcements.  Rackspace is showing strong leadership in both sharing information and commercialization around OpenStack.  Both of these actions will drive responses from the community members.

Notes from 2011 Cloud Connect Event Day 2 (#ccevent)

With the OpenStack launch behind me, I have some time to attend the Cloud Connect Event.  I missed all the DevOps sessions, but got to geek out on the NoSQL & Big Data sessions.  I jumped to the private cloud track (based on Twitter traffic) and was rewarded for the shift.

I’m surprised at how much focus this cloud conference is dedicated to private cloud.  At other cloud conferences I’ve attended, the focus has been on learning how to use the cloud (specifically the public cloud).  This is the first cloud show I’ve attended that has so much emphasis, dialog and vendor feeding around private.  This was a suits & slacks show with few jeans, t-shirts, and pony tails.  Perhaps private cloud is where the $$$ is being spent now?

It definitely feels like using cloud has become assumed, but the best practices and tools are just emerging.

The twitter #ccevent stream is interesting but temporal.  I’m posting my raw (spelling optional) notes (below the more tag) because there is a lot of great content from the show to support and extend the twitter stream.  I’ll try to italicize some of the better lines.


Cloud Gravity & Shards

This post is the final post laying out a rethinking of how we view user and buyer motivations for public and private clouds.

In part 1, I laid out the “magic cube” that showed a more discrete technological breakdown of cloud deployments (see that for the MSH, MDH, MDO, UDO key).  In part 2, I piled higher and deeper business vectors onto the cube showing that the cost value of the vertices was not linear.  The costs were so unequal that they pulled our nice isometric cube into a cone.

The Cloud Gravity Well

To help make sense of cloud gravity, I’m adding a qualitative measure of friction.

Friction represents the cloud consumer’s willingness to adopt the requirements of our cloud vertices.  I commonly hear people say they are not willing to put sensitive data “in the cloud” or they are worried about a “lack of security.”  These practical concerns create significant friction against cloud adoption; meanwhile, being able to just “throw up” servers (yuck!) and avoiding IT restrictions make it easy (low friction) to use clouds.

Historically, it was easy to plot friction vs. cost.  There was a nice linear trend where providers simply lowered cost to overcome friction.  This has been fueling the current cloud boom.

The magic cube analysis shows another dynamic emerging because of competing drivers from management and isolation.  The dramatic savings from outsourced management are inhibited by the high friction of giving up data protection, isolation, control, and performance minimums.  I believe that my figure, while pretty, dramatically understates the friction gap between dedicated and shared hosting.  This tension creates a non-linear trend in which substantial customer traction will follow the more expensive offerings.  In fact, it may be impossible to overcome this friction with pricing pressure.

I believe this analysis shows that there’s a significant market opportunity for clouds that have dedicated resources yet are still managed and hosted by a trusted 3rd party.  On the other hand, this gravity well could turn out to be a black hole money pit.  Like all cloud revolutions, the timid need not apply.

Post Script: Like any marketing trend, there must be a name.  These clouds are not “private” in the conventional sense and I cringe at using “hybrid” for anything anymore.  If current public clouds are like hotels (or hostels) then these clouds are more like condos or managed community McMansions.  I think calling them “cloud shards” is very crisp, but my marketing crystal ball says “try again.”  Suggestions?

Cloud Business Vectors

In part 1 of this series, I laid out the “magic cube” that describes 8 combinations for cloud deployment.  The cube provides a finer-grained understanding than “public” vs. “private” clouds because we can now clearly see that there are multiple technology axes that create “private IT” and that can be differentiated.

The axes are: (detailed explanation)

  • X. Location: Hosted vs. On-site
  • Y. Isolation: Shared vs. Dedicated
  • Z. Management: Managed vs. Unmanaged

Cloud Cost Model

In this section, we take off our technologist pocket protectors and pick up our finance abacus.  The next level of understanding for the magic cube is to translate the technology axes into business vectors.  The business vectors are:

X. Capitalization:  OpEx vs. CapEx.  On the surface, this determines who has ownership of the resource, but the deeper issue is whether the resource is an investment (capital) or a consumable (operations).  Unless you’re talking about a co-lo cage, hosting models will be consumable because the host is leveraging (in the financial sense) its capital into operating revenue.

Y. Commitment: PayGo vs. Fixed.  Like a cell phone plan, you can pay for what you use (pay-as-you-go) or lock in to a fixed contract.  A fixed contract generally carries a premium based on volume even though the per-unit cost may be lower.  In my thinking, the fixed contract may include dedicated resource guarantees and additional privacy.

Z. Management: Insource vs. Outsource.  Don’t overthink this vector, but remember I’m not talking about offshoring!  If you are directly paying people to manage your cloud then you’ve insourced management.  If the host provides services, process or automation that reduces hiring requirements then you’re outsourcing IT.  It’s critical to realize that you can’t employ fractional people.  There are fundamental cloud skill sets and tools that must be provided to operate a cloud (including, but not limited to, DevOps).


If you were willing to do some cerebral calisthenics about these vectors, then you realized that they do not carry equal cost weights.  Let’s look at them from least to most.

  1. The commitment vector is easy to traverse in either direction.  It’s well-established human behavior that we’ll pay more to be more predictable, especially if that means we get more control or privacy.  If I had a dollar for everyone who swoons over cloud bursting, I’d go buy that personal jet pack.
  2. The capitalization vector is part of the driver to cloud as companies (and individuals) seek to avoid buying servers up front.  It also helps that clouds let you buy fractional servers and “throw away” servers that you don’t need.  While these OpEx aspects of cloud are nice, servers are really not that expensive to lease or idle.  Frankly, it’s the deployment and management of those assets that drives the TCO way up for CapEx clouds, but that’s not this vector so move along.
  3. The management vector is the silverback gorilla standing in the corner of our magic cube.  Acquiring and maintaining the operations expertise is a significant ongoing expense.  In many cases, companies simply cannot afford to adequately cover the needed skills, and this severely limits their ability to use technology competitively.  Hosts are much better positioned to manage cloud infrastructure because they enjoy economies of scale distributed between multiple customers.  This vector is heavily one-directional – once you fly without that critical Ops employee in favor of a host doing the work, it is unlikely you’ll ever hire that role again.

The unequal cost weights pull our cube out of shape.  They create a strong customer pull away from the self-managed & CapEx vertices and towards outsourced & OpEx.  I think of this distortion as a cloud gravity well that pulls customers down from private into public clouds. 

That’s enough for today.  You’ll have to wait for the gravity well analysis in part 3.