The seeds for Crowbar 2.0 have been in the 1.x code base for a while and were recently accelerated by SuSE. With the Dell | Cloudera 4 Hadoop and Essex OpenStack-powered releases behind us, we will now focus entirely on bringing these seeds to fruition over the next two months.
Getting the core Crowbar 2.0 changes working is not a major refactoring effort in calendar time; however, it will impact current Crowbar developers by changing and improving the programming APIs. The Dell Crowbar team decided to treat this as a focused refactoring effort because several important changes are tightly coupled; we cannot solve them independently without causing a larger disruption.
All of the Crowbar 2.0 changes address issues and concerns raised in the community and are needed to support the expansion of our OpenStack and Hadoop application deployments.
Our technical objective for Crowbar 2.0 is to simplify and streamline development efforts as the development and user community grows. We are seeking to:
- simplify our use of Chef and eliminate Crowbar requirements in our Opscode Chef recipes
  - reduce the initial effort required to leverage Crowbar
  - open Crowbar to a broader audience (see Upstreaming)
- provide heterogeneous / multiple operating system deployments. This enables:
  - multiple versions of the same OS running side by side during upgrades
  - different operating systems running simultaneously (and dealing with heterogeneous packaging issues)
  - accommodation of no-agent systems, such as locked-down hosts (e.g., virtualization hosts) and switches (aka external entities)
  - UEFI booting in Sledgehammer
- strengthen networking abstractions
  - allow networking configurations to be created dynamically (so that users are not locked into choices made before Crowbar deployment)
  - better manage connected operations
  - enable pull-from-source deployments that are ahead of (or forked from) available packages
- improve Crowbar’s core database and state machine to enable:
  - operation at larger scale
  - controlled production migrations and upgrades
- other important items:
  - couple documentation more closely to current features and make it easier to maintain
  - upgrade to Rails 3 to simplify the code base and improve security and performance
  - deepen automated test coverage and capabilities
Beyond these great technical targets, we want Crowbar 2.0 to address barriers to adoption that have been raised by our community, customers and partners. We have been tracking concerns about the learning curve for adding barclamps, the complexity of networking configuration and the packaging into a single ISO.
We will kick off the community part of this effort with an online review on 7/16 (details).
PS: why a refactoring?
My team at Dell does not take on any refactoring changes lightly because they are disruptive to our community; however, a convergence of requirements has made it necessary to update several core components simultaneously. Specifically, we found that desired changes in networking, operating systems, packaging, configuration management, scale and hardware support all required interlocked changes. We have been bringing many of these changes into the code base in preparation and have reached a point where the next steps require changing Crowbar 1.0 semantics.
We are first and foremost an incremental architecture & lean development team – Crowbar 2.0 will have the smallest footprint needed to begin the transformations that are currently blocking us. There is significant room during and after the refactor for the community to shape Crowbar.
Hi Rob,
I’ve been working with Crowbar for the past week and was curious about having multiple ISOs from which we can boot nodes. As of now, I’m able to boot newly racked machines from the Crowbar admin node and they get Ubuntu 12.04. Can I add another flavor of Ubuntu to the existing Crowbar admin node so that I can switch the OS that gets installed on new machines?
Thanks,
Hemanth
Not in 1.x, but it’s a major feature for Crowbar 2. So, yes, that’s exactly what we have in mind. You could also boot nodes via Crowbar that we don’t end up managing.
Hi Rob,
One more quick question: is there a user guide for Crowbar that covers the CLI? I’d like to work with the CLI instead of the UI in a browser.
Thanks,
Hemanth
Sorry for the delay. We don’t have the CLI documented the way the UI is; that’s something I’d love to see happen in the new documentation for the refactoring. Most of the CLI (in /opt/dell/bin) prints help for its parameters if you don’t provide valid inputs. That’s not documentation, but it’s at least some guidance.
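For example, poking at the tools on the admin node looks roughly like this (the subcommand names are from the 1.x CLI as I recall them; treat them as illustrative rather than a reference):

```bash
# The barclamp CLI tools live in /opt/dell/bin on the admin node.
export PATH=$PATH:/opt/dell/bin

# Running a command without valid arguments prints its usage, e.g.:
crowbar machines help   # usage for the machines subcommands
crowbar machines list   # list the nodes Crowbar has discovered
```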
Is there a plan to implement Crowbar admin node backup, so that if the admin node crashes we can bring in a new admin node that already knows about the existing OpenStack cloud and gets to work immediately?
Yes. The objective of the Crowbar 2.0 work is to enable exactly that kind of feature. It all comes down to the database: in Crowbar 1.0, you could set up Chef in HA and get the same benefit. In Crowbar 2.0, we’re talking about HA databases and automatic backups. Redundancy & reliability are top-of-mind features.
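As a rough sketch of the automatic-backup half of that idea (the database name, user and paths below are hypothetical assumptions, not the actual Crowbar 2.0 mechanism, which is still being designed), a periodic dump and restore could look something like:

```bash
# Hypothetical nightly dump of the admin node's database
# (database name, user and paths are illustrative assumptions).
pg_dump -U crowbar -F c -f /backup/crowbar-$(date +%F).dump crowbar

# Restoring onto a replacement admin node would then be roughly:
pg_restore -U crowbar -d crowbar --clean /backup/crowbar-2012-07-16.dump
```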