Using Crowbar v1, SUSE is delivering a seriously robust automated OpenStack Havana implementation. They have taken the time to build high availability (HA) across the framework including for Neutron, Heat and Ceilometer.
As an OpenStack Foundation board member, I hear a lot of hand-wringing in the community about ops practices and asking “is OpenStack ready for the enterprise?” While I’m not sure how to really define “enterprise,” I do know that SUSE Cloud (the Havana release version) shows that it’s possible to deliver a repeatable and robust OpenStack deployment.
This effort shows some serious DevOps automation chops and, since Crowbar is open, everyone in the community can benefit from their tuning. Of course, I’d love to see these great capabilities migrate into the very active StackForge Chef OpenStack cookbooks that OpenCrowbar is designed to leverage.
Creating HA automation is a great achievement and an important milestone in capturing the true golden fleece – automated release-to-release upgrades. We built the OpenCrowbar annealer with this objective in mind and I feel like it’s within reach.
Overall, I’m happy with our three days of hacking on Crowbar 2. We’ve reached the critical “deploys workload” milestone and I’m excited about how well the design is working and how clearly we’ve been able to articulate our approach in code & UI.
Of course, it’s worth noting again that Crowbar 1 has also had significant progress on OpenStack Havana workloads running on Ubuntu, Centos/RHEL, and SUSE/SLES.
Here are the focus items from the hack:
Documentation – cleaned up documentation specifically by updating the README in all the projects to point to the real documentation in an effort to help people find useful information faster. Reminder: if unsure, put documentation in barclamp-crowbar/doc!
Docker integration for Crowbar 2 progressed. You can now install Docker from internal packages on an admin node. We have a strategy for allowing containers to be workload nodes.
Ceph installed as a workload is working. This workload revealed the need for UI improvements and additional flags for roles (hello “cluster”).
Progress on OpenSUSE and Fedora as Crowbar 2 install targets. This gets us closer to true multi-O/S support.
OpenSUSE 13.1 set up as a dev environment, including tests. This is a target working environment.
Being 12 hours offset from the US really impacted remote participation.
One thing that became obvious during the hack is that we’ve reached a point in Crowbar 2 development where it makes sense to move the work into distinct repositories. There are build, organization and packaging changes that would simplify Crowbar 2 and make it easier to start using; however, we’ve been trying to maintain backwards compatibility with Crowbar 1. This is becoming impossible; consequently, it appears time to split them. Here are some items for consideration:
Crowbar 2 could collect barclamps into larger “workload” repos so there would be far fewer repos (although possibly still barclamps within a workload). For example, there would be a “core” set that includes all the current CB2 barclamps. OpenStack, Ceph and Hadoop would be their own sets.
Crowbar 2 would have a clearly named “build” or “tools” repo instead of having it called “crowbar”
Crowbar 2 framework would be either part of “core” or called “framework”
We would put these in a new organization (“Crowbar2” or “Crowbar-2”) so that the clutter of Crowbar’s current organization is avoided.
While we clearly need to break apart the repo, this suggestion needs more community discussion!
The Crowbar community has a tradition of “day zero ops” community support for the latest OpenStack release at the summit using our pull-from-source capability. This release we’ve really gone the extra mile by doing it on THREE Linux distros (Ubuntu, RHEL & SLES) in parallel, with a significant number of projects and new capabilities included.
I’m especially excited about the Crowbar implementation of Havana Docker support, which required advanced configuration of Nova and Glance. The community also added Heat and Ceilometer in the last release cycle, plus High Availability (“Titanium”) deployment work is in active development. Did I mention that Crowbar is rocking OpenStack deployments? No, because it’s redundant to mention that. We’ll upload ISOs of this work for easy access later in the week.
While my team at Dell remains a significant contributor to this work, I’m proud to point out SUSE’s Cloud leadership and contributions too (including the new Ceph barclamp & integration). Crowbar has become a true multi-party framework!
Want to learn more? If you’re in Hong Kong, we are hosting a Crowbar Developer Community Meetup on Monday, November 4, 2013, 9:00 AM to 12:00 PM (HKT) in the SkyCity Marriott SkyZone Meeting Room. Dell, dotCloud/Docker, SUSE and others will lead a lively technical session to review and discuss the latest updates, advantages and future plans for the Crowbar Operations Platform. You can expect to see some live code demos, and participate in a review of the results of a recent Crowbar 2 hackathon. Confirm your seat here – space is limited! (I expect that we’ll also stream this event using Google Hangout, watch Twitter #Crowbar for the feed)
My team at Dell has a significant presence at the OpenStack Summit in Hong Kong (details about activities including sponsored parties). Be sure to seek out my fellow OpenStack Board Member Joseph George, Dell OpenStack Product Manager Kamesh Pemmaraju and Enstratius/Dell Multi-Cloud Manager Founder George Reese.
Note: The work referenced in this post is about Crowbar v1. We’ve also reached critical milestones with Crowbar v2 and will begin implementing Havana on that platform shortly.
As much as I love talking about Crowbar and OpenStack, it’s even more fun to cheer for other people doing it!
SUSE’s been a great development partner for Crowbar and an active member of the OpenStack community. I’m excited to see them giving a live demo today about their OpenStack technology stack (which includes Crowbar and Ceph).
Register for the Live Demo on Wed 06-26-2013 at 3.00 – 4.00 pm GMT to “learn about SUSE’s OpenStack distribution: SUSE Cloud with Dell Crowbar as the deployment mechanism and advanced features such as Ceph unified storage platform for object, block and file storage in the cloud.”
The presenter, Rick Ashford, lives in Austin and is a regular at the OpenStack Austin Meetups. He has been working with Linux and open-source software since 1998 and currently specializes in the OpenStack cloud platform and the SUSE ecosystem surrounding it.
If you are coming to the OpenStack summit in San Diego next week then please find me at the show! I want to hear from you about the Foundation, community, OpenStack deployments, Crowbar and anything else. Oh, and I just ordered a handful of Crowbar stickers if you wanted some CB bling.
I could not be happier with the results Crowbar collaborators and my team at Dell achieved around the 1st Crowbar design summit. We had great discussions and even better participation.
The attendees represented major operating system vendors, configuration management companies, OpenStack hosting companies, OpenStack cloud software providers, OpenStack consultants, OpenStack private cloud users, and (of course) a major infrastructure provider. That’s a very complete cross-section of the cloud community.
I knew from the start that we had too little time and, thankfully, people were tolerant of my need to stop the discussions. In the end, we were able to cover all the planned topics. This was important because all these features are interlocked so discussions were iterative. I was impressed with the level of knowledge at the table and it drove deep discussion. Even so, there are still parts of Crowbar that are confusing (networking, late binding, orchestration, chef coupling) even to collaborators.
In typing up these notes, it becomes even more blindingly obvious that the core features for Crowbar 2 are highly interconnected. That’s no surprise technically; however, it will make the notes harder to follow because of knowledge bootstrapping. You need to take time and grok the gestalt and surf the zeitgeist.
Collaboration Invitation: I wanted to remind readers that this summit was just the kick-off for a series of open weekly design (Tuesdays 10am CDT) and coordination (Thursdays 8am CDT) meetings. Everyone is welcome to join in those meetings – information is posted, recorded, folded, spindled and mutilated on the Crowbar 2 wiki page.
These notes are my reflection of the online etherpad notes that were made live during the meeting. I’ve grouped them by design topic.
We are refactoring Crowbar at this time because we have a collection of interconnected features that could not be decoupled.
Some items (Database use, Rails3, documentation, process) are not for debate. They are core needs but require little design.
There are 5 key topics for the refactor: online mode, networking flexibility, OpenStack pull from source, heterogeneous/multi operating systems, and being CMDB agnostic.
Due to time limits, we have to stop discussions and continue them online.
We are hoping to align Crowbar 2 beta and OpenStack Folsom release.
Online / Connected Mode
Online mode is more than simply internet connectivity. It is the foundation of how Crowbar stages dependencies and components for deploy. It’s required for heterogeneous O/S, pull from source and it has dependencies on how we model networking so nodes can access resources.
We are considering caching proxies to stage resources. This would allow isolated production environments and preserve the ability to run everything from ISO without a connection (that is still a key requirement for us).
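To make the offline-vs-proxy idea concrete, here is a minimal sketch (purely illustrative, not Crowbar code; `PackageSource` and the URLs are hypothetical) of how a provisioner might pick a package source: prefer the local ISO mirror when disconnected, and fall back to a caching proxy when online.

```ruby
# Hypothetical resolver for a node's package base URL.
# Offline: serve from the local ISO mirror on the admin node.
# Online: route through a caching proxy so packages get staged locally.
class PackageSource
  ISO_MIRROR    = "file:///srv/tftpboot/ubuntu_dvd"  # illustrative path
  CACHING_PROXY = "http://admin-node:3128"           # illustrative proxy

  def initialize(online:)
    @online = online
  end

  # Base URL nodes should use for package installs.
  def base_url
    @online ? CACHING_PROXY : ISO_MIRROR
  end
end

puts PackageSource.new(online: false).base_url  # file:///srv/tftpboot/ubuntu_dvd
puts PackageSource.new(online: true).base_url   # http://admin-node:3128
```

The point of the design is that the ISO path stays first-class: the proxy is an optimization for connected environments, not a replacement for the disconnected install.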
SUSE’s Crowbar fork does not build an ISO; instead it relies on RPM packages for barclamps and their dependencies.
Pulling packages directly from the Internet has proven to be unreliable, so this method cannot rely on that alone.
Install From Source
This feature is mainly focused on OpenStack, but it could be applied more generally. The principles we are looking at could apply to any application where the source code is changing quickly (all of them?!). Hadoop is an obvious second candidate.
We spent some time reviewing the use-cases for this feature. While this appears to be very dev and pre-release focused, there are important applications for production. Specifically, we expect that scale customers will need to run ahead of or slightly adjacent to trunk due to patches or proprietary code. In both cases, it is important that users can deploy from their repository.
We discussed briefly our objective to pull configuration from upstream (not just OpenStack, but potentially any common cookbooks/modules). This topic is central to the CMDB agnostic discussion below.
The overall sentiment is that this could be a very powerful capability if we can manage to make it work. There is a substantial challenge in tracking dependencies – current RPMs and Debs do a good job of this and other configuration steps beyond just the bits. Replicating that functionality is the real obstacle.
CMDB agnostic (decoupling Chef)
This feature is confusing because we are not eliminating the need for a configuration management database (CMDB) tool like Chef; instead, we are decoupling Crowbar from a single CMDB and moving to a pluggable model using an abstraction layer.
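A pluggable CMDB layer usually boils down to a small adapter interface that the framework talks to, with one implementation per backend. The following is an illustrative sketch only (class names like `CmdbAdapter` and `ChefAdapter` are made up for this post, not actual Crowbar APIs):

```ruby
# Abstract interface Crowbar-like orchestration would code against.
class CmdbAdapter
  def export(node, attributes)  # push desired config down to the CMDB
    raise NotImplementedError
  end

  def import(node)              # pull discovered facts back up
    raise NotImplementedError
  end
end

# One concrete backend; a PuppetAdapter etc. could sit alongside it.
# An in-memory hash stands in for the real Chef server here.
class ChefAdapter < CmdbAdapter
  def initialize
    @store = {}
  end

  def export(node, attributes)
    @store[node] = attributes
  end

  def import(node)
    @store.fetch(node, {})
  end
end

adapter = CmdbAdapter.new  # swap in ChefAdapter.new, PuppetAdapter.new, ...
```

Note that both directions (export and import) are first-class, which matches the point below that the abstraction layer needs to move data both ways.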
It was stressed that Crowbar does orchestration – we do not rely on convergence over multiple passes to get the configuration correct.
We had strong agreement that the modules should not be tightly coupled but did need a consistent way (API? Consistent namespace? Pixie dust?) to share data between each other. Our priority is to maintain loose coupling and follow integration by convention and best practices rather than rigid structures.
The abstraction layer needs to have both import and export functions.
Crowbar will use attribute injection so that Cookbooks can leverage Crowbar but will not require Crowbar to operate. Crowbar’s database will provide the links between the nodes instead of having to wedge it into the CMDB.
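The attribute-injection idea can be shown with a tiny hedged example (the attribute names and hostname below are invented for illustration): the cookbook only ever reads ordinary node attributes; when Crowbar orchestrates, it injects values ahead of the run, and when Crowbar is absent, the cookbook falls back to its own defaults.

```ruby
# Plain node attributes, as a cookbook would see them.
node = { "nova" => { "db_host" => nil } }

# Crowbar-side step (only runs when Crowbar orchestrates): inject the
# cross-node link from Crowbar's own database, not from the CMDB.
crowbar_links = { "nova-database" => "db01.example.com" }  # hypothetical
node["nova"]["db_host"] = crowbar_links["nova-database"]

# Cookbook-side: works with or without the injection step above.
db_host = node["nova"]["db_host"] || "localhost"
puts db_host  # db01.example.com
```

This is what keeps the cookbooks upstream-friendly: nothing in the recipe references Crowbar, so the same cookbook runs standalone with its defaults.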
In 1.x, networking was the most tightly coupled to Chef. This is a major part of the refactor and of the modeling for Crowbar’s database.
There are a lot of notes captured about this on the etherpad – I recommend reviewing them.
Heterogeneous OS (bare metal provisioning and beyond)
This topic was the most divergent of all our topics because most of the participants were using some variant of their own bare metal provisioning project (check the etherpad for the list).
Since we can’t pack an unlimited set of stuff on the ISO, this feature requires online mode.
Most of these projects do nothing beyond OS provisioning; however, their simplicity is beneficial. Crowbar needs to consider users who just want a streamlined OS provisioning experience.