OpenStack automated high-availability deploy reality, SUSE shows off chops with Crowbar

While I’ve been focused on delivering next-generation kick-aaS-i-ness with Crowbar v2 (now called OpenCrowbar) and helping Dell and Red Hat co-engineer an OpenStack Powered Cloud, SUSE has been continuing to expand and polish the OpenStack deployment on Crowbar v1.  I’m always impressed by their commit activity (SUSE is the top committer in the Crowbar project) and was excited to see their Havana launch announcement.

Using Crowbar v1, SUSE is delivering a seriously robust automated OpenStack Havana implementation.  They have taken the time to build high availability (HA) across the framework including for Neutron, Heat and Ceilometer.

As an OpenStack Foundation board member, I hear a lot of hand-wringing in the community about ops practices and asking “is OpenStack ready for the enterprise?”  While I’m not sure how to really define “enterprise,” I do know that the SUSE Cloud Havana release shows that it’s possible to deliver a repeatable and robust OpenStack deployment.

This effort shows some serious DevOps automation chops and, since Crowbar is open, everyone in the community can benefit from their tuning.   Of course, I’d love to see these great capabilities migrate into the very active StackForge Chef OpenStack cookbooks that OpenCrowbar is designed to leverage.

Creating HA automation is a great achievement and an important milestone in capturing the true golden fleece – automated release-to-release upgrades.  We built the OpenCrowbar annealer with this objective in mind and I feel like it’s within reach.

SUSE Cloud powered by OpenStack: Demo using Crowbar

(Photo: OpenStack booth at HostingCon)

As much as I love talking about Crowbar and OpenStack, it’s even more fun to cheer for other people doing it!

SUSE’s been a great development partner for Crowbar and an active member of the OpenStack community.  I’m excited to see them giving a live demo today about their OpenStack technology stack (which includes Crowbar and Ceph).

Register for the Live Demo on Wed 06-26-2013 at 3:00 – 4:00 pm GMT to “learn about SUSE’s OpenStack distribution: SUSE Cloud with Dell Crowbar as the deployment mechanism and advanced features such as Ceph unified storage platform for object, block and file storage in the cloud.”

The presenter, Rick Ashford, lives in Austin and is a regular at the OpenStack Austin Meetups.  He has been working with Linux and open-source software since 1998 and currently specializes in the OpenStack cloud platform and the SUSE ecosystem surrounding it.

Crowbar 2.0 Design Summit Notes (+ open weekly meetings starting)

I could not be happier with the results Crowbar collaborators and my team at Dell achieved around the 1st Crowbar design summit. We had great discussions and even better participation.

The attendees represented major operating system vendors, configuration management companies, OpenStack hosting companies, OpenStack cloud software providers, OpenStack consultants, OpenStack private cloud users, and (of course) a major infrastructure provider. That’s a very complete cross-section of the cloud community.

I knew from the start that we had too little time and, thankfully, people were tolerant of my need to stop the discussions. In the end, we were able to cover all the planned topics. This was important because all these features are interlocked so discussions were iterative. I was impressed with the level of knowledge at the table and it drove deep discussion. Even so, there are still parts of Crowbar that are confusing (networking, late binding, orchestration, chef coupling) even to collaborators.

In typing up these notes, it becomes even more blindingly obvious that the core features for Crowbar 2 are highly interconnected. That’s no surprise technically; however, it will make the notes harder to follow because of knowledge bootstrapping. You need to take time to grok the gestalt and surf the zeitgeist.

Collaboration Invitation: I wanted to remind readers that this summit was just the kick-off for a series of open weekly design (Tuesdays 10am CDT) and coordination (Thursdays 8am CDT) meetings. Everyone is welcome to join in those meetings – information is posted, recorded, folded, spindled and mutilated on the Crowbar 2 wiki page.

These notes are my reflection of the online etherpad notes that were made live during the meeting. I’ve grouped them by design topic.

Introduction

  • Contributors need to sign CLAs
  • We are refactoring Crowbar at this time because we have a collection of interconnected features that could not be decoupled
  • Some items (Database use, Rails3, documentation, process) are not for debate. They are core needs but require little design.
  • There are 5 key topics for the refactor: online mode, networking flexibility, OpenStack pull from source, heterogeneous/multi operating systems, and being CMDB agnostic
  • Due to time limits, we have to stop discussions and continue them online.
  • We are hoping to align Crowbar 2 beta and OpenStack Folsom release.

Online / Connected Mode

  • Online mode is more than simply internet connectivity. It is the foundation of how Crowbar stages dependencies and components for deploy. It’s required for heterogeneous O/S, pull from source and it has dependencies on how we model networking so nodes can access resources.
  • We are thinking of using caching proxies to stage resources. This would allow isolated production environments and preserve the ability to run everything from the ISO without a connection (that is still a key requirement for us); see the sketch after this list.
  • Suse’s Crowbar fork does not build an ISO; instead, it relies on RPM packages for barclamps and their dependencies.
  • Pulling packages directly from the Internet has proven to be unreliable, so this method cannot rely on that alone.
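
To make the staging idea a bit more concrete, here is a minimal Python sketch of the pattern discussed above: prefer a caching proxy for upstream fetches and fall back to the ISO-staged copy when there is no connectivity. The proxy address, staging path, and helper name are all hypothetical illustrations, not Crowbar code.

```python
# Sketch only: prefer a caching proxy, fall back to ISO-staged packages offline.
# PROXY_URL, ISO_STAGING, and fetch_package are hypothetical names.
import os
import urllib.request

PROXY_URL = "http://admin-node:3128"        # hypothetical caching proxy address
ISO_STAGING = "/srv/tftpboot/iso-packages"  # hypothetical path for ISO-staged packages


def fetch_package(name: str, upstream_url: str, dest_dir: str = "/var/cache/packages") -> str:
    """Fetch a package via the caching proxy; fall back to the ISO copy when offline."""
    os.makedirs(dest_dir, exist_ok=True)
    dest = os.path.join(dest_dir, name)
    try:
        # Route the upstream request through the caching proxy so isolated
        # environments only need the proxy to reach the outside world.
        opener = urllib.request.build_opener(
            urllib.request.ProxyHandler({"http": PROXY_URL, "https": PROXY_URL})
        )
        with opener.open(upstream_url, timeout=30) as resp, open(dest, "wb") as out:
            out.write(resp.read())
    except OSError:
        # No connectivity: fall back to the copy staged from the install ISO.
        staged = os.path.join(ISO_STAGING, name)
        if not os.path.exists(staged):
            raise
        dest = staged
    return dest
```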

Install From Source

  • This feature is mainly focused on OpenStack, but it could be applied more generally. The principles we are looking at could apply to any application where the source code is changing quickly (all of them?!). Hadoop is an obvious second candidate.
  • We spent some time reviewing the use-cases for this feature. While this appears to be very dev and pre-release focused, there are important applications for production. Specifically, we expect that scale customers will need to run ahead of or slightly adjacent to trunk due to patches or proprietary code. In both cases, it is important that users can deploy from their repository (see the sketch after this list).
  • We discussed briefly our objective to pull configuration from upstream (not just OpenStack, but potentially any common cookbooks/modules). This topic is central to the CMDB agnostic discussion below.
  • The overall sentiment is that this could be a very powerful capability if we can manage to make it work. There is a substantial challenge in tracking dependencies – current RPMs and Debs do a good job of this and other configuration steps beyond just the bits. Replicating that functionality is the real obstacle.
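
To illustrate the “deploy from your own repository” case, here is a minimal sketch assuming plain git plus pip rather than the actual barclamp mechanics. The repo URL, branch, and function name are purely illustrative.

```python
# Sketch only: pin a service to a git ref in a repo you control and install from source.
import subprocess


def install_from_source(repo_url: str, ref: str, checkout_dir: str) -> None:
    """Clone a service at a pinned ref and install it with pip."""
    subprocess.check_call(["git", "clone", repo_url, checkout_dir])
    subprocess.check_call(["git", "checkout", ref], cwd=checkout_dir)
    # pip resolves the Python dependencies declared by the project itself;
    # the hard part noted above (system packages and config steps that RPMs
    # and debs normally handle) still has to come from the deployment layer.
    subprocess.check_call(["pip", "install", "."], cwd=checkout_dir)


# Example: a fork carrying local patches (hypothetical URL and branch).
# install_from_source("https://example.com/ourfork/nova.git", "stable/patched",
#                     "/opt/stack/nova")
```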

CMDB agnostic (decoupling Chef)

  • This feature is confusing because we are not eliminating the need for a configuration management database (CMDB) tool like Chef; instead, we are decoupling Crowbar from a single CMDB and moving to a pluggable model using an abstraction layer.
  • It was stressed that Crowbar does orchestration – we do not rely on convergence over multiple passes to get the configuration correct.
  • We had strong agreement that the modules should not be tightly coupled but did need a consistent way (API? Consistent namespace? Pixie dust?) to share data between each other. Our priority is to maintain loose coupling and follow integration by convention and best practices rather than rigid structures.
  • The abstraction layer needs to have both import and export functions (see the sketch after this list)
  • Crowbar will use attribute injection so that Cookbooks can leverage Crowbar but will not require Crowbar to operate. Crowbar’s database will provide the links between the nodes instead of having to wedge it into the CMDB.
  • In 1.x, the networking was the most tightly coupled to Chef. This is a major part of the refactor and of the modeling for Crowbar’s database.
  • There are a lot of notes captured about this on the etherpad – I recommend reviewing them
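
To show the shape of the abstraction layer we talked about, here is a minimal Python sketch of a pluggable CMDB adapter with import and export functions. The class and method names are hypothetical, not the real Crowbar API; they are just a way to picture the decoupling.

```python
# Sketch only: a pluggable CMDB abstraction with import/export functions.
from abc import ABC, abstractmethod


class CmdbAdapter(ABC):
    """Abstraction layer between Crowbar's own database and a CMDB back end."""

    @abstractmethod
    def export_node(self, node_name: str, attributes: dict) -> None:
        """Push Crowbar-managed attributes out to the CMDB (attribute injection)."""

    @abstractmethod
    def import_node(self, node_name: str) -> dict:
        """Pull discovered/converged attributes back from the CMDB."""


class ChefAdapter(CmdbAdapter):
    """One possible back end; a Puppet or Salt adapter would implement the same interface."""

    def export_node(self, node_name: str, attributes: dict) -> None:
        print(f"chef: setting attributes on {node_name}: {attributes}")

    def import_node(self, node_name: str) -> dict:
        return {"node": node_name, "run_list": []}
```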

Heterogeneous OS (bare metal provisioning and beyond)

  • This topic was the most divergent of all our topics because most of the participants were using some variant of their own bare metal provisioning project (check the etherpad for the list).
  • Since we can’t pack an unlimited set of stuff on the ISO, this feature requires online mode.
  • Most of these projects do nothing beyond OS provisioning; however, their simplicity is beneficial. Crowbar needs to consider users who just want a streamlined OS provisioning experience.
  • We discussed Crowbar’s late binding capability, but did not resolve how to reconcile that with these other projects.
  • Critical use cases to consider:
    • an API for provisioning (not sure if it needs to be more than the current one)
    • pick which Operating Systems go on which nodes (potentially with a rules engine?)
    • inventory capabilities of available nodes (like ohai and facter) into a database (see the sketch after this list)
    • inventory available operating systems
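
As a rough illustration of the inventory use cases above, here is a minimal sketch that stores ohai/facter-style facts per node and records an OS assignment. The schema and field names are assumptions for illustration, not Crowbar’s actual data model.

```python
# Sketch only: capture per-node facts and record which OS to provision on each node.
import json
import sqlite3

conn = sqlite3.connect("inventory.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS nodes (name TEXT PRIMARY KEY, facts TEXT, target_os TEXT)"
)


def register_node(name: str, facts: dict) -> None:
    """Store a node's discovered facts (memory, NICs, disks, ...)."""
    conn.execute(
        "INSERT OR REPLACE INTO nodes (name, facts, target_os) VALUES (?, ?, ?)",
        (name, json.dumps(facts), None),
    )
    conn.commit()


def assign_os(name: str, target_os: str) -> None:
    """Record which operating system should be provisioned on the node."""
    conn.execute("UPDATE nodes SET target_os = ? WHERE name = ?", (target_os, name))
    conn.commit()


# Example rule: purely illustrative node name, facts, and OS label.
register_node("d00-00-00-00-00-01", {"memory_mb": 98304, "nics": 4})
assign_os("d00-00-00-00-00-01", "sles-11-sp2")
```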

A SuPEr New Linux for Crowbar! SuSE shows off port and OpenStack deploy

During last week’s OpenStack Essex Deploy Day, we featured several OpenStack ecosystem presentations including SuSE, Morphlabs, enStratus, Opscode, and Inktank (Ceph).

SuSE’s presentation (video) demonstrated deploying OpenStack using a SuSE port of Crowbar (including a reskinned UI)!

This is significant for both SuSE and Crowbar:

  1. SuSE, a platinum member of the OpenStack foundation, now has an OpenStack Essex distribution. They are offering this deployment as an on-request beta.
  2. Crowbar has now been demonstrated operating on the three top Linux distributions.

SuSE is advancing some key architectural proposals for Crowbar because their implementation downloads Crowbar as a package rather than bundling everything into an ISO.

With the Hadoop 4 & OpenStack Essex releases nearly put to bed, it’s time to bring some of this great innovation into the Crowbar trunk.

Austin OpenStack Meetup (January Minutes) + OpenStack Foundation Web Cast!

Sorry for the brevity… At the last Austin OpenStack meetup, we had >60 stackers!  Some from as far away as Portland and Boston (as in Oregon and Massachusetts).

Notes:

  • Suse introduced their OpenStack beta and talked about Suse Studio, which can deploy images against the OpenStack APIs
  • I showed off DevStack.org code that can set up the trunk of OpenStack (now Essex) in about 10 minutes on a single node.  Great for developers!
  • I showed an OpenStack Diablo Final deployment from Crowbar.  I focused mainly on Dashboard and used our reference architecture (see below) as an illustration of the many parts.
  • Matt Ray suggested everyone watch the webcasts about the OpenStack Foundation (Thurs 6pm central  & Friday 9am central)
  • We planned the next few meetups.
    • For February, we’ll talk about Swift and Dashboard.
    • For March, we’ll talk about Essex and DevStack to prep for the next design summit (in SF).
    • For April, we’ll debrief the conference

Thank you Suse and Dell (my employer) for sponsoring!   The next meetup is sponsored by Canonical.