I was wrong: I underestimated how fast these issues could be addressed.
The Kubernetes Helm work out of the AT&T Comm Dev lab takes on the integration with a “do it the K8s native way” approach that the RackN team finds very effective. In fact, we’ve created a fully integrated Digital Rebar deployment that lays down Kubernetes using Kargo and then adds OpenStack via Helm. The provisioning automation includes a Ceph cluster to provide stateful sets for data persistence.
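To make that sequence concrete, here is a minimal sketch (my illustration, not RackN’s actual automation): Kargo’s ansible-playbook entry point lays down Kubernetes, then Helm installs Ceph and the OpenStack charts. The inventory path and chart names are assumptions based on typical Kargo and openstack-helm layouts.

```python
#!/usr/bin/env python3
"""Illustrative sketch of the deployment sequence described above (my
illustration, not RackN's actual automation): Kargo lays down Kubernetes,
then Helm installs Ceph and the OpenStack charts. The inventory path and
chart names are assumptions based on typical Kargo/openstack-helm layouts."""
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Kubernetes via Kargo (ansible-playbook is Kargo's entry point).
run(["ansible-playbook", "-i", "inventory/inventory.cfg", "cluster.yml"])

# 2. Ceph first: the OpenStack stateful sets need persistent storage.
run(["helm", "install", "./ceph", "--namespace=ceph", "--name=ceph"])

# 3. OpenStack services via Helm, in rough dependency order.
for chart in ["mariadb", "rabbitmq", "keystone", "glance", "nova", "neutron"]:
    run(["helm", "install", "./" + chart, "--namespace=openstack", "--name=" + chart])
```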
This joint approach dramatically reduces operational challenges associated with running OpenStack without taking over a general purpose Kubernetes infrastructure for a single task.
Given the rise of SRE thinking, the RackN team believes that this approach changes the game for OpenStack deployments and will ultimately dominate the field (which is already mainly containerized). There is still work to be completed: some complex configuration is required so that Kubernetes CNI and Neutron can collaborate, allowing containers and VMs to cross-communicate.
We are looking for companies that want to join in this work and fast-track it into production. If this is interesting, please contact us at email@example.com.
Why should you sponsor? Current OpenStack operators facing “fork-lift upgrades” should want to find a path like this one that ensures future upgrades are baked into the plan. This approach provides a fast track to a general purpose, enterprise grade, upgradable Kubernetes infrastructure.
Closing note from my past presentations: We’re making progress on the technical aspects of this integration; however, my concerns about market positioning remain.
1. Fast Installation
You can go from nothing to a distributed Ceph cluster in an hour. Need to rehearse on VMs? That’s even faster. Want to test and retune your configuration? Make some changes, take a coffee break and retest. Of course, with redeploys that fast, you can iterate until you’ve got it exactly right.
2. Automatically Optimized Disk Configuration
The RackN update optimizes the Ceph installation for disk performance by finding and flagging SSDs. That means that our deploy just works(tm) without you having to reconfigure your OS provisioning scripts or vendor disk layout.
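The SSD detection can be pictured with the Linux sysfs “rotational” flag; here is a minimal sketch of the technique (my illustration, not RackN’s actual code):

```python
#!/usr/bin/env python3
"""Minimal sketch of SSD detection via the Linux sysfs "rotational" flag
(0 = non-rotational, i.e. SSD/NVMe; 1 = spinning disk). An illustration
of the technique, not RackN's actual code."""
import glob

def classify_disks():
    ssds, hdds = [], []
    for path in glob.glob("/sys/block/*/queue/rotational"):
        dev = path.split("/")[3]  # e.g. "sda"
        with open(path) as f:
            rotational = f.read().strip()
        (hdds if rotational == "1" else ssds).append(dev)
    return ssds, hdds

if __name__ == "__main__":
    ssds, hdds = classify_disks()
    print("SSDs (candidates for Ceph journals):", ssds)
    print("HDDs (candidates for OSD data):     ", hdds)
```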
3. Cluster Building and Balancing
This update lets you choose which roles you want on which nodes before you commit to the deployment, so you can decide the right OSD-to-MON ratio for your needs. If you expand your cluster, the system will automatically rebalance it.
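As a rough sketch of that placement logic (mine, not the actual Crowbar role engine), the common rule of thumb is an odd-sized monitor quorum that grows slowly with node count:

```python
"""Illustrative role-placement sketch (not the actual Crowbar logic):
pick an odd number of monitors that grows slowly with cluster size,
and put OSDs everywhere else."""

def plan_roles(nodes):
    # Rule of thumb: 3 MONs for small/mid clusters, 5 beyond that.
    mon_count = 3 if len(nodes) < 20 else 5
    return {"ceph-mon": nodes[:mon_count], "ceph-osd": nodes[mon_count:]}

print(plan_roles(["node%d" % i for i in range(1, 8)]))
# {'ceph-mon': ['node1', 'node2', 'node3'],
#  'ceph-osd': ['node4', 'node5', 'node6', 'node7']}
```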
4. Advanced Networking Topology & IPv6
Using the network conduit abstraction, you can separate the front-end and back-end networks for the cluster. We also take advantage of native IPv6 support and can even use IPv6 as the preferred addressing.
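In plain Ceph terms, the front/back split and the IPv6 preference map onto a handful of ceph.conf settings; a short sketch follows (the addresses are made-up examples, not defaults):

```python
"""Sketch: how the front/back network split and IPv6 preference map onto
standard ceph.conf settings. Addresses are made-up examples."""

def render_ceph_networks(public_net, cluster_net, prefer_ipv6=True):
    lines = [
        "[global]",
        "public network  = %s" % public_net,   # client-facing (front) traffic
        "cluster network = %s" % cluster_net,  # replication/backfill (back) traffic
    ]
    if prefer_ipv6:
        lines.append("ms bind ipv6 = true")
    return "\n".join(lines)

print(render_ceph_networks("2001:db8:a::/64", "2001:db8:b::/64"))
```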
Ceph is the leading open source block storage back-end for OpenStack; however, it’s tricky to install and few vendors invest the effort to hardware-optimize their configurations. Like any foundation layer, configuration or performance errors in the storage layer will impact the entire system. Further, the Ceph infrastructure needs to be built before OpenStack is installed.
OpenCrowbar was designed to deploy platforms like Ceph. It has detailed knowledge of the physical infrastructure and sufficient orchestration to synchronize Ceph Mon cluster bring-up.
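That synchronization can be pictured as a wait-for-quorum gate before later roles (such as OSDs) proceed; here is a hedged sketch using Ceph’s real quorum_status command (the polling loop itself is my illustration, not OpenCrowbar’s code):

```python
"""Sketch of the synchronization OpenCrowbar handles: block until the
monitor cluster reaches quorum before later roles (e.g. OSDs) proceed.
Uses the real `ceph quorum_status` command; the loop is illustrative."""
import json
import subprocess
import time

def wait_for_mon_quorum(expected_mons, timeout=600):
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            out = subprocess.check_output(
                ["ceph", "quorum_status", "--format=json"])
            status = json.loads(out.decode())
            if len(status.get("quorum", [])) >= expected_mons:
                return True
        except subprocess.CalledProcessError:
            pass  # monitors still forming; keep polling
        time.sleep(5)
    raise TimeoutError("Ceph monitors never reached quorum")
```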
We are only at the start of the Ceph install journey. Today, you can use the open source components to bring up a Ceph cluster in a reliable way that works across hardware vendors. Much remains to be done to optimize and tune this configuration to take advantage of SSDs, non-CentOS environments and more.
We’d love to work with you to tune and extend this workload! Please join us in the OpenCrowbar community.
Overall, I’m happy with our three days of hacking on Crowbar 2. We’ve reached the critical “deploys workload” milestone and I’m excited about how well the design is working and how clearly we’ve been able to articulate our approach in code & UI.
Of course, it’s worth noting again that Crowbar 1 has also had significant progress on OpenStack Havana workloads running on Ubuntu, CentOS/RHEL, and SUSE/SLES.
Here are the focus items from the hack:
Documentation – cleaned up the docs, specifically by updating the README in all the projects to point to the real documentation, to help people find useful information faster. Reminder: if unsure, put documentation in barclamp-crowbar/doc!
Docker Integration for Crowbar 2 progress. You can now install Docker from internal packages on an admin node. We have a strategy for allowing containers to be workload nodes.
Ceph installed as a workload is working. This workload revealed the need for UI improvements and additional flags for roles (hello “cluster”).
Progress on OpenSUSE and Fedora as Crowbar 2 install targets. This gets us closer to true multi-O/S support.
OpenSUSE 13.1 set up as a dev environment, including tests. This is a target working environment.
Being 12 hours offset from the US really impacted remote participation.
One thing that became obvious during the hack is that we’ve reached a point in Crowbar 2 development where it makes sense to move the work into distinct repositories. There are build, organization and packaging changes that would simplify Crowbar 2 and make it easier to start using; however, we’ve been trying to maintain backwards compatibility with Crowbar 1. This is becoming impossible; consequently, it appears time to split them. Here are some items for consideration:
Crowbar 2 could collect barclamps into larger “workload” repos so there would be far fewer repos (although possibly still barclamps within a workload). For example, there would be a “core” set that includes all the current CB2 barclamps. OpenStack, Ceph and Hadoop would be their own sets.
Crowbar 2 would have a clearly named “build” or “tools” repo instead of having it called “crowbar”
Crowbar 2 framework would be either part of “core” or called “framework”
We would put these in a new organization (“Crowbar2” or “Crowbar-2”) so that the clutter of Crowbar’s current organization is avoided.
While we clearly need to break apart the repo, this suggestion needs more community discussion!
The Crowbar community has a tradition of “day zero ops” community support for the latest OpenStack release at the summit using our pull-from-source capability. This release we’ve really gone the extra mile by doing it on THREE Linux distros (Ubuntu, RHEL & SLES) in parallel, with a significant number of projects and new capabilities included.
I’m especially excited about Crowbar’s implementation of Havana Docker support, which required advanced configuration with Nova and Glance. The community also added Heat and Ceilometer in the last release cycle, plus High Availability (“Titanium”) deployment work is in active development. Did I mention that Crowbar is rocking OpenStack deployments? We’ll upload ISOs of this work for easy access later in the week.
While my team at Dell remains a significant contributor to this work, I’m proud to point to SUSE’s cloud leadership and contributions as well (including the new Ceph barclamp & integration). Crowbar has become a true multi-party framework!
Want to learn more? If you’re in Hong Kong, we are hosting a Crowbar Developer Community Meetup on Monday, November 4, 2013, 9:00 AM to 12:00 PM (HKT) in the SkyCity Marriott SkyZone Meeting Room. Dell, dotCloud/Docker, SUSE and others will lead a lively technical session to review and discuss the latest updates, advantages and future plans for the Crowbar Operations Platform. You can expect to see some live code demos, and participate in a review of the results of a recent Crowbar 2 hackathon. Confirm your seat here – space is limited! (I expect that we’ll also stream this event using Google Hangout; watch Twitter #Crowbar for the feed.)
My team at Dell has a significant presence at the OpenStack Summit in Hong Kong (details about activities including sponsored parties). Be sure to seek out my fellow OpenStack Board Member Joseph George, Dell OpenStack Product Manager Kamesh Pemmaraju and Enstratius/Dell Multi-Cloud Manager Founder George Reese.
Note: The work referenced in this post is about Crowbar v1. We’ve also reached critical milestones with Crowbar v2 and will begin implementing Havana on that platform shortly.
As much as I love talking about Crowbar and OpenStack, it’s even more fun to cheer for other people doing it!
SUSE’s been a great development partner for Crowbar and an active member of the OpenStack community. I’m excited to see them giving a live demo today about their OpenStack technology stack (which includes Crowbar and Ceph).
Register for the Live Demo on Wed 06-26-2013 at 3:00–4:00 pm GMT to “learn about SUSE’s OpenStack distribution: SUSE Cloud with Dell Crowbar as the deployment mechanism and advanced features such as Ceph unified storage platform for object, block and file storage in the cloud.”
The presenter, Rick Ashford, lives in Austin and is a regular at the OpenStack Austin Meetups. He has been working with Linux and open-source software since 1998 and currently specializes in the OpenStack cloud platform and the SUSE ecosystem surrounding it.
Whew… Yesterday, Dell announced TWO OpenStack block storage capabilities (EqualLogic & Ceph) for our OpenStack Essex Solution (I’m on the Dell OpenStack/Crowbar team) and community edition. The addition of block storage effectively fills the “persistent storage” gap in the solution. I’m quadrupally excited because we now have:
both Nova drivers’ code in the open as part of our open source Crowbar work
Frankly, I’ve been having trouble sitting on the news until Dell World because both features were available on GitHub before the announcement (EQLX and Ceph-Barclamp). Such is the emerging intersection of corporate marketing and open source.
As you may expect, we are delivering them through Crowbar; however, we’ve already had customers pick up the EQLX code and apply it without Crowbar.
The EqualLogic+Nova Connector
If you are using Crowbar 1.5 (Essex 2) then you already have the code! Of course, you still need the admin information for your SAN – we automated the Nova Volume integration, not the configuration of the storage system itself.
We have it under a split test so you need to do the following to enable the configuration options:
Install OpenStack as normal
Create the Nova proposal
Enter “Raw” Attribute Mode
Change the “volume_type” to “eqlx”
The EqualLogic options should be available in the custom attribute editor! (Of course, you can edit in raw mode too; see the sketch below.)
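For illustration, the raw attributes might end up looking something like this sketch. Only the volume_type change comes from the steps above; the remaining option names are Essex-era eqlx driver options that you should verify against your driver docs, and the values are made up:

```python
"""Illustration of the raw-attribute edit described above. Only
"volume_type": "eqlx" is from the post; the other option names are
Essex-era eqlx Nova volume driver options (verify against your driver
docs) and the values are made-up examples."""
import json

nova_proposal_fragment = {
    "volume": {
        "volume_type": "eqlx",          # switches on the EqualLogic options
        "eqlx": {
            "san_ip": "10.0.0.10",      # example SAN admin address
            "san_login": "grpadmin",    # example credentials
            "san_password": "secret",
            "eqlx_group_name": "group-0",
            "eqlx_pool": "default",
        },
    }
}
print(json.dumps(nova_proposal_fragment, indent=2))
```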
Usage note: the integration uses SSH sessions. It has been performance tested but not tested at scale.
The Ceph+Nova Connector
The Ceph capability includes a Ceph barclamp! That means that all the work to set up and configure Ceph is done automatically by Crowbar. Even better, the Nova barclamp (Ceph provides it from their site) will automatically find the Ceph proposal and link the components together!