Cloudera Manager Barclamp posted! (part of updated Dell | Cloudera Apache Hadoop Solution)

My team at Dell has been driving toward transparency and openness around Crowbar and our OpenStack- and Hadoop-powered solutions. Specifically, our work for the coming release is maintained in the open on the Dell CloudEdge Github site. You can see (and participate in!) our development and validation work in advance of our official release.

I’m pleased to note that our Cloudera Manager barclamp has been posted to Github!

This barclamp supersedes the Hadoop barclamp in the next release of the Dell | Cloudera Apache Hadoop solution. You can build it in Crowbar using the “cloudera-os-build” branch of Crowbar. Do not fear! The Hadoop barclamp still exists (in the hadoop-os-build branch).
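
To switch an existing Crowbar checkout over, here is a minimal sketch; it assumes you already have a clone from the Dell CloudEdge Github site, and the exact build invocation depends on your checkout:

    # switch an existing Crowbar clone to the Cloudera Manager build branch
    cd crowbar
    git fetch origin
    git checkout cloudera-os-build   # or hadoop-os-build for the original Hadoop barclamp
    # from here, run your checkout's normal Crowbar ISO build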

Both the new and the original Hadoop barclamps use the Cloudera Hadoop distribution (aka CDH); however, the new barclamp is able to leverage Cloudera's latest management capabilities. For the Dell solution, Cloudera Manager has always been part of the offering. The primary difference is that we are improving the level of integration. I promise to post more about the features of the solution as we get closer to release.

Barclamps: now with added portability!

I had a question about moving barclamps between solutions.  Since Victor just changed the barclamp build to create a tar for each barclamp (with the debs/rpms), I thought it was the perfect time to explain the new feature.

You can find the barclamps on the Crowbar ISO under “/dell/barclamps” and you can install the TAR onto a Crowbar system using “./barclamp_install foo.tar.gz” where foo is the name of your barclamp.
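
As a rough sketch of the workflow (the mount point and the barclamp name “foo” are placeholders; run barclamp_install from the directory that contains it on your admin node):

    # barclamp tars ship on the Crowbar ISO under /dell/barclamps
    ls /mnt/dell/barclamps            # assumes the ISO is mounted at /mnt
    # install one onto a running Crowbar system
    ./barclamp_install foo.tar.gz     # foo is the name of your barclamp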

Here’s a video of how to find and install barclamp tars:

Note: while you can install OpenStack into a Hadoop system, that combination is NOT tested. We only test OpenStack on Ubuntu 10.10 and Hadoop on RHEL 5.7. Community help in expanding support is always welcome!

Crowbar community support and 111111 sprint plan

The Dell Crowbar team is working to improve road map transparency. In the last few weeks, the Crowbar community has become more active on our lists, testing builds, and helping with documentation.

We love the engagement and continue to make supporting the list a priority.

Participation in Crowbar, OpenStack and Hadoop has been exceeding our expectations and we’re working to implement more community support and process. Thank you!!!

Our next steps:

  1. I’ve committed to post sprint plans and summary pages (this is the first)
  2. New Crowbar Twitter account
  3. I’m going to set up feature voting on the Crowbar Facebook page (“Like” to vote)
  4. Continue to work the listserv and videos. We need help converting those to documentation on the crowbar wiki.
  5. Formalize collaborator agreements – we’re working with legal on this
  6. Exploring the option of a barclamp certification program and Crowbar support
  7. Moving to a gated trunk model for internal commits to improve quality
  8. Implementing a continuous integration system that includes core and barclamps. This will be part of our open source components.

We are working towards the 1.2 release (Beta 1). That release is focused on supporting OpenStack but includes enhancements for upgrades, Hadoop, and additional OS support.

Our Sprint 111111 plan.

Source: the “sprint 111111” page on the Crowbar Wiki

  • Theme: OpenStack Diablo Final release candidate.
  • Core work: refine deployment for Nova, Glance, Nova Dashboard (Horizon), Keystone, and Swift
  • New additions: MySQL barclamp, Nova HA networking, Kong
  • Crowbar internals: expose error states for proposals, allow packages to be included with barclamps to make upgrades easier, barclamp group pages
  • Operating system: added CentOS
  • Documentation: we’ve split the user guides into distinct books so Crowbar, OpenStack, and Hadoop each have their own user guide.
  • Pending action: expose the Hadoop barclamps
  • OS note: OpenStack is being tested (at Dell) against Ubuntu 10.10 only. Hadoop was tested against RHEL 5.7 and we expect it to work against CentOS also.

Dell is open sourcing Crowbar Apache Hadoop barclamps!

I’m very excited to announce that my team at Dell will be open sourcing our Apache Hadoop Crowbar barclamps by the end of the month.

This release raises the bar on open Hadoop deployments by making them faster, more scalable, more integrated, and repeatable.

These barclamps were developed in conjunction with our licensed Dell | Cloudera Solution. The licensed solution is for customers seeking large scale and professionally supported big data solutions. The purpose of the open barclamps (which pull the open source parts from the Cloudera distro) is to help you get started with Hadoop and reduce your learning curve. Our team invested significant testing effort in ensuring that these barclamps work smoothly because they are the foundational layer of our for-pay Hadoop solution.

Included in the Hadoop barclamp suite are Hadoop MapReduce, Hive, Pig, ZooKeeper and Sqoop running on RHEL 5.7. These barclamps cover the core parts of the Hadoop suite. Like other Crowbar deployments (see OpenStack), the barclamps automatically discover the service configurations and interoperate. One of our team members (call him Scott Jensen) said it very simply: “I can deploy a fully integrated Hadoop cluster in a few hours. That friggin’ rocks!” I just can’t put it more eloquently than that!

I’ll post again when we flip the “open” bit and invite our community to dig in and help us continue to set the standards on open Hadoop deployments.

For more perspectives on this release, check out posts by Barton George (just for devs), Joseph George (About Hadoop), and Aurelian Dumitru.

Barton posted these two videos of me talking about the release too:

Hadoop & Crowbar:

Dev’s Only Short:

Crowbar modularized: latest changes that make clouds even easier to create, update, and maintain

In the last week, my team at Dell completed a major refactoring of Crowbar that significantly improves our ability to bring in community contributions and field customizations.  Today, we merged it into Crowbar’s public repo(s).

From the very first versions, our objective for Crowbar was to create the fastest and most reliable cloud deployments. Along the way, we realized Crowbar’s true potential lay in embracing DevOps as an operational model for maintaining clouds. That meant building up cloud deployments in layers from pieces that we call barclamps (extensions of Chef cookbooks). Our first version, centered on OpenStack Cactus, leveraged barclamps but was still created as a single system. This unified system was a huge step forward in cloud deployments, but did not live up to our CloudOps vision of continuous delivery.

In this version, each Crowbar barclamp is an independent delivery unit that can be integrated before, during, or after installing Crowbar.

The core of the change is that each barclamp, including the most central ones, is stored in its own code repository. Putting the code into distinct repos means that each barclamp can have its own life cycle, its own maintainer site, and its own dependency tree. This modularization allows customers to manage their Crowbar deployments with a very fine brush: they may choose to customize parts of the system, lock components to a specific tag, and bring in barclamps from other vendors.

While the core barclamps are automatically integrated into the Crowbar build using git submodules, other barclamps are installed into the system as needed. This allows you to pull in the suite of OpenStack barclamps at build time or to wait until your Crowbar system is running before installing them. Once you install a barclamp, you can retrieve an updated version and reapply it to the system.

This feature gives you the ability to 1) choose exactly what you want to include and 2) perform field updates to a live Crowbar system.
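
To make that concrete, here is a hedged sketch of both paths; the repository URL and tar name are illustrative assumptions, not exact values:

    # build time: core barclamps arrive as git submodules of the Crowbar repo
    git clone https://github.com/dellcloudedge/crowbar   # URL is an assumption
    cd crowbar
    git submodule update --init       # pulls the barclamp-* repos into the tree

    # field update: fetch a newer barclamp tar and reapply it to the live system
    ./barclamp_install updated-foo.tar.gz   # hypothetical tar name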

Let’s look at some examples:

  1. The Cloud Foundry barclamp can be sourced from Cloud Foundry instead of being bundled into the Crowbar repository. This allows the team working on the cloud application to take ownership of their own deployment. As a continuous delivery proponent, I believe strongly that the development team should be responsible for ensuring that their code is deployable (refer to my OpenStack “Deployer API” blueprint attempting to codify this).
  2. DreamHost, maintainers of Ceph Storage, can maintain their own local barclamp repos for OpenStack that are cloned from our community Swift barclamp. This allows them to innovate and customize OpenStack deployments for their business and choose which updates to merge back to the community.
  3. Rackspace Cloud Builders can work on the most leading-edge OpenStack features and maintain workable deployments on branches. As the code stabilizes, they simply merge in their changes.
  4. Dell BIOS and RAID barclamps only support the PowerEdge C line today. When we offer PowerEdge R support, you will be able to install or update the barclamps to add that capability. If another hardware vendor creates a barclamp for their hardware, then you can install that into your existing system.

I believe that these changes to Crowbar are a huge step forward on our journey of creating a community-supportable Open Operations framework. I hope that you are as excited as I am about these changes.

I encourage you to take the first step by trying out Crowbar and, ultimately, writing your own barclamps.

Post Scripts:

  • In addition to the modularization, the updated code includes RHEL as a deployment platform. At present, you must choose either RHEL or Ubuntu at build time.
  • We have enhanced the network barclamp to describe connections between nodes as more abstract “conduits.” This is a powerful change, but it requires some understanding before you start making changes.
  • We only began testing the change as of 9/12, so we expect the system to be fully stabilized by 10/3. If you are not willing to deal with bugs, I recommend building the Crowbar “v1.0” tag (or using the ISOs from our July launch).

Technical details of pending Crowbar changes

We’re testing a HUGE batch of changes to Crowbar before we commit them. The changes support the barclamp modularization work and also include the addition of RHEL support and an update to the network barclamp.

You may be eager to dig in; however, the disruptiveness of these changes means that we are taking extra time to make sure that the build and install still work.

Here’s what you’ll see when we commit the changes:

  • Changes in naming to be more generic
    • Crowbar server user/pass is now crowbar/crowbar (was openstack/openstack)
    • Rails app path is now crowbar_framework (was openstack_manager)
  • The pre-split barclamps (/change-image/dell/barclamps/*) have been moved into individual github repos (barclamp-*).
    • Barclamps are pulled into the build using “git submodule”
    • Chef scripts for barclamps are no longer copied and commingled together in the chef directory. They remain in their source directories (default /opt/dell/barclamps).
  • Inside the barclamps, you’ll find
    • A crowbar configuration file to direct the barclamp installer including localization and menu extensions.
    • Path changes to better align with the destination paths (command_line -> bin, app -> crowbar_framework)
    • App views moved under subdirectories
  • Changes to installation scripts
    • Barclamp installation changed to a ruby library so it can do more and be used individually outside of the install process. This allows barclamps to be imported or updated after installation (see the sketch after this list).
    • Changes to accommodate multiple operating systems
  • Addition of a “redhat-5.6-extra” directory with the RHEL 5.6 installation build components.
    • The RHEL version installs Opscode Chef Server 0.10 (Ubuntu is still on 0.9; community help here?)
  • Crowbar framework Rails app runs under Rainbow instead of Apache.
  • The code for the framework and the barclamp installer has been moved into the crowbar barclamp.
    • The installer bootstraps the crowbar barclamp to install itself.
  • The network barclamp has been substantially changed and will require additional documentation. Features include:
    • Concept of “conduits” that are constructed on nodes to be shared between barclamps
    • Ability to map adapters in a general way to deal with inconsistent enumeration
    • Mapping conduits to adapters allows for new teaming and multiple teaming configurations
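
As promised above, here is a sketch of what the split looks like on disk and how a barclamp can be refreshed after installation; the paths are the defaults mentioned above, and the installer invocation is an assumption that may differ in your checkout:

    # each barclamp keeps its own sources under the default location
    ls /opt/dell/barclamps/                     # one directory per barclamp
    ls /opt/dell/barclamps/crowbar/             # e.g. bin/ (was command_line), crowbar_framework/ (was app), chef/

    # to refresh a barclamp after installation, update its source and reapply it
    cd /opt/dell/barclamps/foo                  # "foo" is a placeholder barclamp
    git pull                                    # or unpack an updated tar here
    ./barclamp_install /opt/dell/barclamps/foo  # installer can now be rerun individually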

We’ll post to the Crowbar listserv when the changes land. They will be posted to Crowbar HEAD. If you want the current build, we have created a “v1.0” tag.