Crowbar’s early twins: Cloudera Hadoop & OpenStack Essex

I’m proud to see my team announce the twin arrival of the Dell | Cloudera Apache Hadoop (Manager v4) and Dell OpenStack-Powered Cloud (Essex) solutions.

Not only are we simultaneously releasing both of these solutions, but they also reflect a significant acceleration in our pace of delivery.  Both solutions had beta support for their core technologies (Cloudera 4 & OpenStack Essex) when those components were released, and we have dramatically reduced the lag from component RC to solution release compared to past (3.7 & Diablo) milestones.

As before, the core deployment logic of these open-source-based solutions was developed in the open on Crowbar’s GitHub.  You are invited to download and try these solutions yourself.  For Dell solutions, we include validated reference architectures, hardware configuration extensions for Crowbar, services and support.

The latest versions of Hadoop and OpenStack represent great strides for both solutions.  It’s great to have made them more deployable and faster to evaluate and manage.

Crowbar Celebrates 1st Anniversary

Nearly a year ago at OSCON 2011, my team at Dell open sourced “Crowbar, an OpenStack installer.” That first GitHub commit was a much more limited project than Crowbar today: there was no separation into barclamps, no distinct network configuration, only one operating system option and the default passwords were all “openstack.” We simply did not know if our effort would create any interest.

The response to Crowbar has been exciting and humbling. I most appreciate those who looked at Crowbar and saw more than a bare metal installer. They are the ones who recognized that we are trying to solve a bigger problem: it has been too difficult to cope with change in IT operations.

During this year, we have made many changes. Many have been driven by customer, user and partner feedback while others support Dell product delivery needs. Happily, these inputs are well aligned in intent if not always in timing.

  • Introduction of barclamps as modular components
  • Expansion into multiple applications (most notably OpenStack and Apache Hadoop)
  • Multi-operating system support
  • Working in the open (with public commits)
  • Collaborative License Agreements

Dell’s understanding of open source and open development has undergone a similar transformation. Crowbar was originally open sourced under Apache 2 because we imagined it becoming part of the OpenStack project. While that ambition has faded, the practical benefits of open collaboration have proven to be substantial.

The results from this first year are compelling:

  • For OpenStack Diablo, coordination with the Rackspace Cloud Builder team enabled Crowbar to include the Keystone and Dashboard projects in Dell’s solution.
  • For OpenStack Essex, the community-focused work we did for the March Essex Hackday is directly linked to our ability to deliver Dell’s OpenStack-Powered Essex solution over two months earlier than originally planned.
  • Support for Apache Hadoop distributions 3.x and 4.x, with implementation of Cloudera Manager and ecosystem components.
  • We’ve amassed hundreds of mail subscribers and GitHub followers.
  • Support for multiple releases of RHEL, CentOS & Ubuntu, including Ubuntu 12.04 while it was still in beta.
  • SuSE did their own port of Crowbar, with important advances in Crowbar’s install model (from ISO to package).

We stand on the edge of many exciting transformations for Crowbar’s second year. Based on the amount of change this year, I’m hesitant to make long term predictions. Yet, just within the next few months there are significant plans based on the Crowbar 2.0 refactor. We have line of sight to changes that expand our tool choices, improve networking, add operating systems and become even more production-ops capable.

That’s quite a busy year!

Crowbar deploying Dell | Cloudera 4 | Apache Hadoop

Hopefully you wrote “Cloudera 3.7” in pencil on your to-do list because the Dell Crowbar team has moved to CDH4 & Cloudera Enterprise 4.0. This aligns with the Cloudera GA announcement on Tuesday 6/5 and continues our drive to keep Crowbar deployments both fresh and spicy.

With the GA drop, the Crowbar Cloudera Barclamps are effectively at release candidate state (ISO). The Cloudera Barclamps include a freemium version of Cloudera Enterprise 4 that supports up to 50 nodes.

I’m excited about this release because it addresses concerns around fault tolerance, multi-tenancy and upgrades.

These tools are solving real world problems ranging from data archival to ad hoc analysis and click stream analysis. We’ve invested a lot of Crowbar development effort in making it fast and easy to build a Hadoop cluster. Now, Cloudera makes it even easier to manage and maintain.

Quick turn OpenStack Essex on Crowbar (BOOM, now we’re at v1.4!)

Don’t blink if you’ve been watching the Crowbar release roadmap!

My team at Dell is about to turn another release of Crowbar. Version 1.3 (focused on Cloudera Apache Hadoop) was released 5/14, and our original schedule showed several sprints of work on OpenStack Essex. Upon evaluation, we believe that the current community-developed Essex barclamps are ready now.

The healthy state of the OpenStack Essex deployment is a reflection of 1) the quality of Essex and 2) our early community activity in creating deployments based on Essex RC1 and Ubuntu Beta 1.

We are planning many improvements to our OpenStack Essex barclamps and the Crowbar framework; however, most deployments can proceed without these enhancements.  This timing also benefits participants in the 5/31 OpenStack Essex Deploy Day.

By releasing a core stable Essex reference deployment, we are accelerating field deployments and enabling the OpenStack ecosystem. In terms of previous posts, we are eliminating release interlocks to enable more downstream development. Ultimately, we hope that we are also creating a baseline OpenStack deployment.

We are also reducing the pressure to rush more disruptive Crowbar changes (like enabling high availability, adding multiple operating systems, moving to Rails 3, fewer crowbarisms in cookbooks and streamlining networking). With this foundational Essex release behind us (we call it an MVP), we can work on more depth and breadth of capability in OpenStack.

One small challenge: some of the changes that we’d expected to drop have been postponed slightly. Specifically, markdown-based documentation (/docs) and some new UI pages (/network/nodes, /nodes/families). All are already in the product but not wired into the default UI (basically, a split test).

On the bright side, we did manage to expose 10G networking awareness for barclamps; however, we have not yet refactored the barclamps to leverage the change.

Cloudera Manager Barclamp posted! (part of updated Dell | Cloudera Apache Hadoop Solution)

My team at Dell has been driving toward transparency and openness around Crowbar plus our OpenStack and Hadoop powered solutions.  Specifically, our work for the coming release is maintained in the open on the Dell CloudEdge GitHub site.  You can see (and participate in!) our development and validation work in advance of our official release.

I’m pleased to note that our Cloudera Manager barclamp has been posted to Github!

This barclamp supersedes the Hadoop barclamp in the next release of the Dell | Cloudera Apache Hadoop solution.  You can build it in Crowbar using the “cloudera-os-build” branch of Crowbar.  Do not fear!  The Hadoop barclamp still exists (hadoop-os-build branch).

Both the new and original Hadoop barclamps use the Cloudera Hadoop distribution (aka CDH); however, the new barclamp is able to leverage Cloudera’s latest management capabilities.  For the Dell solution, Cloudera Manager has always been part of the offering.  The primary difference is that we are improving the level of integration.  I promise to post more about the features of the solution as we get closer to release.

Work with me! Our Dell team is hiring architects, engineers & open source gurus

If you’ve been watching my team’s progress at Dell on Crowbar, OpenStack and Hadoop and want a front row seat in these exciting open source projects, then the ball is in your court!   We are poised to take all three of these projects into new territories that I cannot reveal here, but, take my word for it, there has never been a better time to join our team.

Let me repeat: my team has a lot of open engineering and marketing positions.

Not only are we doing some really kick ass projects, we are also helping redefine how Dell delivers software.  Dell is investing significantly in building our software capabilities and focus.

Basically, we are looking for engineers with a passion for applications at scale, devops and open source.   Experience in Hadoop and/or OpenStack will move you to the top of the pile.   These positions say Hadoop, but we’re also looking for OpenStack, DevOps and Chef.  We think like a start-up.

Ideally in Austin, Boston or the Bay.  We’ll also be happy to hear from you if you’ve got l33t chOps but are not as senior as these positions require.
If you are interested, the BEST NEXT  STEP IS TO APPLY ONLINE.
If you don’t want to click the links, I’m attaching the descriptions of the engineering positions after the split.


Analyze This! Big Data | Apache Hadoop | Dell | Cloudera | Crowbar

This article about Target using buying patterns to expose a teen was pregnant before she told her parents puts big data analysis into everyday terms better than the following 555 words (of course, I recommend that you read both).

Recently, I had the pleasure of being one of the team presenting Dell’s BIG DATA story at an internal conference. From the questions and buzz, it’s clear that big data is big news this year. My team is at the center of that storm because we are responsible for the Dell | Cloudera Apache™ Hadoop™ solution. The solution is significant because we’ve integrated the many pieces necessary to build and sustain a Hadoop cluster: that includes Dell servers, the Cloudera Hadoop distribution, the Crowbar framework and services to make it useful.

Big Data Analytics spins data straws into information gold.

Before I jump into technical details, it’s worth stating the big data analytics value proposition. The problem is that we are awash in a tsunami of data: we’ve grown beyond the neat rows and columns of application databases, and data today ranges from website click logs and emails to call records and cash register receipts to social media tweets and posts. While much of the data is unstructured noise, there is also incredibly valuable information.  (video of my Hadoop “escalator pitch”)

Value is not just hidden inside the bulk data; it lies in correlations between sets of the data.

The big data analytics value proposition is to provide a system to hold a lot of loosely structured information (thus “big data”) and then sift and correlate the information (thus “analytics”). The result is a technology that helps us make data driven decisions. In many applications, the analysis is fed directly back into applications so they can alter behavior in near real-time. For example, an online retail store could offer you purple bunny slippers as you browse for crowbars in the hardware section, knowing that you’re reading this post. That is the type of correlation across disparate data that I’m talking about.

This is really two problems: storing a lot of data and then computing over it.

Hadoop, the leading open source big data analytics project, is a suite of applications that implement and extend two core capabilities: a distributed file system (HDFS) and the map-reduce (M-R) algorithm. My point is not to define Hadoop (others have done that better elsewhere); instead, I want to highlight that big data analysis is a merger of storage and compute. When learning about any big data analysis solution, you cannot decouple how the data is stored from how the data is analyzed – storage and compute are fundamentally linked.
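
To make the map-reduce half of that concrete, here is a minimal word-count sketch of the kind of mapper and reducer scripts that Hadoop Streaming can run as external programs. It is illustrative only (the file names and the word-count job are my own example, not part of the Dell | Cloudera solution): the mapper turns raw text into key/value pairs, the framework sorts by key, and the reducer aggregates each key’s values.

    #!/usr/bin/env python
    # mapper.py -- reads raw text on stdin, emits "word<TAB>1" for each word
    import sys

    for line in sys.stdin:
        for word in line.strip().split():
            print("%s\t%d" % (word.lower(), 1))

    #!/usr/bin/env python
    # reducer.py -- input arrives sorted by key, so we can sum counts per word
    import sys

    current_word, current_count = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").split("\t", 1)
        if word != current_word:
            if current_word is not None:
                print("%s\t%d" % (current_word, current_count))
            current_word, current_count = word, 0
        current_count += int(count)
    if current_word is not None:
        print("%s\t%d" % (current_word, current_count))

The same pair of scripts can be tested locally by piping a sample file through mapper, sort and reducer before pointing a streaming job at a real cluster.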

For that reason, the architecture of a Hadoop cluster is different from either a traditional database cluster or a compute cluster. The IO and the resiliency patterns are different. Since Hadoop is a distributed system, hardware redundancy is less important and eliminating IO bottlenecks is paramount. For this reason, our Hadoop clusters use a lot of local, non-RAID drives with a target of delivering a 1:1 CPU core to spindle ratio (ratios are tuned based on planned loads).
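
As a back-of-the-envelope illustration of that core-to-spindle target (the core counts below are hypothetical examples, not a Dell reference configuration), a tiny sizing helper might look like this:

    # Illustrative sizing sketch for the core:spindle target described above.
    def spindles_for(cores, cores_per_spindle=1.0):
        """Number of local data drives needed to hit the target ratio."""
        return max(1, int(round(cores / cores_per_spindle)))

    for cores in (8, 12, 16):
        print("%2d cores -> %2d local spindles at a 1:1 ratio" % (cores, spindles_for(cores)))

The intent of the ratio is simply that each concurrently running task has a local disk to stream from, rather than queuing behind a shared RAID volume.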

Imagine that you are looking for correlations in web click data. To do that analysis, Hadoop needs to spend a lot of time cracking open log files, sifting for specific data and then reporting back its results. That process involves thousands of jobs, each doing disk IO, using CPU & RAM and then transferring results over the network; consequently, contention between network and disk demands reduces performance.

Wow… that’s a lot of description, and it just scratches the surface of Big Data Analytics. I’m going to have to add the technical details about the Dell solution architecture (hardware) and software components (Cloudera & Crowbar) in another post.