OpenStack leaders learning by humility, doing and being good partners

With the next OpenStack Board meeting on Thursday (5/30/13 agenda) and Mark McLoughlin’s notes crossing my desk, I was reminded of still open discussion topics around OpenStack leadership.  Reminder: except for executive sessions, OpenStack Board Meetings are open (check agenda for details).

Many of the people and companies involved in OpenStack are new to open source projects. Before OpenStack, I had no direct experience building a community like the one we’ve built together around OpenStack or the one I’ve been leading with Crowbar. There is no Collaborative Open Source Communities for Dummies book (I looked).

I am not holding myself, OpenStack or Crowbar up as shining examples of open source perfection. Just the opposite: we’ve had to learn the hard way about what works and what fails. I attribute our successes to the humility to accept feedback and the willingness to ask for help.

But being successful in the small (like during OpenStack Cactus) is different from where we are heading.  In the small, everyone was an open source enthusiast and eager collaborator.  In the large, we should be asking the question “how will we teach people to join and build an open source community?”

The answer is that collaboration must be modeled by the OpenStack leadership.

At the Summit, I was talking with fellow board director Sean Roberts (Yahoo!) and I think he made this point very simply:

“Being in open source is a partnership. If you don’t bring something to the partnership then you’re a user not a partner. We love users but we need to acknowledge the difference.” (Sean Roberts, OpenStack Director)

OpenStack will succeed by building a large base of users; consequently, we need our leaders to be partners in the community.

Connecting the dots: Dell stays the course on OpenStack private cloud

When it comes to OpenStack, I don’t just work for Dell: I’m the technical lead for our OpenStack-powered private Cloud Solution and an elected director on the OpenStack Foundation board.

Frankly, the announcement of our change in public cloud strategy overshadowed our increasing level of investment in OpenStack-powered private cloud solutions (we are hiring!).  Sam Greenblatt, Dell Product Group VP and Chief Architect, is very specific that the recent announcements are about increasing investment where Dell is already successful and accelerating with new features (such as leadership in Hyper-V enablement).

The fact that we focused on our decision to pivot away from a Dell-hosted public cloud distracted from the strategic choices that we’ve been making.  In the Lean process that we use, pivots are a positive sign of listening and self-honesty.  Sadly, that distraction led to confusion, misleading comments, and implications that Dell was dropping OpenStack or questioning OpenStack’s sustainability and market success.

For the record, Dell was one of the first companies to support OpenStack, with supporting quotes from Forrest Norrod (Dell GM for Servers and my direct boss) way back in July 2010.  Our private OpenStack-based cloud, built on open source Crowbar, was the first to market two years ago (deploying Cactus!).  We’ve been investing steadily in both fundamental improvements to OpenStack deployment and early support for the Grizzly release.

I am not implying that OpenStack’s future is certain (we have a lot of work to do) or that Dell’s OpenStack strategy will not change again; however, I know first-hand that both are on much firmer footing than some reports have implied.

Crowbar cuts OpenStack Grizzly (“pebbles”) branch & seeks community testing

The Crowbar team (I work for Dell) continues to drive towards “zero day” deployment readiness. Our Hadoop deployments track Dell | Cloudera Hadoop-powered releases within a month, and our OpenStack releases harden within three months.

During the OpenStack summit, we cut our Grizzly branch (aka “pebbles”) and switched over to the release packages. Just a reminder: we basically skipped Folsom. While we’re still working out issues on the OpenStack Networking (OVS+GRE) setup, we’re also looking for the community to start testing and tuning the Chef deployment recipes (a quick post-deploy sanity check is sketched after the feature list below).

We’re just sprints from release; consequently, it’s time for the Crowbar/OpenStack community to come and play! You can learn Grizzly and help tune the open source Ops scripts.

While the Crowbar team has been generating a lot of noise around our Crowbar 2.0 work, we have not neglected progress on OpenStack Grizzly.  We’ve been building Grizzly deploys on the 1.x code base using pull-from-source to ensure that we’d be ready for the release. For continuity, these same cookbooks will be the foundation of our CB2 deployment.

Features of Crowbar’s OpenStack Grizzly Deployments

  • We’ve had Nova Compute, Glance Image, Keystone Identity, Horizon Dashboard, Swift Object and Tempest for a long time. Those, of course, have been updated to Grizzly.
  • Added Block Storage
    • importable Ceph Barclamp & OpenStack Block Plug-in
    • EqualLogic OpenStack Block Plug-in
  • Added Quantum OpenStack Network Barclamp
    • Uses OVS + GRE for deployment
  • 10 Gb networking configuration
  • RabbitMQ as its own barclamp
  • The Swift Object barclamp made a lot of progress in Folsom that carries forward to Grizzly
    • Apache Web Service
    • Rack awareness
    • HA configuration
    • Distribution Report
  • “Under the covers” improvements for Crowbar 1.x
    • Substantial improvements in how we configure host networking
    • Numerous bug fixes and tweaks
  • Pull from Source via the Git barclamp
    • Grizzly branch was switched to use Ubuntu & SUSE packages
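
If you stand up a Grizzly cluster from the pebbles branch, a quick sanity check before digging into the recipes or Tempest can save time. The sketch below is illustrative only, not part of Crowbar: the Keystone endpoint and credentials are hypothetical placeholders, so substitute whatever your Keystone barclamp proposal actually configured.

```python
# A quick post-deploy smoke check for a Grizzly cluster -- a sketch only.
# The endpoint and credentials below are placeholders, not Crowbar defaults.
from keystoneclient.v2_0 import client as keystone_client
from novaclient.v1_1 import client as nova_client

AUTH_URL = "http://192.168.124.81:5000/v2.0"           # placeholder Keystone endpoint
USER, PASSWORD, TENANT = "admin", "password", "admin"  # placeholder credentials

# 1. Can we authenticate and enumerate the registered services?
keystone = keystone_client.Client(username=USER, password=PASSWORD,
                                  tenant_name=TENANT, auth_url=AUTH_URL)
print("services: %s" % [s.name for s in keystone.services.list()])

# 2. Does Nova answer basic queries (flavors, and images via Glance)?
nova = nova_client.Client(USER, PASSWORD, TENANT, AUTH_URL)
print("flavors:  %s" % [f.name for f in nova.flavors.list()])
print("images:   %s" % [i.name for i in nova.images.list()])
```

If a check like this passes, you are in good shape to move on to the Tempest suite and to exercising the individual barclamps.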

We’ve made substantial progress, but there are still gaps. We do not have upgrade paths from Essex or Folsom. While we’ve been adding fault-tolerance features, full automatic HA deployments are not included.

Please build your own Crowbar ISO or check our new SourceForge download site, then join the Crowbar list and IRC to collaborate with us on OpenStack (or Hadoop or Crowbar 2). Together, we will make this awesome.

Thanks! I’m enjoying my conversation with you

I write because I love to tell stories and to think about how the actions we take today will impact tomorrow.  Ultimately, everything here is about a dialog with you, because you are my sounding board and my critic.  I appreciate it when people engage me about posts here and extend the conversation into other dimensions.  Feel free to call me on points and question my positions – that’s what this is all about.

Thank you for being a part of my blog and joining in.  I’m looking forward to hearing more from you.

During the OpenStack Summit, I got to lead and participate in some excellent presentations and panels.  While my theme for this summit was interoperability, many other topics were discussed.

I hope you enjoy them.

Did one of these topics stand out?  Is there something I missed?  Please let me know!

We need better Gold Member criteria to help build OpenStack culture

During the last OpenStack board meeting, we started a dialog that will be continued over the rest of the year.  It concerns how (and if) we should apply our criteria to measure the contributions of companies that are applying to become Gold members.

I believe that we should see many contribution “footprints” for companies in Foundation leadership positions.  These footprints do not have to be code in GitHub: there are many visible ways to contribute to OpenStack, including internal installs, delivered products, community meetups, open source support around code, service to the community through speaking and sponsoring, and, of course, code too.

At this point in the OpenStack evolution, there is so much going on that it is easy to leave footprints because there are so many ways to engage.  Footprints are tangible evidence of community leadership and the currency of collaboration.  OpenStack thrives because we are committed to working together, being transparent in our actions and providing service to the project beyond our own needs.

I believe the OpenStack Foundation’s new Gold members are great additions to our growing community; however, we need to be increasingly deliberate in accepting new Gold members to make sure that they have a history of demonstrating a culture of open source leadership and contribution.

These applications deserve careful consideration for several reasons:

  1. there are a limited number of gold level positions (16 of the 24 are now occupied)
  2. there is no practical way to remove a gold member (but only 8 are elected to the board)
  3. there is a perception (by the applicants) that they gain additional credibility through gold membership
  4. gold and platinum members are the leaders of our community, so everyone will model their behavior on them

It is important to remember that there is no limit or barrier (beyond $) to joining at the corporate sponsor level. So being a Gold member means that a company is seeking a broader leadership role in the project.

Over the next months, Simon Anderson (committee chair, DreamHost) will be leading me and several other board members in an effort to refine our Gold member review criteria.  I’ll post my own list shortly, and I’m interested in hearing from you about what types of “footprints” we should consider in this process.

OpenStack steps toward interoperability with Tempest, RAs & RefStack.org

I’m a cautious supporter of OpenStack leading with implementation (over API specification); however, it clearly has risks. OpenStack has the benefit of many live sites operating at significant scale. The short-term cost is that those sites were not fully interoperable (progress is being made!). Even if they were, we lack the means to validate that they are.

The interoperability challenge was a major theme of the Havana Summit in Portland last week (including a panel I moderated).  Solving it creates significant benefits for the OpenStack community, and those benefits translate into significant financial opportunities for the OpenStack ecosystem.

This is a journey that we are on together – it’s not a deliverable from a single company or a release that we will complete and move on.

There were several themes that Monty and I presented during our Heat for Reference Architectures session (slides).  It’s pretty obvious that interop is valuable (I discuss why you should care in this earlier post) and that running a cloud means dealing with hardware, software and ops in equal measures.  We also identified several important items: Open Operations, Upstreaming, Reference Architecture/Implementation and Testing.

During the session, I think we did a good job stating how we can use Heat for an RA to make incremental steps.  I also had a session about upgrades (slides).

Even with all this progress, testing for interoperability was one of the largest gaps.

The challenge is not whether we should test, but how to create a set of tests that everyone will accept as adequate.  Approaching that goal with a standardization or specification objective is likely an impossible task.

Joshua McKenty & Monty Taylor found a starting point for interoperability FITS testing: “let’s use the Tempest tests we’ve got.”

We should question the assumption that faithful implementation test specifications (FITS) for interoperability are only useful with a matching specification and significant API coverage.  Any level of coverage provides useful information and, more importantly, visibility accelerates contributions to the test base.

I can speak from experience that this approach has merit.  The Crowbar team at Dell has included OpenStack Tempest in our reference deployment since Essex, and it runs as part of our automated test infrastructure against every build.  This process does not catch every issue, but passing Tempest is a very good indication that you’ve got a workable OpenStack deployment.
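
To make that kind of gate concrete, here is a minimal sketch of how a build pipeline could run a slice of Tempest and promote or reject a build on the result. The checkout path, test filter and use of testr as the runner are assumptions about a Grizzly-era Tempest setup, not a description of our actual Crowbar test infrastructure; adapt them to however your Tempest is configured.

```python
# Minimal CI gate sketch: run a subset of Tempest against the freshly
# deployed cloud and fail the build on any test failure. Paths, the test
# filter and the testr invocation are assumptions -- adjust to your setup.
import subprocess
import sys

TEMPEST_DIR = "/opt/tempest"          # hypothetical Tempest checkout location
TEST_FILTER = "tempest.api.compute"   # hypothetical subset for fast signal

def run_tempest():
    # testr returns non-zero if any selected test fails; that exit code is
    # exactly what the pipeline needs to decide promote vs. reject.
    return subprocess.call(["testr", "run", "--parallel", TEST_FILTER],
                           cwd=TEMPEST_DIR)

if __name__ == "__main__":
    rc = run_tempest()
    print("Tempest %s" % ("passed -- build is promotable" if rc == 0
                          else "failed -- hold the build"))
    sys.exit(rc)
```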

Crowbar and our Pivot (or, how we slipped and shipped Grizzly)

My team at Dell uses a Lean process because it forces us to be honest about making hard choices. Our recent decision to pivot back to Crowbar 1.x for the OpenStack Grizzly release is a great example of how the pivot process works.

4/24 note: I have a longer post and ISO for Grizzly on Crowbar waiting until we enter QA. The Crowbar community is already very active around this work and you’re encouraged to join.

Like any refactor, there was schedule risk when we started the Crowbar 2.x release. To mitigate this risk, we made two critical choices. First, we chose to advance the OpenStack barclamps on the 1.x code base in parallel with the 2.x work. Second, we set a pivot date for the team to decide between releasing Grizzly on the 1.x or the 2.x trunk.

Choosing to jump back to 1.x was one of the hardest choices I’ve made in my career. I’m proud that we had the foresight to keep that as an option and prouder that our team rallied to make it happen.

I acknowledge that 1.x has gaps; however, getting Grizzly into the field for PoCs and pilots with 1.x provides substantial benefits to the community.  That said, barclamps for HA deployments and other production features are under development on the 1.x branch and will be available in the community.

The 2.x code base provides important features, but we are building from the 1.x deployment recipes. This means that development, testing and tuning applied to the Grizzly barclamps will translate directly into Crowbar 2.x field readiness. In fact, more completeness on OpenStack can dramatically simplify Crowbar 2.x testing efforts.  This is especially true for the OpenStack Networking (fka Quantum) barclamps because they are new work.

Delivering solutions is a balance between features, timing and field experience.  The Crowbar team’s preference is to collaborate with operators in the field and that means making workable software available quickly.

I hope that you’ll agree with our approach and help us make Grizzly the most deployable OpenStack yet.

As OpenStack enters rapids with Grizzly, watch for strong currents, hidden rocks & eddies.

White water play boating (image from Wikipedia)

I enjoy kayaking white water rapids – they are exhilarating and demanding. The water accelerates around obstacles and shows its power. You cannot simply ride the current; you must navigate your way around obstacles, stay clear of eddies that pull you back and watch for hidden rocks. The secret to success is to read the current and make small adjustments as you are carried along – resistance is futile.

After the summit, I see OpenStack with the Grizzly release like water entering the rapids. The quality and capability of the code base continues to improve while the number of players with offerings in the ecosystem is also increasing rapidly. Until now, there was plenty of room to play together; however, as scope, activity and velocity increase, there will be more inter-vendor interactions.

As a member of the OpenStack board, I have tremendous enthusiasm for what the OpenStack community has accomplished. There have been some really positive accounts of the summit including CSC “OpenStack gains maturity…“, Silicon Angle “OpenStack has reached a Flash Point”, Randy Bias’ “OpenStack is THE Stack”, Wayne Walls “Hallway Track” and much more on the Planet OpenStack aggregator.

In fact, we’ve created such a love fest for OpenStack that I fear we are drinking our own Kool-Aid.

I have a responsibility to be transparent and honest about the challenges facing us because it’s the Foundation’s job to guide us forward. My positions result from many conversations that I had throughout the week of the Summit. They are also the result of my first-hand experiences and my 14 years of cloud experience.

Over the next posts, I’ll explore a number of these topics with the goal of helping navigate a path through the potential turbulence. The simple fact is that OpenStack is growing quickly, and that creates challenges:

  1. A growing number of new developers are joining. Since our work surface area is expanding, it’s both easier than ever to participate and harder to navigate where to begin. We need to get ahead of the design cycles.
  2. A growing number of non-devs are participating and bringing important contributions and experience. We must include them in the OpenStack meritocracy because they speak for the quality and usability of the project.
  3. A growing number of companies (many “name brands”) who are still trying to figure out how to participate and collaborate in open source projects. Lack of experience increases the risk of divergence (forking) and market confusion.
  4. A growing number of products based on OpenStack also increases forking risk as OpenStack contributors feel compelled to differentiate.
  5. A growing number of core components (compute+block+network+…) that are required to have base functionality.
  6. A growing number of incubated projects that continue to stress innovation and the pace of change, challenging the very question of “what is OpenStack?”
  7. A growing number of deployed sites offering OpenStack clouds but the community lacks a way to verify (or really discuss) compatibility between the sites.

This list is a cause for celebration, not a cause for alarm – every item is a challenge born of our success. The community and Foundation are already working to address the risks.

While some of us enjoy the chaos and excitement of rapids, others can take comfort from the fact that they are always followed by calm waters. Don’t worry – we’ll navigate through this together.

“Stack Shop” cover of Macklemore’s Thrift Shop

Sometimes a meme glitters too strongly for me to resist getting pulled in… that happened just before the OpenStack Havana summit. When my code-addled mind kept swapping “poppin’ tags” for “OpenStack” on the radio edit, I stopped fighting and rewrote the Thrift Shop lyrics for OpenStack (see below the split).

With a lot of help from summit attendees (many of them OpenStack celebrities, CEOs, VPs and members of the OpenStack Foundation board), I was able to create a freaking awesome cover of Macklemore’s second-hand confection (NSFW).

Frankly, I don’t know everyone in the video (what, what?)!

But here’s a list of those that I do know.  I’m happy to update so the victims (er, actors) get credit.  Singers (in order):

Rob Hirschfeld (me) & Monty Taylor, Peter Pouliot, Judd Maltin, Forrest Norrod, Josh Kleinpeter, Tristan Goode, Dan Bode, Jay Pipes, Prabhakar Gopalan, Peter Chadwick, Simon Anderson, Vish Ishaya, Wayne Walls, Alex Freedland, Niki Acosta, Ops Track Monday Session 1, Ben Cherian, Eric Windisch, Brandon Draeger, Joseph B George, Mark Collier, Joseph Heck, Tim Bell, Chris Kemp, Kyle McDonald & Joshua McKenty.

OpenStack’s next hurdle: Interoperability. Why should you care?

SXSW life-size Newton’s Cradle

The OpenStack Board spent several hours (yes, hours) discussing interoperability-related topics at the last board meeting.  Fundamentally, the community benefits when users can operate easily across multiple OpenStack deployments (their own and/or public clouds).

Cloud interoperability: the ability to transfer workloads between systems without changes to the deployment and operations management infrastructure.

This is NOT hybrid (which I defined as a workload transparently operating in multiple systems); however, it is a prerequisite for achieving scalable hybrid operation.
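
One way to make that definition concrete: the same provisioning script, unchanged, should work when pointed at any compliant OpenStack cloud. The sketch below is illustrative only – the endpoints, credentials and image/flavor names are hypothetical placeholders, and real clouds today do not line up this cleanly (which is exactly the gap).

```python
# Interoperability in practice (a sketch): one workload script, zero changes,
# run against two different OpenStack clouds. All endpoints, credentials and
# resource names below are hypothetical placeholders.
from novaclient.v1_1 import client  # Grizzly-era python-novaclient

CLOUDS = {
    "private": {"auth_url": "http://cloud-a.example.com:5000/v2.0",
                "user": "demo", "password": "secret", "tenant": "demo"},
    "public":  {"auth_url": "http://cloud-b.example.com:5000/v2.0",
                "user": "demo", "password": "secret", "tenant": "demo"},
}

def boot_workload(cloud, image_name="ubuntu-12.04", flavor_name="m1.small"):
    nova = client.Client(cloud["user"], cloud["password"],
                         cloud["tenant"], cloud["auth_url"])
    image = nova.images.find(name=image_name)      # same image name everywhere
    flavor = nova.flavors.find(name=flavor_name)   # same flavor name everywhere
    return nova.servers.create("interop-check", image, flavor)

if __name__ == "__main__":
    for name, cloud in CLOUDS.items():
        server = boot_workload(cloud)
        print("%s: booted %s" % (name, server.id))
```

Even this trivial example exposes interop friction: image names, flavors and API behaviors differ between sites, which is why testing matters so much.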

Interoperability matters because the OpenStack value proposition is all about creating a common platform.  IT World does a good job laying out the problem (note: I work for Dell).  To create sites that can interoperate, we have to do some serious lifting.

At the OpenStack Summit, there are multiple chances to engage on this.   I’m moderating a panel about interop and also sharing a session with Monty Taylor about the highly related topic of Reference Architectures.

The Interop Panel (topic description here) is Tuesday @ 5:20pm.  If you join, you’ll get to see me try to stump our awesome panelists:

  • Jonathan LaCour, DreamHost
  • Troy Toman, Rackspace
  • Bernard Golden,  Enstratius
  • Monty Taylor, OpenStack Board (and HP)
  • Peter Pouliot, Microsoft

PS: Oh, and I’m also talking about DevOps Upgrades Patterns during the very first session (see a preview).