As OpenStack enters the rapids with Grizzly, watch for strong currents, hidden rocks and eddies.

White Water

Play boating (image from Wikipedia)

I enjoy kayaking white water rapids – they are exhilarating and demanding. The water accelerates around obstacles and shows its power. You cannot simply ride the current; you must navigate your way around obstacles, stay clear of eddies that pull you back and watch for hidden rocks. The secret to success is to read the current and make small adjustments as you are carried along – resistance is futile.

After the summit, I see OpenStack with the Grizzly release like water entering the rapids. The quality and capability of the code base continue to improve while the number of players with offerings in the ecosystem is also increasing rapidly. Until now, there has been plenty of room to play together; however, as scope, activity and velocity increase, there will be more inter-vendor interactions.

As a member of the OpenStack board, I have tremendous enthusiasm for what the OpenStack community has accomplished. There have been some really positive accounts of the summit, including CSC’s “OpenStack gains maturity…“, Silicon Angle’s “OpenStack has reached a Flash Point”, Randy Bias’ “OpenStack is THE Stack”, Wayne Walls’ “Hallway Track” and much more on the Planet OpenStack aggregator.

In fact, we’ve created such a love fest for OpenStack that I fear we are drinking our own Kool-Aid.

I have a responsibility to be transparent and honest about the challenges facing us because it’s the Foundation’s job to guide us forward. My positions result from many conversations that I had throughout the week of the Summit. They are also the result of my first-hand experiences along with my 14 years of cloud experience.

Over the next few posts, I’ll explore a number of these topics with the goal of helping navigate a path through the potential turbulence. The simple fact is that OpenStack is growing quickly, and that creates challenges:

  1. A growing number of new developers are joining. Since our work surface area is expanding, it’s both easier than ever to participate and harder to navigate where to begin. We need to get ahead of the design cycles.
  2. A growing number of non-devs are participating and bringing important contributions and experience. We must include them in the OpenStack meritocracy because they speak for the quality and usability of the project.
  3. A growing number of companies (many “name brands”) are still trying to figure out how to participate and collaborate in open source projects. Lack of experience increases the risk of divergence (forking) and market confusion.
  4. A growing number of products based on OpenStack also increases forking risk as OpenStack contributors feel compelled to differentiate.
  5. A growing number of core components (compute+block+network+…) that are required to have base functionality.
  6. A growing number of incubated projects whose innovation and pace of change challenge the very question of “what is OpenStack?”
  7. A growing number of deployed sites offering OpenStack clouds, but the community lacks a way to verify (or really discuss) compatibility between the sites.

This list is a cause for celebration, not a cause for alarm – every item is a challenge born of our success. The community and Foundation are already working to address the risks.

While some of us enjoy the chaos and excitement of rapids, others can take comfort from the fact that they are always followed by calm waters. Don’t worry – we’ll navigate through this together.

“Stack Shop” cover of Macklemore’s Thrift Shop

Sometimes a meme glitters too strongly for me to resist getting pulled in… that happened to great effect just before the OpenStack Havana summit. When my code-addled mind kept swapping “poppin’ tags” for “OpenStack” on the radio edit, I stopped fighting and rewrote the Thrift Shop lyrics for OpenStack (see below the split).

With a lot of help from summit attendees (many of them OpenStack celebrities, CEOs, VPs and members of the OpenStack Foundation board), I was able to create a freaking awesome cover of Macklemore’s second-hand confection (NSFW).

Frankly, I don’t know everyone in the video (what, what?)!

But here’s a list of those that I do know. I’m happy to update it so the actors (victims?) get credit. Singers (in order):

Rob Hirschfeld (me) & Monty Taylor, Peter Pouliot, Judd Maltin, Forrest Norrod, Josh Kleinpeter, Tristan Goode, Dan Bode, Jay Pipes, Prabhakar Gopalan, Peter Chadwick, Simon Andersen, Vish Ishaya, Wayne Walls, Alex Freedland, Niki Acosta, Ops Track Monday Session 1, Ben Cherian, Eric Windisch, Brandon Draeger, Joseph B George, Mark Collier, Joseph Heck, Tim Bell, Chris Kemp, Kyle McDonald & Joshua McKenty.


OpenStack’s next hurdle: Interoperability. Why should you care?

SXSW life-size Newton’s Cradle

The OpenStack Board spent several hours (yes, hours) discussing interoperability-related topics at the last board meeting.  Fundamentally, the community benefits when users can operate easily across multiple OpenStack deployments (their own and/or public clouds).

Cloud interoperability: the ability to transfer workloads between systems without changes to the deployment or operations management infrastructure.

This is NOT hybrid (which I define as a workload transparently operating in multiple systems); however, it is a prerequisite for achieving scalable hybrid operation. For example, moving a workload from a private OpenStack cloud to a public one without rewriting your deployment scripts is interoperability; running it across both at once is hybrid.

Interoperability matters because the OpenStack value proposition is all about creating a common platform.  IT World does a good job laying out the problem (note: I work for Dell).  To create sites that can interoperate, we have to do some serious lifting.

At the OpenStack Summit, there are multiple chances to engage on this.   I’m moderating a panel about interop and also sharing a session about the highly related topic of Reference Architectures with Monty Taylor.

The Interop Panel (topic description here) is Tuesday @ 5:20pm.  If you join, you’ll get to see me try to stump our awesome panelists:

  • Jonathan LaCour, DreamHost
  • Troy Toman, Rackspace
  • Bernard Golden,  Enstratius
  • Monty Taylor, OpenStack Board (and HP)
  • Peter Pouliot, Microsoft

PS: Oh, and I’m also talking about DevOps Upgrades Patterns during the very first session (see a preview).

DevOps approaches to upgrade: Cube Visualization

I’m working on my OpenStack summit talk about DevOps upgrade patterns and got to a point where there are three major vectors to consider:

  1. Step Size (shown as X axis): do we make upgrades in small frequent steps or queue up changes into larger bundles? Larger steps mean that there are more changes to be accommodated simultaneously.
  2. Change Leader (shown as Y axis): do we upgrade the server or the client first? Regardless of the choice, the followers should be able to handle multiple protocol versions if we are going to have any hope of a reasonable upgrade.
  3. Safeness (shown as Z axis): do the changes preserve the data and productivity of the entity being upgraded? It is simpler to assume that we simply add new components and remove old components; this approach carries significant risk or redundancy requirements.

I’m strongly biased towards continuous deployment because I think it reduces risk and increases agility; however, laying out all the vertices of the upgrade cube helps to visualize where the costs and risks are being added in the traditional upgrade models.

Breaking down each vertex (a code sketch enumerating the full cube follows the list):

  1. Continuous Deploy – core infrastructure is updated on a regular (usually daily or faster) basis
  2. Protocol Driven – like changing to HTML5, the clients are tolerant to multiple protocols and changes take a long time to roll out
  3. Staged Upgrade – tightly coordinated migration between major versions over a short period of time, in which all of the components in the system step from one version to the next together.
  4. Rolling Upgrade – system operates a small band of versions simultaneously where the components with the oldest versions are in process of being removed and their capacity replaced with new nodes using the latest versions.
  5. Parallel Operation – two server systems operate and clients choose when to migrate to the latest version.
  6. Protocol Stepping – roll out clients that support multiple versions and then upgrade the server infrastructure only after all clients can support both versions.
  7. Forced Client Migration – change the server infrastructure and then force the clients to upgrade before they can reconnect.
  8. Big Bang – you have to shut down all components of the system to upgrade it
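
To make the cube concrete, here is a minimal Python sketch (my own illustration for this post; the vertex-to-axis assignments are my reading of the descriptions above, so treat them as illustrative) that enumerates the eight corners:

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Vertex:
    big_steps: bool     # X: queue changes into large bundles vs. small frequent steps
    client_first: bool  # Y: does the client or the server lead the upgrade
    replace: bool       # Z: swap components out (risk/redundancy) vs. upgrade in place

# One named pattern per corner of the cube.
PATTERNS = {
    Vertex(False, False, False): "Continuous Deploy",
    Vertex(False, True, False): "Protocol Driven",
    Vertex(True, False, False): "Staged Upgrade",
    Vertex(False, False, True): "Rolling Upgrade",
    Vertex(True, False, True): "Parallel Operation",
    Vertex(False, True, True): "Protocol Stepping",
    Vertex(True, True, False): "Forced Client Migration",
    Vertex(True, True, True): "Big Bang",
}

# Walking all 2**3 corners confirms every combination maps to a pattern.
for big, client, repl in product((False, True), repeat=3):
    vertex = Vertex(big, client, repl)
    print(f"{PATTERNS[vertex]:24} big_steps={big} client_first={client} replace={repl}")
```

Each named pattern occupies one corner, so comparing two patterns that differ on a single axis isolates the cost or risk that axis adds.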

This type of visualization helps me identify costs and options. It’s not likely to get much time in the final presentation so I’m hoping to hear in advance if it resonates with others.

PS: Like this visualization? Check out my “magic 8 cube” for cloud hosting options.

5 things keeping DevOps from playing well with others (Chef, Crowbar and Upstream Patterns)

Sharing can be hard

Since my earliest days on the OpenStack project, I’ve wanted to break the cycle of black box operations with open ops. With the rise of community-driven DevOps platforms like Opscode Chef and Puppet Labs, we’ve reached a point where it’s both practical and imperative to share operational practices in the form of code and tooling.

Being open and collaborating are not the same thing.

It’s a huge win that we can compare OpenStack cookbooks. The real victory comes when multiple deployments use the same trunk instead of forking.

This has been an objective I’ve helped drive for OpenStack (with Matt Ray); it has been the Crowbar objective from the start and is the keystone of our Crowbar 2 work.

This has proven to be a formidable challenge for several reasons:

  1. diverging DevOps patterns between private, public, large, small, and other deployments -> solution: the attribute injection pattern is promising
  2. tooling gaps prevent operators from leveraging shared deployments -> solution: this is part of Crowbar’s mission
  3. under-investing in community-supporting features because they are seen as taking away from getting into production -> solution: we need leadership, and others will join
  4. drift between target versions creates the need for forking even if the cookbooks are fundamentally the same -> solution: pull from source approaches help create distro independent baselines
  5. missing reference architectures interfere with having a stable baseline to deploy against -> solution: agree to a standard, machine consumable RA format like OpenStack Heat.

Unfortunately, these five challenges are tightly coupled and we have to progress on them simultaneously. The tooling and community require patterns and RAs.

The good news is that we are making real progress.

Judd Maltin (@newgoliath), a Crowbar team member, has documented the emerging Attribute Injection practice that Crowbar has been leading. That practice has been refined in the open by AT&T and Rackspace. It is forming the foundation of the OpenStack cookbooks.
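
For readers new to the pattern, here is a minimal Python sketch of the core idea (my own illustration, not Crowbar’s or Chef’s actual API; every name below is hypothetical): the orchestrator resolves cross-node facts once and injects them into the recipe as attributes, so the recipe never searches the environment itself.

```python
# Hypothetical sketch of attribute injection (illustrative names only).

def discover_attributes(inventory):
    """Orchestrator side: resolve cross-node facts once, up front."""
    return {
        "rabbit_host": inventory["rabbitmq"][0],
        "db_host": inventory["mysql"][0],
    }

def render_nova_config(attrs):
    """Recipe side: consume injected attributes; no environment searches."""
    return (
        f"rabbit_host = {attrs['rabbit_host']}\n"
        f"sql_connection = mysql://nova@{attrs['db_host']}/nova\n"
    )

# The same recipe works for private, public, large or small deployments
# because only the injected attributes change.
inventory = {"rabbitmq": ["10.0.0.5"], "mysql": ["10.0.0.6"]}
print(render_nova_config(discover_attributes(inventory)))
```

The payoff is shared trunk instead of forks: site-specific facts stay isolated from the deployment logic everyone has in common.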

Understanding, discussing and supporting that pattern is an important step toward accelerating open operations. Please engage with us as we make the investments for open operations and help us implement the pattern.

OpenStack Board Voting Starts! Three thoughts about the board and election

1/18 Note: I was re-elected! Thanks everyone for your support.

OpenStack Foundation members, I have three requests for you:

  1. vote (ballots went out by email already and expire on Friday)
  2. vote for me (details below)
  3. pick the board you want, not just the candidates.

And, of course, vote according to our code of conduct: “Members should not attempt to manipulate election results. Open debate is welcome, but vote trading, ballot stuffing and other forms of abuse are not acceptable.”

To vote, you must have been a member for 6 months. Going Mad Panda? Join right now so you can vote next time!

Even if you’re not a voter, you may find my comments useful in understanding OpenStack. Remember, these positions are my own: they do not reflect those of my employer (Dell).

Vote because it’s the strongest voice the community has about OpenStack governance

I’d strongly encourage you to commit code, come to the summit and participate in the lists and chats; however, voting is more. It shows measurable impact. It shows the vitality of the community.

OpenStack is exciting to me because it’s community driven.

I have been a strong and early leader in the community.

Vote for me because of my OpenStack Operations focus

My team at Dell has been laser focused on making OpenStack operable for over 2 years. My team acts like a lean start-up inside of Dell to get products out to market early.

We’ve worked with early operators and ground-breaking ecosystem partners. We’ve helped many people get OpenStack running for real workloads. Consequently, I am highly accessible and accountable to a large number of users, operators and ecosystem vendors. Few other people in the OpenStack community have my breadth of exposure to real OpenStack deployments. My team and I have been dedicated, active participants in OpenStack long before Dell’s grand OpenStack strategy coalesced.

More importantly, I am committed to open source and open operations. The Apache 2 open source Crowbar project established the baseline (and served as the foundation) for other OpenStack deployment efforts. We don’t just talk about open source, we do open source: our team works in the open (my git account).

Vote for a board because diversity of views is important

OpenStack voting allows you to throw up to 8 votes to a single candidate. While you can put all your eggs into a single basket, I recommend considering a broader slate in voting.

We have an impressive and dedicated list of candidates who will do work for the community. I ask that you consider the board as a team. If you want operators, users, diversity, change, less corporate influence or any flavor of the above then think not just about the individual.

Why should I be part of your slate? I am not a blind cheerleader. I have voiced and driven community-focused positions about the board (dev cycle, Grizzly > Folsom, consensus).

One of my core activities has been to work on addressing community concerns about affinity voting. Gold and Platinum members have little incentive to address this issue. While I’d like to see faster action, it requires changing the by-laws. Any by-law changes need to be made carefully and they also require a vote of the electorate. If you want positive change, you need board members, like me, who are persistent and knowledgeable in board dynamics.

What have I done on the Board? Quite a bit…

It would be easy to get lost in a board of 24 members, but I have not. Armed with my previous board experience, I have been a vocal advocate for the community in board meetings without creating disruption or pulling us off topic.

Why re-elect me? For community & continuity

I am a strong and active board member and I work hard to represent all OpenStack constituents: developers, operators, users and ecosystem vendors.

Working on a board is a long-term exercise. Positions and actions I’ve taken today may take months to come to fruition. I would like the opportunity to continue that work and see it through.

OpenStack Board Interview Series by Rafael Knuth

Rafael Knuth (@RafaelKnuth, Dell’s EMEA Social Media Community Manager) has set an ambitious objective to interview all 24 of the OpenStack Foundation Board members. I had the privilege of being the first interview posted (Japanese version!).

Here’s the series so far:

  1. Rob Hirschfeld (me!) of Dell (my summary)
  2. Boris Renski of Mirantis
  3. Randy Bias of CloudScaling
  4. Jim Curry of Rackspace
  5. Yujie Due of 99cloud
  6. Tim Bell of CERN
  7. Joshua McKenty of Piston
  8. Lew Tucker of Cisco
  9. Alan Clark (Chair) of SUSE  
  10. Joseph George of Dell (added 5/22)

These are excellent interviews and worth reviewing. They give balanced (often critical) views of OpenStack, its progress and challenges.

I’ll update the list as he adds them.

Double Block Head with OpenStack+Equallogic & Crowbar+Ceph

Block Head

Whew… Yesterday, Dell announced TWO OpenStack block storage capabilities (Equallogic & Ceph) for our OpenStack Essex Solution (I’m on the Dell OpenStack/Crowbar team) and community edition.  The addition of block storage effectively fills the “persistent storage” gap in the solution.  I’m quadruply excited because we now have:

  1. both open source (Ceph) and enterprise (Equallogic) choices
  2. both Nova drivers’ code in the open as part of our open source Crowbar work

Frankly, I’ve been having trouble sitting on the news until Dell World because both features have been available on GitHub since before the announcement (EQLX and Ceph-Barclamp).  Such is the emerging intersection of corporate marketing and open source.

As you may expect, we are delivering them through Crowbar; however, we’ve already had customers pick up the EQLX code and apply it without Crowbar.

The Equallogic+Nova Connector

block-eqlx

If you are using Crowbar 1.5 (Essex 2) then you already have the code!  Of course, you still need the admin information for your SAN – we did not automate the configuration of the storage system, only the Nova Volume integration.

We have it under a split test so you need to do the following to enable the configuration options:

  1. Install OpenStack as normal
  2. Create the Nova proposal
  3. Enter “Raw” Attribute Mode
  4. Change the “volume_type” to “eqlx” (see the sketch after this list)
  5. Save
  6. The Equallogic options should be available in the custom attribute editor!  (of course, you can edit in raw mode too)
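
For orientation, the raw-mode change in step 4 amounts to editing a single value in the proposal JSON. The sketch below is illustrative only – the surrounding attribute paths are my assumption, so rely on the addendum docs for the real structure:

```json
{
  "attributes": {
    "nova": {
      "volume": {
        "volume_type": "eqlx"
      }
    }
  }
}
```

Once saved, the eqlx-specific options (such as the SAN admin information mentioned above) should surface in the custom attribute editor.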

Want docs?  Got them!  Check out the EQLX Driver Install Addendum.

Usage note: the integration uses SSH sessions.  It has been performance tested but not tested at scale.

The Ceph+Nova Connector

block-ceph

The Ceph capability includes a Ceph barclamp!  That means all the work to set up and configure Ceph is done automatically by Crowbar.  Even better, the Nova barclamp (Ceph provides it from their site) will automatically find the Ceph proposal and link the components together!

Ceph has provided excellent directions and videos to support this install.

My Dilemma with Folsom – why I want to jump to G

When your ship sails

These views are my own.  Based on one-on-one discussions I’ve had in the OpenStack community, I am not alone.

If you’ve read my blog then you know I am a vocal and active supporter of OpenStack who is seeking re-election to the OpenStack Board.  I’m personally and professionally committed to the project’s success. And, I’m confident that OpenStack’s collaborative community approach is out innovating other clouds.

A vibrant project requires that we reflect honestly: we have an equal measure of challenges – shadow “free fall” dev, API vs. implementation, forking risk and others.  As someone helping users deploy OpenStack today, I find myself straddling a solid release (Essex) and an innovative one (Grizzly). Frankly, I’m finding it very difficult to focus on Folsom.

Grizzly excites me, and clearly I’m not alone.  Based on the pace of development, I believe we saw a significant developer migration during the feature-freeze free fall.

In Grizzly, both Cinder and Quantum will have progressed to a point where they are ready for mainstream consumption. That means that OpenStack will have achieved the cloud API trifecta of compute-store-network.

  • Cinder will get beyond the “replace Nova Volume” feature set and expand the list of connectors.
  • Quantum will get to parity with Nova Network, address overlapping VM IPs and go beyond L2 with L3 feature enablement like load balancing as a service.
  • We are having a real dialog about upgrades while the code is still in progress.
  • And new projects like Ceilometer and Heat are poised to address real user problems in billing and application formation.

Everything I hear about Folsom deployments is positive, with stable code and significant improvements; however, we’re too late to really influence operability at the code level because the Folsom release is done.  This is not a new dilemma.  As operators, we seem to be forever chasing the tail of the release.

The perpetual cycle of implementing deployment after release is futile, exhausting and demoralizing because we finish just in time for the spotlight to shift to the next release.

I don’t want to slow the pace of releases.  In Agile/Lean, we believe that if something is hard then we should do it more.  Instead, I am looking at Grizzly and seeing an opportunity to break the cycle.  I am looking at Folsom and thinking that most people will be OK with Essex for a little longer.

Maybe I’m a dreamer, but if we can close the deployment time gap then we accelerate adoption, innovation and happy hour.  If that means jilting Folsom at the release altar to elope with Grizzly then I can live with that.