Operators, they don’t want to swim Upstream

Nov 10, Palo Alto Operators Dinner

Last Tuesday, I had the honor of joining an OpenStack scale operators dinner. Foundation executives, Jonathan Bryce and Lauren Sell, were also on the guest list so talk naturally turned to “how can OpenStack better support operators.” Notably, the session was distinctly not OpenStack bashing.

The conversation was positive, enthusiastic and productive, but one thing was clear: the OpenStack default “we’ll fix it in the upstream” answer does not work for this group of operators.

What is upstreaming?  The nuance-free answer is that OpenStack drives fixes and changes into the next community release (longer description).  The project and community have a tremendous upstream imperative that pervades the culture so deeply that we take it for granted.  Have an issue with OpenStack?  Submit a patch!  Is there any other alternative?

Upstreaming [to trunk] makes perfect sense considering the project's vendor structure and governance; however, it is a very frustrating experience for operators.  OpenStack does have robust processes to backport fixes and sustain past releases and documentation; yet the feeling at the table was that they are not sufficiently operator-focused.

Operators want fast, incremental and pragmatic corrections to the code and docs they are deploying (which is often two releases back).  They want it within the community, not from individual vendors.

There are great reasons for focusing on upstream trunk.  It encourages vendors to collaborate and makes it much easier to add and expand the capabilities of the project.  Allowing independent activity on past releases creates a forward integration mess and could make upgrades even harder.  It will create divergence on APIs and implementation choices.

The risk of having a stable, independently sustained release is that operators have less reason to adopt the latest shiny release.  And that is EXACTLY what they are asking for.

Upstreaming is a core value to OpenStack and essential to our collaborative success; however, we need to consider that it is not the right answer to all questions.  Discussions at that dinner reinforced that pushing everything to latest trunk creates a significant barrier for OpenStack operators and users.

What are your experiences?  Is there a way to balance upstreaming with forking?  How can we better serve operators?

More Signal & Less Noise: my OpenStack Tokyo Retrospective

We’re building real business on OpenStack. This seems especially true in Asia where the focus is on using the core not expanding it. At the same time, we’ve entered the “big tent” era where non-core projects are proliferating.

Let’s explore what’s signal and what’s noise … but before we start, here are quick links to my summit videos:


OpenStack summits are really family reunions. While the aunts and uncles (Vendors) are busy showing off, all the cousins (Projects) are getting re-acquainted. Like any family, it's fun, competitive, friendly and sometimes dysfunctional.

Signal: Global Users and Providers

There are real deployments of OpenStack and real companies building businesses around the code base. I’ve been pleasantly surprised by the number of people quietly making OpenStack work. Why quietly? It’s still more of a struggle than it should be.

Signal: Demand for Heterogeneous and Interop Environments

There's no such thing as a monolithic cloud. Even within the community, Monty's Shade API normalizer is drawing attention. More broadly, everyone is using multiple cloud platforms, and the trend is accelerating thanks to container portability.
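As a concrete illustration of why that matters, here is a minimal sketch of shade-style normalization; the cloud names are assumed entries in a local clouds.yaml, and nothing here is tied to a particular vendor's deployment.

```python
# Minimal sketch: the same shade calls work against different OpenStack clouds.
# "cloud-a" and "cloud-b" are assumed entries in a local clouds.yaml file.
import shade

for cloud_name in ("cloud-a", "cloud-b"):
    cloud = shade.openstack_cloud(cloud=cloud_name)
    # list_servers() returns normalized server records, so the fields below
    # look the same regardless of which vendor or release sits behind the API.
    for server in cloud.list_servers():
        print(cloud_name, server.name, server.status)
```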

Signal: Container Workloads

Containers are dominating the cloud discussion for good reasons. They are pushing into OpenStack at the top (Platform), bottom (Deployment) and side (Scheduler). While OpenStack must respond architecturally, it's not yet clear if it can pivot fast enough from a virtualization focus to something broader (Mesos?).

Signal: Ansible

The lightweight DevOps tool seems to be winning the popularity contest. It may not be the answer to all problems, but it’s clearly part of helping solve a lot of them. Warning: Ansible complexity explodes on multi-tiered, scale and upgrade orchestration.

Signal: DefCore and Product Working Group (PWG)

Both efforts have crossed from concept into decision-making bodies within the community. The work is far from over. DefCore needs users to demand compliance from vendors. The Product WG needs developers to demand that their management sign on to PWG roadmaps.

Noise: Distro vs Service Argument

There are a lot of ways to consume OpenStack. None of them are wrong, but some are more aligned with individual vendor strategies than others. Claiming that one way to run OpenStack is more right than another undermines our overall operability and usability objectives.

Noise: Contributor Metrics

We've created a very commit-driven economy, and summits are vendors' favorite time to brag about their dedication to the community via upstreaming. These metrics are incomplete at best and potentially destructive to the health of the project as vendors compete to win the commit race instead of the quality and ecosystem race.

Noise: Big Tent

We've officially entered the "big tent" era of OpenStack. This governance change was led by the Technical Committee to address how we manage projects; however, there are broader user, operator, vendor and ecosystem implications. Unfortunately, even within the community, the platform implications of a loosely governed, highly inclusive community are not completely understood.

Overall, I left Tokyo enthusiastic about OpenStack's future as a platform and community; however, I also see that we have not structured how we mingle platform, community and ecosystem. This is especially true because OpenStack is just one part of the much broader cloud market, and work outside OpenStack continues to disrupt our plans. As a Board member, I'm hoping to start a discussion about this and want to hear your opinions.

To avoid echo chamber, OpenStack must embrace competitive cloud ecosystem

Japanese Bullet Train View

I was in Japan before the Tokyo summit on a bullet train to Kyoto watching the mix of heavy industry and bucolic mountains pass by. That scene reflects an OpenStack duality: we want to be both a dominant platform delivering core cloud services and an open source values driven collective.

First, I fundamentally believe in the success of OpenStack as the open virtual infrastructure management platform.

I believe that we have solved the virtual compute/storage/network problem sufficiently to become the de facto open IaaS platform. While not perfect, the technologies are sufficient assuming we continue to improve ease of use and operational hardening. Pursuing that base capability is my primary motivation for DefCore work.

I don’t believe that the OpenStack community is, or should try to become, the authority on “all things cloud.”

In the presence of Amazon, VMware, Microsoft and Google, we cannot make that claim with any degree of self-respect. Even newcomers like DigitalOcean have an undeniable footprint and influence. Those vendor platforms drive cloud ecosystems and technologies which foster fast innovation because there is no friction to joining their ecosystems and they are large and stable enough to represent a target market. We've seen clear signs from Rackspace, HP and others that platform diversity improves cloud strength.

I continue to think we (OpenStack) spend too much time evaluating what is “in” or “out” of the project and too little time talking about what’s “on,” “under” and “with” the project like Kubernetes, Mesos, Docker, SDN, Hadoop and Ceph. That type of thinking creates distance between OpenStack efforts and the majority of the market.

What motivates the drive to an all-open, captive community? It's the reasonable concern that critical parts of the infrastructure will become pay-to-play. For example, what if a non-OpenStack alternative to Heat orchestration gained popularity with OpenStack implementers, perhaps something that also ran on Amazon? That would create external pressure that would drive internal priorities. These "non-OpenStack" products would then have influence without having to contribute back upstream.

Can we afford to have external entities driving internal priorities? Hell yes, that’s what customer adoption looks like.

OpenStack does not own the market sufficiently to create a cloud echo chamber. The next wave of cloud innovation (my money is on container platforms) will follow the path of least resistance and widest adoption. We need to embrace that these innovations will not all be inside our community so that we can welcome them as part of our ecosystem. The community needs to find peace with that.

How do platforms die? One step at a time [the Fidelity Gap]

The RackN team is working on the “Start to Scale” position for Digital Rebar that targets the IT industry-wide “fidelity gap” problem.  When we started on the Digital Rebar journey back in 2011 with Crowbar, we focused on “last mile” problems in metal and operations.  Only in the last few months did we recognize the importance of automating smaller “first mile” desktop and lab environments.

A fidelity gap is created when work done on one platform (a developer laptop) does not translate faithfully to the next platform (a QA lab).  Since there are gaps at each stage of deployment, we end up with the ops staircase of despair.

These gaps hide defects until they are expensive to fix and make it hard to share improvements.  Even worse, they keep teams from collaborating.

With everyone trying out Container Orchestration platforms like Kubernetes, Docker Swarm, Mesosphere or Cloud Foundry (all of which we deploy, btw), it’s important that we can gracefully scale operational best practices.

For companies implementing containers, it's not just about turning their apps into microservice-enabled, immutable rock stars: they also need to figure out how to implement the underlying platforms at scale.

My example of fidelity gap harm is OpenStack’s “all in one, single node” DevStack.  There is no useful single system OpenStack deployment; however, that is the primary system for developers and automated testing.  This design hides production defects and usability issues from developers.  These are issues that would be exposed quickly if the community required multi-instance development.  Even worse, it keeps developers from dealing with operational consequences of their decisions.

What are we doing about fidelity gaps?  We've made it possible to run and faithfully provision multi-node systems in Digital Rebar on a relatively light system (16 GB RAM, 4 cores) using VMs or containers.  That system can then be fully automated with Ansible, Chef, Puppet and Salt.  Because of our abstractions, if a deployment works in Digital Rebar then it can scale up to hundreds of physical nodes.

My takeaway?  If you want to get to scale, start with the end in mind.

Deploy to Metal? No sweat with RackN's new Ansible Dynamic Inventory API

Content originally posted by Ansible & RackN, so I added a video demo.  Also, see Ansible's original post for more details about the multi-vendor "Simple OpenStack Initiative."

The RackN team takes our already super easy Ansible integration to a new level with added SSH key control and dynamic inventory in the recent OpenCrowbar v2.3 (Drill) release.  These two items make full metal control more accessible than ever for Ansible users.

The platform offers full key management.  You can add keys at the system, deployment (group of machines), and machine levels.  These keys are operator-settable and can be added and removed after provisioning has been completed.  If you want to control access on a per-server or group-of-servers basis, OpenCrowbar provides that control via our API, CLI and UI.
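To make the API option concrete, here is a rough sketch of what adding a machine-level key might look like; the URL path, attribute name, and credentials are assumptions made for illustration only, and the documented OpenCrowbar API, CLI, and UI remain the authoritative interfaces.

```python
# Hypothetical sketch of adding an operator SSH key at the machine level via
# the OpenCrowbar REST API. The URL path, attribute name, and credentials
# below are illustrative assumptions, not the documented schema.
import requests

CROWBAR_URL = "http://crowbar.example.com:3000"   # assumed admin endpoint
AUTH = ("crowbar", "crowbar")                     # assumed credentials

with open("/home/operator/.ssh/id_rsa.pub") as key_file:
    pub_key = key_file.read().strip()

# The same kind of call could target a deployment or the whole system rather
# than a single machine, matching the three scopes described above.
resp = requests.put(
    CROWBAR_URL + "/api/v2/nodes/compute-01/attribs/access_keys",  # assumed path
    json={"value": {"operator": pub_key}},
    auth=AUTH,
)
resp.raise_for_status()
```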

We also provide an API path for Ansible dynamic inventory.  Using the simple Python client script (reference example), you can instantly get a complete, up-to-date node inventory of your system.  The inventory data includes items like the number of disks, CPUs and amount of RAM.  If you've grouped machines in OpenCrowbar, those groups are passed to Ansible.  Even better, the metadata schema includes the networking configuration and machine status.

With no added configuration, you can immediately use Ansible as your multi-server CLI for ad hoc actions and installation using playbooks.
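For readers who haven't written one, a dynamic inventory source is just an executable that answers Ansible's --list (and optionally --host) calls with JSON.  The sketch below shows that contract; the OpenCrowbar endpoint, credentials, and field names in it are assumptions for illustration, so treat the reference example linked above as the authoritative client.

```python
#!/usr/bin/env python
# Hypothetical sketch of an Ansible dynamic inventory script backed by the
# OpenCrowbar API. The endpoint path, credentials and field names are
# illustrative assumptions, not the documented OpenCrowbar schema; only the
# Ansible side (--list / --host and the _meta.hostvars section) is standard.
import argparse
import json

import requests

CROWBAR_URL = "http://crowbar.example.com:3000"  # assumed admin endpoint
AUTH = ("crowbar", "crowbar")                    # assumed credentials


def fetch_nodes():
    # Assumed endpoint; see the reference example for the real client script.
    resp = requests.get(CROWBAR_URL + "/api/v2/nodes", auth=AUTH)
    resp.raise_for_status()
    return resp.json()


def build_inventory(nodes):
    # Ansible expects a JSON object of groups (each with a "hosts" list) plus
    # per-host variables under the reserved "_meta.hostvars" key.
    inventory = {"_meta": {"hostvars": {}}}
    for node in nodes:
        group = node.get("deployment", "ungrouped")         # assumed field
        inventory.setdefault(group, {"hosts": []})["hosts"].append(node["name"])
        inventory["_meta"]["hostvars"][node["name"]] = {
            "ansible_host": node.get("ip"),                 # assumed field
            "crowbar_cpus": node.get("cpus"),               # assumed field
            "crowbar_ram": node.get("ram"),                 # assumed field
        }
    return inventory


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--list", action="store_true")
    parser.add_argument("--host")
    args = parser.parse_args()

    if args.host:
        # Host vars already ship in _meta, so per-host lookups can stay empty.
        print(json.dumps({}))
    else:
        print(json.dumps(build_inventory(fetch_nodes()), indent=2))
```

Saved as an executable (for example, crowbar_inventory.py), a script like this can drive ad hoc runs such as "ansible all -i crowbar_inventory.py -m ping" or feed a full playbook.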

Of course, the OpenCrowbar tools are also available if you need remote power control or want a quick reimage of the system.

RackN respects that data centers are heterogeneous.  Our vision is that your choice of hardware, operating system and network topology should not break devops deployments!  That's why we work hard to provide useful abstracted information.  We want to work with you to help make sure that OpenCrowbar provides the right details to create best-practice installations.

For working with bare metal, there's no simpler way to deliver consistent, repeatable results.

DefCore Update – slowly taming the Interop hydra.

Last month, the OpenStack board charged the DefCore committee to tighten the specification. That means adding more required capabilities to the guidelines and reducing the number of exceptions (“flags”).  Read the official report by Chris Hoge.

Cartography by Dave McAlister is licensed under a Creative Commons Attribution 4.0 International License.

It turns out interoperability is really, really hard in heterogeneous environments because it's not just about the API – implementation choices change behavior.

I see this in both the cloud and physical layers. Since OpenStack is set up as a multi-vendor and multi-implementation (private/public) ecosystem, getting us back to a shared least common denominator is a monumental challenge. I also see a similar legacy in physical ops with OpenCrowbar, where each environment is a snowflake and operators constantly reinvent the same tooling instead of sharing expertise.

Lack of commonality means the industry wastes significant effort recreating operational knowledge for marginal return. Increasing interop means reducing variations which, in turn, increases the stakes for vendors seeking differentiation.

We've been working on DefCore for years so that we could get to this point. Our first real Guideline, 2015.03, was an intentionally low bar with nearly half of the expected tests flagged as non-required. While the latest guidelines do not add new capabilities, they substantially reduce the number of exceptions granted. Further, we are in the process of adding networking capabilities for the planned 2016.01 guideline (ready for community review at the Tokyo summit).

Even though these changes take a long time to become fully required for vendors, we can start testing interoperability of clouds using them immediately.

While the DefCore guidelines do have teeth via the Foundation licensing policy, vendors can take up to three years [1] to comply. That may sound slow, but the real authority of the program comes from customer and vendor participation, not enforcement [2].

For that reason, I’m proud that DefCore has become a truly diverse and broad initiative.

I'm further delighted by the leadership demonstrated by Egle Sigler, my co-chair, and Chris Hoge, the Foundation staff member leading DefCore implementation.  Happily, their enthusiasm is also shared by many other people with long-term DefCore investments, including mid-cycle attendees Mark Voelker (VMware), Catherine Diep (IBM), who is also the RefStack PTL, Shamail Tahir (EMC), Carol Barrett (Intel), Rocky Grober (Huawei), Van Lindberg (Rackspace), Mark Atwood (HP), Todd Moore (IBM), and Vince Brunssen (IBM). We also had four DefCore-related project PTLs join our mid-cycle: Kyle Mestery (Neutron), Nikhil Komawar (Glance), John Dickinson (Swift), and Matthew Treinish (Tempest).

Thank you all for helping keep DefCore rolling and working together to tame the interoperability hydra!

[1] On the current schedule – changes now take one year to become required – vendors have a three-year tail! Three years? Since the last two Guidelines remain active, the earliest the new networking capabilities can become required is after 2016.01 is superseded in January 2017. Vendors who (re)license just before that can use the mark for 12 months (until January 2018!).

[2] How can we make this faster? Simple: consumers need to demand that their vendor pass the latest guidelines. DefCore provides Guidelines, but consumers' checkbooks are the real power in the ecosystem.

Curious about SDN & OpenStack? We discuss at Open Networking Summit Panel (next Thursday)

Next Thursday (6/18), I’m on a panel at the SJC Open Networking Summit with John Zannos (Canonical), Mark Carroll (HP), Mark McClain (VMware).  Our topic is software defined networking (SDN) and OpenStack which could go anywhere in discussion.
OpenStack is clearly driving a lot of open innovation around SDN (and NFV).
I have no idea what others want to bring in, but I was so excited about the questions I suggested that I thought I'd just post them here with my answers as a teaser.

1) Does OpenStack require an SDN to be successful?

Historically, no.  There were two networking modes (nova-network and Neutron).  In the future, expect that some level of SDN will be required via the Neutron part of the project.

More broadly, SDN appears to be a critical component to broader OpenStack success.  Getting it right creates a lock-in for OpenStack.

2) If you have an SDN for OpenStack, does it need to integrate with your whole datacenter or can it be an island around OpenStack?

On the surface, you can create an island and get away with it.  More broadly, I think that SDN is most interesting if it provides network isolation throughout your data center or your hosting provider's data center.  You may not run everything on top of OpenStack, but you will be connecting everything together with networking.

SDN has the potential to be the common glue.

3) Of the SDN approaches, which ones seem to be working?  Why?

Overall, the overlay networking approaches seem to be leading.  Anything that requires central control and administration will have to demonstrate it can scale.  Anything that actually requires re-configuring the underlay networking quickly is also going to have to make a lot of progress.

Networking is already distributed.  Anything that breaks that design pattern has an uphill battle.

4) Are SDN and NFV co-dependent?  Are they driving each other?

Yes.  The idea of spreading networking functions throughout your data center to manage east-west or individual tenant requirements (my definition of NFV) requires a way to have isolated traffic (one of the uses for SDN).

5) Is SDN relevant outside of OpenStack?  If so, in what?

Yes.  SDN on containers will become increasingly important.  SDN termination into multi-user systems (like a big database) also makes sense.

6) IPv6?  A threat or assistance to SDN?

IPv6 is coming, really.  I think that IPv6 has isolation and encryption capabilities that compete with SDN as an overlay.  Widespread IPv6 adoption could make SDN less relevant.  It also does a better job for multi-cloud networking since it’s neutral and you don’t have to worry about which SDN tech your host is using.