Are VMs becoming El Caminos? Containers & Metal provide new choices for DevOps

I originally released this post as “VMS ARE DEAD” on DevOps.com two weeks ago. My point here is that Ops Automation (aka DevOps) is FINALLY growing beyond Cloud APIs and VMs. This creates a much richer ecosystem of deployment targets instead of having to shoehorn every workload into the same platform.

In 2010, it looked as if virtualization had won. We expected all servers to virtualize workloads and the primary question was which cloud infrastructure manager would dominate. Now in 2015, the picture is not as clear. I’m seeing a trend that threatens the “virtualize all things” battle cry.

Really, it’s two intersecting trends: metal is getting cheaper and easier while container orchestration is advancing on rockets. If metal can truck around the heavy stable workloads while containers zip around like sports cars, that leaves VMs as a strange hybrid in the middle.

What’s the middle? It’s the El Camino, that notorious discontinued half car, half pick-up truck.

The explosion of interest in containerized workloads (I know, they’ve been around for a long time but Docker made them sexy somehow) has been creating a secondary wave of container orchestration. Five years ago, I called that Platform as a Service (PaaS) but this new generation looks more like a CI/CD pipeline plus DevOps platform than our original PaaS concepts. These emerging pipelines obfuscate the operational environment differently than virtualized infrastructure (let’s call it IaaS). The platforms do not care about servers or application tiers; their semantics are about connecting services together. It’s a different deployment paradigm that’s more about SOA than resource reservation.

On the other side, we’ve been working hard to make physical ops more automated using the same DevOps tool chains. To complicate matters, the physics of silicon has meant that we’ve gone from scale up to scale out. Modern applications are so massive that they are going to exceed any single system, so economics drives us to lots and lots of small, inexpensive servers. If you factor in the operational complexity and cost of hypervisors/clouds, a small dedicated server is a cost-effective substitute for a comparable virtual machine.

I’ll repeat that: a small dedicated server is a cost-effective substitute for a comparable virtual machine.

I am not speaking against virtualized servers or clouds. They have a critical role in data center operations; however, I hear from operators who are rethinking the idea that all servers will be virtualized and moving towards a more heterogeneous view of their data center. One where they have a fleet of trucks, sports cars and El Caminos.

Of course, I’d be disingenuous if I neglected to point out that trucks are used to transport cars too. At some point, everything is metal.

Want more metal-friendly reading?  See Packet CEO Zac Smith’s thinking on this topic.

My OpenStack Super User Interview [cross-post]

This interview originally appeared on the OpenStack Superuser site on 3/23 under the title “OpenStack at 10: different code, same collaboration?”

With over 15 years of cloud experience, Rob Hirschfeld also goes way back with OpenStack. His involvement dates to before it was officially founded and he was also one of the initial Board Members. In addition to his role as Individual Director, Hirschfeld currently chairs the DefCore committee. He’ll be speaking about DefCore at the upcoming Vancouver Summit with Alan Clark, Egle Sigler and Sean Roberts.

He talks to Superuser about the importance of patches, priorities for 2015 and why you should care about OpenStack vendors making money.

Superuser: You’ve been with the project since before it started, where do you hope it will be in five years?

In five years, I expect that nearly every line of code will have been replaced. The thing that will endure is the community governance and interaction models that we’re working out to ensure commercial collaboration.

[3/24 Added Clarification Note: I am humbled watching traditionally open-unfriendly corporations using OpenStack to learn how to become open source collaborators.  Our governance choices will have long-lasting ramifications in the industry.]

What is something that a lot of people don’t know about OpenStack?

It was essentially a “rewrite fork” of Eucalyptus created because they would not accept patches.  That’s a cautionary tale about why accepting patches is essential, one that should not get lost from the history books.

Any thoughts on your first steps to the priorities you laid out in your candidacy profile?

I’ve already started to get DefCore into an execution phase with additional Board and Foundation leadership joining the effort.  We’ve set a very active schedule of meetings with two sub-committees running in parallel… It’s going to be a busy spring.

You say that the company you founded, RackN, is not creating an OpenStack product. How are you connected to the community?

RackN supports OpenCrowbar, which provides physical ready state infrastructure for scale platforms like OpenStack. We are very engaged in the community from below by helping make other distributions, vendors and operators successful.

What are the next steps to creating the “commercially successful ecosystem” you mentioned in your candidacy profile? What are the biggest obstacles to this?

We have to make stability and scale a critical feature. This will mean slowing features and new projects; however, I hear a lot of frustration that OpenStack is not focused on delivering a solid base.

Without a base, the vendors cannot build profitable products.  Without profits, they cannot keep funding the project. This may be radical for an open project, but I think everyone needs to care more if vendors are making money.

What are some more persistent myths about the cloud?

That the word cloud really means anything.  Everyone has their own definition.  Mine is “infrastructure with an API” but I’d happily tell you it’s also about process and ops.

Who are your real-life heroes?

FIRST (For Inspiration and Recognition of Science and Technology) founders Dean Kamen and Woodie Flowers. They executed a real vision about how to train for both competition and collaboration in the next generation of engineers.  Their efforts in building the next generation of leaders really impact how we approach open source collaboration. That’s real innovation.

What do you hope to get out of the next summit?

First, I want to see vendors passing DefCore requirements.  After that, I’d like to see the operators get more equal treatment and I’m hoping to spend more time working with them so they can create places to share knowledge.

What’s your favorite/most important OpenStack debate?

There are two.  First, I think the API vs. implementation debate is a critical growth curve for OpenStack.  We need to mature past being so implementation-driven so we can have stand-alone APIs.

Second, I think the “benevolent dictator” discussion is useful. Since we are never going to have one, we need a real discussion about how to define and defend project-wide priorities in a meaningful way.  Resolving both items is essential to our long-term viability.

OpenStack DefCore Process Draft Posted for Review [major milestone]

The OpenStack DefCore Committee is looking for community feedback about the proposed DefCore Process.

March has been a month for OpenStack DefCore milestones.  At the March Board meeting, we approved the first official DefCore Guideline (called DefCore 2015.03) and we are poised to commit the first DefCore Process draft.

Once this initial commit is approved by the DefCore Committee (expected at DefCore Scale.8 Meeting 3/25 @ 9 PT), we’ll be ready for broader input by the community using the standard OpenStack Gerrit review process.  If you are not comfortable with Gerrit, we’ll take your input any way that you want to give it except via telepathy (we’ve already got a lot on our minds).

Note: We’re also looking for input on the 2015.next Guideline targeted for 2015.04.

The DefCore Process documents the rules (who, what, when and where) that will govern how we create the DefCore Guidelines.  By design, it has to be detailed and specific without adding complexity and confusion.  The why of DefCore comes from all the work we did on the principles that shape the process.

This process reflects nearly a year of gestation starting from the June 2014 DefCore face-to-face.  One of the notable recent refinements was to organize material into time phases and to be more specific about who is responsible for specific actions.

To make review easier, I’ve reposted the draft.  Comments are welcome here and on the patch once it lands in Gerrit.

DRAFT: OpenStack DefCore Process 2015A (reposted from OpenStack/DefCore)

This document describes the DefCore process required by the OpenStack bylaws and approved by the OpenStack Technical Committee and Board.

Expected Timeline:

Time Frame | Milestone | Activities                          | Lead By
-3 months  | S-3       | “preliminary” draft (from current)  | DefCore
-2 months  | S-2       | ID new Capabilities                 | Community
-1 month   | S-1       | Score capabilities                  | DefCore
Summit     | S         | “solid” draft                       | Community
Summit     | S         | Advisory items selected             | DefCore
+1 month   | S+1       | Self-testing                        | Vendors
+2 months  | S+2       | Test flagging                       | DefCore
+3 months  | S+3       | Approve Guidance                    | Board

Note: DefCore may accelerate the process to correct errors and omissions.
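
To make the relative schedule easier to read, here is a minimal Python sketch that projects the S-3 through S+3 milestones onto calendar dates. The Summit date is only a hypothetical example, and the milestone labels paraphrase the table above.

```python
# Illustrative only: project the milestone offsets onto dates around a Summit.
# The Summit date is a made-up example; only the month offsets come from the table.
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date by whole months, clamping the day to the 28th to stay valid."""
    month_index = d.year * 12 + (d.month - 1) + months
    year, month_zero = divmod(month_index, 12)
    return date(year, month_zero + 1, min(d.day, 28))

SUMMIT = date(2015, 5, 18)  # example only

MILESTONES = [
    ("S-3 preliminary draft (DefCore)", -3),
    ("S-2 ID new capabilities (Community)", -2),
    ("S-1 score capabilities (DefCore)", -1),
    ("S   solid draft / advisory items (Community, DefCore)", 0),
    ("S+1 vendor self-testing (Vendors)", 1),
    ("S+2 test flagging (DefCore)", 2),
    ("S+3 approve guidance (Board)", 3),
]

for label, offset in MILESTONES:
    print(f"{add_months(SUMMIT, offset)}  {label}")
```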

Process Definition


Talking Functional Ops & Bare Metal DevOps with vBrownBag [video]

Last Wednesday (3/11/15), I had the privilege of talking with the vBrownBag crowd about Functional Ops and bare metal deployment.  In this hour, I talk about how functional operations (FuncOps) works as an extension of ready state.  FuncOps is a critical concept for providing abstractions to scale heterogeneous physical operations.

Timing for this was fantastic since we’d just worked out ESXi install capability for OpenCrowbar (it will be exposed for work starting on Drill, the next Crowbar release cycle).

Here’s the brown bag:

If you’d like to see a demo, I’ve got hours of them posted:

Video Progression

Crowbar v2.1 demo: Visual Table of Contents [click for playlist]

Can Digital Workers Deliver? No. [cloud culture vs. traditional management]

In this 8-post series, Brad Szollose and Rob Hirschfeld invite you to share in our discussion about failures, fights and frightening transformations going on around us as digital work changes workplace deliverables, planning and culture.

Digital workers will not deliver. Not if you force them into the 20th century management model; do that, and they (and you) will fail miserably. However, we believe they can outperform previous generations if guided correctly. In the 21st Century, digital technologies have fundamentally transformed both the way we work and, more importantly, how we have learned to work.

So far, we’ve framed this transformation as a generational (Boomers vs Millennials) challenge; however, workers today transcend those boundaries. We believe that we need to redefine the debate from cultural viewpoints of Boomers (authority driven leadership) and Millennials (action driven leadership). In the global, digital workforce, these perspectives transcend age.

We looked to performing music as a functional analogy for leadership.

In music, we saw very different leadership cultures at work in symphonic and jazz performances. The symphony orchestra mirrors the Boomer culture expectation of clear leadership hierarchy and top-down directed effort. The jazz band typifies the Millennial cultural norms of fluid leadership based on technical competence where the direction is a general theme and the players evolve the details. Both require technical acumen and have very clear rules for interaction with the art form. More importantly, these two extremes both produce wonderful music, but they are miles apart in execution.

Today’s workforce generations often appear the same way: unable to execute together. We believe strongly that, like symphonies and jazz concerts, both approaches have strengths and weaknesses. The challenge is to understand and adapt your leadership to the cultural language of your performers.

That is what Brad and Rob have been discussing together for years and, now, we’d like to include you in our conversation about how Cloud Culture is transforming our work force.

DefCore Process 9 Point Graphic balances Community, Vendor, Governance

I’ve been working on the OpenStack DefCore process for nearly 3 years and our #1 challenge remains how to explain it simply.

10 days ago, the DefCore committee met face-to-face in Austin to work on documenting the process that we want to follow (see Guidelines).  As we codify DefCore, our top priority is getting community feedback and explaining the process without expecting everyone to read the actual nuts-and-bolts of the process.

I think of it as writing the DefCore preamble: “We, the community, in order to form a more perfect cloud….”

I don’t think we’ve reached that level of simplicity; however, we have managed to boil down our thinking into nine key points.  I’m a big Tufte fan and believe that visualizations are essential to understanding complex topics.  In my experience, it takes many, many iterations with feedback to create excellent graphics.  This triangle is my first workable pass.

An earlier version of these points was presented to the OpenStack board in December 2014 and we’ve been able to refine them during the latest DefCore community discussions.

We’re interested in hearing your opinions.  Here are the current (2015-Feb-22) points:

  1. COMMUNITY INVOLVEMENT
    1. MAPPING FEATURE AVAILABILITY: We are investing in data-driven feedback tools with community involvement to engage the largest possible base for core decisions.  This mapping information will be available to the community.
    2. COMMUNITY CHOSEN CAPABILITIES: Going forward, we want a community process to create, cluster and describe capabilities.  DefCore bootstrapped this process for Havana.  Further, capabilities are defined by tests in Tempest, so test coverage gaps (like Keystone v2) translate into Core gaps that the community will fill by writing tests.
    3. TESTS AS TRUTH: DefCore could expand in the future, but uses Tempest as the source of tests for now.  Gaps in test coverage will result in DefCore gaps.  We are hosting the final documents in Gerrit, using the OpenStack review process to ensure that we work within community processes.
  2. VENDOR
    1. CLEAR RESULTS (PASS-FAIL): Vendors must pass all required core tests as defined by this process.  There are no partial results (see the sketch after this list).  Passing additional tests is encouraged but not required.
    2. VENDORS SELF-TEST: Companies are responsible for running tests and submitting to the Foundation for validation against the DefCore criteria.  Approved vendor reports will be available for the community.
    3. APPEAL PROCESS / FLAGGED TESTS: There is a “safety valve” for vendors to deal with test scenarios that are currently difficult to recreate in the field.  We expect flags to be temporary.
  3. GOVERNANCE
    1. SCORING BASED ON TRANSPARENT PROCESS (DEFCORE): The 2015 bylaws change requires the Board and TC to agree to a process by which the Foundation can hold OpenStack Vendors accountable for their use of the trademarks.
    2. BOARD IS FINAL AUTHORITY: The Board is responsible for approving the final artifacts based on the recommendations.  By having a transparent process, community input is expected in advance of that approval.
    3. TIMELY GUIDANCE: The process is time sensitive.  There’s a need for the Board to produce DefCore guidance in a timely way after each release and then feed that result into the next cycle.  The guidance is expected to be drafted for review at each Summit and then approved at the Board meeting three months after the draft is posted.
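
To make the all-or-nothing check in the CLEAR RESULTS point concrete, here is a minimal Python sketch. The capability names and results are invented for illustration; they are not the real DefCore Guideline schema or actual Tempest output.

```python
# Illustrative only: the pass/fail rule means a vendor either passes every
# required capability or fails outright; extra passing tests do not compensate.
# Capability names below are hypothetical, not taken from a real Guideline.

REQUIRED_CAPABILITIES = {
    "compute-servers-create",
    "compute-servers-list",
    "identity-tokens-issue",
}

def meets_guideline(vendor_results: dict) -> bool:
    """Return True only if every required capability passed.

    There are no partial results: one failing or missing capability
    means the vendor does not meet the Guideline.
    """
    return all(vendor_results.get(cap, False) for cap in REQUIRED_CAPABILITIES)

if __name__ == "__main__":
    results = {
        "compute-servers-create": True,
        "compute-servers-list": True,
        "identity-tokens-issue": False,  # one failure is enough to fail overall
        "object-storage-extra": True,    # encouraged, but does not offset the failure
    }
    print("Meets Guideline:", meets_guideline(results))  # -> False
```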

OpenStack DefCore Accelerates & Simplifies with Clear and Timely Guidelines [Feedback?]

Last week, the OpenStack DefCore committee rolled up our collective sleeves and got to work in a serious way.  We had an in-person meeting with great turnout: 5 board members, Foundation executives/staff and good community engagement.

TL;DR > We think DefCore deliverables should be dated milestone guidelines instead of being tightly coupled to release events (see graphic).

DefCore has a single goal expressed from two sides: 1) defining the “what is OpenStack” brand for Vendors and 2) driving interoperability between OpenStack installations.  From that perspective, it is not about releases, but about testable stable capabilities.  Over time, these changes should be incremental and, most importantly, trail behind new features that are added.

For those reasons, it was becoming confusing for DefCore to focus on an “Icehouse” definition when most of the capabilities listed were “Havana” ones.  We also created significant time pressure to get the “Kilo DefCore” out quickly after the release even though there were no Kilo-specific additions covered.

In the face-to-face, we settled on a more incremental approach.  DefCore would regularly post a set of guidelines for approval by the Board.  These Guidelines would include the required, deprecated (leaving) and advisory (coming) capabilities that Vendors need in order to use the mark (see footnote*).  As part of defining capabilities, we would update which capabilities were included in each component and which components were required for the OpenStack Platform.  They would also include the relevant designated sections.  These Guidelines would use the open draft and discussion process that we are outlining for approval in Vancouver.

Since DefCore Guidelines are simple, time-based lists of capabilities, the vendors and community can simply reference an approved Guideline using the date of approval (for example DefCore 2015.03) and know exactly what was included.  While each Guideline stands alone, it is easy to compare them for incremental changes.
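
As a rough sketch of how that incremental comparison might look, the snippet below treats two dated Guidelines as simple capability sets and diffs them. The 2015.03 label comes from this post, but the capability entries are invented for illustration and do not reflect the actual Guideline contents.

```python
# Illustrative only: comparing two dated Guidelines as plain capability lists.
# Capability names are hypothetical; real Guidelines live in the openstack/defcore repo.

guideline_2015_03 = {
    "compute-servers-create",
    "compute-servers-list",
    "identity-v2-tokens",
}

guideline_2015_next = {
    "compute-servers-create",
    "compute-servers-list",
    "identity-v3-tokens",  # e.g. an advisory capability promoted to required
}

added = sorted(guideline_2015_next - guideline_2015_03)
removed = sorted(guideline_2015_03 - guideline_2015_next)

print("Added since 2015.03:  ", added)    # ['identity-v3-tokens']
print("Removed since 2015.03:", removed)  # ['identity-v2-tokens']
```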

We’ve been getting positive feedback about this change; however, we are still discussing it and appreciate your input and questions.  It is very important for us to make DefCore simple and easy.  For that, your confused looks and WTF? comments are very helpful.

* footnote: the Foundation manages the OpenStack brand and the process includes multiple facets.  The DefCore Guidelines are just one part of the brand process.