OpenStack DefCore Update & 7/16 Community Reviews

This post covers the OpenStack Board effort to define “what is core” for commercial use (aka DefCore).  I have blogged extensively about this topic and rely on you to review that material, because this post focuses on updates from recent activity.

First, Please Join Our Community DefCore Reviews on 7/16!

We’re reviewing the current DefCore process & timeline, then talking about the Advisory Havana Capabilities Matrix (decoder).

To support global access, there are TWO meetings (both will also be recorded):

  1. July 16, 8 am PDT / 1500 UTC
  2. July 16, 6 pm PDT / 0100 UTC July 17

Note: I’m presenting about DefCore at OSCON on 7/21 at 11:30!

We want community input!  The Board is going to discuss and, hopefully, approve the matrix at our next meeting on 7/22.  After that, the Board will be focused on defining Designated Sections for Havana and Icehouse (the TC is not owning that as previously expected).

The DefCore process is gaining momentum.  We’ve reached the point where there are tangible (yet still non-binding) results to review.  The RefStack effort to collect community test results from running clouds is underway: the Core Matrix will be fed into RefStack to validate against the DefCore required capabilities.  A minimal sketch of that validation follows.
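To make that flow concrete, here is a minimal sketch of how this kind of validation could work.  Everything in it is an assumption for illustration: the capability names, test names, the REQUIRED_CAPABILITIES structure and the validate_cloud helper are hypothetical, and the “all tests must pass” rule is just one possible policy, not RefStack’s actual data model or API.

```python
# Illustrative only: the capability names, test names, and the
# "all tests must pass" rule below are assumptions, not RefStack's
# actual data model or the real DefCore matrix contents.

# Each capability maps to one or more Tempest tests; a single test
# may also appear under several capabilities (many-to-many).
REQUIRED_CAPABILITIES = {
    "compute-servers-create": [
        "tempest.api.compute.servers.test_create_server",
    ],
    "identity-token-auth": [
        "tempest.api.identity.test_tokens",
    ],
}

def validate_cloud(passed_tests):
    """Report which required capabilities a cloud demonstrates.

    passed_tests: set of Tempest test names that passed against the
    cloud.  Here a capability counts as met only if every one of its
    tests passed.
    """
    return {capability: all(test in passed_tests for test in tests)
            for capability, tests in REQUIRED_CAPABILITIES.items()}

# Example: a cloud whose results only include the compute test.
report = validate_cloud({"tempest.api.compute.servers.test_create_server"})
for capability, met in sorted(report.items()):
    print(capability, "PASS" if met else "FAIL")
```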

Now is the time to make adjustments and corrections!  

In the next few months, we’re going to be locking in more and more of the process as we get ready to make it part of the OpenStack by-laws (see bottom of minutes).

If you cannot make these meetings, we still want to hear from you!  The most direct way to engage is via the DefCore mailing list, but 1×1 email works too!  Your input is important to us!

OpenStack Core Definition (DefCore) Progress in 6 key areas

DefCore Elephant Cycle

I’m excited to report about the OpenStack Board progress on defining OpenStack core.  At the Hong Kong summit, Joshua McKenty and I were asked to chair a new standing committee, now known as DefCore, to define “OpenStack Core” based on the core principles that we determined over the last 6 months (aka “the spider”).

Joshua and I took on the challenge with gusto, and I’m proud to say that we’ve already made significant progress against an aggressive timeline to have the pilot must-pass tests for Havana defined before the Juno Summit in May 2014.  It’s important to remember that we’re moving from a project-based definition of core to test-driven capabilities because this best addresses our interoperability objectives.

In the 8 weeks since the summit, we’ve had six very productive meetings (etherpads for Prep, DefCore.1, DefCore.2, Criteria 1 and 2) with detailed notes and recorded content. Here’s my summary of our results so far:

  1. An Aggressive Timeline for having pilot Havana must-pass tests approved by the Juno summit in May 2014.  That drives the schedule backward toward a preliminary list in March.  Once we have a pilot list for Havana, we expect to have Icehouse done 90 days later and Juno done at the Paris summit.

  2. Test Selection Criteria: a preliminary set of 14 criteria (which needs a stand-alone post) that will be used to quantitatively score the current 700+ tests.  We also agreed to use a max 100-point weighting system for the criteria.  The weights and score requirement will be refined iteratively once we have done a first scoring pass (see the scoring sketch after this list).  Our objective is to make must-pass test selection as objective and transparent as possible (post with details).

  3. Distinction between Capability & Test is important because we recognize that individual tests may validate multiple capabilities and individual capabilities may have multiple tests.  Our hope is to present the results in terms of capabilities not individual tests.

  4. Holding Off on Bylaws Changes needed to clarify how OpenStack manages core definition.  It was widely expected that the DefCore committee would have to make changes to the OpenStack bylaws; however, we believe we can proceed without rushing changes.  We have an active subcommittee preparing changes in advance of the next DefCore cycle.

  5. Program vs. Project Definition efforts are needed to take pressure off requests to have “projects promoted to core status” and to clarify how the OpenStack trademark is used for projects.  We are trying to clarify that OpenStack Programs (e.g.: OpenStack™ Compute) carry the trademark while OpenStack Projects (e.g.: Nova and Glance) are members of those programs and do not carry the OpenStack trademark directly.  Consequently, we’d expect people to say “OpenStack Compute Project Nova” instead of “OpenStack Nova.”  This approach addresses several issues that impact DefCore Board activities around trademark, core and brand.

  6. RefStack Development and Use Cases provide the framework for community reporting of test results.  We consider this infrastructure critical to getting community input about must-pass tests and also sharing interoperability information.  This effort is just beginning and needs help from the community.
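To make the scoring idea in item 2 concrete, here is a minimal sketch of a max-100-point weighted pass over tests.  The criteria names and weights are placeholders (the committee’s actual 14 criteria are not reproduced here), and score_test is a hypothetical helper.

```python
# Hypothetical criteria and weights: placeholders only, not the
# committee's actual 14 criteria.  Weights sum to 100 so the top
# possible score is 100 points.
CRITERIA_WEIGHTS = {
    "widely_deployed": 30,   # capability is used in most real clouds
    "used_by_tools": 20,     # external SDKs/tools depend on it
    "stable_api": 25,        # API has been stable across releases
    "discoverable": 25,      # clients can discover the feature
}

def score_test(test_flags):
    """Score one test: sum the weights of the criteria it meets.

    test_flags maps criterion name -> bool (does this test meet
    that criterion?).
    """
    return sum(weight
               for criterion, weight in CRITERIA_WEIGHTS.items()
               if test_flags.get(criterion, False))

# Example: a test meeting three of the four placeholder criteria
# scores 30 + 20 + 25 = 75 points.
print(score_test({"widely_deployed": True,
                  "used_by_tools": True,
                  "stable_api": True}))
```

In practice, the must-pass cut would come from applying a threshold to scores like these and then iterating on the weights, as described above.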

For all this progress, we are only starting!  We’ve cleared the blocks preventing implementation, and that will expose a new set of issues to discuss.  Look for us to start applying the criteria to tests in the next months.  That will quickly expose the strengths and weaknesses of our criteria set.  We’ve also got to make progress on Program vs. Project and start RefStack coding.

We want community participation!  Please let us know what you think.

OpenStack steps toward Interoperability with Tempest, RAs & RefStack.org

I’m a cautious supporter of OpenStack leading with implementation (over API specification); however, it clearly has risks.  OpenStack has the benefit of many live sites operating at significant scale.  The short-term cost is that those sites were not fully interoperable (progress is being made!).  Even if they were, we lack the means to validate that they are.

The interoperability challenge was a major theme of the Havana Summit in Portland last week (panel I moderated).  Solving it creates real benefits for the OpenStack community, and those benefits translate into significant financial opportunities for the OpenStack ecosystem.

This is a journey that we are on together – it’s not a deliverable from a single company or a release that we will complete and move on.

There were several themes that Monty and I presented during Heat for Reference Architectures (slides).  It’s pretty obvious that interop is valuable (I discuss why you should care in this earlier post) and that running a cloud means dealing with hardware, software and ops in equal measures.  We also identified lots of important items like Open Operations, Upstreaming, Reference Architecture/Implementation and Testing.

During the session, I think we did a good job stating how we can use Heat for an RA to make incremental steps.  I also had a session about upgrades (slides).

Even with all this progress, testing for interoperability was one of the largest gaps.

The challenge is not whether we should test, but how to create a set of tests that everyone will accept as adequate.  Approaching that goal with a standardization or specification objective is likely an impossible challenge.

Joshua McKenty & Monty Taylor found a starting point for interoperability FITS testing: “let’s use the Tempest tests we’ve got.”

We should question the assumption that faithful implementation test specifications (FITS) for interoperability are only useful with a matching specification and significant API coverage.  Any level of coverage provides useful information and, more importantly, visibility accelerates contributions to the test base.

I can speak from experience that this approach has merit.  The Crowbar team at Dell has been including OpenStack Tempest as part of our reference deployment since Essex, and it runs as part of our automated test infrastructure against every build.  This process does not catch every issue, but passing Tempest is a very good indication that you’ve got a workable OpenStack deployment.
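As a rough illustration of that kind of gate, here is a sketch of failing a build when Tempest fails.  The command line is an assumption: Tempest invocation has changed across releases (testr, tox, and later the `tempest run` CLI), so substitute whatever your deployment uses.

```python
# Sketch of gating a build on a Tempest run.  The command below is
# an assumption -- adjust it to however your deployment invokes
# Tempest (testr, tox, or the newer `tempest run` CLI).
import subprocess
import sys

def gate_on_tempest(command=("tempest", "run", "--smoke")):
    """Run Tempest and fail the build if any test fails."""
    result = subprocess.run(command)
    if result.returncode != 0:
        print("Tempest failed: rejecting this build.", file=sys.stderr)
        sys.exit(result.returncode)
    print("Tempest passed: the deployment looks workable.")

if __name__ == "__main__":
    gate_on_tempest()
```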