While the current thinking toward a testing-based definition of Core adds pressure to expand our test suite, it seems to pass the community’s fairness checks.
Overall, the discussions lead me to believe that we’re on the right track because they have jumped from process to impacts. It’s not too late! We’re continuing to get community feedback. So what’s next?
First… get involved: Upcoming Community Core Discussions
Week Before Summit: Beijing Meetup hosted by Alan Clark (details TBD)
These discussions are expected to have online access via Google Hangout. Watch Twitter when the event starts for a link.
Want to discuss this in your meetup? Reach out to me or someone on the Board and we’ll be happy to find a way to connect with your local community!
What’s Next? Implementation!
So far, the Core discussion has been about defining the process that we’ll use to determine what is core. Assuming we move forward, the next step is to implement that process by selecting which tests are “must pass.” That means we have to both figure out how to pick the tests and do the actual work of picking them. I suspect we’ll also find testing gaps that will have developers scrambling in Ice House.
Here’s the possible (aggressive) timeline for implementation:
November: Approval of approach & timeline at next Board Meeting
January: Publish Timeline for Roll out (ideally, have usable definition for Havana)
March: Identify Havana must pass Tests (process to be determined)
April: Integration w/ OpenStack Foundation infrastructure
Obviously, there are a lot of details to work out! I expect that we’ll have an interim process to select must-pass tests before we can have a full community driven methodology.
There is still confusion around the idea that OpenStack Core requires using some of the project code. This requirement helps ensure that people claiming to be OpenStack core have a reason to contribute, not just replicate the APIs.
It’s easy to overlook that we’re trying to define a process for defining core, not core itself. We have spent a lot of time testing how individual projects may be affected by possible outcomes. In the end, we’ll need actual data.
There are some clear anti-goals in the process that we are not ready to discuss but that will quickly become issues. They are:
Using the OpenStack name for projects that pass the API tests but don’t implement any OpenStack code. (e.g.: an OpenStack Compatible mark)
Having speciality testing sets for flavors of OpenStack that are different than core. (e.g.: OpenStack for Hosters, OpenStack Private Cloud, etc)
We need to be prepared for the list of “must pass” tests to identify a smaller core than is currently defined. It’s possible that some projects will no longer be “core.”
The idea that we’re going to use real data to recommend tests as must-pass is positive; however, the time it takes to collect the data may be frustrating.
People love to lobby for their favorite projects. Gaps in testing may create problems.
We are about to put a lot of pressure on the testing efforts and that will require more investment and leadership from the Foundation.
Some people are not comfortable with self-reporting test compliance. Overall, market pressure was considered enough to punish cheaters.
There is a perceived risk of confusion as we migrate between versions. OpenStack Core for Havana seems too specific, but there is concern that vendors may pass in one release and then skip re-certification. Once again, market pressure seems to be an adequate answer.
It’s not clear if a project with only 1 must-pass test is a core project. Likely, it would be considered core. Ultimately, people seem to expect that the tests will define core instead of the project boundary.
What do you think? I’d like to hear your opinions on this!
Note 11/3: The Core Definition is now maintained on the OpenStack Wiki. This list may not reflect the latest changes.
Implementations that are Core can use OpenStack trademark (OpenStack™)
This is the legal definition of “core” and why it matters to the community.
We want to make sure that the OpenStack™ mark means something.
The OpenStack™ mark is not the same as the OpenStack brand; however, the Board uses its control of the mark as a proxy to help manage the brand.
Core is a subset of the whole project
The OpenStack project is supposed to be a broad and diverse community with new projects entering incubation and new implementations being constantly added. This innovation is vital to OpenStack but separate from the definition of Core.
There may be other marks that are managed separately by the foundation, and available for the platform ecosystem as per the Board’s discretion
An “OpenStack API Compatible” mark is not part of this discussion and should not be assumed.
Core definition can be applied equally to all usage models
There should not be multiple definitions of OpenStack depending on the operator (public, private, community, etc)
While deployments are not expected to be identical, the differences must be quantifiable
Claiming OpenStack requires use of designated upstream code
Implementations claiming the OpenStack™ mark must use the OpenStack upstream code (or be using code submitted to upstream)
You are not OpenStack if you pass all the tests but do not use the API framework
This prevents people from using the API without joining the community
This also surfaces bit-rot in alternate implementations to the larger community
This behavior improves interoperability because there is more shared code between implementations
Projects must have an open reference implementation
OpenStack will require an open source reference base plug-in implementation for projects (if not part of OpenStack, license model for reference plug-in must be compatible).
Definition of a plug-in: alternate backend implementations with a common API framework that uses common _code_ to implement the API
Projects (where technically feasible) are expected to implement a plug-in or extension architecture.
This is already in place for several projects; it addresses concerns around ecosystem support and enables innovation
Reference plug-ins are, by definition, the complete capability set. It is not acceptable to have “core” features that are not functional in the reference plug-in
This will enable alternate implementations to offer innovative or differentiated features without forcing changes to the reference plug-in implementation
This will enable the reference to expand without forcing other alternate implementations to match all features and recertify
Vendors may substitute alternate implementations
If a vendor plug-in passes all relevant tests then it can be considered a full substitute for the reference plug-in
If a vendor plug-in does NOT pass all relevant tests, then the vendor is required to include the open source reference in the implementation.
Alternate implementations may pass any tests that make sense
Alternate implementations should add tests to validate new functionality.
They must have all the must-pass tests (see #10) to claim the OpenStack mark.
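To make the substitution rules above concrete, here is a minimal sketch in Python. The function names, test identifiers, and data shapes are all hypothetical illustrations, not part of any actual OpenStack or Tempest tooling; the point is just that substitution and mark eligibility reduce to subset checks over test results.

```python
# Hypothetical sketch of the substitution and mark rules; all names
# and test identifiers here are illustrative only.

def can_substitute(passed: set[str], relevant: set[str]) -> bool:
    """A vendor plug-in may replace the reference plug-in only if it
    passes every test relevant to that plug-in."""
    return relevant <= passed

def can_claim_mark(passed: set[str], must_pass: set[str]) -> bool:
    """Claiming the OpenStack mark requires passing ALL must-pass tests."""
    return must_pass <= passed

# Example data (invented for illustration):
must_pass = {"compute.boot", "compute.list", "image.upload"}
vendor_results = {"compute.boot", "compute.list"}

print(can_substitute(vendor_results, {"compute.boot", "compute.list"}))  # True
print(can_claim_mark(vendor_results, must_pass))  # False: image.upload missing
```

Note how the two checks are independent: a plug-in can be a valid substitute in its own area while the overall implementation still falls short of the mark.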
OpenStack Implementations are verified by open community tests
Vendor OpenStack implementations must achieve 100% of must-have coverage?
Implemented tests can be flagged as may-have; this requires a list [Joshua McKenty]
Certifiers will be required to disclose their testing gaps.
This will put a lot of pressure on the Tempest project
Maintenance of the testing suite is to become a core Foundation responsibility. This may require additional resources.
Implementations and products are allowed to have variation based on publication of compatibility
Consumers must have a way to determine how the system is different from reference (posted, discovered, etc)
Testing must respond in an appropriate way on BOTH pass and fail (the wrong return rejects the entire suite)
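The "respond appropriately on BOTH pass and fail" requirement can be sketched as a simple validity check on a test run: any outcome other than a clean pass or a clean fail (an error, a hang, a skip) invalidates the entire suite. This is an illustrative sketch only; the dictionary shape and outcome strings are assumptions, not a real certification harness.

```python
# Illustrative sketch (hypothetical data model): a certification run is
# only trustworthy if every test reports a definite pass or fail.

def validate_suite(results: dict[str, str]) -> bool:
    """Each test must report exactly 'pass' or 'fail'; any other
    outcome (error, skip, timeout) rejects the entire run."""
    return all(outcome in ("pass", "fail") for outcome in results.values())

good_run = {"compute.boot": "pass", "image.upload": "fail"}
bad_run = {"compute.boot": "pass", "image.upload": "error"}

print(validate_suite(good_run))  # True: a clean fail is still a valid answer
print(validate_suite(bad_run))   # False: the wrong return rejects the suite
```

The key design point is that a failing test is still a *valid* result; only an indeterminate result disqualifies the run.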
Tests can be remotely or self-administered
Plug-in certification is driven by Tempest self-certification model
Self-certifiers are required to publish their results
Self-certifiers are required to publish enough information that a 3rd party could build the reference implementation to pass the tests.
Self-certifiers must include the operating systems that have been certified
It is preferred for self-certifying implementations to reference an OpenStack reference architecture “flavor” instead of defining their own reference. (A way to publish and agree on flavors is needed.)
The Foundation needs to define a mechanism of dispute resolution. (A trust but verify model)
As an ecosystem partner, you have a need to make a “works against OpenStack” statement that is supportable
API consumers can claim to work against the OpenStack API if they work against any implementation passing all the “must have” tests (YES)
API consumers can state they are working against the OpenStack API with some “may have” items as requirements
API consumers are expected to write tests that validate their required behaviors (submitted as “may have” tests)
A subset of tests are chosen by the Foundation as “must-pass”
An OpenStack body will recommend which tests are elevated from may-have to must-have
The selection of “must-pass” tests should be based on quantifiable information when possible.
Must-pass tests should be selected from the existing body of “may-pass” tests. This encourages people to write tests for cases they want supported.
We will have a process by which tests are elevated from may to must lists
Potentially: the User Committee will nominate tests to be elevated to the board
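The elevation flow described above can be sketched as a small state change between two lists, gated on Board approval. Everything here (the data model, the approval flag, the test names) is an assumption for illustration; the actual process is exactly what this section says is still to be defined.

```python
# Hypothetical sketch of the may-have -> must-have elevation flow;
# the approval mechanism and test names are illustrative only.

may_have = {"volume.snapshot", "network.floating_ip", "compute.resize"}
must_have = {"compute.boot", "compute.list"}

def elevate(test: str, board_approved: bool) -> None:
    """Move a test from the may-have list to the must-have list.
    Only the Board may approve changes to 'core' (the must list)."""
    if not board_approved:
        raise PermissionError("must-have changes require Board approval")
    if test not in may_have:
        # Must-pass tests are selected from the existing may-have body.
        raise KeyError(f"{test} is not an existing may-have test")
    may_have.discard(test)
    must_have.add(test)

elevate("compute.resize", board_approved=True)
print(sorted(must_have))  # compute.resize is now part of the must list
```

The `KeyError` branch encodes the rule that must-pass tests are drawn only from the existing may-have body, which keeps the incentive to write tests for cases you want supported.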
OpenStack Core means passing all “must-pass” tests
The OpenStack board owns the responsibility to define ‘core’ – to approve ‘musts’
We are NOT defining which items are on the list in this effort, just making the position that it is how we will define core
May-have tests include items in the integrated release, but which are not core.
Must-haves must comply with the Core criteria defined from the IncUp committee results
Projects in Incubation or pre-Incubation are not to be included in the ‘may’ list
The interoperability challenge was a major theme of the Havana Summit in Portland last week (panel I moderated). Solving it creates significant benefits for the OpenStack community, and those benefits carry real financial opportunities for the OpenStack ecosystem.
This is a journey that we are on together – it’s not a deliverable from a single company or a release that we will complete and move on.
During the session, I think we did a good job stating how we can use Heat for an RA to make incremental steps, and I had a session about upgrades (slides).
Even with all this progress, testing for interoperability was one of the largest gaps.
The challenge is not whether we should test, but how to create a set of tests that everyone will accept as adequate. Approaching that goal with a standardization or specification objective is likely an impossible challenge.
We should question the assumption that faithful implementation test specifications (FITS) for interoperability are only useful with a matching specification and significant API coverage. Any level of coverage provides useful information and, more importantly, visibility accelerates contributions to the test base.
I can speak from experience that this approach has merit. The Crowbar team at Dell has been including OpenStack Tempest as part of our reference deployment since Essex, and it runs as part of our automated test infrastructure against every build. This process does not catch every issue, but passing Tempest is a very good indication that you’ve got a workable OpenStack deployment.