I’m a cautious supporter of OpenStack leading with implementation (over API specification); however, this approach clearly has risks. OpenStack has the benefit of many live sites operating at significant scale. The short-term cost is that those sites are not fully interoperable (progress is being made!). Even if they were, we lack the means to validate that they are.
The interoperability challenge was a major theme of the Havana Summit in Portland last week (a panel I moderated). Solving it creates significant benefits for the OpenStack community, and those benefits translate into real financial opportunities for the OpenStack ecosystem.
This is a journey that we are on together – it’s not a deliverable from a single company or a release that we will complete and move on.
There were several themes that Monty and I presented during the Heat for Reference Architectures session (slides). It’s pretty obvious that interop is valuable (I discuss why you should care in this earlier post) and that running a cloud means dealing with hardware, software, and ops in equal measure. We also identified lots of important items like Open Operations, Upstreaming, Reference Architecture/Implementation, and Testing.
During the session, I think we did a good job of stating how we can use Heat for an RA in incremental steps; I also led a separate session about upgrades (slides).
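To give a flavor of what “Heat for an RA” could look like in practice, here is a minimal sketch (my illustration of the idea, not an artifact from the session): it uses python-heatclient to launch one building block of a reference architecture. The endpoint, token, stack name, and template contents are all placeholders.

```python
# Minimal sketch: drive Heat as the basis for a reference architecture.
# The endpoint URL, token, and template below are illustrative assumptions.
from heatclient.client import Client

# Assumed Heat API endpoint and a pre-obtained Keystone token.
heat = Client('1',
              endpoint='http://heat.example.com:8004/v1/TENANT_ID',
              token='AUTH_TOKEN')

# A CFN-style template fragment describing one piece of the RA;
# a real reference architecture would grow incrementally from blocks
# like this one.
template = '''
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "RefServer": {
      "Type": "AWS::EC2::Instance",
      "Properties": {"ImageId": "cirros", "InstanceType": "m1.small"}
    }
  }
}
'''

# Create the stack; each incremental RA step adds resources to the template.
heat.stacks.create(stack_name='reference-arch-step1', template=template)
```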
Even with all this progress, testing for interoperability was one of the largest gaps.
The challenge is not whether we should test, but how to create a set of tests that everyone will accept as adequate. Approaching that goal with a standardization or specification objective is likely impossible.
Joshua McKenty & Monty Taylor found a starting point for interoperability FITS testing: “let’s use the Tempest tests we’ve got.”
We should question the assumption that faithful implementation test specifications (FITS) for interoperability are only useful with a matching specification and significant API coverage. Any level of coverage provides useful information; more importantly, visibility accelerates contributions to the test base.
I can speak from experience that this approach has merit: the Crowbar team at Dell has been including OpenStack Tempest as part of our reference deployment since Essex, and it runs as part of our automated test infrastructure against every build. This process does not catch every issue, but passing Tempest is a very good indication that you’ve got a workable OpenStack deployment.
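To make that concrete, here is a minimal sketch (my illustration, not the actual Crowbar harness) of gating a build on Tempest results. It assumes a Tempest checkout at /opt/tempest whose etc/tempest.conf already points at the freshly deployed cloud, and that the checkout has been initialized for testrepository’s testr runner; the path and test filter are assumptions.

```python
#!/usr/bin/env python
# Minimal sketch of gating a deployment pipeline on a Tempest run.
# Assumes a Tempest checkout configured against the cloud under test.
import subprocess
import sys

# Assumed location of the Tempest checkout (etc/tempest.conf points at
# the deployment being validated).
TEMPEST_DIR = "/opt/tempest"

def run_tempest(test_filter="tempest.api"):
    """Run the selected Tempest tests via testr; return True on success."""
    # 'testr run <filter>' selects tests whose ids match the regex filter;
    # here we (illustratively) restrict the run to the API tests.
    return subprocess.call(
        ["testr", "run", "--parallel", test_filter],
        cwd=TEMPEST_DIR,
    ) == 0

if __name__ == "__main__":
    if run_tempest():
        print("Tempest passed: deployment looks workable")
    else:
        sys.exit("Tempest failed: do not promote this build")
```

The design point is simply that the gate is binary and automatic: every build either passes the shared test base or it does not, which is exactly the visibility that accelerates contributions to that test base.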