OpenStack DefCore Update & 7/16 Community Reviews

The OpenStack Board continues its effort to define “what is core” for commercial use (aka DefCore).  I have blogged extensively about this topic and rely on you to review that material, because this post focuses on updates from recent activity.

First, Please Join Our Community DefCore Reviews on 7/16!

We’re reviewing the current DefCore process & timeline, then talking about the Advisory Havana Capabilities Matrix (decoder).

To support global access, there are TWO meetings (both will also be recorded):

  1. July 16, 8 am PDT / 1500 UTC
  2. July 16, 6 pm PDT / 0100 UTC July 17

Note: I’m presenting about DefCore at OSCON on 7/21 at 11:30!

We want community input!  The Board is going to discuss and, hopefully, approve the matrix at our next meeting on 7/22.  After that, the Board will focus on defining Designated Sections for Havana and Icehouse (the TC is not owning that as previously expected).

The DefCore process is gaining momentum.  We’ve reached the point where there are tangible (yet still non-binding) results to review.  The Refstack effort to collect community test results from running clouds is underway: the Core Matrix will be fed into Refstack to validate clouds against the DefCore required capabilities.

Now is the time to make adjustments and corrections!  

In the next few months, we’re going to be locking in more and more of the process as we get ready to make it part of the OpenStack by-laws (see bottom of minutes).

If you cannot make these meetings, we still want to hear from you!  The most direct way to engage is via the DefCore mailing list, but 1×1 email works too!  Your input is important to us!

Understanding OpenStack Designated Code Sections – Three critical questions

A collaboration with Michael Still (TC member from Rackspace) & Joshua McKenty, cross-posted by Rackspace.

After nearly a year of discussion, the OpenStack board launched the DefCore process with 10 principles that set us on a path toward a validated interoperability standard.   We created the concept of “designated sections” to address concerns that using API tests to determine core would undermine commercial and community investment in a working, shared upstream implementation.

Designated sections provide the “you must include this” part of the core definition.  Having common code as part of core is central to how DefCore is driving OpenStack interoperability.

So, why do we need this?

From our very formation, OpenStack has valued implementation over specification; consequently, there is a fairly strong community bias to ensure contributions are upstreamed. This bias is codified into the very structure of the GNU General Public License (GPL) but intentionally missing from the Apache License v2 (APLv2) that OpenStack follows.  The choice of Apache2 was important for OpenStack to attract commercial interests, who often consider the GPL a “poison pill” because of its upstream requirements.

Nothing in the Apache license requires consumers of the code to share their changes; however, the OpenStack Foundation does have control of how the OpenStack™ brand is used.   Thus it’s possible for someone to fork and reuse OpenStack code without permission, but they cannot call it “OpenStack” code.  This restriction only has strength if the OpenStack brand has value (protecting that value is the primary duty of the Foundation).

This intersection between License and Brand is the essence of why the Board has created the DefCore process.

Ok, how are we going to pick the designated code?

Figuring out which code should be designated is highly project specific and ultimately subjective; however, it’s also important to the community that we have a consistent and predictable strategy.  While the work falls to the project technical leads (with ratification by the Technical Committee), the DefCore and Technical committees worked together to define a set of principles to guide the selection.

This Technical Committee resolution formally approves the general selection principles for “designated sections” of code as part of the DefCore effort.  We’ve taken the liberty of creating a graphical representation (above) that visualizes this table using white for designated and black for non-designated sections.  We’ve also included the DefCore principle of having an official “reference implementation.”

Here is the text from the resolution presented as a table:

Should be DESIGNATED:
  • code provides the project external REST API, or
  • code is shared and provides common functionality for all options, or
  • code implements logic that is critical for cross-platform operation

Should NOT be DESIGNATED:
  • code interfaces to vendor-specific functions, or
  • project design explicitly intended this section to be replaceable, or
  • code extends the project external REST API in a new or different way, or
  • code is being deprecated

The resolution includes the expectation that “code that is not clearly designated is assumed to be designated unless determined otherwise. The default assumption will be to consider code designated.”

This definition is a starting point.  Our next step is to apply these rules to projects and make sure that they provide meaningful results.
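
To make the selection principles more concrete, here is a minimal, unofficial sketch that encodes the resolution’s criteria as a checklist.  The flag names and the precedence (non-designation reasons checked first) are our own assumptions for illustration; real decisions are made per project by the PTLs and ratified by the Technical Committee.

    # Hypothetical sketch of the designation checklist from the TC resolution.
    # The flags below are illustrative; they are not an official DefCore tool.

    def is_designated(section):
        """Return True if a code section should be treated as designated."""
        # Reasons a section should NOT be designated
        if (section.get("vendor_specific_interface")      # interfaces to vendor-specific functions
                or section.get("designed_replaceable")    # explicitly intended to be replaceable
                or section.get("extends_external_api")    # extends the external REST API differently
                or section.get("deprecated")):            # being deprecated
            return False
        # Reasons a section SHOULD be designated
        if (section.get("provides_external_api")          # provides the project external REST API
                or section.get("shared_common_code")      # shared functionality for all options
                or section.get("cross_platform_logic")):  # critical cross-platform logic
            return True
        # Default from the resolution: unclear code is assumed designated.
        return True

    if __name__ == "__main__":
        scheduler_driver = {"designed_replaceable": True}
        rest_api_layer = {"provides_external_api": True}
        print(is_designated(scheduler_driver))  # False
        print(is_designated(rest_api_layer))    # True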

Wow, isn’t that a lot of code?

Not really.  It’s important to remember that designated sections alone do not define core: the must-pass tests are also a critical component.   Consequently, designated code in projects that do not have must-pass tests is not actually required for an OpenStack-licensed implementation.

How DefCore is going to change your world: three advisory cases

The first release of the DefCore Core Capabilities Matrix (DCCM) was revealed at the Atlanta summit.  At the Summit, Joshua and I had a session which examined what this means for the various members of the OpenStack community.   This rather lengthy post reviews the same advisory material.

DefCore sets base requirements by defining 1) capabilities, 2) code and 3) must-pass tests for all OpenStack products. This definition uses community resources and involvement to drive interoperability by creating the minimum standards for products labeled “OpenStack.”

As a refresher, there are three uses of the OpenStack mark:

  • Community: The non-commercial use of the word OpenStack by the OpenStack community to describe themselves and their activities. (like community tweets, meetups and blog posts)
  • Code: The non-commercial use of the word OpenStack to refer to components of the OpenStack framework integrated release (as in OpenStack Compute Project Nova)
  • Commerce: The commercial use of the word OpenStack to refer to products and services as governed by the OpenStack trademark policy. This is where DefCore is focused.

In the DefCore/Commerce use, properly licensed vendors have three basic obligations to meet:

  1. Pass the required Refstack tests for the capabilities matrix in the version of OpenStack that they use. Vendors are expected (not required) to share their results.
  2. Run and include the “designated sections” of code for the OpenStack components that they include.
  3. Meet the other basic obligations in their license agreement, such as being a currently paid-up corporate sponsor or Foundation member.

If they meet these conditions, vendors can use the OpenStack mark in their product names and descriptions.

Enough preamble!  Let’s see the three Advisory Cases

MANDATORY DISCLAIMER: These conditions apply to fictional public, private and client use cases.  Any resemblance to actual companies is a function of the need to describe real use-cases.  These cases are advisory for illustration use only and are not to be considered definitive guidance because DefCore is still evolving.

Public Cloud: Service Provider “BananaCloud”

A popular public cloud operator, BananaCloud has been offering OpenStack-based IaaS since the Diablo release. However, they don’t use the Keystone component. Since they also offer traditional colocation and managed services, they have an existing identity management system that they use. They made a similar choice for Horizon in favor of their own cloud portal.


  1. They use Nova with a custom scheduler and pass all the Nova tests. This is the simplest case since they use the code and pass the tests.
  2. In the Havana DCCM, the Keystone capabilities are must-pass tests; however, there are no designated sections of code for Keystone. So BananaCloud must implement a Keystone-compatible API on their IaaS environment (an effort they had underway already) that will pass Refstack, and they’re good to go (see the sketch after this list).
  3. There are no must-pass tests for Horizon, so they have no requirement to include those features or code. They can still be OpenStack without Horizon.
  4. There are no must-pass tests for Trove, so there is no brand requirement to include those features or code; however, by using Trove and promoting its use, they increase the likelihood of its capabilities becoming must-pass features.
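
For context on what “Keystone-compatible” means in practice, here is a minimal sketch of the kind of request Refstack/Tempest exercises against the identity API: a Havana-era Keystone v2.0 token issue using the Python requests library.  The endpoint and credentials are placeholders, not real BananaCloud values.

    # Minimal sketch of a Keystone v2.0 token request, the sort of call a
    # Keystone-compatible identity API must answer. Endpoint and credentials
    # are hypothetical.
    import requests

    AUTH_URL = "https://identity.bananacloud.example.com/v2.0/tokens"  # hypothetical

    payload = {
        "auth": {
            "tenantName": "demo",
            "passwordCredentials": {"username": "demo", "password": "secret"},
        }
    }

    resp = requests.post(AUTH_URL, json=payload, timeout=10)
    resp.raise_for_status()
    access = resp.json()["access"]

    # A compatible implementation returns a token and a service catalog that
    # clients use to discover the other OpenStack endpoints.
    print("token:", access["token"]["id"][:8], "...")
    print("services:", [s["type"] for s in access["serviceCatalog"]])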

BananaCloud also offers some advanced OpenStack capabilities, including Marconi and Trove. Since there are no must-pass capabilities from these components in the Havana DCCM, this has no impact on their offering additional services. DefCore defines the minimum requirements and encourages vendors to share their full test results of additional capabilities because that is how OpenStack identifies new must-pass candidates.

Note: The DefCore DCCM is advisory for the Havana release, so if BananaCloud is late getting their Keystone-compatibility work done there won’t be any commercial impact. But it will be a binding part of the trademark license agreement by the Juno release, which is only 6 months away.

Private Cloud: SpRocket Small-Business OpenStack Software

SpRocket is a new OpenStack software vendor, specializing in selling a Windows-powered version of OpenStack with tight integration to SharePoint and AzurePack. In their feature set, they only need part of Nova and provide an alternative object store to Swift that implements a version of the Swift API. They do use Heat as part of their implementation to set up applications backed by SharePoint and AzurePack.

  1. For Nova, they already use the code and have implemented all required capabilities except for the key-store. To comply with the DefCore requirement, they must enable the key-store capability.
  2. While their implementation of Swift passes the tests (see the sketch after this list), we are still working to resolve the final disposition of Swift, so there are several possible outcomes:
    1. If Swift is 0% designated then they are OK (that’s the case illustrated here).
    2. If Swift is 100% designated then they cannot claim to be OpenStack.
    3. If Swift is partially designated then they have to adapt their deployment to include the required code.
  3. Their use of Heat is encouraged since it is an integrated project; however, Heat has no required capabilities, so it does not influence their ability to use the mark.
  4. They use the trunk version of the Windows Hyper-V drivers, which are not designated and have no specific tests.
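
To illustrate what “implements a version of the Swift API” means, here is a minimal sketch of the basic object operations a Swift-compatible store has to answer.  The storage URL and token are placeholders; in a real deployment they come from the Keystone service catalog.

    # Minimal sketch of the Swift API calls a compatible object store must answer.
    # STORAGE_URL and TOKEN are placeholders for this illustration.
    import requests

    STORAGE_URL = "https://objects.sprocket.example.com/v1/AUTH_demo"  # hypothetical
    TOKEN = "replace-with-a-real-token"
    headers = {"X-Auth-Token": TOKEN}

    # Create a container, upload an object, then read it back.
    requests.put(f"{STORAGE_URL}/backups", headers=headers).raise_for_status()
    requests.put(f"{STORAGE_URL}/backups/hello.txt",
                 headers=headers, data=b"hello world").raise_for_status()
    obj = requests.get(f"{STORAGE_URL}/backups/hello.txt", headers=headers)
    obj.raise_for_status()
    print(obj.content)  # b'hello world'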

Ecosystem Client: “Mist” OpenStack-consuming Client Library

Mist is a client library for programmers working on applications that use the OpenStack APIs. While it’s an open source project, many commercial applications use the library. Unlike a “pure” OpenStack program, it also supports other cloud APIs.

Since the Mist library does not ship or implement the OpenStack code base, the DefCore process does not apply to their effort; however, there are several important intersections between Mist, OpenStack and Core.

  • First, it is very important for the DefCore process that Mist map their use of the OpenStack APIs to the capabilities matrix (see the sketch after this list). They are asked to help with this process because they are the best group to answer the “works with clients” criterion.
  • Second, if there are APIs used by Mist that are not currently tested then the OpenStack community should work with the Mist community to close those test gaps.
  • Third, if Mist relies on an API that is not must-pass they are encouraged to help identify those capabilities as core candidates in the community.
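
As a purely hypothetical illustration of that mapping exercise, the sketch below records which capability each library call exercises and flags calls that rely on capabilities that are not (yet) must-pass.  All names are invented for this example.

    # Hypothetical mapping from client-library calls to capability names.
    MIST_CALL_TO_CAPABILITY = {
        "servers.create": "compute-servers-create",
        "servers.list":   "compute-servers-list",
        "volumes.attach": "volumes-attach",   # suppose no must-pass test exists yet
    }

    MUST_PASS = {"compute-servers-create", "compute-servers-list"}

    # Calls that depend on capabilities without must-pass status are the test
    # gaps (and core candidates) the community should look at.
    gaps = {call: cap for call, cap in MIST_CALL_TO_CAPABILITY.items()
            if cap not in MUST_PASS}
    print(gaps)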

OpenStack DefCore Matrix Cheat Sheet

DefCore sets base requirements by defining 1) capabilities, 2) code and 3) must-pass tests for all OpenStack products. This definition uses community resources and involvement to drive interoperability by creating minimum standards for products labeled “OpenStack.”

In the last week, the DefCore committee released the results of 6 months of work.  We chose getting input early over cleanup and polish, so please be patient if some of the data is overwhelming.

We’ve got enough feedback to put together this capabilities matrix cheat sheet to help you interpret all the colors and data on the page (the headers are links).


DefCore Capabilities Scorecard & Core Identification Matrix [REVIEW TIME!]

Attribution Note: This post was collaboratively edited by members of the DefCore committee and cross posted with DefCore co-chair Joshua McKenty of Piston Cloud.

DefCore sets base requirements by defining 1) capabilities, 2) code and 3) must-pass tests for all OpenStack products. This definition uses community resources and involvement to drive interoperability by creating minimum standards for products labeled “OpenStack.”

The OpenStack Core definition process (aka DefCore) is moving steadily along and we’re looking for feedback from the community as we move into the next phase.  Until now, we’ve been mostly working out the principles, criteria and processes that we will use to answer “what is core” in OpenStack.  Now we are applying those processes and actually picking which capabilities will be used to identify Core.

TL;DR! We are now RUNNING WITH SCISSORS because we’ve reached the point where you can review early thoughts about what’s going to be considered Core (and what’s not).  We now have a tangible draft list for community review.

While you will want to jump directly to the review draft matrix (red means needs input), it is important to understand how we got here because that’s how DefCore will resolve the inevitable conflicts.  The very nature of defining core means that we have to say “not in” to a lot of capabilities.  Since community consensus seems to favor a “small core” in principle, that means many capabilities that people consider important are not included.

The Core Capabilities Matrix attempts to find the right balance between quantitative detail and too much information.  Each row represents an “OpenStack Capability” that is reflected by one or more individual tests.  We scored each capability equally on a 100 point scale using 12 different criteria.  These criteria were selected to respect the different viewpoints and needs of the community, ranging from popularity to technical longevity and quality of documentation.

While we’ve made the process more analytical, there’s still room for judgement.  Eventually, we expect to weight some criteria more heavily than others.  We will also be adjusting the score cut-off.  Our goal is not to create a perfect evaluation tool – it should inform the board and facilitate discussion.  In practice, we’ve found this approach to bring needed objectivity to the selection process.
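
For readers who want to see the arithmetic, here is a small sketch of the scoring scheme described above: 12 equally weighted criteria normalized to a 100-point scale, with adjustable weights and a cut-off.  The criteria names are the shortened names used elsewhere in this document; the example capability, its scores and the cut-off value are invented for illustration.

    # Sketch of the capability scoring arithmetic: 12 criteria, equal weight,
    # normalized to a 100-point scale, with an adjustable cut-off.
    CRITERIA = [
        "Widely Deployed", "Used by Tools", "Used by Clients",   # usage
        "Future Direction", "Stable", "Complete",                # direction
        "Discoverable", "Doc'd", "Core in Last Release",         # community
        "Foundation", "Atomic", "Proximity",                     # system
    ]

    def score(capability, weights=None, scale=100):
        """Weighted score of a capability given {criterion: 0 or 1} marks."""
        weights = weights or {c: 1 for c in CRITERIA}   # equal weight by default
        total = sum(weights.values())
        earned = sum(weights[c] * capability.get(c, 0) for c in CRITERIA)
        return scale * earned / total

    example_capability = {c: 1 for c in CRITERIA}       # meets every criterion...
    example_capability["Used by Tools"] = 0             # ...except one

    CUT_OFF = 70  # illustrative threshold, not the board's actual number
    print(score(example_capability), score(example_capability) >= CUT_OFF)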

So, where does this take us?  The first matrix is, by design, old news.  We focused on getting a score for Havana to give us a stable and known quantity; however, much of that effort will translate forward.  Using Havana as the base, we are hoping to score Icehouse ninety days after the Juno summit and score Juno at the K summit in Paris.

These are ambitious goals and there are challenges ahead of us.  Since every journey starts with small steps, we’ve put our feet on the path while keeping our eyes on the horizon.

Specifically, we know there are gaps in OpenStack test coverage.  Important capabilities do not have tests and will not be included.  Further, starting with a small core means that OpenStack will be enforcing an interoperability target that is relatively permissive and minimal.  Universally, the community has expressed that including short-term or incomplete items is undesirable.  It’s vital to remember that we are looking for evolutionary progress that accelerates our developer, user, operator and ecosystem communities.

How can you get involved?  We are looking for community feedback on the DefCore list on this 1st pass – we do not think we have the scores 100% right.  Of course, we’re happy to hear from you however you want to engage: we intentionally named the committee “defcore” to make it easier to cross-reference and search.

We will eventually use Refstack to collect voting/feedback on capabilities directly from OpenStack community members.

Open Operations [4/4 series on Operating Open Source Infrastructure]

This post is the final in a 4 part series about Success factors for Operating Open Source Infrastructure.

tl;dr Note: This is really TWO tightly related posts: 
  part 1 is OpenOps background. 
  part 2 is about OpenStack, Tempest and DefCore.

One of the substantial challenges of large-scale deployments of open source software is that it is very difficult to come up with a best practice, or a reference implementation, that can be widely explained or described by the community.

Having a best practice deployment is essential for the growth of the community because it enables multiple people to deploy the software in a repeatable, stable way. This, in turn, fosters community growth so that more people can adopt software in a consistent way. It does little good if operators have no consistent pattern for deployment, because that undermines the developers’ abilities to extend, the testers’ abilities to ensure quality, and users’ ability to repeat the success of others.

Fundamentally, the goal of an open source project, from a user’s perspective, is that they can quickly achieve and repeat the success of other people in the community.

When we look at these large-scale projects we really try to create a pattern of success that can be repeated over and over again. This ensures growth of the user base, and it also helps the developer reduce time spent troubleshooting problems.

That does not mean that every single deployment should be identical, but there is substantial value in having a limited number of success patterns. Customers can then be assured not only of quick time to value with these projects; they can also get help without having everybody else in the community attempt to untangle how one person created a site-specific deployment. This is especially problematic if someone created an unnecessarily unique scenario, which simply creates noise and confusion in the environment. Noise is a huge cost for the community and needs to be eliminated for an open source project to flourish.

This isn’t any different from proprietary software, but there most of these activities are hidden. A proprietary software vendor can make much stronger recommendations and install guidance because they are the only source of truth in that project. In an open source project, there are multiple sources of truth, and there are very few people who are willing to publish their exact reference implementation or test patterns. Consequently, my team has taken a strong position on creating a repeatable reference implementation for OpenStack deployments, based on extensive testing. We have found that our test patterns and practices, grounded in successful customer deployments on actual, physical infrastructure, are very pragmatic, repeatable, and sustainable.

We found that this type of testing, while expensive, is also a significant value to our customers, and something that they appreciate and have been willing to pay for.

OpenStack as an Example: Tempest for Reference Validation

The Crowbar project incorporated the OpenStack Tempest project as an essential part of every OpenStack deployment. From the earliest introduction of the Tempest suite, we have understood the value of a baselining test suite for OpenStack. We believe that running the same tests developers use to gate code acceptance on a single node against a multi-node deployment creates significant value both for our customers and for the OpenStack project as a whole.  This was part of why I embraced the suggestion of basing DefCore on tests.

While it is important to have developer tests that gate code check-ins, the ultimate goal for OpenStack is to create scale-out multi-node deployments. This is a fundamental design objective for OpenStack.

With developers and operators using the same test suite, we are able to proactively measure the success of the code in scale deployments in a way that provides quick feedback for the developers. If Tempest tests do not pass in a multi-node environment, they are not providing significant value in ensuring that developers’ code operates against best-practice scenarios. Our objective is to continue to extend the Tempest suite of tests so that they are an accurate reflection of the use cases that are encountered in a best-practice reference deployment.

Along these lines, we expect that the community will continue to expand the Tempest test suite to match actual deployment scenarios reflected in scale and multi-node configurations. Having developers be responsible for passing these tests as part of their day-to-day activities ensures that development activities do not disrupt scale operations. Ultimately, making proactive gating tests ensures that we are creating scenarios in which code quality is continually increasing, as is our ability to respond and deploy the OpenStack infrastructure.
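
The reason the same tests travel so well is that they exercise only the public APIs.  Here is a minimal, Tempest-flavored sketch (not actual Tempest code) that makes the point: an API-level check like this runs unchanged against a single-node developer gate or a multi-node production cloud.  OS_COMPUTE_URL and OS_TOKEN are hypothetical environment variables for this illustration.

    # Minimal, Tempest-flavored API check (illustrative, not part of Tempest).
    import os
    import unittest

    import requests


    class ServerListTest(unittest.TestCase):
        def test_list_servers_returns_ok(self):
            compute_url = os.environ["OS_COMPUTE_URL"]   # e.g. the Nova endpoint
            token = os.environ["OS_TOKEN"]
            resp = requests.get(f"{compute_url}/servers",
                                headers={"X-Auth-Token": token}, timeout=30)
            # The same assertions hold whether the cloud behind the endpoint
            # is a single node or a multi-node scale deployment.
            self.assertEqual(200, resp.status_code)
            self.assertIn("servers", resp.json())


    if __name__ == "__main__":
        unittest.main()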

I am very excited and optimistic that expanding the Tempest suite holds the key to making OpenStack the most stable, reliable, performant cloud implementation available in the market. The fact that this test suite can be extended by the community, and contributed to by a broad range of implementations, only makes that test suite more valuable and more likely to fully encompass all use cases necessary for reference implementations.

DefCore Core Capabilities Selection Criteria SIMPLIFIED -> how we are picking Core

I’ve posted about the early DefCore core capabilities selection process before; since then we’ve put the criteria into practice and discussed them with the community.  The feedback was simple: tl;dr.  You’ve got the right direction but make it simpler!

So we pulled the 12 criteria into four primary categories:

  1. Usage: the capability is widely used (Refstack will collect data)
  2. Direction: the capability advances OpenStack technically
  3. Community: the capability builds the OpenStack community experience
  4. System: the capability integrates with other parts of OpenStack

These categories summarize critical values that we want in OpenStack and so make sense to be the primary factors used when we select core capabilities.  While we strive to make the DefCore process objective and quantitative, we must recognize that these choices drive community behavior.

With this perspective, let’s review the selection criteria.  To make it easier to cross reference, we’ve given each criterion a shortened name:

Shows Proven Usage

  • “Widely Deployed” Candidates are widely deployed capabilities.  We favor capabilities that are supported by multiple public cloud providers and private cloud products.
  • “Used by Tools” Candidates are widely used capabilities: they should be included if supported by common tools (RightScale, Scalr, CloudForms, …).
  • “Used by Clients” Candidates are widely used capabilities: they should be included if part of common libraries (Fog, Apache jclouds, etc.).

Aligns with Technical Direction

  • “Future Direction” Should reflect future technical direction (from the project technical teams and the TC) and help manage deprecated capabilities.
  • “Stable” The test is required to have been stable for more than two releases, because we don’t want core capabilities that do not have dependable APIs.
  • “Complete” Where the code being tested has a designated area of alternate implementation (extension framework), as per the Core Principles, there should be parity in the capability tested across extension implementations.  This also implies that the capability test is not configuration specific or locked to non-open technology.

Plays Well with Others

  • “Discoverable” Capability being tested is Service Discoverable (can be found in Keystone and via service introspection)
  • “Doc’d” Should be well documented, particularly the expected behavior.  This can be a very subjective measure and we expect to refine this definition over time.
  • “Core in Last Release”  A test that is a must-pass test should stay a must-pass test.  This makes core capabilities sticky from release to release.  Leaving Core is disruptive to the ecosystem.

Takes a System View

  • “Foundation” Test capabilities that are required by other must-pass tests and/or depended on by many other capabilities.
  • “Atomic” The capability is unique and cannot be built out of other must-pass capabilities.
  • “Proximity” (sometimes called a Test Cluster) selects for capabilities that are related to Core capabilities.  This helps ensure that related capabilities are managed together.

Note: The 13th “non-admin” criterion has been removed because Admin APIs cannot be used for interoperability and cannot be considered Core.

Success Factors of Operating Open Source Infrastructure [Series Intro]

Building a best practices platform is essential to helping companies share operations knowledge.   In the fast-moving world of open source software, sharing documentation about what to do is not sufficient.  We must also share the how to do it, because the operations process is tightly coupled to achieving ongoing success.

Further, since change is constant, we need to change our definition of “stability” to reflect a much more iterative and fluid environment.

Baseline testing is an essential part of this platform. It enables customers to ensure not only fast time to value, but also ongoing conformance with industry best practices, even as the system is upgraded and migrates toward a continuous deployment infrastructure.

The details are too long for a single post so I’m going to explore this as three distinct topics over the next two weeks.

  1. Reference Deployments talks about needing an automated way to repeat configuration between sites.
  2. Ops Validation using Development Tests talks about having a way to verify that everyone uses a common reference platform.
  3. Shared Open Operations / DevOps (pending) talks about putting reference deployment and common validation together to create a true open operations practice.

OpenStack, Hadoop, Ceph, Docker and other open source projects are changing the landscape for information technology. Customers seeking to become successful with these evolving platforms must look beyond the software bits and consider both the culture and the operations.  The culture is critical because interacting with the open source project’s community (directly or through a proxy) can help ensure success using the software. Operations are critical because open source projects expect the community to help find and resolve issues. This results in more robust and capable products. Consequently, users of open source software must operate in a more fluid environment.

My team at Dell saw this need as we navigated the early days of OpenStack.  The Crowbar project started because we saw that the community needed a platform that could adapt and evolve with the open source projects that our advanced customers were implementing. Our ability to deliver an open operations platform enables the community to collaborate, and to skip over routine details to refocus on shared best practices.

My recent focus on the OpenStack DefCore work reinforces these original goals.  Using tests to help provide a common baseline is a concrete, open and referenceable way to promote interoperability.  I hope that this in turn drives a dialog around best practices and shared operations because those help mature the community.

Running with scissors > DefCore “must-pass” Road Show Starts [VIDEOS]

The OpenStack DefCore committee has been very active during this cycle turning the core definition principles into an actual list of “must-pass” capabilities (working page).  This in turn gives the community something tangible enough to review and evaluate.

TL;DR!  We appreciate those in the community who have been patient enough to help define and learn the process we’re using to make selections; however, we also recognize that most people want to jump to the results.

This week, we started a “DefCore roadshow” with the goal of learning how to make this huge body of capabilities, process and impact easier to digest (draft write-up for review & Troy Toman’s notes).  So far we’ve had two great sessions on this topic.  We took notes and recorded at both meetups (San Francisco & Austin).

My takeaways of these initial meetups are:

  • Jump to the Capabilities right away, the process history is not needed up front
  • You need more graphics – specifically, one for the selection criteria (what do you think of my 1st attempt?)
  • Work from some examples of scored capabilities
  • Include some specific use-cases with a user, 2 types of private cloud and a public cloud to help show the impact

Overall, people like what they are hearing.  It makes sense and decisions are justified.

We need more feedback!  Please help us figure out how to explain this for the broader community.

OpenStack Board Elections: What I’ll do in 2014: DefCore, Ops, & Community

OpenStack Community,

The time has come for you to choose who will fill the eight community seats on the Board (ballot links went out Sunday evening CST).  I’ve had the privilege to serve you in that capacity for 16 months and would like to continue.  I have a leadership role in Core Definition and want to continue that work.

Here are some of the reasons that I am a strong board member:

  • Proven & Active Leadership on Board – I have been very active and vocal representing the community on the Board.  In addition to my committed leadership in Core Definition, I have played important roles shaping the Gold Member grooming process and trying to adjust our election process.  I am an outspoken yet pragmatic voice for the community in board meetings.
  • Technical Leader but not on the TC – The Board needs members who are technical yet detached from the individual projects enough to represent outside and contrasting views.
  • Strong User Voice – As the senior OpenStack technologist at Dell, I have broad reach within Dell and our Red Hat partnership, with exposure to a truly broad and deep part of the community.  This makes me highly accessible to a lot of people both in and entering the community.
  • Operations Leadership – Dell was an early leader in OpenStack Operations (via OpenCrowbar) and continues to advocate strongly for key readiness activities like upgrade and high availability.  In addition, I’ve led the effort to converge advanced cookbooks from the OpenCrowbar project into the OpenStack StackForge upstreams.  This is not a trivial effort but the right investment to make for our community.
  • And there’s more… you can read about my previous Board history in my 2012 and 2013 “why vote for me” posts or my general OpenStack comments.

And now a plea to vote for other candidates too!

I had hoped that we could change the election process to limit blind corporate affinity voting; however, the board was not able to make this change without a more complex set of bylaws changes.  Based on the diversity and size of the OpenStack community, I hope that this issue may no longer be a concern.  Even so, I strongly believe that the best outcome for the OpenStack Board is to have voters look beyond corporate affiliation and consider a range of factors including business vs. technical balance, open source experience, community exposure, and ability to dedicate time to OpenStack.