OpenStack needs a strong feedback loop from users and operators back to developers and vendors – a statement made during the PM meeting.
The most critical win from last week was the desire for the PM group to work more closely with the OpenStack technical leadership. I’m excited to see the community continue to expand the scope of collaboration.
Why is this important? Because developers and product managers need mutual respect to be effective.
The members of the Product team are leaders within their own organizations who are responsible for talking to users and operators. We rely on them to close the communication loop by both collecting feedback and explaining direction. To accomplish this difficult job, the Product team must own articulating a vision for the future.
For OpenStack to succeed, we need to listen intently to feedback about both how we are doing and whether we are headed in the right direction. Both are required to create a feedback loop.
After seeing this group in action, I’m excited to see what’s next.
Building cloud infrastructure requires a rock-solid foundation.
In this hour, Rob Hirschfeld will demo automated tooling, specifically OpenCrowbar, to prepare and integrate physical infrastructure to ready state and then use PackStack to install OpenStack.
The OpenCrowbar project started in 2011 as an OpenStack installer and has grown into a general purpose provisioning and infrastructure orchestration framework that works in parallel with multiple hardware vendors, operating systems and devops tools. These tools create a fast, durable and repeatable environment to install OpenStack, Ceph, Kubernetes, Hadoop or other scale platforms.
Rob will show off the latest features and discuss key concepts from the Crowbar operational model including Ready State, Functional Operations and Late Binding. These concepts, built into Crowbar, can be applied generally to make your operations more robust and scalable.
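For readers who want to try the PackStack half of this workflow themselves, here is a minimal sketch (in Python, since the demo tooling itself isn’t reproduced here) that drives a basic PackStack install on a node OpenCrowbar has already brought to ready state. The `packstack --allinone` and `--answer-file` options are the standard RDO PackStack invocations; wrapping them in a script like this is purely an illustrative assumption, not part of Rob’s demo.

```python
#!/usr/bin/env python
# Minimal sketch: kick off a PackStack install on a node that has already
# been brought to "ready state" (OS installed, networking configured).
# Assumes the RDO packstack CLI is installed on the node where this runs;
# the wrapper itself is hypothetical, not part of the OpenCrowbar demo.

import subprocess
import sys


def run_packstack(answer_file=None):
    """Run packstack; replay a saved answer file if one is supplied."""
    if answer_file:
        # --answer-file replays a saved configuration for repeatable installs
        cmd = ["packstack", "--answer-file", answer_file]
    else:
        # --allinone is the RDO quick-start: all services on a single node
        cmd = ["packstack", "--allinone"]
    return subprocess.run(cmd).returncode


if __name__ == "__main__":
    answers = sys.argv[1] if len(sys.argv) > 1 else None
    sys.exit(run_packstack(answers))
```

The point of the demo, of course, is everything that happens before this step: OpenCrowbar handles discovery, hardware configuration and OS provisioning, so an install like the one above lands on a known-good, repeatable foundation.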
TL;DR! We appreciate those in the community who have been patient enough to help define and learn the process we’re using to make selections; however, we also recognize that most people want to jump to the results.
While the current thinking of a testing-based definition of Core adds pressure to expand our test suite, it seems to pass the community’s fairness checks.
Overall, the discussions lead me to believe that we’re on the right track because they jump from process to impacts. It’s not too late! We’re continuing to get community feedback. So what’s next?
First…. Get involved: Upcoming Community Core Discussions
Week Before Summit: Beijing Meetup hosted by Alan Clark (details TBD)
These discussions are expected to have online access via Google Hangout. Watch Twitter when the event starts for a link.
Want to discuss this in your meetup? Reach out to me or someone on the Board and we’ll be happy to find a way to connect with your local community!
What’s Next? Implementation!
So far, the Core discussion has been about defining the process that we’ll use to determine what is core. Assuming we move forward, the next step is to implement that process by selecting which tests are “must pass.” That means we have to both figure out how to pick the tests and do the actual work of picking them. I suspect we’ll also find testing gaps that will have developers scrambling in Icehouse.
Here’s the possible (aggressive) timeline for implementation:
November: Approval of approach & timeline at next Board Meeting
January: Publish Timeline for Roll out (ideally, have usable definition for Havana)
March: Identify Havana must pass Tests (process to be determined)
April: Integration w/ OpenStack Foundation infrastructure
Obviously, there are a lot of details to work out! I expect that we’ll have an interim process to select must-pass tests before we can have a full community driven methodology.
There is still confusion around the idea that OpenStack Core requires using some of the project code. This requirement helps ensure that people claiming to be OpenStack core have a reason to contribute, not just replicate the APIs.
It’s easy to overlook that we’re trying to define a process for defining core, not core itself. We have spent a lot of time testing how individual projects may be affected based on possible outcomes. In the end, we’ll need actual data.
There are some clear anti-goals in the process that we are not ready to discuss yet but that are clearly going to become issues quickly. They are:
Using the OpenStack name for projects that pass the API tests but don’t implement any OpenStack code. (e.g.: an OpenStack Compatible mark)
Having specialty testing sets for flavors of OpenStack that are different than core. (e.g.: OpenStack for Hosters, OpenStack Private Cloud, etc)
We need to be prepared for the possibility that the list of “must pass” tests identifies a smaller core than is currently defined. It’s possible that some projects will no longer be “core.”
The idea that we’re going to use real data to recommend tests as must-pass is positive; however, the time it takes to collect the data may be frustrating.
People love to lobby for their favorite projects. Gaps in testing may create problems.
We are about to put a lot of pressure on the testing efforts and that will require more investment and leadership from the Foundation.
Some people are not comfortable with self-reporting test compliance. Overall, market pressure was considered sufficient to punish cheaters.
There is a perceived risk of confusion as we migrate between versions. Defining OpenStack Core per release (e.g., for Havana) seems specific enough, but there is concern that vendors may pass in one release and then skip re-certification. Once again, market pressure seems to be an adequate answer.
It’s not clear if a project with only 1 must-pass test is a core project. Likely, it would be considered core. Ultimately, people seem to expect that the tests will define core instead of the project boundary.
What do you think? I’d like to hear your opinions on this!
Last week, my team at Dell led a world-wide OpenStack Essex Deploy event. Kamesh Pemmaraju, our OpenStack-powered solution product manager, did a great summary of the event results (200+ attendees!). What started as a hack-a-thon for deploy scripts morphed into a stunning 14+ hour event with rotating intro content and an ecosystem showcase (videos). Special kudos to Kamesh, Andi Abes, Judd Maltin, Randy Perryman & Mike Pittaro for leadership at our regional sites.
Clearly, OpenStack is attracting a lot of interest. We’ve been investing time in content to help people who are curious about OpenStack to get started.
On that measure, we have room for improvement. We had some great discussions about how to handle upgrades and market drivers for OpenStack; however, we did not spend as much time improving Essex deployments as I had hoped. I know it’s possible – I’ve talked with developers in the Crowbar community who want this.
For those who wanted more expert interaction, here are some of my thoughts for future events.
The expert track did not get to deploy coding. I think that we simply need to focus even more tightly on Crowbar deployments. That means having a Crowbar hack-a-thon with an OpenStack focus instead of vice versa.
Efforts to serve OpenStack n00bs did not protect time for experts. If we offer expert sessions then we won’t try to have parallel intro sessions. We’ll simply have to direct novices to the homework pages and videos.
Combining on-site and on-line is too confusing. As much as I enjoy meeting people face-to-face, I think we’d have a more skilled audience if we kept it online only.
Connectivity! Dropped connections, sigh.
Better planning for videos (not by the presenters) to make sure that we have good results on the expert track.
This event was too long. It’s just not practical to serve Europe, US and Asia in a single event. I think that 2-3 hours is a much more practical maximum. 10am-12pm Eastern or 6-8pm Pacific would be much more manageable.
Do you have other comments and suggestions? Please let me know!