With the next OpenStack Austin meetup on Thursday (sponsored by Puppet), I felt it was past time for me to post my thoughts and observations about the Spring 2012 OpenStack design conference. This was my fifth OpenStack conference (my notes about Bexar, Cactus, Diablo & Essex). Every conference has been unique, exciting, and bigger than the previous one.
1. Technology Trend: Practical with Potential.
OpenStack started with a BIG vision to become the common platform for cloud API and operations. That vision is very much alive and on track; however, our enthusiasm for what could be is tempered by the need to build a rock-solid foundation. The drive for stability over feature expansion has had a very positive impact. I give a lot of credit for this effort to the leadership of the project technical leads (PTLs), Canonical's drive to include OpenStack in the 12.04 LTS, and Rackspace Cloud's drive to deploy Essex. My team at Dell has also been part of this trend by focusing so much effort on making OpenStack production deployable (via Crowbar).
Overall, I am seeing a broad-based drive to minimize disruption.
2. Culture Trend: Friendly but some tension.
Companies at both the large and small ends of the spectrum are clearly jockeying for position. I think the market is big enough for everyone; however, we are also bumping into each other. Overall, we are putting aside these real and imagined differences to focus on enlarging the opportunity of having a true community cloud platform. For example, the formation of the OpenStack Foundation has moneyed competitors jostling for position to partner together.
However, it’s not just about paying into the club; OpenStack’s history is clearly about execution. Looking back to the original Austin Summit sponsors, we’ve clearly seen that intent and commitment are different.
3. Discussion Trend: Small Groups Effective
The depth & quality of discussions inside sessions was highly variable. Generally, I saw that large group discussions stayed at a very high level. The smaller sessions required deep knowledge of the code to participate and seemed more productive. We continue to juggle between discussions that are conceptual and those that require detailed knowledge of the code. If conceptual, it's too far removed. If code, it becomes inaccessible to many people.
This has happened at each Summit and I now accept that it is natural. We are using vision sessions to ensure consensus and working sessions to coordinate deliverables for the release.
I cannot overemphasize the importance of small groups and delivery-driven execution: I spent most of my time in small group discussions with partners aligning efforts.
4. Deployment Trend: Testing and Upstreams matter
Operations for deploying OpenStack was a substantial topic at the Summit. I find that to be a significant benefit to the community because there is a large bloc of us who were vocal advocates for deployability at the very formation of the project.
From my perspective at Dell, we are proud to see the widespread acknowledgement of our open source contribution, Crowbar, as the most prominent OpenStack deployer. Our efforts at making OpenStack installable are recognized as a contribution; however, we're also getting feedback that we need to streamline and simplify Crowbar. We were also surprised to hear that Crowbar is "opinionated." On reflection, I agree with (and am proud of) this assessment because it matches best-practice coding styles. Since our opinions also drive our test matrix, there is significant value in our OpenStack deployment: we spend a lot of time testing (automated and manual) our preferred install process.
There’s a push to reconcile the various Chef OpenStack cookbooks into a single upstream. This seems like a very good idea because it will allow various parties to collaborate on open operations. The community needs leadership from Opscode to make this happen. It appears that Puppet Labs is interested in playing a similar role for Puppet modules but these are still emerging and have not had a chance to fragment.
No matter which path we take, the deployment scripts are only as good as their level of testing. Unreliable deployment scripts are worse than worthless.