Last week, my team at Dell led a worldwide OpenStack Essex Deploy event. Kamesh Pemmaraju, our OpenStack-powered solution product manager, wrote a great summary of the event results (200+ attendees!). What started as a hack-a-thon for deploy scripts morphed into a stunning 14+ hour event with rotating intro content and an ecosystem showcase (videos). Special kudos to Kamesh, Andi Abes, Judd Maltin, Randy Perryman & Mike Pittaro for leadership at our regional sites.
Clearly, OpenStack is attracting a lot of interest. We've been investing time in content to help people who are curious about OpenStack get started.
On that measure, we have room for improvement. We had some great discussions about how to handle upgrades and market drivers for OpenStack; however, we did not spend as much time improving Essex deployments as I had hoped. I know it's possible – I've talked with developers in the Crowbar community who want this.
For those who wanted more expert interaction, here are some of my thoughts for future events.
The expert track never got to hands-on deploy coding. I think we need to focus even more tightly on Crowbar deployments. That means having a Crowbar hack with an OpenStack focus instead of vice versa.
Efforts to serve OpenStack n00bs did not protect time for experts. If we offer expert sessions, then we won't try to run parallel intro sessions. We'll simply direct novices to the homework pages and videos.
Combining on-site and on-line is too confusing. As much as I enjoy meeting people face-to-face, I think we’d have a more skilled audience if we kept it online only.
Connectivity! Dropped connections, sigh.
We need better planning for videos (handled by someone other than the presenters) to make sure we get good recordings from the expert track.
This event was too long. It's just not practical to serve Europe, the US, and Asia in a single event. I think 2-3 hours is a much more practical maximum. 10 a.m.-noon Eastern or 6-8 p.m. Pacific would be much more manageable.
Do you have other comments and suggestions? Please let me know!
To get the meeting started, Marc Padovani from HP (this month's sponsor) shared some lessons learned from the HP OpenStack-Powered Cloud. While Marc noted that HP has not been able to share much of its development work on OpenStack, he was able to show performance metrics for a fix that HP contributed back to the OpenStack community. The defect related to the scheduler's ability to handle load. The pre-fix data showed a climb and then a gap where the scheduler simply stopped responding. Post-fix, the performance curve is flat without any "dead zones." (Sharing data like this is what I call "open operations.")
The meat of the meetup was a freeform discussion about what the group would like to see discussed at the Design Summit. My objective for the discussion was that the Austin OpenStack community could have a broader voice if we showed consensus for certain topics in advance of the meeting.
At Jim Plamondon's suggestion, we captured our brainstorming on the OpenStack Etherpad. The Etherpad is super cool – it allows simultaneous editing by multiple parties, so the notes below were crowdsourced during the meeting as we discussed topics that we'd like to see highlighted at the conference. The Etherpad preserves editor attributions, but I removed the highlights for clarity.
Imagine the late end-game: can Azure/VMWare adopt OpenStack's APIs and data formats to deliver interop without running OpenStack's code? Is this good? Are there conversations on displacing incumbents and spurring new adoption?
Dev docs vs user docs
Lag of updates and fragmentation (10 blogs, 10 different methods, only 2 actually "work")
A per-release getting-started guide, validated and available at or before release.
Error messages and codes vs. Python stack traces
Alternatively put, "how can we make error messages more ops-friendly, without making them less developer-friendly?" (There's a sketch of one approach after this list.)
Upgrade operations: how do we handle rolling updates and upgrades? Hot migrations?
If OpenStack was installable on Windows/Hyper-V as a simple MSI/Service installer – would you try it as a node?
Is Nova too big? How does it get fixed?
Break it into smaller sub-projects?
Shorter release cycles?
Split out volumes?
Volume expansion of backend storage systems
Is nova-volume the canonical control plane for storage provisioning? Regardless of transport? It presently deals in block devices only… is the following blueprint correctly targeted to nova-volume?
What is a contribution that warrants an invitation?
Look at Launchpad's Karma system, which confers karma for many different "contributory" acts, including bug fixes and doc fixes, in addition to code commits.
Is there a time for an operations summit?
How about an operators’ track?
Just a note: forums.openstack.org for users/operators to drive/show need and participation.
How can we capture the implicit knowledge (of mailing list and IRC content) in explicit content (documentation, forums, wiki, stackexchange, etc.)?
Hypervisors: room for discussion?
Do we want hypervisor feature parity?
From the cloud-app developer's perspective, I want to "write once, run anywhere," and hypervisor feature differences can preclude that (by producing incompatible VM images, for example).
(RobH: But “write once, run anywhere” [WORA] didn’t work for Java, right?)
(JimP: Yeah, but I was one of Microsoft's anti-Java evangelists when we were actively preventing it from working — so I know the dirty tricks vendors can use to hurt WORA in OpenStack, and how to prevent those tricks from working.)
The Swift API is an evolving de facto open alternative to S3, while CDMI is on the SNIA standards track. Should the Swift API become CDMI compliant? Should CDMI exist as a shim, à la the S3 stuff? (There's a sketch of that, too, after this list.)
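On the error-message question above, here's a minimal sketch of one way to serve both audiences: log the full Python stack trace for developers, but surface a stable, searchable code to operators. This is purely illustrative; `OpsError`, the `VOL-0042` code, and `_do_attach` are hypothetical stand-ins, not real Nova or nova-volume interfaces.

```python
# Illustrative only: OpsError, VOL-0042, and _do_attach are hypothetical
# stand-ins for the pattern, not real Nova/nova-volume code.
import logging

LOG = logging.getLogger(__name__)


class OpsError(Exception):
    """Operator-facing error: a stable code plus a plain-language message."""

    def __init__(self, code, message):
        self.code = code
        super(OpsError, self).__init__('[%s] %s' % (code, message))


def _do_attach(instance_id, volume_id):
    # Stand-in for the real attach path; pretend the backend is down.
    raise IOError('iSCSI target not reachable')


def attach_volume(instance_id, volume_id):
    try:
        _do_attach(instance_id, volume_id)
    except Exception:
        # Developers still get the full traceback in the service log...
        LOG.exception('attach failed for %s/%s', instance_id, volume_id)
        # ...while operators see a stable, searchable code instead of
        # a raw Python stack trace.
        raise OpsError('VOL-0042',
                       'Could not attach volume %s to instance %s; '
                       'check volume service connectivity.'
                       % (volume_id, instance_id))
```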
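And on the CDMI shim question: one plausible shape is WSGI middleware in front of the Swift proxy, the same general approach as the S3 compatibility layer. The toy below wraps a native object GET in a CDMI-style JSON envelope when the client sends a CDMI version header. It's an assumption-laden sketch, not an implementation of the spec: the `CDMIShim` class is invented, and a real shim would have to cover containers, metadata, capabilities, and more.

```python
# Toy sketch only: CDMIShim is hypothetical. Header and content-type
# values follow CDMI 1.0; everything else is illustrative.
import base64
import json


class CDMIShim(object):
    """WSGI middleware that answers CDMI data-object GETs from Swift."""

    def __init__(self, app):
        self.app = app  # the underlying Swift proxy WSGI app

    def __call__(self, environ, start_response):
        # Pass non-CDMI traffic straight through to Swift.
        if 'HTTP_X_CDMI_SPECIFICATION_VERSION' not in environ:
            return self.app(environ, start_response)

        captured = {}

        def capture(status, headers, exc_info=None):
            captured['status'] = status

        # Fetch the raw object bytes via the normal Swift path.
        body = b''.join(self.app(environ, capture))

        # Re-wrap the bytes in a CDMI data-object envelope.
        doc = json.dumps({
            'objectType': 'application/cdmi-object',
            'objectName': environ.get('PATH_INFO', '').rsplit('/', 1)[-1],
            'valuetransferencoding': 'base64',
            'value': base64.b64encode(body).decode('ascii'),
        }).encode('utf-8')

        start_response(captured.get('status', '200 OK'), [
            ('Content-Type', 'application/cdmi-object'),
            ('X-CDMI-Specification-Version', '1.0'),
            ('Content-Length', str(len(doc))),
        ])
        return [doc]
```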