To get the meeting started, Marc Padovani from HP (this month’s sponsor) shared some lessons learned from the HP OpenStack-Powered Cloud. While Marc noted that HP has not been able to share much of its development work on OpenStack, he was able to show performance metrics for a fix that HP contributed back to the OpenStack community. The defect related to the scheduler’s ability to handle load: the pre-fix data showed a climb and then a gap where the scheduler simply stopped responding. Post-fix, the performance curve is flat, without any “dead zones.” (Sharing data like this is what I call “open operations.”)
The meat of the meetup was a freeform discussion about what the group would like to see discussed at the Design Summit. My objective for the discussion was that the Austin OpenStack community could have a broader voice if we showed consensus on certain topics in advance of the meeting.
At Jim Plamondon‘s suggestion, we captured our brainstorming on the OpenStack etherpad. The Etherpad is super cool – it allows simultaneous editing by multiple parties, so the notes below were crowdsourced during the meeting as we discussed topics that we’d like to see highlighted at the conference. The etherpad preserves editors, but I removed the highlights for clarity.
Imagine the late end-game: could Azure/VMware adopt OpenStack’s APIs and data formats to deliver interop without running OpenStack’s code? Is this good? Are there conversations about displacing incumbents and spurring new adoption?
Dev docs vs user docs
Lag of update/fragmentation (10 blogs, 10 different methods, 2 “work”)
A per-release getting started guide, validated and available prior to or at release.
Error messages and codes vs python stack traces
Alternatively put, “how can we make error messages more ops-friendly, without making them less developer-friendly?”
Operation of rolling updates and upgrades. Hot migrations?
If OpenStack was installable on Windows/Hyper-V as a simple MSI/Service installer – would you try it as a node?
Is Nova too big? How does it get fixed?
break it into smaller sub-projects?
shorter release cycles?
volume split out?
volume expansion of backend storage systems
Is nova-volume the canonical control plane for storage provisioning? Regardless of transport? It presently deals in block devices only… is the following blueprint correctly targeted to nova-volume?
What is a contribution that warrants an invitation?
Look at Launchpad’s Karma system, which confers karma for many different “contributory” acts, including bug fixes and doc fixes, in addition to code commits.
Is there a time for an operations summit?
How about an operators’ track?
Just a note: forums.openstack.org for users/operators to drive/show need and participation.
How can we capture the implicit knowledge (of mailing list and IRC content) in explicit content (documentation, forums, wiki, stackexchange, etc.)?
Hypervisors: room for discussion?
Do we want hypervisor feature parity?
From the cloud-app developer’s perspective, I want to “write once, run anywhere,” and it’s a problem if hypervisor features preclude that (by having incompatible VM images, for example).
(RobH: But “write once, run anywhere” [WORA] didn’t work for Java, right?)
(JimP: Yeah, but I was one of Microsoft’s anti-Java evangelists, when we were actively preventing it from working — so I know the dirty tricks vendors can use to hurt WORA in OpenStack, and how to prevent those tricks from working.)
The Swift API is an evolving de facto open alternative to S3… CDMI is on the SNIA standards track. Should the Swift API become CDMI-compliant? Should CDMI exist as a shim, à la the S3 support?
Tomorrow (3/1), numerous sites are gathering for a World Wide Essex Hack Day. If you want to participate or even host a hack venue, get on the list and IRC channel (details).
My team at Dell is organizing a community follow-up, an OpenStack Essex Install Day, next week (3/8) in both Austin and Boston. Just like the Hack Day, the install fest will focus on Essex release code with both online and local presence. Unlike the Hack Day, our focus will be on deployments. For the Dell team, that means working on the Essex deployment for Crowbar. We’re still working on a schedule and partner list, so stay tuned. I’m trying to webcast Crowbar & OpenStack training sessions during the install day.
Dell is hosting one of the five-day sessions at our Austin campus (register) starting on October 24th. Other sessions are in Boston (9/26) and London (10/10).
If you come to the Austin session, I can guarantee you’ll get to meet some of our Austin team (Rob, Joseph, Greg, Victor, AD, Nick and Joey). I’ll try to setup a visit to the Boston sessions by some of our Nashua NH members (Dan, Scott, Andi, Randy, Audra and Paul).
Do you want to have a winning team? Bring a spoonful of zeitgeist to your next meeting!
For me, zeitgeist is about how group dynamics influence how we feel about technology and make decisions. It’s like a meme, but I like zeitgeist more because of its lower vowel ratio (Hirschfeld = 2:10).
Yesterday, my release team meeting had negative zeitgeist. Locally, everyone was checking email while the remote speaker flipped through dense PowerPoint slides. It was like watching my divorced aunt’s family vacation slides via Webex. We needed a spoonful of zeitgeist! That’s how I found myself explaining some of our challenges with the phrase “turd in the punchbowl” and getting people to pay more attention to the real work. A small positive spark and faked enthusiasm changed the momentum. Yeah, it was fake at first, but it became real zeitgeist when the other attendees picked up on the positive vibe.
The idea of seeding zeitgeist is critical for everyone on teams. It’s like the William James expression, “feeling follows action”: if you act happy, then you’ll shake off the blues and start to feel happy. Yes, this is 100% real. The same applies to groups. We can choose to ride or steer the zeitgeist.
There’s no reason to endure low energy meetings when you can get out your spoon and stir things up.