OpenStack Quantum Update – what I got wrong and where it’s headed

I’m glad to set the record straight: I incorrectly reported that the OpenStack Quantum project would require licensed components for implementation. It does not. I fully stand behind Quantum as OpenStack’s “killer app” and am happy to post more information about it here.

Side note: My team at Dell is starting to get Crowbar community pings about collaboration on a Quantum barclamp.  Yes, we are interested!

This update comes via Dan Wendlandt from Nicira, who pointed out my error in the Seattle Meetup notes (you can read his comment on that post).  Rather than summarize his information, I’ll let Dan speak for himself…

Dan’s comments about the open source Quantum implementation:

There is full documentation on how to use Open vSwitch to implement Quantum (see http://docs.openstack.org/incubation/openstack-network/admin/content/ and http://openvswitch.org/openstack/documentation/), and [Dan] even sent a demo link out to the openstack list a while back (http://wiki.openstack.org/QuantumOVSDemo). Open vSwitch is completely open source and free. Some other plugins may require proprietary hardware and/or software, but there is definitely a (very) viable and completely open source option for Quantum networking.
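To make that open source path a bit more concrete, here is a minimal sketch of the kind of Open vSwitch plumbing the OVS plugin drives on each compute host. The bridge and port names (br-int, tap0) and the VLAN tag are my own illustrative assumptions; see the documentation Dan links above for the actual plugin and agent behavior.

```python
# Rough sketch (not the actual Quantum plugin code) of the Open vSwitch
# operations that attach a VM interface to a logical segment.
# Bridge/port names and the VLAN id are illustrative assumptions.
import subprocess

def ovs(*args):
    """Run an ovs-vsctl command, raising if it fails."""
    subprocess.check_call(["ovs-vsctl"] + list(args))

# Create the integration bridge that VM interfaces plug into.
ovs("--may-exist", "add-br", "br-int")

# Plug a VM's tap device into the bridge and isolate it on a logical
# segment by tagging the port with a locally chosen VLAN id.
ovs("--may-exist", "add-port", "br-int", "tap0")
ovs("set", "port", "tap0", "tag=101")
```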

Dan’s comments about Quantum OpenStack project status in the D-E-F release train:

At the end of the Diablo cycle, Quantum applied to become an incubated project, which means it will be incubated for Essex. At the end of the Essex cycle, we plan to apply to be a core project, meaning that if we are accepted, we would be a core project for the F-series release.

It’s worth noting, however, that [Dan] knows of many people planning on putting Quantum in production before then, which is the real indicator of a project’s maturity (regardless of whether it is technically “core” or not).

OpenStack Day 2 Aspiration: Dreaming & Breathing

Between partnering meetings, I bounced through biz and tech sessions during Day 2 of the OpenStack conference (day 1 notes).  After my overall impression, I’m including some succinct notes, pictures, and copies of presentations by my Dell teammates Greg Althaus & Brent Douglas.

Clouds on the road to Bexar
My overwhelming impression is a healthy tension between aspirational* and practical discussions.  The community appetite for big, broad, and bodacious features is understandably high: cloud seems on track as a solution for IT problems, but there is still an impedance mismatch between current apps and cloud capabilities.
As service providers ASPire to address these issues, some OpenStack blueprint discussions tended to digress towards more forward-looking or long-term designs.  However, watching the crowd, there was also a quiet, heads-down, pragmatic audience ready to act and implement.  For this action-focused group, delivering a working cloud was the top priority.  The Rackers and Nebulizers have product to deploy and will not be distracted from the immediate concerns of living, breathing, shippable code.
I find the tension between dreaming aspiration (cloud futures) and breathing aspiration (cloud delivery) necessary to the vitality of OpenStack.
[Day 3 update: these coders are holding the floor.  People who are coding have moved into the front seats of the fishbowl, and the process is working very nicely.]
Specific Comments (sorry, not linking everything):
  • Cloud networking is a mess and there is substantial opportunity for innovation here.  Nicira made an impression talking about how Open vSwitch and OpenFlow could address this at the edge switches.  Interesting, but messy.
  • I was happy with our (Dell’s) presentations: real clouds today (Bexas111010DataCenterChanges) and what to deploy on (Bexar111010OpenStackOnDCS).
  • SheepDog was presented as a way to handle block storage.  It is not an iSCSI solution; it works directly with KVM.  Strikes me as too limiting – I’d rather see just using iSCSI.  We talked about GlusterFS or Ceph (NewDream).  This area needs a lot of work to catch up with Amazon EBS.  Unfortunately, persisting data on VM “local” disks is still the dominant paradigm.
  • Discussions about how to scale drifted towards the aspirational.
  • Scalr did a side presentation about automating failover.
  • Discussion about migration from Eucalyptus to OpenStack got sidetracked with aspirations for a “hot” migration.  Ultimately, the differences between the networks were a problem.  The practical issue is discovering the metadata: host info is not entirely available from the API.
  • Talked about an API for cloud networking.  This blueprint session was heavily attended and messy.  The possible network topologies present too many challenges to describe easily.  Fundamentally, there seems to be consensus that the API should have a very, very simple concept of connecting VM endpoints to a logical segment (see the sketch after this list).  That approach leverages the accepted (but outdated) VLAN semantic, but the implementation will have to be topology aware.  Ouch!
  • Day 3 topic, live migration: big crowd arguing with bated breath about this.  The summary: “show us how to do it without shared storage, THEN we’ll talk about the API.”
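For the networking API discussion above, here is a hypothetical sketch of the “connect VM endpoints to a logical segment” idea.  The URL scheme, field names, and identifiers are my own illustrative assumptions, not the actual Quantum API; the blueprint discussion only settled on the concept, not the wire format.

```python
# Hypothetical sketch of a "connect VM endpoints to a logical segment" API.
# Endpoint paths, field names, and IDs are illustrative assumptions only.
import requests

BASE = "http://quantum.example.com:9696/v1.0/tenants/demo"

# 1. Create a logical segment (the thing the VLAN semantic maps onto).
net = requests.post(f"{BASE}/networks",
                    json={"network": {"name": "web-tier"}}).json()
net_id = net["network"]["id"]

# 2. Create a port on that segment.
port = requests.post(f"{BASE}/networks/{net_id}/ports",
                     json={"port": {}}).json()
port_id = port["port"]["id"]

# 3. Attach a VM's virtual interface (the endpoint) to the port.
requests.put(f"{BASE}/networks/{net_id}/ports/{port_id}/attachment",
             json={"attachment": {"id": "vif-1234"}})
```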
Executive Tweet:  #OpenStack getting down to business.  Big dreams.  Real problems.  Delivering Code.
 
Note: I nominate Aspirational for 2010 buzzword of the year.

[Photos: Greg presenting; big crowd on Day 1]