Between partnering meetings, I bounced through biz and tech sessions during Day 2 of the OpenStack conference (day 1 notes). After my summary, I'm including some succinct impressions, pictures, and copies of presentations by my Dell teammates Greg Althaus & Brent Douglas.
Clouds on the road to Bexar
My overwhelming impression is a healthy tension between aspirational* and practical discussions. The community appetite for big, broad, and bodacious features is understandably high: cloud seems on track as a solution for IT problems, but there is still an impedance mismatch between current apps and cloud capabilities.
As service providers ASPire to address these issues, some OpenStack blueprint discussions tended to digress toward more forward-looking or long-term designs. However, watching the crowd, there was also a quiet, heads-down, and pragmatic audience ready to act and implement. For this action-focused group, delivering a working cloud was the top priority. The Rackers and Nebulizers have product to deploy and will not be distracted from the immediate concerns of living, breathing, shippable code.
I find the tension between dreaming aspiration (cloud futures) and breathing aspiration (cloud delivery) necessary to the vitality of OpenStack.
[Day 3 update, these coders are holding the floor. People who are coding have moved into the front seats of the fishbowl and the process is working very nicely.]
Specific Comments (sorry, not linking everything):
Cloud networking is a mess, and there is substantial opportunity for innovation here. Nicira made an impression talking about how Open vSwitch and OpenFlow could address this at the edge switches. Interesting, but messy.
SheepDog was presented as a way to handle block storage. It is not an iSCSI solution; it works directly with KVM. That strikes me as too limiting – I'd rather see plain iSCSI used. We talked about GlusterFS and Ceph (NewDream). This area needs a lot of work to catch up with Amazon EBS. Unfortunately, persisting data on VM "local" disks is still the dominant paradigm.
Discussions about how to scale drifted toward the aspirational.
Scalr did a side presentation about automating failover.
Discussion about migration from Eucalyptus to OpenStack got sidetracked by aspirations for a "hot" migration. Ultimately, the differences between the networks were a problem. The practical issue is discovering the metadata – host info is not entirely available from the API.
Talked about an API for cloud networking. This blueprint session was heavily attended and messy. The possible network topologies present too many challenges to describe easily. Fundamentally, there seems to be consensus that the API should have a very, very simple concept: connecting VM end points to a logical segment. That approach leverages the accepted (but outdated) VLAN semantic, but the implementation will have to be topology aware. Ouch!
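To make that "very simple concept" concrete, here is a toy sketch of what such an API surface might look like. All names (`NetworkAPI`, `LogicalSegment`, `attach`) are mine for illustration – this is not an actual OpenStack API, and the topology-aware VLAN/tunnel mapping is deliberately hidden behind the abstraction:

```python
# Toy model of the discussed network API: the caller only ever
# connects a VM endpoint to a logical segment. How that segment maps
# to a VLAN, tunnel, or flow rules is an implementation detail.

class LogicalSegment:
    """A named L2 segment; its physical realization is hidden."""
    def __init__(self, name):
        self.name = name
        self.endpoints = set()

class NetworkAPI:
    def __init__(self):
        self.segments = {}

    def create_segment(self, name):
        self.segments[name] = LogicalSegment(name)

    def attach(self, vm_id, segment_name):
        # The single core operation: VM endpoint -> logical segment.
        self.segments[segment_name].endpoints.add(vm_id)

    def members(self, segment_name):
        return sorted(self.segments[segment_name].endpoints)

api = NetworkAPI()
api.create_segment("web-tier")
api.attach("vm-1", "web-tier")
api.attach("vm-2", "web-tier")
print(api.members("web-tier"))  # ['vm-1', 'vm-2']
```

The simplicity is the point: the hard part (being topology aware) lives entirely below this interface, which is why the room found it messy.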
Day 3 topic, live migration: a big crowd arguing with bated breath about this. The summary: "show us how to do it without shared storage, THEN we'll talk about the API."
It’s obvious looking at the board composition that RackSpace and NASA Nova are driving most of the development; however, there is palpable community interest and enthusiasm. Participants and contributors showed up in force at this event.
RackSpace and NASA leadership provides critical momentum for the community. Code is the smallest part of their contribution, their commitment to run the code at scale in production is the magic rocket fuel powering OpenStack. I’ve had many conversations with partners and prospects planning to follow RackSpace into production with a 3-6 month lag.
Beyond that primary conference arc, my impressions:
Core vendors like Citrix, Dell, and Canonical are signing up to do primary work for the code base. They are taking ownership of their own components in the stack.
Universally, people comment about the speed of progress and the amount of code being generated. Did I mention that there is a lot of code being written?
Networking is still a major challenge. OpenStack (with Citrix’s Xen support) is driving Open vSwitch as a replacement for iptables management.
IPv6 gets lackadaisical treatment in the US, but it is urgent in Japan/Asia, where core infrastructure is ALREADY IPv6. Their frustration at getting attention here should be a canary in the cloud mine (but is not). They proposed a gateway model where VMs have dual addresses: IPv4 gets NATed while IPv6 is a pass-through. Seems to me that going IPv6 internally is the real solution.
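The proposed gateway behavior can be sketched in a few lines. This is a toy model of the dual-address idea, not anything from the actual proposal; the gateway address and function name are invented, and the addresses come from the standard documentation ranges:

```python
# Toy model of the dual-address gateway: private IPv4 sources are
# rewritten (NAT) to the gateway's public IPv4, while IPv6 traffic
# passes through with its own address untouched.
import ipaddress

GATEWAY_PUBLIC_V4 = ipaddress.ip_address("203.0.113.10")  # example address

def egress_source(vm_address):
    """Return the source address seen on the far side of the gateway."""
    addr = ipaddress.ip_address(vm_address)
    if addr.version == 4 and addr.is_private:
        return GATEWAY_PUBLIC_V4  # NAT: many VMs share one public IPv4
    return addr                   # IPv6 (or public IPv4): pass-through

print(egress_source("10.0.0.5"))     # 203.0.113.10 (NATed)
print(egress_source("2001:db8::5"))  # 2001:db8::5 (pass-through)
```

The asymmetry is visible even in the sketch: the IPv4 path needs stateful translation at the gateway, while the IPv6 path needs nothing, which is why going IPv6 internally looks like the cleaner answer.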
Cloud bursting is still too fuzzy a thing to talk about in a big group. The session about it covered so many use-cases that we did not accomplish anything. Some people wanted to talk about a cloud API proxy, while others (myself included) wanted to talk about managing apps between clouds. My $0.02 is that vendors like RightScale solve the API proxy issue, so it’s the networking issues that need focus. We need to get back to the use-cases!
Executive Tweet: #openstack: Partners & Code = great progress. Networking = needs more love