Between partnering meetings, I bounced through biz and tech sessions during Day 2 of the OpenStack conference (day 1 notes). After my summary, I'm including some succinct impressions, pictures, and copies of presentations by my Dell teammates Greg Althaus & Brent Douglas.
-
Cloud networking is a mess, and there is substantial opportunity for innovation here. Nicira made an impression talking about how Open vSwitch and OpenFlow could address this at the edge switches. Interesting, but messy.
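To make the edge-switch idea concrete, here is a minimal sketch of wiring an Open vSwitch bridge to an OpenFlow controller. The bridge name, uplink interface, and controller address are all illustrative assumptions, not anything Nicira showed.

```shell
# Sketch only: assumes openvswitch is installed and running on the host.
# Names and addresses below are made up for illustration.
ovs-vsctl add-br br-int                  # bridge that VM vNICs plug into
ovs-vsctl add-port br-int eth1           # uplink to the physical network
# Hand flow-table control of the edge to an external OpenFlow controller:
ovs-vsctl set-controller br-int tcp:192.0.2.10:6633
```

The interesting part is the last line: once the edge switch defers to a controller, the "messy" topology logic can live in software instead of in the physical network.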
-
I was happy with our (Dell’s) presentations: real clouds today (Bexas111010DataCenterChanges) and what to deploy on (Bexar111010OpenStackOnDCS).
-
SheepDog was presented as a way to handle block storage. It is not an iSCSI solution; it works directly with KVM. Strikes me as too limiting – I'd rather just use iSCSI. We talked about GlusterFS and Ceph (NewDream). This area needs a lot of work to catch up with Amazon EBS. Unfortunately, persisting data on VM "local" disks is still the dominant paradigm.
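The "directly with KVM" vs. iSCSI distinction can be sketched as follows; volume and target names are invented for illustration.

```shell
# Sheepdog path: QEMU speaks the sheepdog protocol itself, so the volume
# is only usable by QEMU/KVM (the limitation noted above).
qemu-system-x86_64 -drive file=sheepdog:myvolume

# iSCSI alternative: log in to a target, get an ordinary block device,
# and hand that to the VM (or to anything else on the host).
iscsiadm -m node -T iqn.2010-11.org.example:vol0 -p 192.0.2.20 --login
qemu-system-x86_64 \
  -drive file=/dev/disk/by-path/ip-192.0.2.20:3260-iscsi-iqn.2010-11.org.example:vol0-lun-0
```

With iSCSI, the volume is a plain block device, which keeps the storage layer independent of the hypervisor.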
-
Discussions about how to scale drifted toward the aspirational.
-
Scalr did a side presentation about automating failover.
-
Discussion about migrating from Eucalyptus to OpenStack got sidetracked by aspirations for a "hot" migration. Ultimately, the differences between the networks were a problem. The practical issue is discovering the metadata – host info is not entirely available from the API.
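For context on the discovery gap: both Eucalyptus and OpenStack expose an EC2-compatible metadata endpoint, but it only describes the instance's own view, not the host side. A sketch, run from inside a VM:

```shell
# EC2-compatible instance metadata (works in both clouds, from the guest):
curl http://169.254.169.254/latest/meta-data/instance-id
curl http://169.254.169.254/latest/meta-data/local-ipv4

# Host-level details (which hypervisor, bridge, or physical network the
# instance sits on) are NOT in this tree -- that is the gap that makes
# automated migration between the clouds hard.
```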
-
Talked about an API for cloud networking. This blueprint session was heavily attended and messy. The possible network topologies present too many challenges to describe easily. Fundamentally, there seems to be consensus that the API should have a very simple concept of connecting VM endpoints to a logical segment. That approach leverages the accepted (but outdated) VLAN semantic, but the implementation will have to be topology aware. Ouch!
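A purely hypothetical sketch of what "connect VM endpoints to a logical segment" could look like as a REST API; no such API existed at the time of this session, and every URL and field name below is invented.

```shell
# Hypothetical: create a logical segment (VLAN-like semantics; the
# implementation would map it onto the real topology underneath).
curl -X POST http://cloud.example.com/networks \
     -d '{"name": "web-tier"}'

# Hypothetical: plug a VM's vNIC into that segment.
curl -X POST http://cloud.example.com/networks/web-tier/ports \
     -d '{"vm": "vm-42", "vif": "eth0"}'
```

The appeal of this model is that the caller never sees the topology; the pain is that the implementation behind it must be fully topology aware.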
-
Day 3 topic, Live Migration: Big crowd arguing with bated breath about this. The summary: "show us how to do it without shared storage, THEN we'll talk about the API."
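For reference, libvirt's virsh already exposes a flag for exactly the "without shared storage" case; a minimal sketch with an illustrative domain and host name:

```shell
# --copy-storage-all streams the full disk images to the destination
# host during the migration, so no shared storage is required.
virsh migrate --live --copy-storage-all vm-42 qemu+ssh://dest-host/system
```

Copying the disks makes the migration much slower than the shared-storage case, which is presumably why the room wanted the storage story settled before debating an API.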