MY PLEA TO YOU > There is a tendency for companies to “vote-up” sessions from their own employees. I understand the need for the practice BUT encourage you to make time to review other sessions too. Affiliation voting is fine, robot voting is not.
If you are interested in the topics that I discuss on this blog, here’s a list of sessions I’m involved in:
I’ve come to accept that the “Hallway Track” is my primary session at OpenStack events. I want to thank the many people in the community who make that the best track. It’s not only full of deep technical content; there are also healthy doses of intrigue, politics and “let’s fix that” in the halls.
I think honest reflection is critical to OpenStack growth (reflections from last year). My role as a Board member must not translate into being a pom-pom-waving robot cheerleader.
What I heard that’s working:
The Foundation event team did a great job on the logistics, and many appreciate the user and operator focus. There is no doubt that OpenStack is being deployed at scale and helping transform cloud infrastructure. I think that’s a great message.
DefCore criteria were approved by the Board. The overall process and its impact were talked about positively at the summit. To accelerate, we need +1s and feedback because “crickets” means we need to go slower. I’ll have to dedicate a future post to next steps and “designated sections.”
Marketplace! Great turnout by vendors of all types, but I’m not hearing about them making a lot of money from OpenStack (which is needed for them to survive). I like the diversity of the marketplace: consulting, aaServices, installers, networking, more networking, new distros, and ecosystem tools.
There’s some real growth in aaS offerings for OpenStack (database, load balancer, DNS, etc.). This is the ecosystem that many want OpenStack to drive because it helps displace the Amazon cloud. I also heard concerns about making sure these services are pluggable so companies can compete on implementation.
Lots of process changes to adapt to growing pains. People felt that the community is adapting (yeah!) but were concerned about having to re-invent tooling (meh).
There are also challenges that people brought to me:
Our #1 danger is drama. Users and operators want collaboration and friendly competition. They are turned off by vendor conflict or strong-arming in the community (e.g.: the WSJ Red Hat article and fallout). I’d encourage everyone to breathe more and react less.
Lack of product management is risking a tragedy of the commons. Helping companies work together and across projects is needed for our collaboration processes to work. I’ll be exploring this with Sean Roberts in future posts.
Making sure there’s profit being generated from shared code. We need to remember that most of the development is corporate funded, so companies must be able to generate revenue for our collaboration to be sustainable. The trend of everyone creating unique distros may indicate a problem.
We need to be more operator friendly. I know we’re trying but we create distance with operators when we insist on creating new tools instead of using the existing ecosystem. That also slows down dealing with upgrades, resilient architecture and other operational concerns.
Anointed projects concerns have expanded since Hong Kong. There’s a perception that Heat (orchestration), TripleO (provisioning), and Solum (platform) are considered THE only way OpenStack solves those problems and that other approaches are not welcome. While that encourages collaboration, it also chills competition and discussion.
There’s a lot of whispering about the status of challenged projects: Neutron (works with proprietary backends but not the open ones; may not stay integrated) and OpenStack boot-strap (the state of the TripleO/Ironic/Heat mix). The issue here is NOT whether they are challenged but finding ways to discuss concerns openly (see the anointed projects concern).
I’d enjoy hearing more about success and deeper discussion around concerns. I use community feedback to influence my work in the community and on the board. If you think I’ve got it right or wrong then please let me know.
I could not be happier with the results Crowbar collaborators and my team at Dell achieved around the 1st Crowbar design summit. We had great discussions and even better participation.
The attendees represented major operating system vendors, configuration management companies, OpenStack hosting companies, OpenStack cloud software providers, OpenStack consultants, OpenStack private cloud users, and (of course) a major infrastructure provider. That’s a very complete cross-section of the cloud community.
I knew from the start that we had too little time and, thankfully, people were tolerant of my need to stop the discussions. In the end, we were able to cover all the planned topics. This was important because all these features are interlocked so discussions were iterative. I was impressed with the level of knowledge at the table and it drove deep discussion. Even so, there are still parts of Crowbar that are confusing (networking, late binding, orchestration, chef coupling) even to collaborators.
In typing up these notes, it becomes even more blindingly obvious that the core features for Crowbar 2 are highly interconnected. That’s no surprise technically; however, it will make the notes harder to follow because of knowledge bootstrapping. You need to take time and grok the gestalt and surf the zeitgeist.
Collaboration Invitation: I wanted to remind readers that this summit was just the kick-off for a series of open weekly design (Tuesdays 10am CDT) and coordination (Thursdays 8am CDT) meetings. Everyone is welcome to join in those meetings – information is posted, recorded, folded, spindled and mutilated on the Crowbar 2 wiki page.
These notes are my reflection of the online etherpad notes that were made live during the meeting. I’ve grouped them by design topic.
We are refactoring Crowbar at this time because we have a collection of interconnected features that could not be decoupled.
Some items (Database use, Rails3, documentation, process) are not for debate. They are core needs but require little design.
There are 5 key topics for the refactor: online mode, networking flexibility, OpenStack pull from source, heterogeneous/multi operating systems, and being CMDB agnostic.
Due to time limits, we have to stop discussions and continue them online.
We are hoping to align Crowbar 2 beta and OpenStack Folsom release.
Online / Connected Mode
Online mode is more than simply internet connectivity. It is the foundation of how Crowbar stages dependencies and components for deployment. It’s required for heterogeneous O/S and pull-from-source, and it has dependencies on how we model networking so nodes can access resources.
We are considering caching proxies to stage resources. This would allow isolated production environments and preserve the ability to run everything from the ISO without a connection (that is still a key requirement for us).
SUSE’s Crowbar fork does not build an ISO; instead it relies on RPM packages for barclamps and their dependencies.
Pulling packages directly from the Internet has proven unreliable, so this approach cannot rely on that alone.
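To make the staging idea concrete, here is a minimal sketch of the cache-first resolution described above. Everything here is hypothetical (the `CACHE_ROOT` path and the `fetch_resource` helper are invented for illustration, not actual Crowbar 2 code): the ISO pre-seeds a local cache, so isolated environments never touch the network, while connected environments fill the cache through a proxy.

```python
import os
import shutil
import urllib.request

# Hypothetical location that the ISO would pre-seed with packages.
CACHE_ROOT = "/var/cache/crowbar"

def fetch_resource(name, upstream_url, cache_root=CACHE_ROOT, proxy=None):
    """Return a local path for `name`, preferring the offline cache.

    Cache hit: serve the ISO-seeded copy with no network access.
    Cache miss: fetch via the (optional) caching proxy and store the
    result so future runs work offline.
    """
    local = os.path.join(cache_root, name)
    if os.path.exists(local):  # offline / ISO-seeded path
        return local
    os.makedirs(cache_root, exist_ok=True)
    handlers = []
    if proxy:
        handlers.append(urllib.request.ProxyHandler({"http": proxy, "https": proxy}))
    opener = urllib.request.build_opener(*handlers)
    with opener.open(upstream_url) as resp, open(local, "wb") as out:
        shutil.copyfileobj(resp, out)  # cache for future offline use
    return local
```

The point of the sketch is the ordering: the local cache is authoritative, and the Internet is only a fallback, which matches the requirement that direct Internet pulls cannot be the sole source.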
Install From Source
This feature is mainly focused on OpenStack, but it could be applied more generally. The principles that we are looking at could be applied to any application where the source code is changing quickly (all of them?!). Hadoop is an obvious second candidate.
We spent some time reviewing the use-cases for this feature. While this appears to be very dev and pre-release focused, there are important applications for production. Specifically, we expect that scale customers will need to run ahead of or slightly adjacent to trunk due to patches or proprietary code. In both cases, it is important that users can deploy from their repository.
We discussed briefly our objective to pull configuration from upstream (not just OpenStack, but potentially any common cookbooks/modules). This topic is central to the CMDB agnostic discussion below.
The overall sentiment is that this could be a very powerful capability if we can manage to make it work. There is a substantial challenge in tracking dependencies – current RPMs and Debs do a good job of this and other configuration steps beyond just the bits. Replicating that functionality is the real obstacle.
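The package-versus-source decision above can be sketched in a few lines. This is purely illustrative, not Crowbar code: the config keys (`source`, `repo`, `ref`, `package`) and the `install_plan` helper are invented for the example. The packaged path keeps the dependency tracking that RPMs and Debs already provide; the source path serves scale users who run ahead of trunk or carry private patches from their own repository.

```python
def install_plan(component, config):
    """Build shell steps to deploy `component` from packages or from source."""
    src = config.get("source")
    if src:  # user tracking trunk or carrying private patches
        return [
            f"git clone {src['repo']} /opt/{component}",
            f"git -C /opt/{component} checkout {src.get('ref', 'master')}",
            f"pip install -e /opt/{component}",
        ]
    # Default path: distro packages, which already resolve dependencies.
    return [f"apt-get install -y {config.get('package', component)}"]
```

For example, `install_plan("nova", {"source": {"repo": "https://example.com/nova.git", "ref": "stable"}})` yields the git-based steps, while an empty config falls back to the package install. The hard part the group identified, replicating the dependency resolution that packages give us for free, is exactly what this sketch glosses over.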
CMDB agnostic (decoupling Chef)
This feature is confusing because we are not eliminating the need for a configuration management database (CMDB) tool like Chef; instead, we are decoupling Crowbar from a single CMDB to a pluggable model using an abstraction layer.
It was stressed that Crowbar does orchestration – we do not rely on convergence over multiple passes to get the configuration correct.
We had strong agreement that the modules should not be tightly coupled but did need a consistent way (API? Consistent namespace? Pixie dust?) to share data between each other. Our priority is to maintain loose coupling and follow integration by convention and best practices rather than rigid structures.
The abstraction layer needs to have both import and export functions.
Crowbar will use attribute injection so that cookbooks can leverage Crowbar but will not require Crowbar to operate. Crowbar’s database will provide the links between the nodes instead of having to wedge them into the CMDB.
In 1.x, networking was the most tightly coupled to Chef. This is a major part of the refactor and of the modeling for Crowbar’s database.
There are a lot of notes captured about this on the etherpad – I recommend reviewing them.
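The pluggable CMDB model and attribute injection described above could be sketched as follows. To be clear, the class and method names here (`CMDBAdapter`, `export_node`, `import_node`, the in-memory stand-in) are my illustration, not the actual Crowbar 2 API: Crowbar talks to an abstraction layer with import and export functions, and Crowbar-owned attributes ride along with the node so cookbooks can use them without requiring Crowbar to be present.

```python
class CMDBAdapter:
    """Abstraction layer between Crowbar and a specific CMDB (Chef, etc.)."""

    def export_node(self, node):
        """Push Crowbar's view of a node into the CMDB."""
        raise NotImplementedError

    def import_node(self, name):
        """Pull the node's current state back out of the CMDB."""
        raise NotImplementedError


class InMemoryChefAdapter(CMDBAdapter):
    """Stand-in for a Chef-backed adapter, kept in memory for illustration."""

    def __init__(self):
        self.store = {}

    def export_node(self, node):
        # Attribute injection: Crowbar-owned data is merged into the node's
        # attributes, so cookbooks can leverage it but run fine without Crowbar.
        injected = dict(node.get("attributes", {}), crowbar=node.get("crowbar", {}))
        self.store[node["name"]] = injected

    def import_node(self, name):
        return self.store[name]
```

Swapping Chef for another tool then means writing another adapter, while the node links stay in Crowbar’s own database rather than being wedged into whichever CMDB is plugged in.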
Heterogeneous OS (bare metal provisioning and beyond)
This topic was the most divergent of all our topics because most of the participants were using some variant of their own bare metal provisioning project (check the etherpad for the list).
Since we can’t pack an unlimited set of stuff on the ISO, this feature requires online mode.
Most of these projects do nothing beyond OS provisioning; however, their simplicity is beneficial. Crowbar needs to consider users who just want a streamlined OS provisioning experience.
First, we’ll have our Crowbar demo rack showcasing LIVE MULTI-NODE DIABLO DEPLOYMENTS and some IMPORTANT FEATURE AND COMMUNITY ADDITIONS. No spoilers here – you’ll have to come by. Of course, it’s on GitHub too, but we’ve put a bow on it.
Second, there’s a DEPLOYMENT BLUEPRINT discussion about getting better interlocks between OpenStack development and deployment. We really need to reduce the pain and lag between adding great features and using those features.
Next, we’ve got a limited audience CONCEPT SNEAK PEEK for something from our labs that we think is very interesting and we’d like to get input about. Unfortunately, we’re very limited with space & time for this whisper session so you’ll need to contact OpenStack@Dell.com to request an invitation.
Finally, at the Conference, you can see OUR TEAM IN ACTION:
Thurs 11:30 – Dell Keynote by John Igoe
Thurs 3:30 – Private Cloud Panel w/ Rob Hirschfeld