OpenStack Boston Meetup 2/1 covers Quantum & Foundation

My team at Dell was in Beantown (several of us are Nashua based) for an annual team meeting, so the timing for this Boston meetup was ideal for us. While we showed up in force, so did many other Stackers including people from HP, Nicira, Suse, Harvard, Voxel, RedHat, ESPN and many more! Special thanks to Andi Abes for organizing and Suse for sponsoring!!

We covered two primary topics: Quantum and the OpenStack Foundation.

In typing up my notes from the sessions, I ended up with so much information that it made more sense to break them into independent blog posts. Wow – that’s a lot of value from a free meetup! The turnout for the event was great and I’m taking notes that Austin may need to upgrade our pizza and Boston may need to upgrade their cookies (just sayin’).

The Quantum session by David Lapsley from Nicira talked about the architecture and applications of Quantum. I think that Quantum is an exciting incubated project for OpenStack; however, it is important to remember that Essex stands alone without it. I believe this fact gets forgotten in enthusiasm over Quantum’s shiny potential.

The OpenStack session by Rob Hirschfeld from Dell (me!) talked about the importance of governance for OpenStack and how the Foundation will play a key role in transitioning it from Rackspace to a neutral party. There are many feel-good community benefits that the Foundation brings; however, the collaborators’ ROI is the driver for creating a strong foundation. There is nothing wrong with acknowledging that fact and using it to create a more sustainable OpenStack.

Why Governance Matters in Open Source: Discussing the OpenStack Foundation

This post is part of my notes from the 2/1 Boston OpenStack meetup.

OpenStack Foundation

Yours truly (Rob Hirschfeld) gave the presentation about the OpenStack Foundation.  To readers of this blog, it’s obvious that I’m a believer in the OpenStack mission; however, it’s not obvious how creating a foundation helps with that mission and why OpenStack needs its own. As one person at the meetup put it, “Why not? Every major project needs a foundation!”

Governance does not sound sexy compared to writing code and deploying clouds, but it’s very important to the success of the project.

Here are my notes without the poetic elocution I exuded during the meetup…

The basics:

  • What: Creating a neutral body to govern OpenStack. Rackspace has been leading OpenStack. This means that they own the copyrights, name and also pay the people who organize the community. They committed (to executives at Dell and others) that they would ultimately set up a standalone body to govern the project before the project was public and endorsed by those early partners. Dell (my employer), Citrix, Accenture and NASA were some of the biggest names at the Austin conference launch.
  • Why: A neutral body is needed because a lot of companies are committing significant time and money to the project. They cannot risk their investments on Rackspace good will alone. This may mean many things. It could be that they don’t like Rackspace’s direction or that they feel Rackspace is not investing enough.
  • When: Right now and over the next few releases.  You should give feedback right now on the OpenStack Foundation’s mission.  The actual foundation will take more time to establish because it requires legal work and funding commitments.
  • Who: The community – all stakeholders. This is important stuff! While trying to stand up a financially independent Foundation, which requires money, the little guys are not left out. There is a clear realization and desire to enable independent developers, contributors and small players to have a seat at the table.
  • How Much: The amounts are unclear, but establishing a foundation will require a significant ongoing investment from highly involved and moneyed parties (Rackspace, Dell, Cisco, HP, Citrix, NTT, startups?, etc).  The funding will pay salaries for people dedicated to the community doing the things that I’ll discuss below.  Overall, the ROI for those investments must be clear!

The foundation does “governance.” But, what does that mean? Here is a list of vitally important work that the foundation is responsible for.

  • Branding – Protecting, certifying, and promoting the OpenStack brand is important because it ensures that “OpenStack” has a valuable and predictable meaning to contributors and users. A strong brand also means a stronger temptation for people to abuse it by claiming compatibility, participation and integration.
  • API – Many would assume that the OpenStack API is the very heart of the project and there is merit to this position. As more and more OpenStack implementations emerge, it is essential that we have a body that can certify which implementations (and even which versions of the implementation!) are valid. This is of substantial value to the community because API integrity ensures project continuity and helps the ecosystem monetize the project. Note: my opinion differs from others here because I think we should favor API over implementation.
  • Community – The OpenStack community is not an accident. It is the function of deliberate actions and choices made by Rackspace and supported by key contributors. That community requires virtual and physical places to coalesce and leaders to organize and manage those meeting places. The excellent conferences, wikis, blogs, media awareness, documentation and meetups are a product of consistent community management.
  • Arbitration – An open source community is a family and siblings do not always get along. Today, Rackspace must be very careful about balancing their own interests because they are like the oldest sibling playing the parent role – you can get away with it until something serious happens. We need a neutral party so that Rackspace can protect their own interests (alternate spin: because Rackspace protects their own interests at the expense of the community).
  • Leadership – OpenStack today is a collection of projects with individual leadership. We will increasingly need coordinated leadership as the number of projects and users increases. Centralized leadership is essential because the good of the project as a whole may mean sacrifices within individual projects. It may even mean that some projects chose to leave the OpenStack tent. Stewarding these challenges will require a new level of leadership.
  • Legal – This is a function of all the above but also something more. From a legal standpoint, OpenStack must be able to represent itself. There is a significant amount of intellectual property being created. It would be foolish to overlook that this property is valuable and needs adequate legal representation.

I used “vitally important” to describe the above items. Is that an exaggeration? Our goal is collaboration and that requires some infrastructure and rules to make it sustainable. We must have a foundation that encourages innovation (multiple implementations) and collaboration (discourages forking). Innovation and collaboration are the heartbeat of an open source project.

The foundation is vitally important because collaboration by competitors is fragile.

In addition to the core areas above, the foundation needs to handle routine tactical items such as:

  • Delivering on milestones & releases
  • Moving new subprojects into OpenStack
  • Electing and maintaining Project Policy Board
  • Electing and maintaining Project Technical Leads
  • Ensuring adherence and extensions to the current bylaws

At the end of the day, OpenStack monetization is the central value for the Foundation.

In order for the OpenStack project, and thus its foundation, to flourish, the contributors, ecosystem, sponsors and users of the project must be able to see a reasonable return (ROI) on their investment. I would love to believe that the foundation is all about people banding together to solve important problems for the benefit of all; however, it is more realistic to embrace that we can both collaborate and profit simultaneously. Acknowledging the pragmatic self-interested view allows us to create the right incentives and processes as embodied by the OpenStack Foundation.

OpenStack Deployments Abound at Austin Meetup (12/9)

I was very impressed by the quality of discussion at the Deployment topic meeting for Austin OpenStack Meetup (#OSATX). Of the 45ish people attending, we had representation from at least 6 different OpenStack deployments (Dell, HP, ATT, Rackspace Internal, Rackspace Cloud Builders, Opscode Chef)!  Considering the scope of those deployments (several are aiming at 1000+ nodes), that’s a truly impressive accomplishment for such a young project.

Even with the depth of the discussion (notes below), we did not go into details on how individual OpenStack components are connected together.  The image my team at Dell uses is included below.  I also recommend reviewing Rackspace’s published reference architecture.

Figure 1 Diablo Software Architecture. Source Dell/OpenStack (cc w/ attribution)
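Since the diagram may not come through in every format, here is a rough sketch of the major Diablo pieces and how they talk to each other. This is my simplification (component groupings and link descriptions are approximate), not a replacement for the actual figure or the Rackspace reference architecture.

```python
# Rough, simplified map of the Diablo-era components (approximate, not exhaustive).
# All nova-* services also share the RabbitMQ queue and the MySQL database,
# so those links are listed once under "shared" instead of being repeated.
DIABLO_COMPONENTS = {
    "shared": {
        "rabbitmq": "message queue used by the nova services",
        "mysql": "state database for nova, glance and keystone",
    },
    "nova": {
        "nova-api": ["keystone (token checks)", "glance (image lookups)"],
        "nova-scheduler": ["picks a compute node for each request"],
        "nova-compute": ["libvirt/KVM or XenServer via XAPI"],
        "nova-network": ["bridges/VLANs on the compute hosts"],
        "nova-volume": ["block storage backend (LVM, NetApp, ...)"],
    },
    "glance": {"glance-api": ["glance-registry"], "glance-registry": ["mysql"]},
    "keystone": {"keystone": ["identity and token service added in Diablo"]},
    "dashboard": {"dashboard": ["keystone", "nova-api", "glance-api"]},
}

if __name__ == "__main__":
    for group, services in DIABLO_COMPONENTS.items():
        print(group)
        for name, links in services.items():
            print("  %s -> %s" % (name, links))
```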

Notes

Our deployment discussion was a round table so it is difficult to link statements back to individuals, but I was able to track companies (mostly).

  • HP
    • picked Ubuntu & KVM because they were the most vetted. They are also using Chef for deployment.
    • running Diablo 2, moving to Diablo Final & a flat network model. The network controller is a bottleneck. Their biggest scale issue is RabbitMQ.
    • is creating their own Nova Volume plugin for their block storage (see the driver sketch after these notes).
    • At this point, scale limits are due to simultaneous loading rather than total number of nodes.
    • The Nova node image cache can get corrupted without any notification or way to force a refresh – this defect is being addressed in Essex.
    • has set up availability zones as completely independent (500 node) systems. Expecting to converge them in the future.
  • Rackspace
    • is using the latest Ubuntu. Always stays current.
    • using Puppet to setup their cloud.
    • They are expecting to go live on Essex and are keeping their deployment on the Essex trunk. This is causing some extra work but they expect it to pay back by allowing them to get to production on Essex faster.
    • Deploying on XenServer
    • “Devs move fast, Ops not so much.”  Trying to not get behind.
  • Rackspace Cloud Builders (RCB) runs major releases through an automated test suite. The verified releases are being published to https://github.com/cloudbuilders (note: Crowbar is pulling our OpenStack bits from this repo).
  • Dell commented that our customers are using Crowbar primarily for pilots – they are learning how to use OpenStack
    • Said they have >10 customer deployments pending
    • ATT is using OpenSource version of Crowbar
    • Keystone and Dashboard were considered essential additions to Diablo
  • Hypervisors
    • KVM is considered the top one for now
    • Libvirt (which uses KVM) also supports LXC, which people found to be interesting
    • XenServer via XAPI are also popular
    • Not so much activity on ESX & HyperV
    • We talked about why some hypervisors are more popular – it’s about the node agent architecture of OpenStack.
  • Storage
    • NetApp via Nova Volume appears to be a popular block storage
  • Keystone / Dashboard
    • Customers want both together
    • Including keystone/dashboard was considered essential in Diablo. It was part of the reason why Diablo Final was delayed.
    • HP is not using dashboard
  • OpenStack API
    • Members of the Audience made comments that we need to deprecate the EC2 APIs (because it does not help OpenStack long term to maintain EC2 APIs over its own).  [1/5 Note: THIS IS NOT OFFICIAL POLICY, it is a reflection of what was discussed]
    • HP started on EC2 API but is moving to the OpenStack API
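Since both the HP and NetApp notes above involve custom Nova Volume plugins, here is a rough sketch of the shape such a driver takes. The class and method names approximate the Diablo-era driver pattern from memory rather than the exact upstream signatures, so treat it as an outline of the idea, not a drop-in implementation.

```python
# Illustrative sketch of a custom block storage driver for Nova Volume.
# Method names approximate the Diablo-era driver pattern; check
# nova/volume/driver.py in your release for the real interface.

class ExampleSANDriver(object):
    """Hypothetical driver that fronts a vendor SAN (NetApp-style)."""

    def __init__(self, san_api):
        self.san = san_api  # assumed vendor client object

    def create_volume(self, volume):
        # Ask the backend for a LUN sized to the request.
        self.san.create_lun(name=volume["name"], size_gb=volume["size"])

    def delete_volume(self, volume):
        self.san.delete_lun(name=volume["name"])

    def create_export(self, context, volume):
        # Make the LUN reachable (e.g. map it to an iSCSI target)
        # and return whatever connection info the compute node needs.
        target = self.san.map_lun(volume["name"])
        return {"provider_location": target.iqn}

    def remove_export(self, context, volume):
        self.san.unmap_lun(volume["name"])
```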

Meetup Housekeeping

  • Next meeting is Tuesday 1/10 and sponsored by SUSE (note: Tuesday is just for this January).  Topic TBD.
  • We’ve got sponsors for the next SIX meetups! Thanks to Dell (my employer), Rackspace, HP, SUSE, Canonical and PuppetLabs for sponsoring.
  • We discussed topics for the next meetings (see the post image). We’re going to throw it to a vote for guidance.
  • The OSATX tag is also being used by Occupy San Antonio.  Enjoy the cross chatter!

OpenStack Seattle Meetup 11/30 Notes

We had an informal OpenStack meetup after the Opscode Summit in Seattle.

This turned out to be a major open cloud gab fest! In addition to Dell OpenStack leads (Greg and me), we had the Nova Project Technical Lead (PTL, Vish Ishaya, @vish), HP’s Cloud Architect (Alex Howells, @nixgeek), and Opscode OpenStack cookbook master (Matt Ray, @mattray). We were joined by several other Chef Summit attendees with OpenStack interest including a pair of engineers from Spain.

We’d planned to demo using Knife-OpenStack against the Crowbar Diablo build.  Unfortunately, the knife-openstack plugin is out of date (August 15th?!).  We need Keystone support.  Anyone up for that?
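For anyone who picks up that Keystone work, the core of it is just trading credentials for a token before making Nova calls. Here is a minimal sketch of the Keystone v2.0 token request (written in Python for illustration even though knife-openstack itself is Ruby; the host, tenant and credentials are placeholders):

```python
# Minimal sketch: exchange credentials for a Keystone v2.0 token.
# Host, tenant and credentials are placeholders.
import json
import urllib.request

KEYSTONE_URL = "http://keystone.example.com:5000/v2.0/tokens"

payload = {
    "auth": {
        "tenantName": "demo",
        "passwordCredentials": {"username": "demo", "password": "secret"},
    }
}

req = urllib.request.Request(
    KEYSTONE_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    access = json.loads(resp.read())["access"]

token = access["token"]["id"]        # sent as X-Auth-Token on later calls
catalog = access["serviceCatalog"]   # lists the Nova/Glance endpoints
print(token, len(catalog))
```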

Highlights

There’s no way I can recapture everything that was said, but here are some highlights I jotted down on the way home.

  • After the miss with Keystone and the Diablo release, solving the project dependency problem is important. Vish talked at length about the ambiguity challenge of Keystone being both required and incubated. He said we were not formal enough around new projects even though we had dependencies on them. In future releases, new projects (specifically, Quantum) will not be allowed to be dependencies.
  • The focus for Essex is on quality and stability. The plan is for Essex to be a long-term supported (LTS) release tied to the Ubuntu LTS. That’s putting pressure on all the projects to ensure quality, lock features early, and avoid unproven dependencies.
  • There is a lot of activity around storage and companies are creating volume plug-ins for Nova. Vish said he knew of at least four.
  • Networking has a lot of activity. Quantum in particular has momentum, but may not emerge as a core project in time for Essex. There was general agreement that Quantum is “the killer app” for OpenStack and will take cloud to the next level.  The Quantum Open vSwitch implementation is completely open source and free. Some other plugins may require proprietary hardware and/or software, but there is definitely a (very) viable and completely open source option for Quantum networking (see the sketch after this list).
  • HP has some serious cloud mojo going on. Alex talked about defects they have found and submitted fixes back to core. He also hinted about some interesting storage and networking IP that’s going into their OpenStack deployment. Based on his comments, I don’t expect those to become public so I’m going to limit my observations about them here.
  • We talked about hypervisors for a while. KVM and XenServer (via XAPI) were the primary topics. We did talk about LXC & OpenVZ as popular approaches too. Vish said that some of the XenServer work is using Xen Storage Manager to manage SAN images.
  • Vish is seeing a constant rise in committers. It’s hard to judge because some committers appear to be individuals acting on behalf of teams (10 to 20 people).
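On the Quantum/Open vSwitch point above, the fully open source path really does boil down to standard OVS primitives on each compute host. Here is a rough sketch of the kind of wiring an OVS-based agent performs; the bridge name, tap device and VLAN id are made up for the example:

```python
# Rough sketch of the Open vSwitch wiring an OVS-based agent performs on a
# compute host: ensure an integration bridge exists, plug a VM's tap device
# into it, and tag the port with the network's VLAN. Names are illustrative.
import subprocess

def ovs(*args):
    subprocess.check_call(["ovs-vsctl"] + list(args))

def plug_vif(tap_device, vlan_id, bridge="br-int"):
    ovs("--may-exist", "add-br", bridge)
    ovs("--may-exist", "add-port", bridge, tap_device)
    ovs("set", "Port", tap_device, "tag=%d" % vlan_id)

if __name__ == "__main__":
    plug_vif("tap-demo0", 101)
```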

Note: cross posted on the OpenStack Blog.

Reminder: 12/8 Meetup @ Austin!

Missed us in Seattle? Join us at the 12/8 OpenStack meetup in Austin co-hosted by Dell and Rackspace.

Based on our last meetup, it appears deployment is a hot topic, so we’ll kick off with that – bring your experiences, opinions, and thoughts! We’ll also open the floor to other OpenStack topics – open technical and business discussions are welcome, but no commercials please!

We’ll also talk about organizing future OpenStack meetups! If your company is interested in sponsoring a future meetup, find Joseph George at the meetup and he can work with you on details.

Rackspace unveils OpenStack reference architecture & private cloud offering

Yesterday, Rackspace Cloud Builders unveiled both their open reference architecture (RA) and a private cloud offering (on GigaOM) based upon the RA.  The RA (which is well aligned with our Dell OpenStack RA) does a good job laying out the different aspects of an OpenStack deployment.  It also calls for the use of Dell C6100 servers and the open source version of Crowbar.

The Rackspace RA and Crowbar deployment barclamps share the same objective: sharing of best practices for OpenStack operations.

Over the last 12+ months, my team at Dell has had the opportunity to work with many customers on OpenStack deployment designs.  While no two of these are identical, they do share many similarities.  We are pleased to collaborate with Rackspace and others on capturing these practices as operational code (or “opscode” if you want a reference to the Chef cookbooks that are an intrinsic part of Crowbar’s architecture).

In our customer interactions, we hear clearly that Crowbar must remain flexible and ready to adapt to both customer on-site requirements and evolution within the OpenStack code base.  You are also telling us that there is a broader application space for Crowbar and we are listening to that too.

I believe that it will take some time for the community and markets to process today’s Rackspace announcements.  Rackspace is showing strong leadership in both sharing information and commercialization around OpenStack.  Both of these actions will drive responses from the community members.

Notes from 10/27 OpenStack Austin Meetup (via Stephen Spector)

Stephen Spector (now a Dell Services employee!) gave me permission to repost his excellent notes from the first OpenStack Austin (#OSATX) Meetup Group.

Here are his notes:

[Stephen] wanted to update everyone on the Austin OpenStack Meetup last night at the Austin TechRanch sponsored by Joseph and Rob (that’s me!) of the Dell OpenStack team (I think I got that right?). You can find all the tweets from the event at https://twitter.com/#!/search/%23osatx as we created a new hashtag for tweeting during the event, #osatx.

Here are some highlights from the event:

  • About 60 or so attendees with a good number from Dell (Barton George, Logan McCloud) and Rackspace; also Opscode (Matt Ray), Puppet Labs, SUSE (who talked about their OpenStack commitment, http://t.co/bBnIO7xv), and Ubuntu folks as well
  • John Dickinson, the Project Technical Lead for Swift (Object Storage), was there and presented information on the current Swift offering. It is interesting to note that Swift releases continuously, while most of OpenStack (like Nova Compute) releases on the 6 month development cycle.
  • Stephen and Jim Plamondon from Rackspace presented information on the overall community and talked about the announcement yesterday from Internap about their Compute public cloud and the information on the MercadoLibre 600 Node Compute cloud running their business:

“With 58 million users of MercadoLibre.com and growing rapidly, we need to provide our teams instant access to computing resources without heavy administrative layers. With OpenStack, our internal users can instantly provision what they need without having to wait for a system administrator,” said Alejandro Comisario, Infrastructure Senior Engineer, MercadoLibre, the largest online trading platform in Latin America. “With our success running OpenStack Compute in production, we plan to roll OpenStack Diablo out more broadly across the company, and have appreciated the community support in this venture, especially through the OpenStack Forums, where we are also global moderators.”

  • Discussion on the OpenStack API issue, which is a significant open issue at this time – should OpenStack focus on creating an API specification and then let multiple implementations of that API move forward, or build one implementation of the API as official OpenStack (see my post for more on this).
  • Greg Althaus gave a demo of the Nova Dashboard
  • Future Meetings
    • Three organizations have offered to help host (pizza $ and TechRanch space $) but we always need more!  You can offer to sponsor via the meetup site.
    • There will be future OpenStack Austin Meetups so sign up for the group and you’ll be notified automatically.

Pictures…


Dell Crowbar Project: Open Source Cloud Deployer expands into the Community

Note: Cross posted on Dell Tech Center Blogs.

Background: Crowbar is an open source cloud deployment framework originally developed by Dell to support our OpenStack and Hadoop powered solutions.  Recently, its scope has increased to include a DevOps operations model and other deployments for additional cloud applications.

It’s only been a matter of months since we open sourced the Dell Crowbar Project at OSCON in June 2011; however, the progress and response to the project has been overwhelming.  Crowbar is transforming into a community tool that is hardware, operating system, and application agnostic.  With that in mind, it’s time for me to provide a recap of Crowbar for those just learning about the project.

Crowbar started out simply as an installer for the “Dell OpenStack™-Powered Cloud Solution” with the objective of deploying a cloud from unboxed servers to a completely functioning system in under four hours.  That meant doing all the BIOS, RAID, operations services (DNS, NTP, DHCP, etc.), networking, O/S installs and system configuration required to create a complete cloud infrastructure.  It was a big job, but one that we’d been piecing together on earlier cloud installation projects.  A key part of the project involved leveraging Opscode Chef Server for the many system configuration tasks.  Ultimately, we met and exceeded the target with a complete OpenStack install in less than two hours.

In the process of delivering Crowbar as an installer, we realized that Chef, and tools like it, were part of a larger cloud movement known as DevOps.

The DevOps approach to deployment builds up systems in a layered model rather than using packaged images.  This layered model means that parts of the system are relatively independent and highly flexible.  Users can choose which components of the system they want to deploy and where to place those components.  For example, Crowbar deploys Nagios by default, but users can disable that component in favor of their own monitoring system.  It also allows for new components to identify that Nagios is available and automatically register themselves as clients and setup application specific profiles.  In this way, Crowbar’s use of a DevOps layered deployment model provides flexibility for BOTH modularized and integrated cloud deployments.

We believe that operations that embrace layered deployments are essential for success because they allow our customers to respond to the accelerating pace of change.  We call this model for cloud data centers “CloudOps.”

Based on the flexibility of Crowbar, our team decided to use it as the deployment model for our Apache™ Hadoop™ project (“Dell | Apache Hadoop Solution”).  While a good fit, adding Hadoop required expanding Crowbar in several critical ways.

  1. We had to make major changes in our installation and build processes to accommodate multi-operating system support (RHEL 5.6 and Ubuntu 10.10 as of Oct 2011).
  2. We introduced a modularization concept that we call “barclamps” that package individual layers of the deployment infrastructure.  These barclamps reach from the lowest system levels (IPMI, BIOS, and RAID) to the highest (OpenStack and Hadoop).

Barclamps are a very significant architecture pattern for Crowbar (a sample proposal sketch follows this list):

  1. They allow other applications to plug into the framework and leverage other barclamps in the solution.  For example, VMware created a Cloud Foundry barclamp and DreamHost has created a Ceph barclamp.  Both barclamps are examples of applications that can leverage Crowbar for a repeatable and predictable cloud deployment.
  2. They are independent modules with their own life cycle.  Each one has its own code repository and can be imported into a live system after initial deployment.  This allows customers to expand and manage their system after initial deployment.
  3. They have many components such as Chef Cookbooks, custom UI for configuration, dependency graphs, and even localization support.
  4. They offer services that other barclamps can consume.  The Network barclamp delivers many essential services for bootstrapping clouds including IP allocation, NIC teaming, and node VLAN configuration.
  5. They can provide extensible logic to evaluate a system and make deployment recommendations.  So far, no barclamps have implemented more than the most basic proposals; however, they have the potential for much richer analysis.
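To make the barclamp idea more concrete, the sample below shows roughly what a barclamp proposal looks like when Crowbar applies one. It is a simplified, from-memory illustration (the field names and Nagios roles are indicative, not the exact schema), but it captures the two halves: tunable attributes plus a deployment section that maps roles onto nodes.

```python
# Simplified, illustrative sketch of a barclamp proposal document.
# Real proposals are JSON managed by Crowbar; the exact schema and role
# names vary by barclamp and release.
import json

nagios_proposal = {
    "id": "nagios-default",
    "attributes": {
        "nagios": {
            "admin_interface": "eth0",   # illustrative tunable
            "check_interval": 60,
        }
    },
    "deployment": {
        "nagios": {
            "elements": {
                # role name -> list of nodes it should be applied to
                "nagios-server": ["admin-node.example.com"],
                "nagios-client": ["node-1.example.com", "node-2.example.com"],
            }
        }
    },
}

print(json.dumps(nagios_proposal, indent=2))
```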

Making these changes was a substantial investment by Dell, but it greatly expands the community’s ability to participate in Crowbar development.  We believe these changes were essential to our team’s core values of open and collaborative development.

Most recently, our team moved Crowbar development into the open.  This change was reflected in our work on OpenStack Diablo (+ Keystone and Dashboard) with contributions by Opscode and Rackspace Cloud Builders.  Rather than work internally and push updates at milestones, we are now coding directly from the Crowbar repositories on Github.  It is important to note that for licensing reasons, Dell has not open sourced the optional BIOS and RAID barclamps.  This level of openness better positions us to collaborate with the Crowbar community.

For a young project, we’re very proud of the progress that we’ve made with Crowbar.  We are starting a new chapter that brings new challenges such as expanding community involvement, roadmap transparency, and growing Dell support capabilities.  You will also begin to see optional barclamps that interact with proprietary and licensed hardware and software.  All of these changes are part of growing Crowbar into a framework that can support a vibrant and rich ecosystem.

We are doing everything we can to make it easy to become part of the Crowbar community.  Please join our mailing list, download the open source code or ISO, create a barclamp, and make your voice heard.  Since Dell is funding the core development on this project, contacting your Dell salesperson and telling them how much you appreciate our efforts goes a long way too.

Dell Crowbar to deploy OpenStack Diablo Cloud

Direction in the Cloud

Photo by JB George

This week, some of the Crowbar/Dell OpenStack-Powered Cloud team, plus Matt Ray from Opscode, have been working with our partners at Rackspace in San Antonio (see Opscode post about collaboration). Our target is to have Crowbar deliver a core Diablo deployment by the October 2011 design conference (sponsored in part by Dell). This is a collaborative effort and we invite community participation – we are trying to be open and communicative (via the Crowbar listserv) while also respecting that there is a mountain of work if we are to meet deadlines.

We are doing the work in the open on the Crowbar Github so you have access to the very latest capabilities; it also means that the head of the Crowbar repository may be unstable while we add capabilities. We feel like this is an important trade-off because it allows us to keep up with the rapid pace of development in OpenStack (and other projects). This is the motivation for the recent modularization work and will continue to be a feature driver for Crowbar enhancements because it allows Crowbar users to easily bring in updated bits.

 

Crowbar modularized: latest changes that make clouds even easier to create, update, and maintain

In the last week, my team at Dell completed a major refactoring of Crowbar that significantly improves our ability to bring in community contributions and field customizations.  Today, we merged it into Crowbar’s public repo(s).

From the very first versions, our objective for Crowbar was to create the fastest and most reliable cloud deployments. Along the way, we realized Crowbar’s true potential lay in embracing DevOps as an operational model for maintaining clouds. That meant building up cloud deployments in layers from pieces that we call barclamps (extensions of Chef cookbooks). Our first version, centered on OpenStack Cactus, leveraged barclamps but was still created as a single system. This unified system was a huge step forward in cloud deployments, but did not live up to our CloudOps vision of continuous delivery.

In this version, each Crowbar barclamp is an independent delivery unit that can be integrated before, during, or after installing Crowbar.

The core of the change is that each barclamp, including the most core ones, is stored in an independent code repository. Putting the code into distinct repos means that each barclamp can have its own life cycle, its own maintainer site and its own dependency tree. This modularization allows customers to manage their Crowbar deployments with a very fine brush: they may choose to customize parts of the system, they can lock components to a specific tag and they can bring in barclamps from other vendors.

While the core barclamps are automatically integrated into the Crowbar build using git submodules, other barclamps are installed into the system as needed. This allows you to pull in the suite of OpenStack barclamps at build time or to wait until your Crowbar system is running before installing. Once you install a barclamp, you are able to retrieve an updated barclamp and reapply it to the system.
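As a concrete illustration of pinning a barclamp, the sketch below shows the plain git mechanics of adding a barclamp as a submodule and locking it to a tag. The repository URL, path and tag are placeholders, and Crowbar’s own build scripts normally drive this, so read it as the underlying mechanism rather than the official build procedure.

```python
# Sketch: pin a barclamp to a known tag inside a Crowbar checkout.
# Repo URL, path and tag are placeholders; Crowbar's build scripts
# normally drive this, so this only illustrates the git mechanics.
import subprocess

def git(*args, cwd="."):
    subprocess.check_call(["git"] + list(args), cwd=cwd)

# Add the barclamp as a submodule (once) and pull its contents.
git("submodule", "add", "https://github.com/example/barclamp-network.git",
    "barclamps/network")
git("submodule", "update", "--init", "barclamps/network")

# Lock the barclamp to a specific tag, then record that in the superproject.
git("checkout", "v1.0", cwd="barclamps/network")
git("add", "barclamps/network")
git("commit", "-m", "Pin network barclamp to v1.0")
```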

This feature gives you the ability to 1) choose exactly what you want to include and 2) perform field updates to a live Crowbar system.

Let’s look at some examples:

  1. The Cloud Foundry barclamp can be sourced from Cloud Foundry instead of bundled into the Crowbar repository. This allows the team working on the cloud application to take ownership of their own deployment. As a continuous delivery proponent, I believe strongly that the development team should be responsible for ensuring that their code is deployable (refer to my OpenStack “Deployer API” blueprint attempting to codify this).
  2. DreamHost, maintainers of Ceph Storage, can maintain their own local barclamp repos for OpenStack that are cloned from our community Swift barclamp. This allows them to innovate and customize OpenStack deployments for their business and choose which updates to merge back to the community.
  3. Rackspace Cloud Builders can work on the most leading-edge OpenStack features and maintain workable deployments on branches. As the code stabilizes, they simply merge in their changes.
  4. Dell BIOS and RAID barclamps only support the PowerEdge C line today. When we offer PowerEdge R support, you will be able to install or update the barclamps to add that capability. If another hardware vendor creates a barclamp for their hardware then you can install that into your existing system.

I believe that these changes to Crowbar are a huge step forward on our journey of creating a community-supportable Open Operations framework. I hope that you are as excited as I am about these changes.

I encourage you to take the first step by trying out Crowbar and, ultimately, writing your own barclamps.

Post Scripts:

  • In addition to the modularization, the updated code includes RHEL as a deployment platform. At present, you must choose either RHEL or Ubuntu at build time.
  • We have enhanced the network barclamp to describe connections between nodes as more abstract “conduits.” This is a powerful change, but requires some understanding before you start making changes (see the illustrative example after these notes).
  • We have only begun testing the change as of 9/12, we expect the system to be fully stabilized by 10/3. If you are not willing to deal with bugs then I recommend building the Crowbar “v1.0” tag (or using the ISOs from our July launch).
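To illustrate the conduit idea from the second note above: networks reference logical conduits (roughly “the first 1Gb link” or “the teamed 10Gb links”) instead of physical NIC names, and a separate map resolves those per hardware type. The structure below is purely illustrative with simplified field names, not the exact barclamp schema.

```python
# Purely illustrative sketch of the conduit idea in the network barclamp:
# networks reference logical conduits ("1g1" = first 1Gb link) instead of
# physical NIC names. Field names are simplified, not the exact schema.
conduits = {
    "intf0": {"if_list": ["1g1"]},            # admin/bootstrap traffic
    "intf1": {"if_list": ["1g2"]},            # public traffic
    "intf2": {"if_list": ["10g1", "10g2"]},   # teamed storage links
}

networks = {
    "admin":   {"conduit": "intf0", "subnet": "192.168.124.0/24", "vlan": None},
    "public":  {"conduit": "intf1", "subnet": "10.0.0.0/24",      "vlan": 300},
    "storage": {"conduit": "intf2", "subnet": "10.1.0.0/24",      "vlan": 200},
}

for name, net in networks.items():
    print(name, "->", conduits[net["conduit"]]["if_list"])
```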

WHIR Webinar Notes: Prying Open the Cloud with Dell Crowbar & OpenStack

Panelists: Me (@zehicle) & Joseph B. George (@jbgeorge), Director, Cloud and Big Data Solutions, Dell

Moderator: Liam Eagle (@theWHIR) , Editor-in-Chief, Web Host Industry Review

Wow, this Webinar was an hour of OpenStack insights (see the whole thing). If you don’t have the hour, then you can use my timeline notes below to jump to what you want to hear.

  • 2:50: Presentation Starts (introductions are over)
  • 3:40: Joseph coins the word “dynormous” for dynamic & large scale clouds
  • 4:40: Customers want to know how they are going to maintain a cloud
  • 4:50: Customers don’t want a 9 month cycle for features, want it faster. DevOps gives us the flexibility to meet our customer needs as quickly as they want to.
  • 7:11: Massive scalability… their (Rackspace & NASA) business is about scale
  • 8:00: Rackspace and NASA started from the beginning to build a community
  • 8:50: We have the data that this has staying power
  • 10:10: We see a lot of companies joining in the community
  • 11:56: Shout out to Opscode Chef
  • 12:40: From bare metal to a fully functioning cloud in under 2 hours. Crowbar allows you to introduce new elements into the environment
  • 13:40: Crowbar leverages our experience with cloud deployments
  • 14:33: Dell was the only provider there from day 1. We have the most experience.
  • 17:27: DevOps Poll
  • 18:40: DevOps is a significant trend that you should consider. Hosters have a lot of operational chops.
  • 19:34: There are a lot of right ways to do cloud. You need to pick what’s best for your business model
  • 20:23: We could get hardware and software, but operational expertise was missing.
  • 21:33: We’re more about making the complexity of a cloud go away. We are getting our customers a head start. We are chipping away at the learning curve.
  • 22:05: The cloud is always ready, never finished. Cloud is an ongoing operational environment: DevOps!
  • 23:30: Crowbar bakes a lot of operational experience into the deployment.
  • 25:17: Core tenet of DevOps: there is no single OpenStack image. Cloud is too complex. We build it in layers.
  • 26:26: Before you deploy, you can change the configuration.
  • 27:30: Barclamps are modules that execute a function. We are inviting community participation
  • 28:40: Crowbar process view – describing Crowbar as a “PXE state machine” is a very simplified description (see the sketch after these notes).
  • 29:57: You can go through a tuning cycle where you can get it working, make sure it’s right, flush and reset. That ensures you have an automated system.
  • 30:34: Screen shots with descriptions
  • 33:25: Even the core state machine that runs Crowbar is deployed as a barclamp
  • 35:00: You can download OpenStack and install it yourself from our github. We don’t want to talk about OpenStack, we want to DO OpenStack.
  • Poll Results (see to the right)
  • 38:00: Online resources
  • 40:00: Question 1: Timeline for RHEL. Answer: RHEL is part of Hadoop, will make it into OpenStack by end of year (or sooner based on market demand)
  • 42:17: Question 2: What led Dell to get involved in OpenStack? Answer: It’s about experience. We like being able to fix and change if we needed. There is a lot of active community
  • 45:30: Question 3: How does a hardware maker play with open source software? Answer: It’s a solution for us. We wanted to make sure that people could deploy the software. Adding DevOps takes it to another level.
  • 48:00 Question 4: What elements of Diablo are most exciting? Answer: Keystone (centralized authentication) is a big deal. Networking changes that “bust the top” of the networking hurdles.
  • 50:25: Question 5: Where is OpenStack going long term? Answer: We’re pleasantly surprised about how much it’s picked up. We’ll see more standards in the community. We have high hopes for OpenStack and have invested heavily. We’ll see more as-a-service capabilities to build on a common infrastructure: both open and commercial.
  • 52:47: Question 6: What’s the biggest barrier to operating at scale? Answer: learning how to operate is the biggest hurdle. We took a learning approach to help customers get started. We are hosting a training with Rackspace.
  • 55:00: Question 7: Where do Dell and Rackspace overlap? Answer: We see Rackspace Cloud Builders as the premier experts. Dell Services is involved with all of it. Dell takes the phone call and deals with our customers directly.
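To unpack the “PXE state machine” comment from the 28:40 mark: the node lifecycle Crowbar manages is essentially a fixed set of provisioning states with transitions driven by what each node reports back. Here is a toy sketch of that idea; the state names are simplified and not Crowbar’s exact internal ones.

```python
# Toy sketch of the "PXE state machine" idea: nodes move through a fixed
# set of provisioning states, and each transition decides what the node
# should boot into next. State names are simplified, not Crowbar's exact ones.
TRANSITIONS = {
    "discovering":         "discovered",           # discovery image reports inventory
    "discovered":          "hardware-installing",  # BIOS/RAID barclamps run
    "hardware-installing": "hardware-installed",
    "hardware-installed":  "installing",           # base OS laid down over PXE
    "installing":          "installed",
    "installed":           "ready",                # Chef roles applied, node usable
}

def advance(state):
    """Return the next state, or the same state if the node is done."""
    return TRANSITIONS.get(state, state)

state = "discovering"
while state != "ready":
    print("node state:", state)
    state = advance(state)
print("node state:", state)
```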