Whew… Yesterday, Dell announced TWO OpenStack block storage capabilities (Equallogic & Ceph) for our OpenStack Essex Solution (I’m on the Dell OpenStack/Crowbar team) and community edition. The addition of block storage effectively fills the “persistent storage” gap in the solution. I’m quadruply excited because we now have:
both Nova drivers’ code in the open as part of our open source Crowbar work
Frankly, I’ve been having trouble sitting on the news until Dell World because both features were available on GitHub before the announcement (EQLX and Ceph-Barclamp). Such is the emerging intersection of corporate marketing and open source.
As you may expect, we are delivering them through Crowbar; however, we’ve already had customers pick up the EQLX code and apply it without Crowbar.
The Equallogic+Nova Connector
If you are using Crowbar 1.5 (Essex 2) then you already have the code! Of course, you still need the admin information for your SAN – we automated the Nova Volume integration, not the configuration of the storage system itself.
We have it under a split test, so you need to do the following to enable the configuration options:
Install OpenStack as normal
Create the Nova proposal
Enter “Raw” Attribute Mode
Change the “volume_type” to “eqlx”
The Equallogic options should be available in the custom attribute editor! (of course, you can edit in raw mode too)
Usage note: the integration uses SSH sessions. It has been performance tested but not tested at scale.
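To make the raw-mode edit concrete, here’s a rough sketch of what it amounts to if you work with the proposal as exported JSON. Only the volume_type key is confirmed above; every other attribute name here is illustrative, so check the proposal in your Crowbar version for the real keys.

```python
# Illustrative sketch only: attribute keys other than "volume_type" are
# hypothetical; inspect your exported Nova proposal for the real names.
import json

with open("nova-proposal.json") as f:       # proposal exported from Crowbar
    proposal = json.load(f)

attrs = proposal["attributes"]["nova"]
attrs["volume_type"] = "eqlx"               # switch the volume backend to Equallogic

# The SAN admin details are the part we did not automate (illustrative keys):
attrs["eqlx"] = {
    "san_ip": "10.0.0.5",                   # Equallogic group address
    "san_login": "grpadmin",                # SAN admin account
    "san_password": "change-me",
}

with open("nova-proposal.json", "w") as f:
    json.dump(proposal, f, indent=2)
```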
The Ceph+Nova Connector
The Ceph capability includes a Ceph barclamp! That means all the work to set up and configure Ceph is done automatically by Crowbar. Even better, the Nova barclamp (Ceph provides it from their site) will automatically find the Ceph proposal and link the components together!
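As a rough illustration of what that wiring automates, the sketch below follows the Essex-era RBD volume driver. I’m not promising the barclamp writes exactly these flags; treat the names as assumptions to verify against your release.

```python
# Rough sketch of the kind of settings the barclamps coordinate; the driver
# and flag names follow the Essex-era RBD volume driver and may not match
# what the barclamp actually writes.
ceph_proposal = {"rbd_pool": "volumes"}   # pool name discovered from the Ceph proposal

nova_conf = [
    "volume_driver=nova.volume.driver.RBDDriver",   # Ceph RBD backend for nova-volume
    "rbd_pool=%s" % ceph_proposal["rbd_pool"],
]
print("\n".join(nova_conf))
```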
This is not a puff interview – we spent an hour together and Rafael did not shy away from hard questions like “Why did Dell jump into OpenStack?” and “Is VMware a threat to OpenStack?” Rather than posting the whole transcript (it’s posted here), I’m including the questions (as a teaser) below. There is some real meat in these answers about OpenStack, Dell, Crowbar and the challenges facing the project.
WARNING: My job is engineering, not marketing. You may find my answers (which are MY OWN ANSWERS) to be more direct than you are expecting. If you find yourself needing additional circumlocution then simply close your browser and move on.
Dell’s interest in OpenStack has been very pragmatic. OpenStack is something we really see a market need for.
Rackspace … runs on OpenStack pretty much off trunk … That’s exactly the type of vibrant community we want to see. At the same time, there is a growing community that wants OpenStack distributions with support and certifications, and is fine with being 6 months behind trunk. That’s good, and we want that shadow, that combination of pure-minded early adopters and less sophisticated OpenStack users working together.
We are working with different partners to bring OpenStack to different customers in different ways. It is confusing. Your question about Dell Crowbar was right … it is targeted at a certain class of users, and I don’t want enterprise customers who expect a lot of shiny chrome and zero touch. That’s not the target right now for Dell Crowbar. We definitely need that sort of magic decoder page to help customers understand our commercial offering.
Dell is one of the very early contributors to OpenStack. Why is Dell engaging in this project?
How does Dell contribute to OpenStack?
Let’s talk a bit about Dell Crowbar, your team’s deployment mechanism for OpenStack.
Let’s talk a bit about OpenStack raw vs. OpenStack distributions.
What are the biggest barriers to OpenStack adoption as of now?
What does a customer specifically need to do when moving from OpenStack Essex to Folsom for example?
My next question is around proof of concept versus production, Rob. How are customers using OpenStack and can you give examples for both scenarios?
I hear two different statements very often: “OpenStack is an alternative to Amazon.” The other is: “OpenStack is an alternative to VMware … maybe, hopefully, in two or three years from now.” Which of the two statements is true?
How do you view VMware joining OpenStack? Is it a threat to OpenStack or does VMware add value to the project?
Let us speak about market adoption. Who are the early adopters of OpenStack? And when do you expect OpenStack to hit the tipping point for mass market adoption?
Rob, for all those interested in Dell’s commercial offering around OpenStack … can you give a brief overview?
Dell TechCenter provides customers an overview of our OpenStack offering: Dell Crowbar as our DevOps tool in its various shapes and forms, the OpenStack distros we support … cloud services we build around OpenStack … hardware capabilities optimized for OpenStack.
What are the challenges for the OpenStack Board of Directors?
On the eve of the OpenStack design summit, it’s worth noting that the Crowbar team at Dell cut our final Essex release (aka Betty) last week. We’ve also committed the initial Folsom deployment scripts to the 1x development trunk under “feature/folsom” if you are doing Crowbar builds from DevTool (see bit.ly/crowbardevtool).
Andi Abes is presenting the new Pull From Source (pfs) feature at the Summit on Monday. There’s a feature branch for that as well, and I’m going to check with him and try to post an ISO for it too.
If you are coming to the OpenStack summit in San Diego next week then please find me at the show! I want to hear from you about the Foundation, community, OpenStack deployments, Crowbar and anything else. Oh, and I just ordered a handful of Crowbar stickers if you wanted some CB bling.
Today my boss at Dell, John Igoe, is part of announcing the report from the TechAmerica Federal Big Data Commission (direct pdf). I was fully expecting the report to be a real snoozer brimming with corporate synergies and win-win externalities. Instead, I found myself reading a practical guide to applying Big Data to government. Flipping past the short obligatory “what is…” section, the report drives right into a survey of practical applications for big data spanning nearly every governmental service. Over half of the report is dedicated to case studies with specific recommendations and buying criteria.
Ultimately, the report calls for agencies to treat data as an asset. An asset that can improve how government operates.
There are a few items that stand out in this report:
Clear tables of case studies on page 16 and characteristics on page 11 that help pinpoint a path through the options.
Definitive advice to focus on a single data vector (velocity, volume or variety) for initial success on page 28 (and elsewhere)
I strongly agree with one repeated point in the report: although there is more data available, our ability to comprehend this data is reduced. The sheer volume of examples the report cites is proof enough that agencies are, and will continue to be, inundated with data.
One shortcoming of this report is that it does not flag the extreme shortage of data scientists. Many of the cases discussed assume a ready army of engineers to implement these solutions; however, I’m uncertain how the government will fill those positions in a very tight labor market. Ultimately, I think we will have to simply open the data for citizen & non-governmental analysis because, as the report clearly states, data is growing faster than our capability to use it.
I commend the TechAmerica commission for their Big Data clarity: success comes from starting with a narrow scope. So the answer, ironically, is in knowing which questions we want to ask.
“Double wide” is not a term I’ve commonly applied to servers, but that’s one of the cool things about this new class of servers that Dell, my employer, started shipping today.
My team has been itching for the chance to build cloud and big data reference architectures using this super dense and flexible chassis. You’ll see it included in our next Apache Hadoop release, and we’ve already got customers making it the foundation of their deployments (Texas Adv Computing Center case study).
If you’re tracking the latest big data & cloud hardware then the Dell PowerEdge C8000 is worth some investigation.
Basically, the Dell C8000 is a chassis that holds a flexible configuration of compute or storage sleds. It’s not a blade frame because the sleds minimize shared infrastructure. In our experience, cloud customers like the dedicated i/o and independence of sleds (as per the Bootstrapping Clouds white paper). Those attributes are especially well suited to Hadoop and OpenStack because they support a “flat edges,” scale-out design. While i/o independence is valued, we also want shared power infrastructure and density for efficiency reasons. Using a chassis design seems to capture the best of both worlds.
The novelty for the Dell PowerEdge C8000 is that the chassis are scary flexible. You are not locked into a pre-loaded server mix.
There is a plethora of sled choices, so you can mix options for power, compute density and spindle counts. That includes double-wide sleds positively brimming with drives and expanded GPU processors. Drive density is important for big data configurations that are disk i/o hungry; however, our experience is that customer deployments vary widely based on the planned workload. There are also significant big data trends towards compute-heavy, network-heavy, and balanced hardware configurations. Using the C8000 as a foundation is powerful because it can cater to all of these use-case mixes.
If you are registered, you have 8 votes to allocate as you wish. You will get a link via email – you must use that link.
Joseph B George and I are cross-blogging this post because we are jointly seeking your vote(s) for individual member seats on the OpenStack Foundation board. This is a key point in the OpenStack journey and we strongly encourage eligible voters to participate no matter who you vote for! As we have said before, the success of the Foundation governance process matters just as much as the code because it ensures equal access and limits forking.
We think that OpenStack succeeds because it is collaboratively developed. It is essential that we select board members who have a proven record of community development, a willingness to partner, and a demonstrated investment in the project.
Our OpenStack vision favors production operations by being operator, user and ecosystem focused. If elected, we will represent these interests by helping advance deployability, API specifications, open operations and both large and small scale cloud deployments.
Of course, we’re asking you to consider voting for both of us; however, if you want to focus on just one, here’s the balance between us. Rob (bio) is a technologist with deep roots in cloud technology, data center operations and open source. Joseph is a business professional with experience in new product introduction and enterprise delivery.
Not sure if you can vote? If you registered as an individual member then your name should be on the voting list. In that case, you can vote between 8/20 and 8/24.
I could not be happier with the results Crowbar collaborators and my team at Dell achieved around the 1st Crowbar design summit. We had great discussions and even better participation.
The attendees represented major operating system vendors, configuration management companies, OpenStack hosting companies, OpenStack cloud software providers, OpenStack consultants, OpenStack private cloud users, and (of course) a major infrastructure provider. That’s a very complete cross-section of the cloud community.
I knew from the start that we had too little time and, thankfully, people were tolerant of my need to stop the discussions. In the end, we were able to cover all the planned topics. This was important because all these features are interlocked so discussions were iterative. I was impressed with the level of knowledge at the table and it drove deep discussion. Even so, there are still parts of Crowbar that are confusing (networking, late binding, orchestration, chef coupling) even to collaborators.
In typing up these notes, it becomes even more blindingly obvious that the core features for Crowbar 2 are highly interconnected. That’s no surprise technically; however, it will make the notes harder to follow because of knowledge bootstrapping. You need to take time to grok the gestalt and surf the zeitgeist.
Collaboration Invitation: I wanted to remind readers that this summit was just the kick-off for a series of open weekly design (Tuesdays 10am CDT) and coordination (Thursdays 8am CDT) meetings. Everyone is welcome to join in those meetings – information is posted, recorded, folded, spindled and mutilated on the Crowbar 2 wiki page.
These notes are my reflection of the online etherpad notes that were made live during the meeting. I’ve grouped them by design topic.
We are refactoring Crowbar at this time because we have a collection of interconnected features that could not be decoupled
Some items (Database use, Rails3, documentation, process) are not for debate. They are core needs but require little design.
There are 5 key topics for the refactor: online mode, networking flexibility, OpenStack pull from source, heterogeneous/multi operating system support, and being CMDB agnostic
Due to time limits, we have to stop discussions and continue them online.
We are hoping to align Crowbar 2 beta and OpenStack Folsom release.
Online / Connected Mode
Online mode is more than simply internet connectivity. It is the foundation of how Crowbar stages dependencies and components for deploy. It’s required for heterogeneous O/S, pull from source and it has dependencies on how we model networking so nodes can access resources.
We are thinking of using caching proxies to stage resources. This would allow isolated production environments and preserve the ability to run everything from the ISO without a connection (that is still a key requirement for us).
SUSE’s Crowbar fork does not build an ISO; instead, it relies on RPM packages for barclamps and their dependencies.
Pulling packages directly from the Internet has proven unreliable, so this method cannot rely on the network alone.
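To illustrate the caching idea (in Python for convenience – Crowbar itself is Ruby/Chef), a toy version looks like the sketch below; the paths and names are invented.

```python
# Toy illustration of the caching-proxy idea, not Crowbar code: fetch a
# resource once, then serve it from the local cache so isolated environments
# never hit the network again. Paths are invented.
import os
import urllib.request

CACHE_DIR = "/var/cache/crowbar-staging"    # hypothetical staging area

def fetch_cached(url):
    local = os.path.join(CACHE_DIR, os.path.basename(url))
    if not os.path.exists(local):           # first request goes upstream...
        os.makedirs(CACHE_DIR, exist_ok=True)
        urllib.request.urlretrieve(url, local)
    return local                            # ...every later request stays offline
```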
Install From Source
This feature is mainly focused on OpenStack, but it could be applied more generally. The principles we are looking at could be applied to any application where the source code is changing quickly (all of them?!). Hadoop is an obvious second candidate.
We spent some time reviewing the use-cases for this feature. While this appears to be very dev and pre-release focused, there are important applications for production. Specifically, we expect that scale customers will need to run ahead of or slightly adjacent to trunk due to patches or proprietary code. In both cases, it is important that users can deploy from their repository.
We discussed briefly our objective to pull configuration from upstream (not just OpenStack, but potentially any common cookbooks/modules). This topic is central to the CMDB agnostic discussion below.
The overall sentiment is that this could be a very powerful capability if we can manage to make it work. There is a substantial challenge in tracking dependencies – current RPMs and debs do a good job of this, plus other configuration steps beyond just the bits. Replicating that functionality is the real obstacle.
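Here’s a minimal sketch of the pull-from-source flow we discussed, assuming a plain git checkout plus pip install is enough. Note that it deliberately punts on the real obstacle named above: the system-level dependency and configuration tracking that RPMs and debs already handle.

```python
# Minimal sketch of a pull-from-source deploy, assuming git + pip suffice.
# It deliberately skips the hard part: system packages, users, services and
# config steps that RPM/deb packaging already tracks for us.
import subprocess

def deploy_from_source(repo_url, ref="master", dest="/opt/stack/nova"):
    subprocess.check_call(["git", "clone", repo_url, dest])
    subprocess.check_call(["git", "checkout", ref], cwd=dest)
    subprocess.check_call(["pip", "install", "-e", dest])   # Python deps only

# e.g. a scale customer running slightly ahead of trunk from their own repo:
# deploy_from_source("https://github.com/example/nova.git", ref="local-patches")
```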
CMDB agnostic (decoupling Chef)
This feature is confusing because we are not eliminating the need for a configuration management database (CMDB) tool like Chef; instead, we are decoupling Crowbar from a single CMDB to a pluggable model using an abstraction layer.
It was stressed that Crowbar does orchestration – we do not rely on convergence over multiple passes to get the configuration correct.
We had strong agreement that the modules should not be tightly coupled but did need a consistent way (API? Consistent namespace? Pixie dust?) to share data between each other. Our priority is to maintain loose coupling and follow integration by convention and best practices rather than rigid structures.
The abstraction layer needs to have both import and export functions
Crowbar will use attribute injection so that Cookbooks can leverage Crowbar but will not require Crowbar to operate. Crowbar’s database will provide the links between the nodes instead of having to wedge it into the CMDB.
In 1.x, networking was the most tightly coupled to Chef. This is a major part of the refactor and of the modeling for Crowbar’s database.
There are a lot of notes captured about this on the etherpad – I recommend reviewing them
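To make the abstraction concrete, here is a hypothetical sketch of the pluggable layer. None of these class or method names are the Crowbar 2 API; they only illustrate the import/export shape and the attribute injection discussed above.

```python
# Hypothetical sketch of the pluggable CMDB layer; these names are not the
# Crowbar 2 API, they only illustrate the import/export shape of the design.
from abc import ABC, abstractmethod

class CmdbAdapter(ABC):
    """Crowbar orchestration talks to this interface, never to Chef directly."""

    @abstractmethod
    def export_attributes(self, node_name, attrs):
        """Inject Crowbar-computed attributes into the CMDB (attribute injection)."""

    @abstractmethod
    def import_attributes(self, node_name):
        """Read back the facts/state the CMDB discovered about a node."""

class ChefAdapter(CmdbAdapter):
    def export_attributes(self, node_name, attrs):
        pass  # would write attributes onto the Chef node object

    def import_attributes(self, node_name):
        return {}  # would read discovered data back from the Chef server
```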
Heterogeneous OS (bare metal provisioning and beyond)
This topic was the most divergent of all our topics because most of the participants were using some variant of their own bare metal provisioning project (check the etherpad for the list).
Since we can’t pack an unlimited set of stuff on the ISO, this feature requires online mode.
Most of these projects do nothing beyond OS provisioning; however, their simplicity is beneficial. Crowbar needs to consider users who just want a streamlined OS provisioning experience.
Late binding is a programming term that I’ve commandeered for Crowbar’s DevOps design objectives.
We believe that late binding is a best practice for CloudOps.
Understanding this concept is turning out to be an important but confusing differentiation for Crowbar. We’ve effectively inverted the typical deploy pattern of building up a cloud from bare metal; instead, Crowbar allows you to build a cloud from the top down. The difference is critical – we delay hardware decisions until we have the information needed to do the correct configuration.
If Late Binding is still confusing, the concept is really very simple: “we hold off all work until you’ve decided how you want to setup your cloud.”
Late binding arose from our design objectives. We started the project with a few critical operational design objectives:
Treat the nodes and application layers as an interconnected system
Realize that application choices should drive configuration down the entire stack, including BIOS, RAID and networking
Expect the entire system to be constantly changing, so we must track state and avoid locked configurations.
We’d seen these objectives as core tenets in hyperscale operators who considered bare metal and network configuration to be an integral part of their application deployment. We know it is possible to build the system in layers that only (re)deploy once the application configuration is defined.
We have all this great interconnected automation! Why waste it by having to pre-stage the hardware or networking?
In cloud, late binding is known as “elastic computing” because you wait until you need resources to deploy. But running apps on cloud virtual machines is simple compared to operating physical infrastructure. In physical operations, RAID, BIOS and networking matter a lot because there are important and substantial variations between nodes. These differences are what drive late binding as one of Crowbar’s core design principles.
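If a toy example helps, here is the principle reduced to a few lines (roles and values invented, not real Crowbar logic): the hardware profile is derived from the application decision at deploy time instead of being pre-staged.

```python
# Toy illustration of late binding; roles and values are invented.
# The point: BIOS/RAID/network choices are derived from the application
# decision at deploy time instead of being pre-staged on the hardware.
def hardware_profile(app_role):
    if app_role == "hadoop-datanode":
        return {"raid": "JBOD", "bios": "max-io", "bonded_nics": False}
    if app_role == "openstack-compute":
        return {"raid": "RAID10", "bios": "virt-on", "bonded_nics": True}
    return {"raid": "RAID1", "bios": "default", "bonded_nics": False}

# Nothing is "bound" until the cloud design tells us what this node is for:
node_config = hardware_profile("hadoop-datanode")
print(node_config)
```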