As Docker rises above (and disrupts) clouds, I’m thinking about its community landscape

Watching the lovefest of DockerCon last week had me digging up my April 2014 “Can’t Contain(erize) the Hype” post.  There’s no doubt that Docker (and containers more broadly) is delivering on its promise.  I was impressed with the container community navigating towards an open platform in RunC and with vendor adoption of trusted container platforms.

I’m a fan of containers and their potential; yet, watching remotely, the scope and exuberance of Docker partnerships seem out of proportion with the current capabilities of the technology.

The latest update to the Docker technology, v1.7, introduces a lot of important network, security and storage features.  The price of all that progress is disruption to ongoing work and to integrations across the ecosystem.

There’s always two sides to the rapid innovation coin: “Sweet, new features!  Meh, breaking changes to absorb.”

Docker Ecosystem Explained

There remains confusion between Docker the company and Docker the technology.  I like how the chart (right) maps out potential areas in the Docker ecosystem.  There are clearly a lot of places for companies to monetize the technology; however, it’s not as clear whether the company will be able to cede lucrative regions, like orchestration, and let them become a competitive landscape.

While Docker has clearly delivered a lot of value in just a year, they have a fair share of challenges ahead.  

If OpenStack is a leading indicator, we can expect to see vendor battlegrounds forming around networking and storage.  Docker (the company) has a chance to show leadership and build community here, yet could cause harm by giving up the arbitrator role to become a contender instead.

One thing that would help control the inevitable border skirmishes would be clear definitions of core, ecosystem and adjacencies.  I see Docker blurring these lines with some of their tools around orchestration, networking and storage.  I believe that was part of their now-suspended kerfuffle with CoreOS.

Thinking a step further, parts of the Docker technology (RunC) have moved over to Linux Foundation governance.  I wonder if the community will drive additional shared components into open governance.  Looking at Node.js, there’s clear precedent and I wonder if Joyent’s big Docker plans have them thinking along these lines.

StackEngine Docker on Metal via RackN Workload for OpenCrowbar

6/19: This was CROSS POSTED WITH STACKENGINE

In our quest for fast and cost effective container workloads, RackN and StackEngine have teamed up to jointly develop a bare metal StackEngine workload for the RackN Enterprise version of OpenCrowbar.  Want more background on StackEngine?  TheNewStack.io also did a recent post covering StackEngine capabilities.

While this work is early, it is complete enough for field installs.  We’d like to include potential users in our initial integration because we value your input.

Why is this important?  We believe that there are significant cost, operational and performance benefits to running containers directly on metal.  This collaboration is a tangible step towards demonstrating that value.

What did we create?  The RackN workload leverages our enterprise distribution of OpenCrowbar to create a ready state environment for StackEngine to be able to deploy and automate Docker container apps.

In this pass, that’s a pretty basic CentOS 7.1 environment with the hardware provisioned and configured.  The workload takes your StackEngine customer key as its input.  From there, it downloads and installs StackEngine on all the nodes in the system.  When you choose which nodes also manage the cluster, the workload automatically handles the cross-registration.

What is our objective?  We want to provide a consistent and sharable way to run directly on metal.  That accelerates the exploration of this approach to operationalizing container infrastructure.

What is the roadmap?  We want feedback on the workload to drive the roadmap.  Our first priority is tuning to maximize performance.  Later, we expect to add additional operating systems, more complex networking and closed-loop integration with StackEngine and RackN for things like automatic resource scheduling.

How can you get involved?  If you are interested in working with a tech-preview version of the technology, you’ll need a working OpenCrowbar Drill implementation (via Github or early access available from RackN), a StackEngine registration key and access to the RackN/StackEngine workload (email info@rackn.com or info@stackengine.com for access).

exploring Docker Swarm on Bare Metal for raw performance and ops simplicity

As part of our exploration of containers on metal, the RackN team has created a workload on top of OpenCrowbar as the foundation for a Docker Swarm on bare metal cluster.  This provides a second more integrated and automated path to Docker Clusters than the Docker Machine driver we posted last month.

It’s really pretty simple: the workload does the work to deliver an integrated physical system (CentOS 7.1 right now) that has Docker installed and running.  Then we build a Consul cluster to track the to-be-created Swarm.  As new nodes are added into the cluster, they register into Consul and then get added into the Docker Swarm cluster.  If you reset or repurpose a node, Swarm will automatically time out the missing node, so scaling up and down is pretty seamless.
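
For context, what the workload automates per node is roughly the standalone Swarm join-via-Consul pattern; the addresses, port and key prefix below are illustrative placeholders, not the workload’s actual settings:

```
# Hedged sketch of the per-node registration the workload handles automatically
# (<node-ip>, <consul-ip> and the /swarm prefix are placeholders;
#  on older standalone Swarm builds the flag is --addr rather than --advertise)
docker run -d swarm join \
  --advertise=<node-ip>:2375 \
  consul://<consul-ip>:8500/swarm
```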

When building the cluster, you have the option to pick which machines are masters for the swarm.  Once the cluster is built, you just use the Docker CLI’s -H option against the chosen master node on the configured port (defaults to port 2475).
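
In practice, any stock Docker client can then drive the whole cluster through that master; here is a minimal sketch (the master address is a placeholder, 2475 is the default port noted above):

```
# Run commands against the Swarm master instead of a single Docker daemon
docker -H tcp://<swarm-master-ip>:2475 info          # cluster-wide view of the nodes
docker -H tcp://<swarm-master-ip>:2475 run -d nginx  # Swarm picks a node for the container
```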

This work is intended as a foundation for more complex Swarm and/or non-Docker Container Orchestration deployments.  Future additions include allowing multiple network and remote storage options.

You don’t need metal to run a quick test of this capability.  You can test drive RackN OpenCrowbar using virtual machines and then expand to the full metal experience when you are ready.

Contact info@rackn.com for access to the Docker Swarm trial.   For now, we’re managing the subscriber base for the workload.  OpenCrowbar is a pre-req and ungated.  We’re excited to give access to the code – just ask.

The Matrix & Surrogates as analogies for VMs, Containers and Metal

Trench coats aside, I used The Matrix as a useful analogy to explain virtualization and containers to a non-technical friend today.  I’m interested in hearing from others if this is a helpful analogy.

Why does anyone care about virtual servers?

Virtual servers (aka virtual machines or VMs) are important because data centers are just like the Matrix.  The real world of data centers is an ugly, messy place fraught with hidden dangers and unpleasant chores.  Most of us would rather take the blue pill and live in a safe, computer-generated artificial environment where we can ignore those details and live in the convenient abstraction of Mega City.

Do VMs really work to let you ignore the physical stuff?

Pretty much.  For most people, they can live their whole lives within the virtual world.  They can think they are eating the steak and never try bending the spoons.

So why are containers disruptive?  

Well, it’s like the Surrogates movie.  Right now, a lot of people living in the Cloud Matrix are setting up even smaller bubbles.  They are finding that they don’t need a whole city, they can just live inside a single room.  For them, it’s more like Surrogates where people never leave their single room.

But if they never leave the container, do they need the Matrix?

No.  And that’s the disruption.  If you’ve wrapped yourself in a smaller bubble, then you really don’t need the larger wrapper.

What about that messy “real world”?

It’s still out there in both cases; it’s just that once you are inside the inner bubble, you can’t really tell the difference.

OpenStack Vancouver six observations: partners, metal, tents, DefCore, brands & breakage

As always, OpenStack conferences/summits are packed with talks and discussions.  Any one of these six points could be a full post; however, I would rather post now and start discussions.  Let me know what you think!

1. Partnering Everywhere – it’s froth, not milk

Everyone is partnering with everyone! It’s a good way to appear to cover more ground and appear more open. Right now, I believe these partnerships are for show and very shallow. There will be blood when money is flowing and both partners want the lion’s share.

2. Metal is Hot! attention on Ironic & MaaS

Metal is a very hot topic. No surprise, but I do not think that either MaaS or Ironic has the right architecture to deal with the real complexity of automating metal in a generalized way. The consequence is that they are limited and hard to operate.

Container talks were also very hot and I believe are ultimately disruptive.  The very fact that all the container talks were overflowing is an indication of the challenges facing virtualization.

3. DefCore – Just in the Nick of Time

I think that the press and analysts were ready to proclaim that OpenStack was fragmenting and unable to deliver the “one cloud, multiple vendors” vision. DefCore (presented as Interoperability by Jonathan Bryce, DefCore shout out!) came in at the buzzer to buy us more time.

4. Big Tent Concerns – what is ecosystem & release?

Big Tent is shorthand for project governance changes that make it easier for new projects to become OpenStack projects and removes the concept of integrated releases.  The exact definition is still a work in progress.

The top concerns I have are:

  1. We cannot tell the difference between community & ecosystem. We’re back to anointed projects because we’re now telling projects they have to join OpenStack to work with OpenStack.
  2. We’re changing the definition of the release but have not defined how it will change. I acknowledge that continuous release is ideal but we’re confusing people again.

5. Brands are battling – will they destroy the city?

OpenStack is hard for startups – read the full post here.  The short version is that big companies are taking up all the air.

While some are leading, others are still learning how to collaborate.  Those new to open source are slow to trust and uncertain about where to invest.  Unfortunately, we’ve created a visible-contributions economy that does not reward doing the scut work, so it’s no surprise that there are concerns that some of the bigger companies are free-riding.

6. OpenStack is broken talks – could we reboot?  no.

It’s a sign of OpenStack’s age that Bias, Termie and others suggested we need a clean slate.  Frankly, I think that OpenStack would be irrelevant by the time a rewrite was completed, and it’s not helpful to suggest it.

What would I suggest?  I’d promote a strong core (doing!), ensure big companies collaborate on the roadmap (doing!) and stop having a single-node install as the gate and dev reference (I’d happily help use OCB for this with partners).

PS: Apparently Neutron is not broken.

I’m very excited about the “just give me a network” work to make Neutron duplicate Nova-Net functionality.  Finally.

Docker-Machine Crowbar Driver Delivers Metal Containers

I’ve just completed a basic Docker Machine driver for OpenCrowbar.  This enables you to quickly spin up (and down) remote Docker hosts on bare metal servers from the Docker Machine command line tool.  There are significant cost, simplicity and performance advantages to this approach if you were already planning to dedicate servers to container workloads.

Docker Machine

The basics are pretty simple: using the Docker Machine CLI, you can “create” and “rm” new Docker hosts on bare metal using the crowbar driver.  Since we’re talking about metal, “create” is really “assign a machine from an available pool.”

Behind the scenes Crowbar is doing a full provision cycle of the system including installing the operating system and injecting the user keys.  Crowbar’s design would allow operators to automatically inject additional steps, add monitoring agents and security, to the provisioning process without changing the driver operation.

Beyond Create, the driver supports the other Machine verbs like remove, stop, start, ssh and inspect.  In the case of remove, the Machine is cleaned up and put back in the pool for the next user [note: work remains on the full remove>recreate process].
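
As a quick illustration, those verbs look the same against this driver as against any other Machine backend (the testme name matches the walk-through below):

```
# Standard Docker Machine verbs against the metal host
./docker-machine ssh testme       # shell into the provisioned server
./docker-machine inspect testme   # driver and host details
./docker-machine stop testme      # stop the host via the driver
./docker-machine rm testme        # clean up and return the node to the pool
```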

Overall, this driver allows Docker Machine to work transparently against metal infrastructure alongside whatever cloud services you also choose.

Want to try it out?

  1. You need to set up OpenCrowbar – if you follow the defaults (192.168.124.10 IP, user, password) then the Docker Machine driver defaults will also work. Also, make sure you have the Ubuntu 14.04 ISO available for the Crowbar provisioner.
  2. Discover some nodes in Crowbar – you do NOT need metal servers to try this, the tests work fine with virtual machines (tools/kvm-slave &)
  3. Clone my Machine repo (we’re looking for feedback before a pull to Docker/Machine)
  4. Compile the code using script/build.
  5. Allocate a Docker node using ./docker-machine create --driver crowbar testme (see the usage sketch after this list)
  6. Go to the Crowbar UI to watch the node be provisioned and configured into the Docker-Machines pool
  7. Release the node using ./docker-machine rm testme
  8. Go to the Crowbar UI to watch the node be redeployed back to the System pool
  9. Try to contain your enthusiasm :)
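
Once the create in step 5 finishes, the new metal host behaves like any other Machine target.  Here’s a minimal usage sketch, assuming the standard ip and env verbs work against this driver:

```
# Point the local Docker CLI at the freshly provisioned metal host
./docker-machine ip testme              # address Crowbar assigned to the node
eval "$(./docker-machine env testme)"   # export DOCKER_HOST etc. for this shell
docker run -d nginx                     # this container now runs on bare metal
```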

Want More?  Linux binary & readme.

As CloudFoundry Builds Ecosystem and Utility, What Challenges Arise? (observations from CFSummit)

I’ve been on the outskirts of the CloudFoundry (CF) universe from the dawn of the project (it’s a little-remembered fact that there was a 2011 Crowbar install of CloudFoundry).

Progress and investment have been substantial and, happily, organic. Like many platforms, its success relies on a reasonable balance between strong opinions about “right” patterns and enough flexibility to accommodate exceptions.

From a well-patterned foundation, development teams find acceleration.  This seems to be helping CloudFoundry win some high-profile enterprise adopters.

The interesting challenge ahead of the project comes from building more complex autonomous deployments. With the challenge of horizontal scale arguably behind them, CF users are starting to build more complex architectures.  This includes dynamic provisioning of the providers (like databases, object stores and other persistent adjacent services) and connecting to containerized “micro-services.”  (see Matt Stine’s preso)

While this is a natural evolution, it adds an order of magnitude more complexity because the contracts between previously isolated layers are suddenly not reliable.

For example, what happens to a CF deployment when the database provider is field-upgraded to a new version?  That could introduce breaking changes in dependent applications that are completely opaque to the data provider.  These are hard problems to solve.

Happily, that’s exactly the discussions that we’re starting to have with container orchestration systems.  It’s also part of the dialog that I’ve been trying to drive with Functional Operations (FuncOps Preso) on the physical automation side.  I’m optimistic that CloudFoundry patterns will help make this problem more tractable.