Breaking Up is Hard To Do – Why I Believe in Ops Decomposition (pt 1)

Over the summer, the RackN team took a radical step with our previous Ansible Kubernetes workload install: we broke it into pieces.  Why?  We wanted to eliminate all “magic happens here” steps in the deployment.

The result, DR Kompos8, is a faster, leaner, transparent and parallelized installation that allows for pluggable extensions and upgrades (video tour). We also made the operationally simplest configuration choice: Golang binaries managed by SystemD.

Why decompose and simplify? Let’s talk about the hard-earned ops automation battle scars that led to composability as a core value:

Back in the early OpenStack days, when the project was actually much simpler, we were part of a community writing Chef Cookbooks to install it. These scripts are just a sequence of programmable steps (roles in Ops-speak) that drive the configuration of services on each node in the cluster. Chef does provide a way to find cross-cluster information and look up local inventory, so we were able to inject specific details before the process began. However, once the process started, it was pretty much like starting a dominoes chain. If anything went wrong anywhere in the installation, we had to reset all the dominoes and start over.

Like a dominoes train, it is really fun to watch when it works. Also like dominoes, it is frustrating to set up and fix. Often we were literally holding our breath during installation, hoping that we’d anticipated every variation in the software, hardware and environment. It is no surprise that the first and most critical feature we created was a redeploy command.

It turned out that the ability to redeploy successfully was the critical measure of success. We would not consider a deployment complete until we could wipe the systems and rebuild them automatically at least twice.
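A minimal sketch of that gate, in Go (the language our current tooling already uses): the wipe-cluster.sh and deploy-cluster.sh scripts here are hypothetical placeholders for whatever actually tears down and reinstalls your cluster.

```go
// redeploy_check.go – a sketch of the "rebuild it at least twice" rule.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run executes a script and echoes its combined output.
func run(script string) error {
	out, err := exec.Command(script).CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	const rebuilds = 2 // a deployment "counts" only after two clean rebuilds
	for i := 1; i <= rebuilds; i++ {
		log.Printf("pass %d: wiping cluster", i)
		if err := run("./wipe-cluster.sh"); err != nil { // hypothetical script
			log.Fatalf("wipe failed on pass %d: %v", i, err)
		}
		log.Printf("pass %d: deploying", i)
		if err := run("./deploy-cluster.sh"); err != nil { // hypothetical script
			log.Fatalf("deploy failed on pass %d: %v", i, err)
		}
	}
	log.Println("survived two automated wipe-and-rebuild passes; deployment counts as done")
}
```

If the second pass fails, some step was depending on hidden state left behind by the first run – exactly the kind of magic we are trying to eliminate.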

What made cluster construction so hard? There were three key things: cross-node dependencies (linking), a lack of service configuration (services) and isolating attribute chains (configuration).

We’ll explore these three reasons in detail in part 2 of this post tomorrow.

Even without the details, it is easy to understand why we want to avoid all magic in a deployment.

For scale operations, there should never be a “push and pray” step where we are counting on timing or unknown configuration to succeed. Likewise, we need to eliminate “it worked from my desktop” automation too.  Those systems are impossible to maintain, share and scale. Composed cluster operations address these problems by making work modular, predictable and transparent.

DevOps Concept: “Ready State” Infrastructure as hand-off milestone

Working for Dell, it’s no surprise that I have a lot of discussions about building up and maintaining the physical infrastructure to run data centers at scale.  Generally the context is around OpenCrowbar, Hadoop or OpenStack Ironic/TripleO/Heat, but the concerns are really universal in my cloud operations experience.

Three Teams

Typically, deployments have three distinct phases: 1) mechanically plug together the systems, 2) get the systems ready at the OS and network level and then 3) install the application.  Often these phases are so distinct that they are handled by completely different teams!

That’s a problem because errors or unexpected changes from one phase are very expensive to address once you change teams.  The solution has been to become more and more prescriptive about what the system looks like between the second (“ready”) and third (“installed”) phases.  I’ve taken to calling this hand-off achieving a ready state infrastructure.

I define a “ready state” infrastructure as one that has been configured so that the application lay-down steps are simple and predictable.
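To make that definition concrete, here is a minimal sketch, in Go, of a per-node ready-state gate that an application installer could run before laying anything down. The specific expectations (at least two non-loopback NICs, Ubuntu 14.04) are illustrative assumptions, not a reference architecture.

```go
// readycheck.go – a sketch of validating ready state before application lay-down.
package main

import (
	"log"
	"net"
	"os"
	"strings"
)

func main() {
	// Expect at least two non-loopback NICs (illustrative assumption).
	ifaces, err := net.Interfaces()
	if err != nil {
		log.Fatalf("cannot enumerate NICs: %v", err)
	}
	nics := 0
	for _, iface := range ifaces {
		if iface.Flags&net.FlagLoopback == 0 {
			nics++
		}
	}
	if nics < 2 {
		log.Fatalf("not ready: want at least 2 NICs, found %d", nics)
	}

	// Expect the provisioned OS release (illustrative assumption).
	rel, err := os.ReadFile("/etc/os-release")
	if err != nil {
		log.Fatalf("cannot read /etc/os-release: %v", err)
	}
	if !strings.Contains(string(rel), "14.04") {
		log.Fatal("not ready: expected Ubuntu 14.04")
	}

	log.Println("node is in ready state; application lay-down can proceed")
}
```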

In my experience, most application deployment guides start with a ready state assumption.  They read like “Step 0: rack, configure, provision and tweak the nodes and network to have this specific starting configuration.”   If you are really lucky then “specific configuration” is actually a documented and validated reference architecture.

The magic of cloud IaaS is that it always creates ready state infrastructure.  If I request 10 servers with 2 NICs running Ubuntu 14.04 then that’s exactly what I get.  The fact that cloud always provisions a ready state infrastructure has become an essential operating assumption for cloud orchestration and configuration management.
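A sketch of why that works: the request itself is a complete, declarative spec. The ServerRequest type and Provision function below are hypothetical stand-ins for any real cloud API; the essential property is that the output always matches the request.

```go
package main

import "fmt"

// ServerRequest is a hypothetical declarative spec standing in for a cloud
// provisioning request: it fully describes the ready state being asked for.
type ServerRequest struct {
	Count int
	NICs  int
	Image string
}

// Provision is a hypothetical placeholder for the cloud API call. Because
// the output is guaranteed to match the spec, downstream configuration
// management can simply assume ready state.
func Provision(req ServerRequest) []string {
	hosts := make([]string, req.Count)
	for i := range hosts {
		hosts[i] = fmt.Sprintf("node-%02d: %d NICs, %s", i, req.NICs, req.Image)
	}
	return hosts
}

func main() {
	// "10 servers with 2 NICs running Ubuntu 14.04" – exactly what you get.
	for _, h := range Provision(ServerRequest{Count: 10, NICs: 2, Image: "ubuntu-14.04"}) {
		fmt.Println(h)
	}
}
```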

Unfortunately, hardware provisioning is messy.  It takes significant effort to configure a physical system into a ready state.  This is caused by a number of factors:

  1. You can’t alter physical infrastructure with programming (there is no API to change the hardware itself) – for example, if the server enumerates the NICs differently than you expected, you have to adapt to that (see the sketch after this list).
  2. You have to respect the physical topology of the system – for example, production deployments use teamed NICs that have to be connected to different switches for redundancy.  You can’t make assumptions; you have to set up the team based on the specific configuration.
  3. You have to build up the configuration in sequence – for example, you can’t set up the RAID configuration after the operating system is installed.  If you make a bad choice then you’ll likely have to repeat the whole deployment sequence, and some bad choices (like using the wrong subnets) result in a total system rebuild.
  4. Hardware fails and is non-uniform – for example, in any hardware order of sufficient size you will have NIC failures due to everything from simple mechanical card-seating issues to BIOS interface mismatches.  Troubleshooting these issues can occupy significant time.
  5. Component configurations are interlocked – for example, a change to the switch settings could result in DHCP failures when systems are rebooted (real experience).  You cannot always work node-by-node; you must deal with the infrastructure as an integrated system.
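To make the first factor concrete, here is a minimal Go sketch of adapting to whatever NIC enumeration the kernel hands you: rather than trusting names like eth0, it matches interfaces to network roles by MAC address. The MAC-to-role table is a hypothetical stand-in for discovered inventory.

```go
// nicmap.go – a sketch of adapting to NIC enumeration instead of fighting it.
package main

import (
	"log"
	"net"
	"strings"
)

func main() {
	// Network roles keyed by MAC address, as discovered during inventory.
	// These MAC values are hypothetical placeholders.
	roles := map[string]string{
		"52:54:00:aa:bb:01": "admin",
		"52:54:00:aa:bb:02": "storage",
	}

	ifaces, err := net.Interfaces()
	if err != nil {
		log.Fatalf("cannot enumerate NICs: %v", err)
	}
	for _, iface := range ifaces {
		mac := strings.ToLower(iface.HardwareAddr.String())
		if role, ok := roles[mac]; ok {
			// The kernel may have named this NIC anything; the MAC is what is stable.
			log.Printf("interface %s (%s) carries the %s network", iface.Name, mac, role)
		}
	}
}
```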

Being consistent at turning discovered state into ready state is a complex and unique problem space.  As I explore this bare metal provisioning space in the community, I am more and more convinced that it demands an architecture distinct from that of applications built for ready state operations.

My hope in this post is to test whether the concept of “ready state” infrastructure is helpful in describing the transition point between provisioning and installation.  Please let me know what you think!

What does “enable upstream recipes” mean? Not just fishing for community goodness!

One of the major Crowbar 2.0 design targets is to allow you to “upstream” operations scripts more easily.  “Upstream code” means that parts of Crowbar’s source code could be maintained in other open source repositories.  This is beyond a simple dependency (like Rails, Curl, Java or Apache): upstreaming allows Crowbar to use code managed in other open source repositories for more general application.  This is important because Crowbar users can leverage DevOps logic that is more broadly targeted than just Crowbar.  Even more importantly, upstreaming means that we can contribute to and take advantage of community efforts to improve the upstream source.

Specifically, Crowbar maintains a set of OpenStack cookbooks that make up the core of our OpenStack deployment.  These scripts have been widely cloned (not forked) and deCrowbarized for other deployments.  Unfortunately, that means we do not benefit from downstream improvements and the cloners cannot easily track our updates.  This happened because Crowbar was not considered a valid upstream OpenStack repository: our deployment scripts required Crowbar itself.  The consequence of this cloning is that incompatible OpenStack recipes have propagated like cracks in a windshield.

While there are concrete benefits to upstreaming, there are risks too.  We have to evaluate if the upstream code has been adequately tested, operates effectively, implements best practices and leverages Crowbar capabilities.  I believe strongly that untested deployment code is worse than useless; consequently, the Dell Crowbar team provides significant value by validating that our deployments work as an integrated system.  Even more importantly, we will not upstream from unmoderated sources where changes are accepted without regard for downstream impacts.  There is a significant amount of trust required for upstreaming to work.

If upstreaming is so good, why did we not start out with upstream code?  It was simply not an option at the time – Crowbar was the first (and is still the most complete!) set of DevOps deployment scripts for OpenStack in a public repository.
By design, Crowbar 1.0 was tightly coupled to Opscode Chef and required users to inject Crowbar dependencies into their Chef Recipes.  This approach allowed us to more quickly integrate capabilities between recipes and with nascent Crowbar features.  Our top design requirement was that our deployment be tightly integrated between hardware, networking, operating system, operations infrastructure and the application.  Figuring out the correct place to separate concerns was impractical; consequently, we injected dependencies into our Chef code.

We have reached a point in Crowbar development where we can correctly decouple Crowbar and Chef.

The benefits of upstreaming go far beyond enabling more collaboration on OpenStack deployments.  These same changes make it easier for Crowbar to leverage community deployment scripts without one-way modifications.  If you have a working Chef Recipe, making it work with Crowbar will no longer require changes that break it outside of Crowbar; therefore, you can leverage Crowbar capabilities without losing community input and without being locked into Crowbar.

(OSCON presentation graphic about upstreaming, added 7/23.)