Hybrid DevOps: Union of Configuration, Orchestration and Composability

Steven Spector and I talked about “Hybrid DevOps” as a concept.  Our discussion led to a ‘there’s a picture for that!’ moment that helped clarify the concept.  We believe that this concept, like Rugged DevOps, is additive to existing DevOps thinking and culture.  It’s about expanding our thinking to include orchestration and composability.

Here’s our write-up: Hybrid DevOps: Union of Configuration, Orchestration and Composability

Is Hybrid DevOps Like The Tokyo Metro?

I LOVE OPS ANALOGIES!  The “Hybrid DevOps = Tokyo Metro” analogy really works because it accepts that some complexity is inescapable.  It would be great if Tokyo was a single system, but it’s not.  Cloud and infrastructure are the same – they are not a single vendor system and are not going to converge.

With that intro… Dan Choquette asks: is DevOps at scale like a major city’s subway system? Both require strict processes and operational excellence to move a lot of different parts at once. How else? If you had …

Source: Is Hybrid DevOps Like The Tokyo Metro?

Is Hybrid DevOps Like The Tokyo Metro?

By Dan Choquette

Is DevOps at scale like a major city’s subway system? Both require strict processes and operational excellence to move a lot of different parts at once. How else?

If you have had the pleasure of riding the Tokyo Metro, you might agree that it’s an interesting – and confusing – experience (especially if you need to change lines!). All told, there are 9 lines and roughly 180+ stations, with a daily ridership of almost 7 million people!


A few days ago, I had a conversation with a potential user deploying Kubernetes with Contrail networking on Google Cloud repeatedly in a build/test/dev scenario. Another conversation was about the need to provision thousands of x86 bare metal servers once or twice a week with different configurations and networking, with the need to ultimately control their metal as they would a cloud instance in AWS. Cool stuff!

Since we here at RackN believe Hybrid DevOps is a MUST for Hybrid IT (after all, we are a start-up and have bet our very lives on such a thing, so we REALLY believe it!), I thought about how Hybrid DevOps compares to the Tokyo Metro (earlier that day I had read about Tokyo in the news and my mind wandered). In my attempt to draw the parallel, below is an SDLC DevOps framework that you have seen 233 times before in blogs like this one.


In terms of process, I’m sure you can notice how similar it is to the Metro, right?

<crickets>

<more crickets>

When both operate as they should, they are the epitome of automation, control, repeatability and reliability. A disciplined, automated at-scale DevOps environment does have some similarity to the Ginza or Tozai line. You have different people (think apps) of all walks of life boarding a train, needing to get somewhere, and needing to follow steps in a process (maybe the “Pusher” is the scrum or DevOps governance tool, but we’ll leave that determination for the end). However, as I compare it to Hybrid DevOps, the Tokyo Metro is not hybrid-tolerant. With subways, if a new subway car is added, tracks are changed, or a new station is added instantaneously to better handle congestion, everything stops or turns into a logistical disaster. In addition, there is no way of testing how it will all flow beforehand. There will be operational glitches, and millions of angry customers will not reach their destination in a timely fashion – or at all.

The same is metaphorically true for Hybrid DevOps in Hybrid IT. In theory, the Hybrid DevOps pipeline includes build/test/dev and continuous integration/deployment for all platforms, business models, governance models, security and software stacks that depend on the physical/IaaS/container underlay. Developers and operators need to test against multiple platforms (cloud, VM and metal) and, in order to realize value, assimilate into production rapidly while frequently adjusting to changes of all kinds. They also require the ability to layer multiple technologies and security policies into an operational pipeline, which in turn has hundreds of moving parts that require precise configuration settings to be made in a sequenced, orchestrated manner.

At RackN, we believe the ability to continuously test, integrate, deploy and manage complex technologies in a Hybrid IT scenario is critical to a successful adoption in production. The most optimal way to accomplish that is to have in place a central platform that can govern Hybrid DevOps at scale and can automate, orchestrate and compose all the necessary configurations and components in a sequenced fashion. Without one, haphazard assembly and lack of governance erode the overall process and lead to failure. Just like the Tokyo Metro without the “Pusher” on the platform, a Hybrid DevOps model at scale being used for a Hybrid IT use case without governance leads to massive delays, dissatisfied customers and chaos.


Composability & Commerce: drivers for #CloudMinds Hybrid discussion

Last night, I had the privilege of being included in an IBM think tank group called CloudMinds.  The topic for the night was accelerating hybrid cloud.

During the discussion, I felt that key “how” and “why” aspects of hybrid computing emerged: composability and commerce.

Composability, the discipline of segmenting IT into isolated parts, was considered a primary need.  Without composability, we create vertically integrated solutions that are difficult to hybrid.

Commerce, the acknowledgement that we are building technology to solve problems, was considered a way to combat the dogma that seems to creep into the platform wars.  That seems obvious, yet I believe it’s often overlooked and the group seemed to agree.

It’s also worth adding that the group strongly felt that hybrid was not a cloud discussion – it was a technology discussion.  It is a description of how to maintain an innovative and disruptive industry by embracing change.

The purpose of the think tank is to create seeds of an ongoing discussion.  We’d love to get your perspective on this too.

Hybrid & Container Disruption [Notes from CTP Mike Kavis’ Interview]

Last week, Cloud Technology Partners VP Mike Kavis (aka MadGreek65) and I talked for 30 minutes about current trends in Hybrid Infrastructure and Containers.


Mike Kavis

Three of the top questions that we discussed were:

  1. Why is Composability required for deployment? [5:45]
  2. Is Configuration Management dead? [10:15]
  3. How can containers be more secure than VMs? [23:30]

Here’s the audio matching the time stamps in my notes:

  • 00:44: What is RackN? – scale data center operations automation
  • 01:45: Digital Rebar is… 3rd generation provisioning to manage data center ops & bring up
  • 02:30: Customers were struggling on Ops more than code or hardware
  • 04:00: Rethinking “open” to include user choice of infrastructure, not just if the code is open source.
  • 05:00: Use platforms where it’s right for users.
  • 05:45: Composability – it’s how do we deal with complexity. Hybrid DevOps
  • 06:40: How do we make Ops more portable
  • 07:00: Five components of Hybrid DevOps
  • 07:27: Rob has “Rick Perry” Moment…
  • 08:30: 80/20 Rule for DevOps where 20% is mixed.
  • 10:15: “Is configuration management dead” > Docker does hurt Configuration Management
  • 11:00: How Service Registry can replace Configuration.
  • 11:40: Reference to John Willis on the importance of sequence.
  • 12:30: Importance of Sequence, Services & Configuration working together
  • 12:50: Digital Rebar intermixes all three
  • 13:30: The race to have orchestration – “it’s always been there”
  • 14:30: Rightscale Report > Enterprises average SIX platforms in use
  • 15:30: Fidelity Gap – Why everyone will hybrid but need to avoid monoliths
  • 16:50: Avoid hybrid trap and keep a level of abstraction
  • 17:41: You have to pay some “abstraction tax” if you want to hybrid BUT you can get some additional benefits: hybrid + ops management.
  • 18:00: Rob gives a shout out to Rightscale
  • 19:20: Rushing to solutions does not create secure and sustained delivery
  • 20:40: If you work in a silo, you lose the ability to collaborate and reuse others’ work
  • 21:05: Rob is sad about “OpenStack explosion of installers”
  • 21:45: Container benefit from services containers – how they can be MORE SECURE
  • 23:00: Automation required for security
  • 23:30: How containers will be more secure than VMs
  • 24:30: Rob bring up “cheese” again…
  • 26:15: If you have more situational awareness, you can be more secure WITHOUT putting more work on developers.
  • 27:00: Containers can help developers not worry about as many aspects of Ops
  • 27:45: Wrap up
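The “service registry can replace configuration” point from the 10:15–11:00 notes is easy to show in miniature.  Here’s a minimal Python sketch of the idea: instead of baking a database address into a config file at install time, each service asks a registry at use time.  The in-memory dict and function names are stand-ins for illustration only, not the API of a real registry like Consul.

```python
# Stand-in for a real service registry such as Consul.
registry = {}

def register(name, address):
    # A service announces (or re-announces) where it lives.
    registry[name] = address

def discover(name):
    # Consumers look the address up when they need it, so a service
    # that moves or restarts updates everyone without re-running
    # configuration management.
    return registry[name]

register("database", "10.0.0.7:5432")
print(discover("database"))          # clients find the current address

register("database", "10.0.0.9:5432")  # the service moved
print(discover("database"))          # no config files were edited
```

The contrast with static configuration is the timing: configuration fixes the answer at install, while discovery defers it to each request.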

What do you think?  I’d love to hear your opinion on these topics!

Composability is Critical in DevOps: let’s break the monoliths

This post was inspired by my DevOps.com Git for DevOps post and is an evolution of my “Functional Ops (the cake is a lie)” talks.

2016 is the year we break down the monoliths.  We’ve spent a lot of time talking about monolithic applications and microservices; however, there’s an equally deep challenge in ops automation.

Anti-monolith composability means making our automation into function blocks that can be chained together by orchestration.

What is going wrong?  We’re building fragile tightly coupled automation.

Most of the automation scripts that I’ve worked with become very long interconnected sequences, well beyond the actual application that they are trying to install.  For example, Kubernetes needs etcd as a datastore.  The current model is to include the etcd install in the install script.  The same is true for SDN install/configuration and post-install tests and dashboard UIs.  The simple “install Kubernetes” quickly explodes into a kitchen sink of related adjacent components.

Those installs quickly become fragile and bloated.  Even worse, they have hidden dependencies.  What happens when etcd changes?  Now we’ve got to track down all the references to it buried in etcd-based applications.  Further, we don’t get the benefits of etcd deployment improvements like secure or scaled configuration.
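The anti-monolith idea can be sketched in a few lines of Python.  This is a hedged illustration only – the function and key names below are hypothetical, not Digital Rebar code: each install step is an isolated function block that publishes what later steps need into a shared context, and an orchestrator chains them.

```python
def install_etcd(ctx):
    # Owns everything etcd-specific; publishes its endpoint for later steps
    # instead of later steps hard-coding how etcd was installed.
    ctx["etcd_endpoint"] = "http://10.0.0.5:2379"
    return ctx

def install_sdn(ctx):
    # An operator choice with a default; swapping SDNs touches only this block.
    ctx["sdn"] = ctx.get("sdn_choice", "flannel")
    return ctx

def install_kubernetes(ctx):
    # Consumes upstream results through the context, not through knowledge
    # of how they were produced.
    assert "etcd_endpoint" in ctx, "etcd step must run first"
    ctx["kubernetes"] = f"configured against {ctx['etcd_endpoint']} with {ctx['sdn']}"
    return ctx

def orchestrate(steps, ctx=None):
    # The orchestrator owns sequence; the blocks own their own details.
    ctx = ctx or {}
    for step in steps:
        ctx = step(ctx)
    return ctx

result = orchestrate([install_etcd, install_sdn, install_kubernetes])
```

When etcd changes, only `install_etcd` changes; the Kubernetes block keeps reading the endpoint from the context.
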

What can we do about it?  Resist the urge to create vertical silos.

It’s tempting and fast to create automation that works in a very prescriptive way for a single platform, operating system and tool chain.  The work of creating abstractions between configuration steps seems like a lot of overhead.  Even if you create those boundaries or reuse upstream automation, you’re likely to be vulnerable to changes within that component.  All these concerns drive operators to walk away from working collaboratively with each other and with developers.

Giving up on collaborative Ops hurts us all and makes it impossible to engineer excellent operational tools.  

Don’t give up!  Like git for development, we can do this together.

Kubernetes 18+ ways – yes, you can have it your way

By Rob Hirschfeld

Lately, I’ve been talking about the general concept of hybrid DevOps adding composability, orchestration and services to traditional configuration. It’s time to add a concrete example, because the RackN team is delivering it with Digital Rebar and Kubernetes.

So far, we have enabled a single open platform to install over 18 different configurations of Kubernetes simply by changing command line flags [videos below]. That does not include optional post-install steps like tests and applications.

By taking advantage of the Digital Rebar underlay abstractions and orchestration, we are able to use open community installation playbooks for a wide range of configurations.

So far, we’re testing against:

  • Three different clouds (AWS, Google and Packet) not including the option of using bare metal.
  • Two different operating systems (Ubuntu and CentOS)
  • Three different software defined networking systems (Flannel, Calico and OpenContrail)

Those 18 are just the tip of the iceberg that we are actively testing. The actual matrix is much deeper.

BUT THAT’S AN EXPLODING TEST MATRIX!?! No. It’s not.

The composable architecture of Digital Rebar means that all of these variations are isolated. We are not creating 18 distinct variations; instead, the system chains options together and abstracts the differences between steps.
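The arithmetic of that matrix is worth spelling out: the 18 comes from multiplying independent choices, not from 18 hand-built scripts.  A quick Python sketch (option names here are just labels, not actual flags):

```python
from itertools import product

clouds = ["aws", "google", "packet"]
operating_systems = ["ubuntu", "centos"]
sdns = ["flannel", "calico", "opencontrail"]

# 3 clouds x 2 operating systems x 3 SDNs = 18 combinations
matrix = list(product(clouds, operating_systems, sdns))
print(len(matrix))  # 18

# Adding one orthogonal choice (say, two logging options) doubles the test
# matrix to 36 runs -- but in a composable system it adds ONE new isolated
# step, not 18 rewritten install scripts.
logging_options = ["default", "elk"]
bigger = list(product(clouds, operating_systems, sdns, logging_options))
print(len(bigger))  # 36
```

That multiplicative-matrix, additive-effort property is exactly why the exploding test matrix above is manageable.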

That means that we could add different logging options, test sequences or configuration choices into the deployment with minimal coupling to previous steps. This enables operator choice and vendor injection in a way that allows collaboration around common components. By design, we’ve eliminated fragile installation monoliths.

All it takes is a Packet, AWS or Google account to try this out for yourself!


Using the CLI script to install Kubernetes:

Deep Dive into adding OpenContrail SDN:

Docker Swarm Cluster Ops – focus on using, not building, with standard automation

At RackN, we’re huge fans of Docker.  We’ve been using the engine for years (since v0.8!) and you can read about our lessons from when we rearchitected around Docker Compose.  Now we’ve built “one-click” hybrid cluster automation for Kubernetes, Docker Swarm and others.

However, I’m concerned that Docker installs reveal a lack of cluster operation focus.  These platforms are evolving very rapidly, exposing users to both breaking upgrades and security risks.  This drives a requirement for cluster automation.

What are cluster operations?  They are the system-level activity of creating an integrated platform that is repeatable, secure, networked and sustainable.  As use of Docker transitions from a single-node activity into multi-node and hybrid clusters, we need to approach the install and configuration as a system activity.

Cluster configuration requires system activity because there are so many moving pieces and necessary pre-configurations of networking, security, storage and roles.  These choices need to be implemented before the actual cluster software is installed because they drive how the cluster is configured and managed.  They continue to be a major factor as we grow, shrink and upgrade the cluster.
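That ordering constraint can be made concrete with a tiny sketch.  This is a hypothetical illustration, not RackN code – the phase names are illustrative – but it captures the rule: verify that the system-level pre-configuration is complete before the cluster software is allowed to install.

```python
# Phases that must be decided and applied BEFORE cluster software installs,
# because they drive how the cluster is configured and managed.
PRE_CONFIG = ["networking", "security", "storage", "roles"]

def bring_up_cluster(completed_phases):
    # Fail fast on missing pre-configuration instead of producing a
    # cluster that is broken in ways that only surface later.
    missing = [p for p in PRE_CONFIG if p not in completed_phases]
    if missing:
        raise RuntimeError(f"cannot install cluster; missing pre-config: {missing}")
    return "cluster installed"

# Skipping storage and roles is caught immediately:
try:
    bring_up_cluster(["networking", "security"])
except RuntimeError as err:
    print(err)

# With all system-level choices made, the install proceeds:
print(bring_up_cluster(["networking", "security", "storage", "roles"]))
```

The point is not the five lines of checking – it is that treating bring-up as a sequenced system activity makes the dependency explicit and automatable.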

Why don’t people do this already?  Because cluster configuration requires additional setup and planning.  Operators are struggling just to keep up with API changes between quarterly updates.

Our mission is to eliminate the overhead of cluster operations so you can focus on using the Cluster, not building it.

The RackN team has been working on deployment of Docker Swarm (and container orchestration more generally) to make sure Cluster Operations and underlay are robust and automated on every platform from cloud to metal.

The video below (and others in my channel) show how we’ve made it “one click” easy to create container clusters in nearly any environment.  While this is an evolving process, we believe that it is critical to start with cluster automation.

Let us show you how we’ve made that both fast and painless.

 

Cloudcast Notes: “doing real work” in containers and cloud


Last week, I made my second appearance on Brian Gracely & Aaron Delp’s excellent podcast, The Cloudcast.   A lot has changed since my first appearance in 2011, but we’re still struggling to create consistent operations around these new platforms.  Then it was OpenStack; today it is container orchestration.

I loved this closing comment from Brian, “[the Cloudcast] loves people who are down in the dirt… you are living it and it’s going into the product.”

Total time, 38 minutes

  • 02:15: Interview Starts
  • 03:15: RackN path to Digital Rebar.  History of team going back to Crowbar
  • 06:05: Why we moved to containers for Digital Rebar (blog details here)
  • 07:20: The process to transform from monolith to services
  • 07:50: As background, what is Digital Rebar?  Configuration & Services in Sequence.
  • 09:30: How/Why to use Consul for services
  • 10:30: Why Immutability is Hard (technical use of the word “cheese”)
  • 12:20: Challenge of restarting & state in Microservices
  • 14:30: Need for Iterative design process to improve as you learn the pattern.
  • 15:10: “If you are not using containers for at least packaging, you are crazy”
  • 15:40: We choose not to talk about OpenStack!
  • 16:15: Fidelity Gap and cloud portability
  • 17:10: Rob does funny voice about idea that with containers “devs don’t have to do ops”
  • 18:00: Why adding some overhead for developers is a good investment.
  • 18:40: Rob throws OpenStack under the bus for Devstack and “it worked in Devstack” mentality
  • 21:20: Containers do not solve all problems; in some ways they make things harder (especially on networking).
  • 21:55: “we are about to put a serious hurt on networking management”
  • 21:50: Networking configuration is hard to build in a consistent way.  You have to automate it – there is no other choice.
  • 24:20: Hybrid Cloud priorities with RackN
  • 25:00: We “declared default” on trying to create a mono-cloud and accepted that infrastructure is hybrid.
  • 25:40: Openness comes from having multiple providers.  Composable ops allows you to cope with heterogeneous APIs
  • 28:55: Businesses want choice and control over infrastructure.  They do not want deployments hardcoded to platforms or tooling.
  • 29:30: “I have not met anyone who is just using one cloud, tool or platform”
  • 30:30: Brian asks Rob to pick winners and trends.  “we like to let people pick and choose.”
  • 31:00: Container orchestration with networking and storage are going to be huge.
  • 31:30: Rob compares Kubernetes, Docker, Mesos, Rancher and Cloudsoft.
  • 32:20: The importance of adjacencies.  Things you need to make the core stuff work.
  • 34:20: “Watch out for the adjacencies because they will slow you down.”
  • 36:10: “We love guests who live in the dirt” and “built the technology that they wanted to get their jobs done”