Breaking Up is Hard To Do – Why I Believe in Ops Decomposition (pt 1)

Over the summer, the RackN team took a radical step with our previous Ansible Kubernetes workload install: we broke it into pieces.  Why?  We wanted to eliminate all “magic happens here” steps in the deployment.

The result, DR Kompos8, is a faster, leaner, transparent and parallelized installation that allows for pluggable extensions and upgrades (video tour). We also made the operationally simplest configuration choice: Golang binaries managed by SystemD.

Why decompose and simplify? Let’s talk about the hard-earned ops automation battle scars that led to composability as a core value:

Back in the early OpenStack days, when the project was actually much simpler, we were part of a community writing Chef Cookbooks to install it. These scripts are just a sequence of programmable steps (roles in Ops-speak) that drive the configuration of services on each node in the cluster. There is an ability to find cross-cluster information and look up local inventory, so we were able to inject specific details before the process began. However, once the process started, it was pretty much like starting a dominoes chain. If anything went wrong anywhere in the installation, we had to reset all the dominoes and start over.

Like a dominoes train, it is really fun to watch when it works. Also like dominoes, it is frustrating to set up and fix. Often, we were literally holding our breath during installation, hoping that we’d anticipated every variation in the software, hardware and environment. It is no surprise that the first and most critical feature we created was a redeploy command.

It turned out that the ability to successfully redeploy was the critical measure of success. We would not consider a deployment complete until we could wipe the systems and rebuild them automatically at least twice.
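
In shell terms, the test we held ourselves to looks roughly like this. The commands are hypothetical placeholders for whatever teardown, install and verification automation is in play, not actual RackN tooling:

    # Hypothetical "redeploy gate": a deployment only counts as complete
    # if it can be torn down and rebuilt, unattended, at least twice.
    set -e
    for attempt in 1 2; do
      wipe_cluster      # placeholder: reset nodes to a clean OS or bare metal
      deploy_cluster    # placeholder: run the full automated install
      verify_cluster    # placeholder: smoke-test the resulting cluster
    done

If the loop cannot run without a human nudging it along, there is still magic hiding somewhere in the process.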

What made cluster construction so hard? There were three key things: cross-node dependencies (linking), a lack of service configuration (services) and isolating attribute chains (configuration).

We’ll explore these three reasons in detail for part 2 of this post tomorrow.

Even without the details, it’s easy to understand why we want to avoid all magic in a deployment.

For scale operations, there should never be a “push and pray” step where we are counting on timing or unknown configuration for success. Likewise, we need to eliminate “it worked from my desktop” automation too.  Those systems are impossible to maintain, share and scale. Composed cluster operations address this problem by making work modular, predictable and transparent.

Kubernetes the NOT-so-hard way (7 RackN additions: keeping transparency, adding security)

At RackN, we take the KISS principle to heart. Here are the seven ways that we worked to make Kubernetes easier to install and manage.

Container community crooner, Kelsey Hightower, created a definitive installation guide that he dubbed “Kubernetes the Hard Way” or KTHW.  In that document, he laid out a manual sequence of steps needed to bring up a working Kubernetes cluster.  For some, the lengthy sequence served as a rally cry to simplify and streamline the “boot to kube” process with additional configuration harnesses, more bells and some new whistles.

For the RackN team, Kelsey’s process looked like a reliable and elegant basis for automation.  So, we automated that and eliminated the hard parts (see video).


Seven improvements for KTHW

Our operational approach to distributed systems (encoded in Digital Rebar) drives towards keeping things simple and transparent in operation.  When creating automation, it’s way too easy to add complexity that works on a desktop for a developer, but fails as we scale or move into sustaining operations.

The benefit of Kelsey’s approach was that it had to be simple enough to reproduce and troubleshoot manually; however, there were several KTHW challenges that we wanted to streamline while we automated.

  1. Respect the manual steps: Just automating is not enough. We wanted to be true to the steps so that users of the automation could look back at the process and understand it. The beauty of KTHW is that operators can read it and understand the inner workings of Kubernetes.
  2. Node Inventory: Manual node allocation is time consuming and error prone. We believe that the process should be able to (but not be required to) start from zero with just raw hardware or cloud credentials. Anything else opens up a lot of potential configuration errors.
  3. Automatic Iteration: Going back to make adjustments to previous nodes is normal in cluster building and really annoying for users. This is especially true when clusters are expanded or contracted.
  4. PKI Security: We love that Kubernetes requires TLS communication; however, we’re generally horrified by passing around private keys and wildcard certificates, even for development and test clusters.
  5. Go & SystemD: We use containers for everything in Digital Rebar, and our design has a lot of RESTful services behind a reverse proxy; however, that’s simply not needed for Kubernetes. Kubernetes binaries are portable Golang programs, and only the API server is a web service. We feel strongly that the simplest and most robust deployment just runs these programs under SystemD (see the sketch after this list). It is just as easy to curl a single file and restart a service as it is to do a docker pull. In fact, it’s measurably simpler, more secure and reliable.
  6. Pluggability: It’s hard to allow variation in a manual process. With Kubernetes’ open ecosystem, we see a need for operators to make practical configuration choices without straying dramatically from Kelsey’s basic process. Changes to the container runtime or network model should not result in radically different install steps because the fundamentals of Kubernetes are not changed by these choices.
  7. Parallel Deploys & CI/CD Deployments: When we work on cluster deploys, we spin up lots and lots of independent installs to test variations and changes, like AWS vs. Google vs. OpenStack or Ubuntu vs. CentOS.  Consequently, it is important that we can run multiple installs in parallel.  Once that works, we want to have CI-driven setup, test and teardown processes.
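
To make point 5 concrete, here is a minimal sketch of the Golang-binary-under-SystemD pattern. The release URL, version, paths and flags are illustrative assumptions, not the exact units our automation generates:

    # Fetch one static binary (URL and version are examples only).
    curl -L -o /usr/local/bin/kube-apiserver \
      https://storage.googleapis.com/kubernetes-release/release/v1.4.0/bin/linux/amd64/kube-apiserver
    chmod +x /usr/local/bin/kube-apiserver

    # /etc/systemd/system/kube-apiserver.service
    [Unit]
    Description=Kubernetes API Server
    After=network.target

    [Service]
    ExecStart=/usr/local/bin/kube-apiserver \
      --etcd-servers=http://127.0.0.1:2379 \
      --service-cluster-ip-range=10.32.0.0/24
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

Upgrading is then just re-running the curl followed by a systemctl daemon-reload and restart; there is no registry, daemon or image cache in the path.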

We’re excited about the clean, fast and portable installation that came out of our efforts to automate the KTHW process. We hope that you’ll take a look at our approach and help us continue to improve and streamline Kubernetes (and other!) platform installs.

Open Source as Reality TV and Burning Data Centers [gcOnDemand podcast notes]

During the OpenStack summit, Eric Wright (@discoposse) and I talked about a wide range of topics from scoring success of OpenStack early goals to burning down traditional data centers.

Why burn down your data center (and move to public cloud)? Because your ops processes are too hard to change. Rob talks about how hybrid provides a path if we can make ops more composable.

Here are my notes from the audio podcast (source):

1:30 Why “zehicle” as a handle? Portmanteau from electric cars… zero + vehicle

Let’s talk about OpenStack & Cloud…

  • OpenStack History
    • 2:15 Rob’s OpenStack history from Dell and Hyperscale
    • 3:20 Early thoughts of a Cloud API that could be reused
    • 3:40 The practical danger of Vendor lock-in
    • 4:30 How we implemented “no main corporate owner” by choice
  • About the Open in OpenStack
    • 5:20 Rob decomposes what “open” means because there are multiple meanings
    • 6:10 Price of having all open tools for “always open” choice and process
    • 7:10 Observation that OpenStack values having open over delivering product
    • 8:15 Community is great but a trade off. We prioritize it over implementation.
  • Q: 9:10 What if we started later? Would Docker make an impact?
    • Part of challenge for OpenStack was teaching vendors & corporate consumers “how to open source”
  • Q: 10:40 Did we accomplish what we wanted from the first summit?
    • Mixed results – some things we exceeded (like growing community) while some are behind (product adoption & interoperability).
  • 13:30 Interop, Refstack and Defcore Challenges. Rob is disappointed on interop based on implementations.
  • Q: 15:00 Who competes with OpenStack?
    • There are real alternatives. APIs do not matter as much as we thought.
    • 15:50 OpenStack vendor support is powerful
  • Q: 16:20 What makes OpenStack successful?
    • Big tent confuses the ecosystem & pushes the goal posts out
    • “Big community” is not a good definition of success for the project.
  • 18:10 Reality TV of open source – people like watching train wrecks
  • 18:45 Hybrid is the reality for IT users
  • 20:10 We have a need to define core and focus on composability. Rob has been focused on the link between hybrid and composability.
  • 22:10 Rob’s preference is that OpenStack would be smaller. Big tent is really ecosystem projects and we want that ecosystem to be multi-cloud.

Now, about RackN, bare metal, Crowbar and Digital Rebar….

  • 23:30 (re)Intro
  • 24:30 VC market is not metal friendly even though everything runs on metal!
  • 25:00 Lack of consistency translates into lack of shared ops
  • 25:30 Crowbar was an MVP – the key is to understand what we learned from it
  • 26:00 Digital Rebar started with composability and focus on operations
  • 27:00 What is hybrid now? Not just private to public.
  • 30:00 How do we make infrastructure not matter? Multi-dimensional hybrid.
  • 31:00 Digital Rebar is orchestration for composable infrastructure.
  • Q: 31:40 Do people get it?
    • Yes. Automation is moving to hybrid devops – “ops is ops” and it should not matter if it’s cloud or metal.
  • 32:15 “I don’t want to burn down my data center” – can you bring cloud ops to my private data center?

Bugs Bunny, Prince and Enabling True Hybrid Infrastructure Consumption

OK – stay with me on this. I’m drawing parallels again.  🙂

Like many from my generation, my initial exposure to classical music and opera was derived from Bugs Bunny on Saturday mornings (culturally deprived, I know). One of the cartoons I remember well had Bugs trying to get even with the heavy-set opera singer who disrupts Bugs’ banjo playing. In order to exact his revenge, Bugs infiltrates the opera singer’s concert by impersonating the famous long-hared (hared…get it?) conductor, Leopold Stokowski. He proceeds to force the tenor to hit octaves that structurally compromise the amphitheater and, as it crumbles, leave him bruised and battered. Bugs is, as always, victorious.


In examining Bugs’ strategy (let’s assume he actually had one), Bugs took over operations of the orchestra’s musical program to achieve his goal of getting the tenor “in line,” so to speak. As I prepare to head down to the OpenStack Conference in Austin, TX next week, I’m seeing similar patterns develop in the cloud and data center infrastructure space that are very “Bugs/Leopold-like.” With organizations deciding how to consolidate data centers, containerize apps and move to the cloud, vendors and open source technologies offer value; however, true operational, infrastructure and platform independence is not what it appears to be. For example, once you move your apps off the data center to AWS or VMware and later determine you are paying too much or the workload is no longer appropriate for the infrastructure, good luck replicating the configuration work done in CloudFormation on another cloud or back in the data center. The same rationale applies to other technologies such as converged infrastructure and proprietary private cloud platforms. As the customer, to achieve scale and remove operational pain you must fall in line. That in itself is a big commitment to make in a still-evolving and maturing technology industry and a dynamic business climate.

On an unrelated topic, I was saddened to learn of the passing of Prince this past week. While not a die-hard fan, I liked his music. He was a great composer of songs and had a style all his own. Beyond his music and sheer talent, I admired his business beliefs and deep desire to maintain creative ownership and control of his music and his brand.

Despite his fortune and fame, there was a period in the middle of Prince’s career in which he felt creatively and financially locked in by the big record companies. Once Prince (and the unpronounceable symbol) broke away from Warner Music, he was able to produce music under his own label. This action enabled him to create music without a major record label dictating when he needed to produce a new album and what it needed to sound like. In addition, he was now able to market his new recordings to the distribution platform that supported his artistic and financial goals. While still having ties to Warner Music, he was no longer bound by their business practices. Along with starting his own music subscription service, Prince cut deals with Arista, Columbia, iTunes and Sony. Prince’s music production had operational portability, business agility and choice (seven Grammy awards and 100 million record sales also help create that kind of leverage).

While open APIs and containers offer some portability, at RackN we believe they do not offer a completely free market experience to the cloud and infrastructure consumer. If the business decides it is paying too much for AWS, it should not allow the operational underlay and configuration complexity to lock it to the infrastructure provider. It should be able to transfer its business to Google, Azure, Rackspace or Dreamhost with ease. We believe technologies that create portable, composable operational workflows drive true infrastructure and platform independence and, as a benefit, reduce business risk. Choosing a platform and being forced to use it are two very different things.

In conclusion, when considering moving workloads to the cloud, converged infrastructure platforms or DevOps automation tools, consider how you can achieve programmable operational portability and agility. Think about how you can best absorb new technologies without causing operational disruption in your infrastructure. Furthermore, ensure you can accomplish this in a repeatable, automated fashion. Analyze how you can abstract away complex configurations for security, networking and container orchestration technologies and make them adaptable from one infrastructure platform to another. Attempt to eliminate configuration versioning as much as possible and make upgrades simple and automated so your DevOps staff does not have to be experts (they are stressed out enough).

If you are attending the OpenStack Conference this week, look me up. While I am far from a music expert, I’ll be happy to share my insights on how to spot a technology vendor that likes to play a purple guitar as opposed to one that eats carrots and plays the banjo.

-Dan Choquette: Co-Founder, RackN


Hybrid & Container Disruption [Notes from CTP Mike Kavis’ Interview]

Last week, Cloud Technology Partner VP Mike Kavis (aka MadGreek65) and I talked for 30 minutes about current trends in Hybrid Infrastructure and Containers.


Mike Kavis

Three of the top questions that we discussed were:

  1. Why is composability required for deployment? [5:45]
  2. Is Configuration Management dead? [10:15]
  3. How can containers be more secure than VMs? [23:30]

Here’s the audio matching the time stamps in my notes:

  • 00:44: What is RackN? – scale data center operations automation
  • 01:45: Digital Rebar is… 3rd generation provisioning to manage data center ops & bring up
  • 02:30: Customers were struggling on Ops more than code or hardware
  • 04:00: Rethinking “open” to include user choice of infrastructure, not just if the code is open source.
  • 05:00: Use platforms where it’s right for users.
  • 05:45: Composability – it’s how do we deal with complexity. Hybrid DevOps
  • 06:40: How do we make Ops more portable?
  • 07:00: Five components of Hybrid DevOps
  • 07:27: Rob has a “Rick Perry” moment…
  • 08:30: 80/20 Rule for DevOps where 20% is mixed.
  • 10:15: “Is configuration management dead?” > Docker does hurt Configuration Management
  • 11:00: How Service Registry can replace Configuration.
  • 11:40: Reference to John Willis on the importance of sequence.
  • 12:30: Importance of Sequence, Services & Configuration working together
  • 12:50: Digital Rebar intermixes all three
  • 13:30: The race to have orchestration – “it’s always been there”
  • 14:30: Rightscale Report > Enterprises average SIX platforms in use
  • 15:30: Fidelity Gap – Why everyone will hybrid but need to avoid monoliths
  • 16:50: Avoid hybrid trap and keep a level of abstraction
  • 17:41: You have to pay some “abstraction tax” if you want to hybrid BUT you can get some additional benefits: hybrid + ops management.
  • 18:00: Rob gives a shout out to Rightscale
  • 19:20: Rushing to solutions does not create secure and sustained delivery
  • 20:40: If you work in a silo, you lose the ability to collaborate and reuse other works
  • 21:05: Rob is sad about “OpenStack explosion of installers”
  • 21:45: Containers benefit from service containers – how they can be MORE SECURE
  • 23:00: Automation required for security
  • 23:30: How containers will be more secure than VMs
  • 24:30: Rob brings up “cheese” again…
  • 26:15: If you have more situational awareness, you can be more secure WITHOUT putting more work on developers.
  • 27:00: Containers can help developers avoid worrying about so many aspects of Ops
  • 27:45: Wrap up

What do you think?  I’d love to hear your opinion on these topics!

We need DevOps without Borders! Is that “Hybrid DevOps?”

The RackN team has been working on making DevOps more portable for over five years.  Portable between vendors, sites, tools and operating systems means that our automation needs to be hybrid in multiple dimensions by design.

Why drive for hybrid?  It’s about giving users control.

I believe that applications should drive the infrastructure, not the reverse.  I’ve heard many times that the “infrastructure should be invisible to the user.”  Unfortunately, lack of abstraction and composability makes it difficult to code across platforms.  I like the term “fidelity gap” to describe the cost of these differences.

What keeps DevOps from going hybrid?  Shortcuts related to platform-entangled configuration management.

Everyone wants to get stuff done quickly; however, we make the same hard-coded ops choices over and over again.  Big bang configuration automation that embeds sequence assumptions into the script is not just technical debt: it’s fragile and difficult to upgrade or maintain.  The problem is not configuration management (that’s a critical component!); it’s the lack of system level tooling that forces us to overload the configuration tools.
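
As a purely hypothetical illustration (not from any real cookbook), an embedded sequence assumption often looks like this:

    # Hypothetical "big bang" fragment: the ordering (database before app)
    # and the database address are both baked into the script.
    DB_HOST=10.0.0.5                                   # assumed to already exist
    install_app --db "postgres://$DB_HOST:5432/app"    # hypothetical installer; no discovery, no retry

    # If the database moves, starts late or lives on another platform,
    # the whole chain has to be reset and re-run from the top.

System level tooling moves those assumptions out of the script: a service registry can supply the address and an orchestrator can guarantee the order.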

What is system level tooling?  It’s integrating automation that expands beyond configuration into managing sequence (aka orchestration), service orientation, script modularity (aka composability) and multi-platform abstraction (aka hybrid).

My ops automation experience says that these four factors must be solved together because they are interconnected.

What would a platform that embraced all these ideas look like?  Here is what we’ve been working towards with Digital Rebar at RackN:

Mono-Infrastructure IT vs. “Hybrid DevOps”:

  • Locked into a single platform → Portable between sites and infrastructures with layered ops abstractions.
  • Limited interop between tools → Adaptive to mix and match best-for-job tools.  Use the right scripting for the job at hand and never force migrate working automation.
  • Ad hoc security based on site specifics → Secure using repeatable automated processes.  We fail at security when things get too complex to change and adapt.
  • Difficult to reuse ops tools → Composable modules enable ops pipelines.  We have to be able to interchange parts of our deployments for collaboration and upgrades.
  • Fragile configuration management → Service orientation simplifies API integration.  The number of APIs and services is increasing; configuration management alone is not sufficient.
  • Big bang “configure then deploy” scripting → Orchestrated action is critical because sequence matters.  Building a cluster requires sequential (often iterative) operations between nodes in the system.  We cannot build robust deployments without ongoing control over order of operations.

Should we call this “Hybrid Devops?”  That sounds so buzz-wordy!

I’ve come to believe that Hybrid DevOps is the right name.  More technical descriptions like “composable ops” or “service oriented devops” or “cross-platform orchestration” just don’t capture the real value.  All these names fail to capture the portability and multi-system flavor that drives the need for user control of hybrid in multiple dimensions.

Simply put, we need devops without borders!

What do you think?  Do you have a better term?

Smaller Nodes? Just the Right Size for Docker!

Container workloads have the potential to redefine how we think about scale and hosted infrastructure.

Last Fall, Ubiquity Hosting and RackN announced a 200 node Docker Swarm cluster as phase one of our collaboration. Unlike cloud-based container workload demonstrations, we chose to run this cluster directly on bare metal.

Why bare metal instead of virtualized? We believe that metal offers additional performance, availability and control.  

With the cluster automation ready, we’re looking for customers to help us prove those assumptions. While we could simply build on many VMs, our analysis is that many smaller nodes will distribute work more efficiently. Since there is no virtualization overhead, lower-RAM systems can still deliver great performance.

The collaboration with RackN allows us to offer customers a rapid, repeatable cluster capability. Their Digital Rebar automation works on a broad spectrum of infrastructure, allowing our users to rehearse deployments on cloud, quickly change components and iteratively tune the cluster.

We’re finding that these dedicated metal nodes have much better performance than similar VMs in AWS. Don’t believe us? You can use Digital Rebar to spin up both and compare. Since Digital Rebar is an open source platform, you can explore and expand on it.

The Docker Swarm deployment is just a starting point for us. We want to hear your provisioning ideas and work to turn them into reality.

2015 Container Review

It’s been a banner year for container awareness and adoption so we wanted to recap 2015.  For RackN, container acceleration is near to our heart because we both enable and use them in fundamental ways.   Look for Rob’s 2016 predictions on his blog.

The RackN team has truly deep and broad experience with containers in practical use.  In the summer, we delivered multiple container orchestration workloads including Docker Swarm, Kubernetes, Cloud Foundry, StackEngine and others.  In the fall, we refactored Digital Rebar to use Docker Compose with dramatic results.  And we’ve been using Docker since 2013 (yes, “way back”) for ops provisioning and development.

To make it easier to review that experience, we are consolidating a list of our container-related posts for 2015.

General Container Commentary

RackN & Digital Rebar Related

My OpenStack 2016 Analysis: Continue Core, Stop Confusing Ecosystem, Change Hybrid Approach

Note: I’ve served on the OpenStack Foundation board since its formation.  There I’ve led the “define the core” DefCore efforts.  I’m on the 2016 ballot for another term.

I love using end-of-year posts to reflect (2015, I got 6 of 7!) and try to set direction (OpenStack needed to prioritize).  This year, I wanted to use a simple “Continue, Stop, Change” format that I’ve used for employee reviews in the past.  These three items reflect how I think OpenStack needs to respond to the industry in 2016.

Continue: Focus on Core

OpenStack adoption continues around the legacy projects that traditionally define it for most users.  A lot of work and focus is needed around those projects including better representation of user, operator and product interests.

Towards that end, we’ve made amazing progress on DefCore implementation and I’m excited about the discussions that it’s been generating.  It’s driving pragmatic decisions about what is required (running a vm?) and how to verify compliance.  It’s also driving conceptual thinking around OpenStack principles and ecosystem priorities.

DefCore’s focus on using community tests to define OpenStack creates a very concrete and defensible standard.  Ultimately, it comes back to users and operators demanding compliance for the work to remain meaningful.

Overall, to focus on core function, OpenStack needs to empower new groups within the community.  Expanding the roles of the Product Group, Operators, and User Committee is key to giving a voice to these constituents.

OpenStack core must transition into a consistent platform or it risks becoming irrelevant.

Stop: Confusing The Ecosystem

I’m concerned that the “big tent” governance change puts OpenStack into conflict with both community vendors and the larger cloud market.  I believe we’re creating an echo chamber of OpenStack-on-OpenStack focus that forces adjacent efforts (like software defined networking, storage and container orchestration) to be either inside or outside the community circle.  While that artificially grows the apparent contributor base, it creates artificial walls between OpenStack and the dominant cloud platforms.

Let me illustrate using my own company, RackN.  We create cross-platform devops orchestration based on an open source project, Digital Rebar.  We consider ourselves to be part of the OpenStack community and have supported deploying the core.  We also provision bare metal and deploy Kubernetes, Docker Swarm and Cloud Foundry.  That has apparent conflicts with big tent Ironic and Magnum projects.  Does that make RackN competitive with OpenStack or not?

It hurts OpenStack when competitive alignment is unclear because vendors, users and operators are uncertain about where to make investments.  In the end, users will choose simpler alternatives.

I believe the Board needs to define the OpenStack ecosystem strategy in a clear and actionable way.  If re-elected, that will be my Board priority for 2016.

Change: Hybrid Approach

My top 2016 prediction (post coming) is that we accept “hybrid IT as the new normal.”  That means that we stop driving towards an IT mono-culture and start working towards tools that embrace heterogeneity.  Along those lines, OpenStack needs to evaluate our relative position and strengths in a hybrid cloud landscape.

Interoperability between OpenStack implementations is important because it reduces friction; however, we need to expand our thinking to ensure interoperability with other platforms.  That does not mean simply cloning the AWS APIs!  It means that we need to consider users and operator needs against a spectrum of private and public infrastructures.

A broader hybrid approach also suggests that duplicating cloud-locked adjacent services (e.g. Cloud Formation vs. Heat) does not address user needs.

I am advocating that OpenStack encourage a cloud-neutral ecosystem, outside of the OpenStack tent, that works across a wide range of platforms.  That leads to user choice and creates a truly open platform.

And, of course, more Community Discussion!

I want to thank the many people who participated in a heated twitter discussion in advance of this post.  There are many great ideas and counter-points covered in that lengthy dialog.

Do you have an opinion about what OpenStack should stop, accelerate or change?  I’d love to hear it!

¡Sí, Sí! That’s a Two Hundred Node Metal Docker Swarm Deployment

Today, RackN and Ubiquity Hosting announced a 200 node Docker Swarm deployment on hosted bare metal.

Leveraging the current Digital Rebar core and the RackN Swarm workload, this reference deployment was automatically configured using the same components that also work in a desktop VM deployment. That high-fidelity deployment allows operators to start learning quickly on small systems, then grow to AWS and, if warranted, smoothly transition to metal at scale.

This deployment represents RackN starting a new chapter with Digital Rebar because it demonstrates a commitment to deploy on any infrastructure: cloud, metal or something in between.

The RackN team started this journey with a “composable ops” vision that allows operators to mix and match. That spans both vendor physical resources and software components such as operating systems, software defined networking and platforms. In the 200 node Swarm cluster, physical infrastructure is provisioned by Ubiquity Hosting, not Digital Rebar or RackN.  Historically, RackN focused on private infrastructure.  Now, users get the option of best-in-class metal deployment without having to own the infrastructure.

We experienced the futility of making Ops homogeneous and declared defeat.

Accepting that each data center has individual Ops was pivotal. Digital Rebar embraces heterogeneity at the most fundamental architectural level. Our system approach and unique composable abstractions allow users to make deployments portable between any infrastructure with existing tooling and operational processes. Portability means that we can eliminate the fidelity gap both as we scale and between deployments.

When multiple scales and sites can share deployment automation, we can finally work together on addressing critical operational issues like scale, high availability and upgrades.

This 200 node deployment demonstrates more than scale and the deployment of the latest Docker technology. It is a milestone on the path toward sharable production operations.