Composability is Critical in DevOps: let’s break the monoliths

This post was inspired by my DevOps.com Git for DevOps post and is an evolution of my “Functional Ops (the cake is a lie)” talks.

2016 is the year we break down the monoliths.  We’ve spent a lot of time talking about monolithic applications and microservices; however, there’s an equally deep challenge in ops automation.

Anti-monolith composability means making our automation into function blocks that can be chained together by orchestration.

What is going wrong?  We’re building fragile, tightly coupled automation.

Most of the automation scripts that I’ve worked with become very long, interconnected sequences that reach well beyond the actual application they are trying to install.  For example, Kubernetes needs etcd as a datastore.  The current model is to include the etcd install in the install script.  The same is true for SDN install/configuration and post-install tests and dashboard UIs.  The simple “install Kubernetes” quickly explodes into a kitchen sink of related adjacent components.

Those installs quickly become fragile and bloated.  Even worse, they have hidden dependencies.  What happens when etcd changes?  Now we’ve got to track down all the references to it buried in etcd-based applications.  Further, we don’t get the benefits of etcd deployment improvements like secure or scaled configurations.
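For contrast, a composable version might look something like the rough sketch below. The function names and component details are illustrative assumptions on my part, not Digital Rebar or Kubernetes installer code; the point is that each component gets its own block and the orchestration layer owns the sequence.

```python
# Illustrative sketch only: names and structure are hypothetical, not an
# actual installer API.  Each block owns one component and declares its
# inputs; the orchestrator owns the sequence.
from typing import Callable, Dict, List

def install_etcd(env: Dict) -> Dict:
    """Install/verify etcd and publish connection details for later blocks."""
    # ... real work would happen here (packages, cluster bootstrap, TLS) ...
    env["etcd_endpoints"] = ["https://10.0.0.11:2379"]  # example value
    return env

def install_sdn(env: Dict) -> Dict:
    """Install the SDN layer against whatever datastore the environment offers."""
    assert "etcd_endpoints" in env, "dependency is declared, not buried"
    # ... SDN/CNI configuration ...
    return env

def install_kubernetes(env: Dict) -> Dict:
    """Install Kubernetes on the already-prepared datastore and network."""
    assert "etcd_endpoints" in env
    # ... control plane and kubelet setup ...
    return env

# The orchestration layer chains the blocks; swapping in an improved etcd
# block does not require editing the Kubernetes block.
PIPELINE: List[Callable[[Dict], Dict]] = [install_etcd, install_sdn, install_kubernetes]

def run(env: Dict) -> Dict:
    for step in PIPELINE:
        env = step(env)
    return env

if __name__ == "__main__":
    run({"site": "lab"})
```

When etcd changes in a layout like this, only the etcd block changes; everything downstream keeps consuming the same declared output.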

What can we do about it?  Resist the urge to create vertical silos.

It’s tempting and fast to create automation that works in a very prescriptive way for a single platform, operating system and tool chain.  The work of creating abstractions between configuration steps seems like a lot of overhead.  Even if you create those boundaries or reuse upstream automation, you’re likely to be vulnerable to changes within that component.  All these concerns drive operators to walk away from working collaboratively with each other and with developers.

Giving up on collaborative Ops hurts us all and makes it impossible to engineer excellent operational tools.  

Don’t give up!  Like git for development, we can do this together.

DevOps workers, your mother was right: always bring a clean Underlay.

Why did your mom care about underwear? She wanted you to have good hygiene. What is good Ops hygiene? It’s not as simple as keeping up with the laundry, but the idea is similar. It means that we’re not going to get surprised by something in our environment that we’d taken for granted. It means that we have a fundamental level of control to keep clean. Let’s explore this in context.

I’ve struggled with the term “underlay” for infrastructure for a long time. At RackN, we generally prefer the term “ready state” to describe getting systems prepared for install; however, underlay fits very well when we consider it as the foundation for building up a platform like Kubernetes, Docker Swarm, Ceph or OpenStack. Even more than single-operator applications, these community-built platforms require carefully tuned and configured environments. In my experience, getting the underlay right dramatically reduces installation challenges for the platform.

What goes into a clean underlay? All your infrastructure and most of your configuration.

Just buying servers (or cloud instances) does not make a platform. Cloud underlay is nearly as complex, but let’s assume metal here. To turn nodes into a cluster, you need to set up their RAID and BIOS. Generally, you’ll also need to configure out-of-band management IPs and security. Those RAID and BIOS settings are specific to the function of each node, so you’d better get them right. Then install the operating system. That will need access keys, IP addresses, names, NTP, DNS and proxy configuration just as a start. Before you connect to the wider network, make sure to point updates at your local mirror and handle site-specific requirements. Installing Docker or an SDN layer? You may have to patch your kernel. It’s already overwhelming and we have not even gotten to the platform-specific details!
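As one small piece of that hygiene, pre-flight checks can verify a few of these underlay details before the platform install starts. This is only a minimal sketch under assumed names and commands (the mirror hostname, the systemd timedatectl call and the kernel floor are placeholders, not RackN tooling):

```python
# Minimal, illustrative underlay pre-flight checks.  Expected values
# (mirror hostname, kernel floor) are placeholders.
import socket
import subprocess

def dns_resolves(name: str = "mirror.internal.example") -> bool:
    """The local package mirror must resolve before any install can start."""
    try:
        socket.gethostbyname(name)
        return True
    except socket.gaierror:
        return False

def ntp_synced() -> bool:
    """Clock skew breaks certificates and distributed datastores like etcd."""
    # Assumes a systemd host where `timedatectl show` is available.
    out = subprocess.run(
        ["timedatectl", "show", "-p", "NTPSynchronized", "--value"],
        capture_output=True, text=True)
    return out.stdout.strip() == "yes"

def kernel_at_least(minimum=(4, 4)) -> bool:
    """Some Docker and SDN features need a minimum kernel version."""
    release = subprocess.run(["uname", "-r"], capture_output=True, text=True).stdout
    major, minor = (int(x) for x in release.split(".")[:2])
    return (major, minor) >= minimum

if __name__ == "__main__":
    checks = {"dns": dns_resolves(), "ntp": ntp_synced(), "kernel": kernel_at_least()}
    for name, ok in checks.items():
        print(f"{name}: {'ok' if ok else 'FAIL'}")
```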

Buried in this long sequence of configurations are critical details about your network, storage and environment.

Any mistake here and your install goes off the rails. Imagine that you’re building a house: it’s very expensive to change the plumbing lines once the foundation is poured. Thankfully, software configuration is not concrete, but the cost of dealing with a bad setup is just as frustrating.

The underlay is the foundation of your install. It needs to be automated and robust.

The challenge compounds once an installation is already in progress because adding the application changes the underlay. When (not if) you make a deploy mistake, you’ll have to either reset the environment or make your deployment idempotent (meaning, able to run the same script multiple times safely). Really, you need to do both.

Why do you need both fast resets and component idempotency? They each help you troubleshoot issues but in different ways. Fast resets ensure that you understand the environment your application requires. Post install tweaks can mask systemic problems that will only be exposed under load. Idempotent action allows you to quickly iterate over individual steps to optimize and isolate components. Together they create resilient automation and good hygiene.
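To show what “idempotent” means in practice, here is a minimal sketch (the account name and directory are illustrative, not from any particular installer) where every step checks current state before acting, so re-running the script converges instead of failing:

```python
# Illustrative idempotent steps: safe to run any number of times.
import os
import subprocess

def ensure_user(name: str) -> None:
    """Create a system account only if it does not already exist."""
    if subprocess.run(["id", name], capture_output=True).returncode == 0:
        return  # already converged, nothing to do
    subprocess.run(["useradd", "--system", name], check=True)

def ensure_directory(path: str, mode: int = 0o750) -> None:
    """Make sure a data directory exists with the expected permissions."""
    os.makedirs(path, exist_ok=True)
    os.chmod(path, mode)

if __name__ == "__main__":
    # Re-running this sequence converges to the same state (requires root).
    ensure_user("etcd")                # example component from earlier
    ensure_directory("/var/lib/etcd")
```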

In my experience, the best deployments involved a non-recoverable/destructive performance test followed by a completely fresh install to reset the environment. It’s the Ops equivalent of a full dress rehearsal to flush out issues. I’ve seen similar concepts promoted around the Netflix Chaos Monkey pattern.

If your deployment is too fragile to risk breaking in development and test, then you’re signing up for an ongoing life of firefighting. In that case, you’ll definitely need all the “clean underwear” you can find.

We need DevOps without Borders! Is that “Hybrid DevOps?”

The RackN team has been working on making DevOps more portable for over five years.  Being portable between vendors, sites, tools and operating systems means that our automation needs to be hybrid in multiple dimensions by design.

Why drive for hybrid?  It’s about giving users control.

I believe that applications should drive the infrastructure, not the reverse.  I’ve heard many times that the “infrastructure should be invisible to the user.”  Unfortunately, lack of abstraction and composability makes it difficult to code across platforms.  I like the term “fidelity gap” to describe the cost of these differences.

What keeps DevOps from going hybrid?  Shortcuts related to platform entangled configuration management.

Everyone wants to get stuff done quickly; however, we make the same hard-coded ops choices over and over again.  Big bang configuration automation that embeds sequence assumptions into the script is not just technical debt, it’s fragile and difficult to upgrade or maintain.  The problem is not configuration management (that’s a critical component!), it’s the lack of system level tooling that forces us to overload the configuration tools.

What is system level tooling?  It’s integrating automation that expands beyond configuration into managing sequence (aka orchestration), service orientation, script modularity (aka composability) and multi-platform abstraction (aka hybrid).

My ops automation experience says that these four factors must be solved together because they are interconnected.

What would a platform that embraced all these ideas look like?  Here is what we’ve been working towards with Digital Rebar at RackN:

Mono-Infrastructure IT vs. Hybrid DevOps:

  1. Locked into a single platform vs. portable between sites and infrastructures with layered ops abstractions.
  2. Limited interop between tools vs. adaptive to mix and match best-for-job tools.  Use the right scripting for the job at hand and never force migrate working automation.
  3. Ad hoc security based on site specifics vs. secure using repeatable automated processes.  We fail at security when things get too complex to change and adapt.
  4. Difficult to reuse ops tools vs. composable modules that enable Ops Pipelines.  We have to be able to interchange parts of our deployments for collaboration and upgrades.
  5. Fragile configuration management vs. service orientation that simplifies API integration.  The number of APIs and services is increasing; configuration management alone is not sufficient.
  6. Big bang “configure then deploy” scripting vs. orchestrated action, because sequence matters.  Building a cluster requires sequential (often iterative) operations between nodes in the system.  We cannot build robust deployments without ongoing control over the order of operations.

Should we call this “Hybrid DevOps?”  That sounds so buzz-wordy!

I’ve come to believe that Hybrid DevOps is the right name.  More technical descriptions like “composable ops” or “service oriented devops” or “cross-platform orchestration” just don’t capture the real value.  All these names fail to capture the portability and multi-system flavor that drives the need for user control of hybrid in multiple dimensions.

Simply put, we need devops without borders!

What do you think?  Do you have a better term?

Deployment Fidelity – reducing tooling transitions for fun and profit

At the OpenStack Tokyo summit, I gave a short interview on Deployment Fidelity.  I’ve come to see the fidelity problem more broadly as the hybrid DevOps challenge that I described in my 2016 Predictions post as the end of mono-clouds.  Thanks Ken Hui from OpenStack Superuser TV for resurfacing this link!

How do platforms die? One step at a time [the Fidelity Gap]

The RackN team is working on the “Start to Scale” position for Digital Rebar that targets the IT industry-wide “fidelity gap” problem.  When we started on the Digital Rebar journey back in 2011 with Crowbar, we focused on “last mile” problems in metal and operations.  Only in the last few months did we recognize the importance of automating smaller “first mile” desktop and lab environments.

A fidelity gap is created when work done on one platform (a developer laptop) does not translate faithfully to the next platform (a QA lab).  Since there are gaps at each stage of deployment, we end up with the ops staircase of despair.

These gaps hide defects until they are expensive to fix and make it hard to share improvements.  Even worse, they keep teams from collaborating.

With everyone trying out Container Orchestration platforms like Kubernetes, Docker Swarm, Mesosphere or Cloud Foundry (all of which we deploy, btw), it’s important that we can gracefully scale operational best practices.

For companies implementing containers, it’s not just about turning their apps into microservice-enabled, immutable rock stars: they also need to figure out how to implement the underlying platforms at scale.

My example of fidelity gap harm is OpenStack’s “all in one, single node” DevStack.  There is no useful single system OpenStack deployment; however, that is the primary system for developers and automated testing.  This design hides production defects and usability issues from developers.  These are issues that would be exposed quickly if the community required multi-instance development.  Even worse, it keeps developers from dealing with operational consequences of their decisions.

What are we doing about fidelity gaps?  We’ve made it possible to run and faithfully provision multi-node systems in Digital Rebar on a relatively light system (16 GB RAM, 4 cores) using VMs or containers.  That system can then be fully automated with Ansible, Chef, Puppet and Salt.  Because of our abstractions, if a deployment works in Digital Rebar then it can scale up to 100s of physical nodes.

My takeaway?  If you want to get to scale, start with the end in mind.

Deploy to Metal? No sweat with RackN’s new Ansible Dynamic Inventory API

This content was originally posted by Ansible & RackN, so I’ve added a video demo.  Also, see Ansible’s original post for more details about the multi-vendor “Simple OpenStack Initiative.”

The RackN team takes our already super easy Ansible integration to a new level with added SSH Key control and dynamic inventory with the recent OpenCrowbar v2.3 (Drill) release.  These two items make full metal control more accessible than ever for Ansible users.

The platform offers full key management.  You can add keys at the system, deployment (group of machines) and machine levels.  These keys are operator settable and can be added and removed after provisioning has been completed.  If you want to control access on a per-server or group-of-servers basis, OpenCrowbar provides that control via our API, CLI and UI.

We also provide an API path for Ansible dynamic inventory.  Using the simple Python client script (reference example), you can instantly pull a complete, up-to-date node inventory of your system.  The inventory data includes items like number of disks, CPUs and amount of RAM.  If you’ve grouped machines in OpenCrowbar, those groups are passed to Ansible.  Even better, the metadata schema includes the networking configuration and machine status.

With no added configuration, you can immediately use Ansible as your multi-server CLI for ad hoc actions and installation using playbooks.
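For readers who have not written one before, an Ansible dynamic inventory script is simply a program that prints inventory JSON when called with --list. The sketch below shows the shape of such a script; the endpoint URL, credentials and field names are placeholders I made up, not the actual OpenCrowbar API or the reference example mentioned above:

```python
#!/usr/bin/env python
# Sketch of an Ansible dynamic inventory script.  The endpoint, credentials
# and field names are placeholders, not the real OpenCrowbar API.
import json
import sys

import requests

ENDPOINT = "https://crowbar.example.local/api/v2/nodes"  # placeholder URL

def build_inventory() -> dict:
    nodes = requests.get(ENDPOINT, auth=("user", "secret")).json()
    inventory = {"all": {"hosts": []}, "_meta": {"hostvars": {}}}
    for node in nodes:
        name = node["name"]                     # placeholder field names
        inventory["all"]["hosts"].append(name)
        inventory["_meta"]["hostvars"][name] = {
            "ansible_host": node.get("admin_ip"),
            "ram_mb": node.get("ram"),
            "cpus": node.get("cpus"),
        }
        for group in node.get("groups", []):    # pass node groups to Ansible
            inventory.setdefault(group, {"hosts": []})["hosts"].append(name)
    return inventory

if __name__ == "__main__":
    # Ansible invokes the script with --list and reads JSON from stdout.
    if "--list" in sys.argv:
        print(json.dumps(build_inventory()))
    else:
        print(json.dumps({}))
```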

Of course, the OpenCrowbar tools are also available if you need remote power control or want a quick reimage of the system.

RackN respects that data centers are heterogeneous.  Our vision is that your choice of hardware, operating system and network topology should not break DevOps deployments!  That’s why we work hard to provide useful abstracted information.  We want to work with you to help make sure that OpenCrowbar provides the right details to create best practice installations.

For working with bare metal, there’s no simpler way to deliver consistent, repeatable results.

DefCore Update – slowly taming the Interop hydra.

Last month, the OpenStack board charged the DefCore committee to tighten the specification. That means adding more required capabilities to the guidelines and reducing the number of exceptions (“flags”).  Read the official report by Chris Hoge.

Cartography by Dave McAlister is licensed under a Creative Commons Attribution 4.0 International License.

It turns out interoperability is really, really hard in heterogeneous environments because it’s not just about the API – implementation choices change behavior.

I see this in both the cloud and physical layers. Since OpenStack is set up as a multi-vendor and multi-implementation (private/public) ecosystem, getting us back to a shared least common denominator is a monumental challenge. I also see a similar legacy in physical ops with OpenCrowbar, where each environment is a snowflake and operators constantly reinvent the same tooling instead of sharing expertise.

Lack of commonality means the industry wastes significant effort recreating operational knowledge for marginal return. Increasing interop means reducing variations which, in turn, increases the stakes for vendors seeking differentiation.

We’ve been working on DefCore for years so that we could get to this point. Our first real Guideline, 2015.03, was an intentionally low bar with nearly half of the expected tests flagged as non-required. While the latest guidelines do not add new capabilities, they substantially reduce the number of exceptions granted. Further, we are in the process of adding networking capabilities for the planned 2016.01 guideline (ready for community review at the Tokyo summit).

Even though these changes take a long time to become fully required for vendors, we can start testing interoperability of clouds using them immediately.

While the DefCore guidelines, via Foundation licensing policy, do have teeth, vendors can take up to three years [1] to comply. That may sound slow, but the real authority of the program comes from customer and vendor participation, not enforcement [2].

For that reason, I’m proud that DefCore has become a truly diverse and broad initiative.

I’m further delighted by the leadership demonstrated by Egle Sigler, my co-chair, and Chris Hoge, the Foundation staff member leading DefCore implementation.  Happily, their enthusiasm is also shared by many other people with long-term DefCore investments, including mid-cycle attendees Mark Voelker (VMware), Catherine Diep (IBM), who is also a RefStack PTL, Shamail Tahir (EMC), Carol Barrett (Intel), Rocky Grober (Huawei), Van Lindberg (Rackspace), Mark Atwood (HP), Todd Moore (IBM) and Vince Brunssen (IBM). We also had four DefCore-related project PTLs join our mid-cycle: Kyle Mestery (Neutron), Nikhil Komawar (Glance), John Dickinson (Swift), and Matthew Treinish (Tempest).

Thank you all for helping keep DefCore rolling and working together to tame the interoperability hydra!

[1] On the current schedule – changes will now take one year to become required – vendors have a three-year tail! Three years? Since the last two Guidelines are active, the soonest the networking capabilities can become required is after 2016.01 is superseded in January 2017. Vendors who (re)license just before that can use the mark for 12 months (until January 2018!).

[2] How can we make this faster? Simple: consumers need to demand that their vendor pass the latest guidelines. DefCore provides Guidelines, but consumers’ checkbooks are the real power in the ecosystem.

When Two Right Decisions Make Things Wrong [Digital Management Series, 7 of 8]

In this 7th installment of an 8 post series, Brad Szollose and Rob Hirschfeld invite you to share in our discussion about failures, fights and frightening transformations going on around us as digital work changes workplace deliverables, planning and culture.

The Duality Trap is one digital management danger that’s so destructive, we felt this series would be incomplete without discussing it. It’s especially problematic for Digital Native managers and is often mishandled by traditionally trained ones too.

Each apple is delicious. Which would you choose?

The Duality Trap occurs when there are multiple right answers to a question. How often does this happen? Every single time. In fact, it’s a side effect of good digital management. Why?

In hierarchical management, the boss is always right so there’s no duality. Since we’ve thrown out hierarchical decision making, every team action is potentially subject to review by everyone on the team. The very loose structure that allows individual autonomy and rapid response has the natural consequence of also creating cognitive friction when individuals approach problems differently.

These different approaches are generally all valid ways to progress.

Digital natives fundamentally understand choice duality and may present alternatives just to ensure team diversity. Unfortunately, while there may be multiple valid solutions, the team can only pick one [1]. Nine times out of ten, the team will simply pick and move on. In the outlier case, they are counting on you, their digital manager, to resolve the selection.

Here’s the trap: resolving a duality does not mean “picking the winner” because having a winner implies the choices were unequal. If your team is stuck, then there are at least two good choices.

If you are a traditional manager, the temptation to become Ronald “the decider” Reagan is nearly irresistible. Under the title-equals-authority-to-decide model, you must justify your salary by making a “right” decision. You’ve been waiting for this moment to exert your authority for days. But, unbeknownst to “the decider,” this big moment will immediately undermine the team’s autonomy. On the other hand, if you are a digital native, then this is the moment you’ve been dreading because you’ve got to be decisive. Despite 5 to 10 really good choices, you have to make ONE. So a digital native can appear to be indecisive. However, not deciding is the worst possible choice. So what should you do?

First, remember that teams are strengthened when they are clearly aligned around an intent.

Resolving the duality trap is an opportunity to emphasize your intent. The best approach is to ask your team to review the options again in light of your shared objectives. In many cases, they will be able to resolve the issue from that perspective. If not, then you should:

  1. validate that all options could work
  2. have the team state desired outcomes that can be measured
  3. pick the option that most aligns with your intent
  4. ask whether the option the team chooses fits the overall agenda: for example, speed of delivery (but quality drops) versus deep diving into the project (upping the quality) at the risk of missing a crucial deadline; this may narrow down your choices
  5. ask the team to monitor the results

In this case, even as you are driving a decision, you are still sharing the responsibility for the outcome with the team. It’s important for the team that you focus on the desired results and not on which course was chosen. It is very likely that any of the choices would work out and achieve positive outcomes.

So it’s OK to get out of the trap of picking “best” options when there are multiple right choices.  

In an age of ambiguity, it is easy to fall into the duality trap. Just remember, there is no one way to get it all done these days. Which means a GREAT people manager realizes two things: a) your people need more of your support than ever – this comes in the form of training, finding solutions, and building a team that has the right chemistry – and b) they need you to get out of their way.

Get ready as we wrap up this series in post 8: Transitioning from a Bossy Boss into a Digital Age Leader.

[1] If you are in a situation where you can allow divergence for minimal cost (like which phone brand people use), then do not force your team to choose!

Setting The Tempo: 12 Tips for Winning at Digital Management [post 6 of 8]

In this 6th installment of an 8 post series, Brad Szollose and Rob Hirschfeld invite you to share in our discussion about failures, fights and frightening transformations going on around us as digital work changes workplace deliverables, planning and culture.

Our advice comes down to a very simple concept: today’s leaders MUST walk the talk.

Management authority in digital work comes from being the owner of the intention. Your team is working towards a shared goal. That is their motivation, and digital managers are required to provide a clear goal – this is what we call the intent of your organization.  So a manager’s job comes down to sharing your organization’s intent.

Like the 80’s “management by walking around,” walking the intent means that you spend most of your time helping your team understand the goals, not telling them how to achieve greatness. Managers provide alignment, not direction.

What does digital management look like?

  1. Pick a tone and repeat, repeat, repeat – You are the Jazz leader setting the tempo and harmony, your consistency allows others to improvise. If you set the stage, you can encourage others to take the lead off your base. Strong management is not about control. Strong management is about support. Support that streamlines productivity.
  2. Encourage cross-communication – Better, make people talk to each other. It’s OK to proxy, but don’t carry opinions for your reports as if they were your own. And don’t be upset if someone goes “above” you in the hierarchy. There is no such thing anymore.
  3. 1-to-1 communication is healthy – do a lot of it. 1) Don’t make decisions that way. 2) Don’t get stuck having 1-to-1s with the same people. 3) A lot of informal/small interactions are OK. Diversity is key. You may have to relay/rehash/proxy a whole 1-to-1 discussion for your team.
  4. Learn your Culture – This may be the hardest thing for leaders to do because they always assumed that culture didn’t matter. In today’s work environments, culture matters more than you could imagine. Just ask Peter Drucker!  Knowing who does what is important. Knowing how each individual communicates and what their strengths and weaknesses are is even more important.
  5. “Yes, AND…” – The cornerstone of Improv is saying yes to ideas, even fragile ones. Then it becomes about testing, experimenting and pushing boundaries. This is where innovation comes from. Saying “yes, and” instead of “no, but” ensures things get customized. Yes, you might fail, but fail fast and move on.
  6. Be forceful on time keeping – make sure debates and discussions have known upfront limitations. Movement is good, uncertainty is frustrating.
  7. Check and adjust – checking and then choosing not to change is just as important. The key is to involve your team in the check-ups.  When you decide not to adjust, that’s also a decision to communicate.
  8. Don’t apologize for or delay making top down decisions – not all actions are team discussions. Sometimes, the team process is tiring and hard so the most strident voice wins.  No team always agrees so don’t be afraid to play the role of arbitrator.
  9. Fix personnel issues quickly – allowing people to abuse the system drives away the behaviors that you want. Focus instead on strengths, and become the mediator.  Be very sensitive to stereotypes and even mild name calling. Focus on the work, the outcomes and how everyone can do better. Then hold them accountable to their word.
  10. Ask people to define their own expected results – then keep them accountable. When they miss, have a no-blame post-mortem that focuses on improvement. A technique called the Feedback Sandwich helps: start a difficult conversation with something a team member did right, then work your way through the conversation to the “meat” of the sandwich – what they did that needed help, improvement or an admission that they might NOT be the person best qualified for that task. Let them state this on their own by asking better questions.
  11. Assume failures are from the system, not the individual – work together to fix the system. Communication and hand-offs are usually the biggest failures when meeting deadlines. Find solutions from the team. After all, who knows development operations better than the people working in it?
  12. Be careful about highlighting “grenade divers” [1] – All organizations need heroes, but feeding them will erode team performance. Once, they may have saved the day. When it becomes a habit, they might be creating the chaos they are always solving in order to have job security. After all, they seem to be the only one who can solve that problem…every time. In a symphony only a few get the solo. In Jazz, you play both solo and support. That flexibility gives your team strength.

These ideas may push you outside your comfort zone.  Find a peer for support!  You need to be strong to lead from the back.

Even without formal hierarchies, manager roles are still needed to drive value and make the hard calls. Before, that translated into making all the decisions. The new challenge is to allow for free falls (post 4) while sharing the responsibility.

If you walk your intent and communicate goals consistently then your team will be able to follow your lead.

Next up: When Two Right Decisions Make Things Wrong

[1] Grenade Diving or “wearing the cape” is a team anti-pattern where certain individuals are compelled to take dramatic actions to rescue an adverse situation.  While they often appear to be team heroes (Brad saved the batch of cookies again!  Who forgot to set the timer?), the result always distracts from the people who work hard to avoid emergencies.  We want people to step up when required, but it should not become a pattern.

DNS is critical – getting physical ops integrations right matters

Why DNS? Maintaining DNS is essential to scale ops.  It’s not as simple as naming servers because each server will have multiple addresses (IPv4, IPv6, teams, bridges, etc.) on multiple NICs depending on the system’s function and applications. Plus, errors in DNS are hard to diagnose.

I love talking about the small Ops things that make a huge impact in the quality of automation.  Things like automatically building a squid proxy cache infrastructure.

Today, I get to rave about the DNS integration that just surfaced in the OpenCrowbar code base. RackN CTO, Greg Althaus, just completed work that incrementally updates DNS entries as new IPs are added into the system.

Why is that a big deal?  There are a lot of names & IPs to manage.

In physical ops, every time you bring up a physical or virtual network interface, you are assigning at least one IP to that interface. For OpenCrowbar, we are assigning two addresses: IPv4 and IPv6.  Servers generally have 3 or more active interfaces (e.g.: BMC, admin, internal, public and storage) so that’s a lot of references.  It gets even more complex when you factor in DNS round robin or other common practices.

Plus mistakes are expensive.  Name resolution is an essential service for operations.

I know we all love memorizing IPv4 addresses (just wait for IPv6!), so accurate naming is essential.  OpenCrowbar already aligns the address 4th octet (Admin .106 goes to the same server as BMC .106), but that’s not always practical or useful.  This is not just a Day 1 problem – DNS drift or staleness becomes an increasingly challenging problem when you have to reallocate IP addresses.  The simple fact is that registering IPs is not the hard part of this integration – it’s the flexible and dynamic updates.

What DNS automation did we enable in OpenCrowbar?  Here’s a partial list:

  1. recovery of names and IPs when interfaces and systems are decommissioned
  2. use of flexible naming patterns so that you can control how the systems are registered
  3. ability to register names in multiple DNS infrastructures
  4. ability to understand sub-domains so that you can map DNS by region
  5. ability to register the same system under multiple names
  6. wildcard support for CNAMEs
  7. ability to create a DNS round-robin group and keep it updated

But there’s more! The integration includes both BIND and PowerDNS integrations. Since BIND does not have an API that allows incremental additions, Greg added a Golang service to wrap BIND and provide incremental updates and deletes.
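To give a feel for what incremental updates look like (as opposed to rewriting whole zone files), here is a rough sketch of upserting a single A record through the PowerDNS HTTP API. The server URL, API key and zone are placeholders, and this is not the OpenCrowbar integration code; check the API docs for the PowerDNS version you run.

```python
# Rough sketch of an incremental record update via the PowerDNS HTTP API.
# The endpoint, API key and zone are placeholders.
import requests

PDNS = "http://dns.example.local:8081"   # placeholder API endpoint
API_KEY = "changeme"                     # placeholder key
ZONE = "example.local."

def upsert_a_record(fqdn: str, ip: str, ttl: int = 300) -> None:
    """Add or replace one A record without touching the rest of the zone."""
    payload = {"rrsets": [{
        "name": fqdn,
        "type": "A",
        "ttl": ttl,
        "changetype": "REPLACE",         # REPLACE upserts; DELETE removes
        "records": [{"content": ip, "disabled": False}],
    }]}
    resp = requests.patch(
        f"{PDNS}/api/v1/servers/localhost/zones/{ZONE}",
        json=payload, headers={"X-API-Key": API_KEY})
    resp.raise_for_status()

if __name__ == "__main__":
    # Example: register a node's admin interface as it comes online.
    upsert_a_record("node-106.example.local.", "10.0.1.106")
```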

When we talk about infrastructure ops automation and ready state, this is the type of deep integration that makes a difference and is the hallmark of the RackN team’s ops focus with RackN Enterprise and OpenCrowbar.