OpenStack’s Big Pivot: our suggestion to drop everything and focus on being a Kubernetes VM management workload

TL;DR: Sometimes paradigm changes demand a rapid response, and I believe unifying OpenStack services under Kubernetes has become such an urgent priority that we must freeze all other work until this effort has been completed.

See also Rob’s VMblog.com post, “How is OpenStack so dead AND yet so very alive”

By design, OpenStack chose to be unopinionated about operations.

That made sense for a multi-vendor project that was deeply integrated with the physical infrastructure and virtualization technologies.  The cost of that decision has been high for everyone because we did not converge on shared practices that would drive ease of operations, upgrades or tuning.  We ended up with waves of vendors vying to have the fastest, simplest and openest version.

Tragically, install became an area of competition instead of an area of collaboration.

Containers and microservice architectures (as required by Kubernetes and other container schedulers) are providing an opportunity to correct this course.  The community is already moving towards containerized services, with significant interest in using Kubernetes as the underlay manager for those services.  I’ve laid out the arguments for, and challenges ahead of, this approach in other places.

These technical challenges involve tuning the services for cloud native configuration and immutable designs.  They include making sure that project configurations can be injected into containers securely and that communication between the infrastructure services can handle container lifecycles.  Adjacent concerns like networking and storage also have to be considered.  These are all solvable problems that can be resolved more quickly if the community acts together to target just one open underlay.
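As a concrete illustration of the configuration-injection piece, here is a minimal sketch using the official Kubernetes Python client: the service config lives in a Secret rather than in the image, so the container stays immutable. The service name, namespace, image and config contents are hypothetical placeholders, and a real deployment would more likely use Helm charts or an operator, but the underlying pattern is the same.

```python
# Minimal sketch: inject a service's config as a Kubernetes Secret and mount it
# read-only into an otherwise immutable container. Names, namespace, image and
# config contents below are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running in-cluster
core = client.CoreV1Api()

# Keep the config out of the image; store it as a Secret instead.
secret = client.V1Secret(
    metadata=client.V1ObjectMeta(name="keystone-conf", namespace="openstack"),
    string_data={"keystone.conf": "[DEFAULT]\ndebug = False\n"},
)
core.create_namespaced_secret(namespace="openstack", body=secret)

# Mount the Secret at the path the service expects, read-only.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="keystone", namespace="openstack"),
    spec=client.V1PodSpec(
        containers=[client.V1Container(
            name="keystone",
            image="example.org/openstack/keystone:immutable",
            volume_mounts=[client.V1VolumeMount(
                name="conf", mount_path="/etc/keystone", read_only=True)],
        )],
        volumes=[client.V1Volume(
            name="conf",
            secret=client.V1SecretVolumeSource(secret_name="keystone-conf"))],
    ),
)
core.create_namespaced_pod(namespace="openstack", body=pod)
```

When the config changes, you replace the Secret and roll the pod rather than editing files inside a running container, which is exactly the immutable lifecycle the services have to tolerate.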

The critical fact is that the changes are manageable and unifying the solution makes the project stronger.

Using Kubernetes for OpenStack service management does not eliminate or even solve the challenges of deep integration.  OpenStack already has abstractions that manage vendor heterogeneity and those abstractions are a key value for the project.  Kubernetes solves a different problem: it manages the application services that run OpenStack with a proven, understood pattern.  By adopting this pattern fully, we finally give operators consistent, shared and open upgrade, availability and management tooling.

Having a shared, open operational model would help drive OpenStack faster.

There is a risk to this approach: driving Kubernetes as the underlay for OpenStack will force OpenStack services into a narrower scope as an infrastructure service (aka IaaS).  This is a good thing in my opinion.  We need multiple abstractions when we build effective IT systems.

The idea that we can build a universal single abstraction for all uses is a dangerous distraction; instead, we need to build platform layers collaboratively.

While I initially resisted it, I have become enthusiastic about this approach.  RackN has been working hard on the upgradable & highly available Kubernetes on Metal prerequisite.  We’ve also created prototypes of the fully integrated stack.  We believe strongly that this work should be done as a community effort and not within a distro.

My call for a Kubernetes underlay pivot embraces that collaborative approach.  If we can keep these platforms focused on their core value then we can build bridges between what we have and our next innovation.  What do you think?  Is this a good approach?  Contact us if you’d like to work together on making this happen.

See also Rob’s VMblog.com post, “How is OpenStack so dead AND yet so very alive to SREs?”

What does it take to Operate Open Platforms? Answers in Datanauts 72

Did I just let OpenStack ops off the hook…?  Kubernetes production challenges…?

I had a lot of fun in this wide-ranging Datanauts discussion with unicorn herders Chris Wahl and Ethan Banks.  I like the three-section format because it gives us a chance to deep dive into distinct topics and includes some out-of-band analysis by the hosts; however, that means you need to keep listening through the commercial breaks to hear the full podcast.

Three parts?  Yes, Chris and Ethan like to save the best questions for last.

In Part 1, we went deep into the industry operational and business challenges uncovered by the OpenStack project. Particularly, Chris and I go into “platform underlay” issues which I laid out in my “please stop the turtles” post. This was part of the build-up to my SRE series.

In Part 2, we explore my operations-focused view of the latest developments in container schedulers with a focus on Kubernetes. Part of the operational discussion goes into architecture “conceits” (or compromises) that allow developers to get the most from cloud native design patterns. I also make a pitch for using proven tools to run the underlay.

In Part 3, we go deep into DevOps automation topics of configuration and orchestration. We talk about the design principles that help drive “day 2” automation and why in-place upgrades should be an industry priority.  Of course, we do cover some Digital Rebar design too.

Take a listen and let me know what you think!

On Twitter, we’ve already started a discussion about how much developers should care about infrastructure. My opinion (posted here) is that the DevOps idea of developers “owning” infrastructure caused a partial rebellion towards containers.

SRE role with DevOps for Enterprise [@HPE podcast]


My focus on the SRE series continues… At RackN, we see a coming infrastructure explosion in both complexity and scale. Unless our industry radically rethinks operational processes, current backlogs will escalate and stability, security and sharing will suffer.

Yes, DevOps and SRE are complementary

In this short 16-minute podcast, HPE’s Stephen Spector and I discuss how DevOps and SRE thinking overlap and where the differences are.  We also discuss how enterprises should evaluate Site Reliability Engineering as a function and where it fits in their organization.

Spiraling Ops Debt & the SRE coding imperative

This post is part of an SRE series grounded in the ideas inspired by the Google SRE book.

2/13 Update: You can hear an INTERACTIVE DISCUSSION based on this post with Eric Wright on his podcast, GC Online.

Every Ops team I know is underwater and doesn’t have the time to catch their breath.

Why does the load increase and leave Ops behind?  It’s because IT is increasingly fragmented and siloed by both new tech and past behaviors.  Many teams simply step around their struggling compatriots and spin up yet more Ops work adding to the backlog. Dashing off yet another Ansible playbook to install on AWS is empowering but ultimately adds to the Ops sustaining backlog.


Ops Tsunami

That terrifying observation two years ago led me to create this graphic showing how operations is getting swamped by new demand for infrastructure.

It’s not just the amount of infrastructure: we’ve got an unbounded software variation problem too.

It’s unbounded because we keep rapidly evolving new platforms and those platforms are built on rapidly evolving components.  For example, Kubernetes has a 3-month release cycle.  That’s really fast; however, it is built on other components like Docker, SDN and operating systems that also have fast release cycles.  That means that even a single Kubernetes infrastructure has many moving parts that may not be consistent within your own organization.  For example, cloud deploys may use CoreOS while internal ones use a corporate-approved CentOS.

And the problem will get worse because infrastructure is cheap and developer productivity is improving.

Since then, we’ve seen a container-fueled explosion in developer productivity and an AI-driven rise in new hardware-flavored instances. Both are powerful drivers of infrastructure consumption; however, we have not seen a matching leap in operations tooling (that’s a future post topic!).

That’s why Google’s SRE teams cap operational work at 50% of their time, reserving the other half for automation and engineering.

If operational work takes more than 50% of the team’s time, the team slowly sinks under the growing load.  If you are not actively decreasing that load via automation, your team goes underwater and basic ops hygiene fails.
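The arithmetic is easy to sketch. Below is a toy Python model with purely illustrative numbers (my own assumptions, not anything from the SRE book): ops demand grows along with the infrastructure, and only the hours a team spends on automation push that growth back down.

```python
# Toy model of the 50% rule. All parameters are illustrative assumptions:
# a 40-hour weekly capacity, ops demand that grows 5% per week with the
# infrastructure, and automation hours that each trim a little off that growth.

def year_end_backlog(automation_fraction, capacity=40.0, demand=20.0,
                     weeks=52, growth=0.05, payoff=0.0025):
    """Backlog (hours) after a year for a team spending the given fraction
    of its capacity on automation instead of toil."""
    backlog = 0.0
    for _ in range(weeks):
        toil_budget = capacity * (1 - automation_fraction)
        automation_hours = capacity * automation_fraction
        # Demand rises with infrastructure growth, minus the automation payoff.
        demand *= 1 + growth - payoff * automation_hours
        backlog = max(0.0, backlog + demand - toil_budget)
    return backlog

for frac in (0.1, 0.3, 0.5):
    print(f"{frac:.0%} automation -> year-end backlog of {year_end_backlog(frac):.0f} hours")
```

With these assumptions the 50% team ends the year with no backlog while the 10% team is buried; change the growth or payoff rates and the exact numbers move, but the shape of the result does not.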

This is not optional – if you are behind now then it will just get worse!

The escape from the cycle is to get help.  Stop writing automation that you can buy or re-use.  Get help running it.  Don’t waste time solving problems that other people have solved.  That may mean some upfront learning and investment but if you aren’t getting out of your own way then you’ll be run over.

 

Who’s the grown-up here?  It’s the VM, not the Iron!

This ANALOGY exploring Virtual vs Physical Ops is a joint posting by Rob Hirschfeld, RackN, and Russell Doty, Red Hat.

Compared to provisioning physical servers, getting applications running in a virtual machine is like coaching an adult soccer team – the players are ready, you just have to get them to the field and set the game in motion.  The physical servers can be compared to a grade school team – tremendous potential, but they can require a lot of coaching and intervention. And they don’t always play nice.

Russell Doty and I were geeking on the challenges of configuring physical servers when we realized that our friends in cloud just don’t have these problems.  When they ask for a server, it’s delivered to them on a platter with an SLA.  It’s a known configuration – calm, rational and well-behaved.  By comparison, hardware is cranky, irregular and sporadic.  To us, it sometimes feels like we are more in the babysitting business. Yes, we’ve had hardware with the colic!

Continuing the analogy, physical operations requires a degree of child-proofing and protection that is (thankfully) hidden behind cloud abstractions of hardware.  More importantly, it requires a level of work that adults take for granted, like diaper changes (BIOS/RAID setup), food preparation (network configs), and self-entertainment (O/S updates).

And here’s where the analogy breaks down…

The irony here is that the adults (VMs) are the smaller, weaker part of the tribe.  Not only that, these kids have to create the environment that the “adults” run on.

If you’re used to dealing with adults to get work done, you’re going to be in for a shock when you ask the kids to do the same job.

That’s why the cloud is such a productive platform for software.  It’s an adults-only environment – the systems follow the rules and listen to your commands.  Even further, cloud systems know how to dress themselves (get an O/S), rent an apartment (get an IP and connect) and even get credentials (get a driver’s license).

These “little things” that are taken for granted in the cloud are not automatic behaviors for physical infrastructure.

Of course, there are trade-offs – most notably performance and “scale up” scalability. The closer you need to get to hardware performance, on CPU, storage, or networks, the closer you need to get to the hardware.

It’s the classic case of standardizing vs. customization. And a question of how much time you are prepared to put into care and feeding!

To thrive, OpenStack must better balance dev, ops and business needs.

OpenStack has grown dramatically in many ways but we have failed to integrate development, operations and business communities in a balanced way.

My most urgent observation from Paris is that these three critical parts of the community are having vastly different dialogs about OpenStack.

At the Conference, business people were talking about core, stability and utility while the developers were talking about features, reorganizing and expanding projects. The operators, unfortunately segregated in a different location, were trying to figure out how to share best practices and tools.

Much of this structural divergence was intentional and should be (re)evaluated as we grow.

OpenStack events are split into distinct focus areas: the conference for business people, the summit for developers and specialized days for operators. While this design serves a purpose, the community needs to be taking extra steps to ensure communication. Without that communication, corporate sponsors and users may find it easier to solve problems inside their walls than outside in the community.

The risk is clear: vendors may find it easier to work on a fork where they have business and operational control than work within the community.

Inside the community, we are working to help resolve this challenge with several parallel efforts. As a community member, I challenge you to get involved in these efforts to ensure the project balances dev, biz and ops priorities.  As a board member, I feel it’s a leadership challenge to make sure these efforts converge and that’s one of the reasons I’ve been working on several of these efforts:

  • OpenStack Project Managers (formerly “Hidden Influencers”) across companies in the ecosystem are getting organized into their own team. Since these managers effectively direct the majority of OpenStack developers, this group will allow development priorities to be coordinated across the ecosystem.
  • The DefCore Committee works to define a smaller subset of the overall OpenStack Project that will be required for vendors using the OpenStack trademark and logo. This helps the business community focus on interoperability and stability.
  • The Technical Committee (TC) led “Big Tent” concept aligns with the DefCore work and attempts to create a stable base platform while making it easier for new projects to enter the ecosystem. I’ve got a lot to say about this, but frankly, without safeguards, this scares people in the ops and business communities.
  • The lack of an operations “ready state” baseline keeps the community from being able to share best practices – this has become a pressing need.  I’d like to suggest OpenCrowbar, from outside of OpenStack, as a way to help provide an ops-neutral common starting point. Having the OpenStack developer community attempt to create an installer using OpenStack has proven a significant distraction and only further distances operators from the community.

We need to get past seeing the project primarily as a technology platform.  Infrastructure software has to deliver value as an operational tool for enterprises.  For OpenStack to thrive, we must make sure the needs of all constituents (Dev, Biz, Ops) are being addressed.

McCrory on “Cloud Confusion”

or, why is everyone DaaZed and Confused?

Dave McCrory, my co-worker at Dell, posted an interesting analysis of how the different roles people hold in IT dramatically influence their perception of cloud services.

I think that part of the confusion is how difficult it is for each category of cloud user to see the challenges and issues facing the other classes of user.

We see this in spades during internal PaaS discussions.  People with development backgrounds have a fundamentally different concept of PaaS benefits.  In many cases, those same benefits (delegation to a provider for core services like a database) are considered disadvantages by the other class of user (you want someone else to manage what!).

Ultimately, the applications are at the core of any XaaS conversation and define what “type” of cloud needs to be consumed.