10x Faster Today but 10x Harder to Maintain Tomorrow: the Cul-De-Sac problem

I’ve been digging into what it means to be a site reliability engineer (SRE) and thinking about my experience trying to automate infrastructure in a way that scales dramatically better.  I’m not thinking about scale in number of nodes, but in operator efficiency.  The primary way to create that efficiency is to limit site customization and to improve reuse.  Those changes need to start before the first install.

As an industry, we must address the “day 2” problem in collaboratively developed open software before users’ first install.

Recently, RackN asked the question “Shouldn’t we have Shared Automation for Commodity Infrastructure?”, which talked about the fact that we, as an industry, keep writing custom automation for what should be commodity servers.  This “snowflaking” happens because there’s enough variation at the data center system level that it’s very difficult to share and reuse automation on an ongoing basis.

Since variation enables innovation, we need to solve this problem without limiting diversity of choice.


Happily, platforms like Kubernetes are designed to hide these infrastructure variations from developers.  That means we can expect a productivity explosion for the huge number of applications that can narrowly target platforms.  Unfortunately, that does nothing for the platforms themselves or for infrastructure-bound applications.  For this lower-level software, we need to accept that operations environments are heterogeneous.


I realized that we’re looking at a multidimensional problem after watching communities like OpenStack struggle to evolve operations practice.

It’s multidimensional because we are building the operations practice simultaneously with the software itself.  To make things even harder, the infrastructure and dependencies are also constantly changing.  Since this degree of rapid multi-factor innovation is the new normal, we have to plan for our operations automation itself to be just as upgradable.

Unless we upgrade both the software AND the related deployment automation, each deployment becomes a cul-de-sac after day 1.

For open communities, that cul-de-sac challenge limits projects’ ability to feed operational improvements back into the user base and makes it harder for early users to stay current.  These challenges limit the virtuous feedback cycles that help communities grow.  

The solution is to approach shared project deployment automation as also being continuously deployed.

This is a deceptively hard problem.

This is a hard problem because each deployment is unique, and those differences make it hard to absorb community advances without being constantly broken.  That is one of the reasons why companies opt out of the community and into vendor distributions.  While vendors are critical to the ecosystem, the practice ultimately limits the growth and health of the community.

Our approach at RackN, as reflected in open Digital Rebar, is to create management abstractions that isolate deployment variables based on system-level concerns.  Unlike project-generated templates, this approach absorbs heterogeneity and brings in the external information that often complicates project deployment automation.
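To make that separation concrete, here is a minimal sketch in plain Python of the general idea (not Digital Rebar’s actual API; the parameter names are invented for illustration): site-specific values live in their own layer, so the shared automation can keep being upgraded without overwriting local choices.

    # Hypothetical layered deployment parameters (illustrative only, not Digital Rebar's API).

    # Maintained by the community and upgraded frequently.
    SHARED_DEFAULTS = {
        "ntp_servers": ["0.pool.ntp.org"],
        "provision_os": "ubuntu-16.04",
        "proxy_url": None,
    }

    # Owned by the operator; survives upgrades of the shared layer.
    SITE_OVERRIDES = {
        "ntp_servers": ["time.example.local"],
        "proxy_url": "http://proxy.example.local:3128",
    }

    def resolve(shared: dict, site: dict) -> dict:
        """Site values win, but only for parameters the shared layer defines,
        so local drift stays declared instead of silently snowflaking."""
        unknown = set(site) - set(shared)
        if unknown:
            raise KeyError(f"site overrides reference unknown parameters: {unknown}")
        return {**shared, **site}

    if __name__ == "__main__":
        print(resolve(SHARED_DEFAULTS, SITE_OVERRIDES))

The guard in resolve() is the point: local customization stays visible and reviewable instead of becoming an invisible fork of the shared automation.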

We believe that this is a general way to solve the broader problem and invite you to participate in helping us solve the Day 2 problems that limit our open communities.

Why is RackN advancing OpenStack on Kubernetes?

Yesterday, RackN CEO Rob Hirschfeld described the remarkable progress in running OpenStack on Kubernetes using Helm (article link).  Until now, RackN had not been willing to officially support OpenStack deployments; however, we now believe that this approach is a game changer for OpenStack operators, even if they are not actively looking at Kubernetes.

We are looking for companies that want to join in this work and fast-track it into production. If this is interesting, please contact us at sre@rackn.com.

Why should you sponsor? Current OpenStack operators facing “fork-lift upgrades” should want to find a path like this one that ensures future upgrades are baked into the plan. This approach provides a fast track to a general-purpose, enterprise-grade, upgradable Kubernetes infrastructure.

Here is Rob’s Demo

Rob’s Original Blog Post

RackN revisits OpenStack deployments with an eye on ongoing operations. I’ve been an outspoken skeptic of a Joint OpenStack Kubernetes Environment because I felt that the technical hurdles of cloud native architecture would prove challenging.

I was wrong: I underestimated how fast these issues could be addressed.

… read the rest at Beyond Expectations: OpenStack via Kubernetes Helm (Fully Automated with Digital Rebar) — Rob Hirschfeld

OpenStack Interop, Container Security, Install & Open Source Posts

In case you missed it, I posted A LOT of content this week on other sites covering topics for OpenStack Interop, Container Security, Anti-Universal Installers and Monetizing Open Source.  Here are link-bait titles & blurbs from each post so you can decide which topics pique your interest.

Thirteen Ways Containers are More Secure than Virtual Machines on TheNewStack.com

Last year, conventional wisdom had it that containers were much less secure than virtual machines (VMs)! Since containers have such thin separating walls, it was easy to paint these back-door risks with a broad brush.  Here’s a reality check: front-door attacks and unpatched vulnerabilities are much more likely than these back-door hacks.

It’s Time to Slay the Universal Installer Unicorn on DevOps.com 

While many people want a universal “easy button installer,” they also want it to work on their unique snowflake of infrastructures, tools, networks and operating systems.  Because there is so much necessary variation and change, it is better for open source projects to give up on trying to own an installer and instead focus on making their required components more resilient and portable.

King of the hill? Discussing practical OpenStack interoperability on OpenStack SuperUser

Can OpenStack take the crown as cloud king? In our increasingly hybrid infrastructure environment, the path to the top means making it easier for users to defect from the current leaders (Amazon AWS, VMware) instead of asking them to blaze new trails. Here are my notes from a recent discussion about that exact topic…

Have OpenSource, Will Profit?! 5 thoughts from Battery Ventures OSS event on RobHirschfeld.com

As “open source eats software,” the profit imperative becomes ever more important to figure out.  We have to find ways to fund this development or acknowledge that software will simply become waste IP and largess from mega brands.  The latter outcome is not particularly appealing or innovative.

Thanks! I’m enjoying my conversation with you

I write because I love to tell stories and to think about how actions we take today will impact tomorrow.  Ultimately, everything here is about a dialog with you because you are my sounding board and my critic.  I appreciate when people engage me about posts here and extend the conversation into other dimensions.  Feel free to call me on points and question my position – that’s what this is all about.

Thank you for being a part of my blog and joining in.  I’m looking forward to hearing more from you.

During the OpenStack Summit, I got to lead and participate in some excellent presentations and panels.  While my theme for this summit was interoperability, many other topics were discussed.

I hope you enjoy them.

Did one of these topics stand out?  Is there something I missed?  Please let me know!

DevOps approaches to upgrade: Cube Visualization

I’m working on my OpenStack summit talk about DevOps upgrade patterns and got to a point where there are three major vectors to consider:

  1. Step Size (shown as X axis): do we make upgrades in small frequent steps or queue up changes into larger bundles? Larger steps mean that there are more changes to be accommodated simultaneously.
  2. Change Leader (shown as Y axis): do we upgrade the server or the client first? Regardless of the choice, the followers should be able to handle multiple protocol versions if we are going to have any hope of a reasonable upgrade.
  3. Safeness (shown as Z axis): do the changes preserve the data and productivity of the entity being upgraded? It is simpler to assume that we just add new components and remove old ones; however, this approach carries significant risk or redundancy requirements.

I’m strongly biased towards continuous deployment because I think it reduces risk and increases agility; however, laying out all the vertices of the upgrade cube helps to visualize where the costs and risks are being added in the traditional upgrade models.

Breaking down each vertex (a quick code sketch of the cube’s corners follows this list):

  1. Continuous Deploy – core infrastructure is updated on a regular (usually daily or faster) basis
  2. Protocol Driven – like the shift to HTML5, clients are tolerant of multiple protocols, and changes take a long time to roll out
  3. Staged Upgrade – a tightly coordinated migration between major versions over a short period of time, in which all of the components in the system step from one version to the next together.
  4. Rolling Upgrade – the system operates a small band of versions simultaneously; the components with the oldest versions are in the process of being removed, and their capacity is replaced with new nodes running the latest versions.
  5. Parallel Operation – two server systems operate and clients choose when to migrate to the latest version.
  6. Protocol Stepping – roll out clients that support multiple versions, then upgrade the server infrastructure only after all clients can support both versions.
  7. Forced Client Migration – change the server infrastructure and then force the clients to upgrade before they can reconnect.
  8. Big Bang – you have to shut down all components of the system to upgrade it
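
To make the geometry explicit, here is a small Python sketch that enumerates the 2^3 = 8 corners of the cube. The axis value labels are my own shorthand for the three vectors above; which named pattern lands on which corner is what the visualization itself assigns, so the code deliberately leaves that mapping out.

    # The upgrade cube: three binary axes give 2**3 = 8 corners,
    # matching the eight patterns listed above. Axis value labels
    # are illustrative shorthand, not the talk's exact wording.
    from itertools import product

    AXES = {
        "step_size":     ("small frequent steps", "large bundled steps"),   # X
        "change_leader": ("server first",         "client first"),          # Y
        "safeness":      ("state preserved",      "add new / remove old"),  # Z
    }

    for corner in product(*AXES.values()):
        print(dict(zip(AXES, corner)))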

This type of visualization helps me identify costs and options. It’s not likely to get much time in the final presentation so I’m hoping to hear in advance if it resonates with others.

PS: Like this visualization? Check out my “magic 8 cube” for cloud hosting options.

Installing SSD + Windows8 = Blank Primary Monitor (fixable!)

I could not find the solution to this easily, so I’m leaving a breadcrumb trail here… I did not keep the links so I cannot give proper attribution but will try to pay it forward.

Short version: Try HDMI/VGA output if your Win8 primary monitor is blank.  Then update the BIOS.

Long version:

I decided to update my wife’s Dell Inspiron N5110 laptop to an SSD and Windows 8.  Sadly, the machine’s factory config had a very slow HDD and that was impacting the system’s total performance.  Replacing the HDD with an SSD required major surgery to the laptop – it is not for the faint of heart.

After installing the SSD and installing Windows 8 (painless!), the system booted through the splash screen and then turned off the display.  Yes, it simply went completely blank.

I stumbled upon a tip that suggested that the system was working but using the HDMI output.  That proved correct.  I was able to complete the configuration using HDMI and/or VGA monitors.

Even after completing the setup and updating, the primary monitor was still blank; yet it was clearly working, because the BIOS screens and splash screen displayed on it.  Deleting the Video Card from Devices did NOT work.

Ultimately, I found a site that recommended updating the BIOS (was A09, now A11 from 11/2012).  The BIOS update corrected the problem.

I should have known to update the BIOS and firmware before starting the upgrade.  I hope you learn from my experience.

Oh… the SSD + Win8 combination made an AMAZING performance difference.  It’s like a brand-new, 10x faster laptop and an excellent investment.  I’ve become a bit of a Linux apologist; however, I was pleasantly surprised to find Windows 8 to be very usable once I learned the latest hot-key assignments (Search on Win key -> Win+F).