Can we control Hype & Over-Vendoring?

Q: Is over-vendoring when you’ve had too much to drink?
A: Yes, too much Kool Aid.

There’s a lot of information here – skip to the bottom if you want to see my recommendation.

Last week on TheNewStack, I offered eight ways to keep Kubernetes on the right track (abridged list here) and felt that item #6 needed more explanation and some concrete solutions.

  1. DO: Focus on a Tight Core
  2. DO: Build a Diverse Community
  3. DO: Multi-cloud and Hybrid
  4. DO: Be Humble and Honest
  5. AVOID: “The One Ring” Universal Solution Hubris
  6. AVOID: Over-Vendoring (discussed here)
  7. AVOID: Coupling Installers, Brokers and Providers to the core
  8. AVOID: Fast Release Cycles without LTS Releases

What is Over-Vendoring?  It’s when vendors drive their companies’ brands ahead of the health of the project, generally by feeding an aggressive hype cycle as they scramble to jump on the bandwagon.

Hype can be very dangerous for projects (see David Cassel’s TNS article) because it makes it easy to bypass user needs and the boring scale/stabilization work in favor of vendor differentiation.  Unfortunately, common use-cases do not drive differentiation and are invisible when it comes to company marketing budgets.  That boring common core has the effect of creating a tragedy of the commons, which undermines collaboration on shared code bases.

The solution is to aggressively keep the project core small so that vendors have specific and limited areas of coopetition.  

A small core means we do not compel collaboration in many areas of the project.  That drives competition and diversity, which can be confusing.  The temptation to endorse or nominate companion projects is risky because of the hype cycle.  Endorsements can create a bias that actually hurts innovation, because early or loud vendors do not generally create the best long-term approaches.  I’ve heard this described as “people doing the real work don’t necessarily have time to brag about it.”

Keeping to a small-core mantra drives a healthy plug-in model where vendors can differentiate.  It also ensures that projects can succeed with a bounded set of core contributors and support infrastructure.  That means we should not measure success by commits, committers or lines of code, because these will drop as projects successfully modularize.  My recommendation for a key success metric is the ratio of committers to ecosystem members and users.

Tracking an improving ratio of core to ecosystem shows improving efficiency of investment.  That’s a better sign of health than raw project growth.
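To make that metric concrete, here’s a minimal sketch (in Python, with invented numbers; neither the function nor the data comes from any real project’s tooling) of how a project could track ecosystem members per core committer over time:

    # Illustrative sketch only: track the proposed health metric as the
    # ratio of ecosystem participants to core committers.
    # The quarterly counts below are made-up sample data.

    def ecosystem_ratio(core_committers: int, ecosystem_members: int) -> float:
        """Return ecosystem members per core committer; higher is healthier."""
        return ecosystem_members / core_committers

    # Hypothetical snapshots: (core committers, ecosystem members + users)
    snapshots = {
        "2015-Q4": (400, 1200),
        "2016-Q2": (380, 2500),
        "2016-Q4": (360, 4100),
    }

    for quarter, (core, ecosystem) in snapshots.items():
        ratio = ecosystem_ratio(core, ecosystem)
        print(f"{quarter}: {ratio:.1f} ecosystem members per committer")

If the ratio climbs while the committer count stays flat or even shrinks, the project is getting more leverage out of its core investment, which is exactly the efficiency signal I’m after.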

It’s important to note that there is a serious risk of under-vendoring too!

We must recognize and support vendors in open source communities because they sustain the project through direct contributions and by bringing in users.  For a healthy ecosystem, we need to ensure that vendors can fairly profit.  That means they must be able to use their own brand in combination with the project’s brand.  The Apache Project is the anti-pattern here: its very strict “no vendor” trademark marketing guidelines can strand projects without good corporate support.

I’ve come to believe that it’s important to allow vendors to market open source projects’ brands; however, there also need to be limits on how they position the project.

How should this co-branding work?  My thinking is that vendor claims about a project should be managed in a consistent and common way.  Since we’re keeping the project core small, that should help limit the scope of the claims.  Vendors that want to make ecosystem claims should be given clear spaces for marketing their own brand in participation with the project brand.

I don’t pretend that this is easy!  Vendor marketing is planned quarters ahead of when open source projects are ready for it; that’s part of what feeds the hype cycle.  It means that projects will be saying no to some free marketing from their ecosystem.  Ideally, we’re saying yes to the right parts at the same time.

Ultimately, hype control means saying no to free marketing.  For an open source project, that’s a hard but essential decision.


Time vs. Materials: $1,000 printer power button

Or why I teach my kids to solder

I just spent four hours doing tech support over a $0.01 part on an $80 inkjet printer.  According to my wife, my hours were just the latest drop in the budget: a long line of comrades-in-geekdom had already been trying to get her office printer printing.  All told, at least $1,000 worth of experts’ time was invested.

It really troubles me when the ratio of support cost to purchase cost exceeds 10x for a brand new device: $1,000 of expert time on an $80 printer is more than 12x.

In this case, a stuck power button cover forced the printer into a cryptic QA test mode.  It was obvious that the button was stuck, but not so obvious that this effectively crippled the printer.   Ultimately, my 14 year old stripped the printer down, removed the $0.01 button cover, accidentally stripped a cable, soldered it back together, and finally repaired the printer.

From a cost perspective, my wife’s office would have been far smarter to dump the whole thing into the trash and get a new one.   Even returning it to the store was hardly worth the time lost dealing with the return.

This thinking really, really troubles me.

I have to wonder what it would cost our industry to create products that were field maintainable, easier to troubleshoot, and less likely to fail.  The automotive industry seems to be ahead of us in some respects: they create products that are reliable, field maintainable, and conform to standards (given Toyota’s recent woes, do I need to reconsider this statement?).  Unfortunately, they are slow to innovate and have become highly constrained by legislative oversight.  Remember the old “If Microsoft made cars” joke?

For the high tech industry, I see systemic challenges driven by a number of market pressures:

  1. Pace of innovation: our use of silicon is just graduating from crawling to baby steps.  Products from the 90s look like stone tablets compared to the 2010s’ offerings.   This is not just lipstick: these innovations disrupt design processes, making it expensive to maintain legacy systems.
  2. Time to market: global competitive pressure to penetrate new markets gives new-customer acquisition precedence in design.
  3. Lack of standards: standards can’t keep up with innovation and market pressures.  We’re growing to accept the consensus model for ad hoc standardization.  Personally, I like this approach, but we’re still learning how to keep it fair.
  4. System complexity: To make systems feature rich and cost effective, we make them tightly coupled.  This is great at design time, but eliminates maintainability because it’s impossible to isolate and replace individual components.
  5. Unequal wealth and labor rates:  Our good fortune and high standard of living make it impractical for us to spend time repairing or upgrading.  We save this labor by buying new products made in places where labor is cheap.  These cheap goods often lack quality and the cycle repeats.
  6. Inventory costs: Carrying low-demand, non-standard goods in inventory is expensive.   I can buy a printer with thousands of resistors soldered onto a board for $89, while buying the same resistors alone would cost more than the whole printer.  Can anyone afford to keep the parts needed for maintenance in stock?
  7. Disposable resources: We deplete limited resources as if they were unlimited.  Not even going to start on this rant…

Looking at these pressures makes the challenge appear overwhelming, but we need to find a way out of this trap.

That sounds like the subject for a future post!