Supply Chain Transparency drives Open Source adoption, 6 reasons besides cost

Author’s note: If you don’t believe that software is manufactured then go directly to your TRS80, do not collect $200.

I’m becoming increasingly impatient with people stating that “open source is about free software” because it’s blatantly untrue as a primary driver for corporate adoption.  Adopting open source often requires companies (and individuals) to trade off one cost (license expense) for another (building expertise).  It is exactly the same balance we make between insourcing, partnering and outsourcing.

Full Speed Ahead

When I probe companies about what motivates their use of open source, they universally talk about transparency of delivery, non-single-vendor ownership of the source and their ability to influence as critical selection factors.  They are generally willing to invest more to build expertise if it translates into these benefits.  Viewed in this light, licensed software or closed services both cost more and introduce significant business risks where open alternatives exist.

This is not new: it’s basic manufacturing applied to IT

We had this same conversation in the 90s around manufacturing as that industry joltingly shifted from batch to just-in-time (aka Lean) manufacturing.  The key driver for that transformation was improved integration and management of supply chains.   We could review witty doctoral dissertations about inventory, drum-buffer-rope flow and economic order quantity; however, trust my summary: it all comes down to companies needing supply chain transparency.

As technology becomes more and more integral to delivering any type of product, companies must extend their need for supply chain transparency into their IT systems too.   That does not mean that companies expect to self-generate (insource) all of their technology.  The goal is to manage the supply chain, not to own every step.   Smart companies find a balance between control of owning their supply (making it themselves) and finding a reliable supply (multi-source is preferred).  If you cannot trust your suppliers then you must create inventory buffers and rigid contracts.  Both of these defenses limit agility and drive systemic dysfunction.  This was the lesson learned from Lean Just-In-Time manufacturing.

What does this look like for IT supply chains?

A healthy supply chain allows companies to address these issues.  They can:

  1. Change vendors / suppliers and get equivalent supply
  2. Check the status of deliveries (features)
  3. Review and impact quality
  4. Take deliverables in small frequent batches
  5. Collaborate with suppliers to manage & control the process
  6. Get visibility into the pipeline

None of these items are specific to software; instead, they are general attributes of a strong supply chain.  In a closed system, companies lose these critical supply chain values.  While tightly integrated partnerships can provide these benefits, they carry a cost premium and inherently limit vendor choice.

This sounds great!  What’s the cost?

You need to consider the level of supply chain transparency that’s right for you.  Most companies are no more likely to refine their own metal than to build from pure open source repositories.  There are transparency benefits from open source even when it comes from a single supplier.  Yet in some cases, like the OpenStack community, the systems are so essential that they warrant investment as core competencies and joining the contributing community.  Even in those cases, most rely on vendors to package and extend their chosen open source software.

But that misses the point: contributing to an open source project is not required to manage your IT supply chain.  Instead, you need to build operational infrastructure and processes that are open source ready.  That may require investing in skills and capabilities related to underlying technologies like the operating system, database or configuration management.  For cloud, it is likely to require more investment in fault-tolerant architecture and API-driven deployment.  Companies that are strong in these skills are better able to manage an open source IT supply chain.  In fact, they are better able to manage any IT supply chain because they have more control.
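
To make “API-driven deployment” concrete, here is a minimal sketch of the idea; the endpoint, resource names and fields are hypothetical and do not refer to any particular product’s API.

    # Hypothetical sketch of API-driven deployment: the endpoint, resource
    # names and fields below are invented for illustration only.
    import requests

    API = "https://deploy.example.local/v1"

    def ensure_node(name: str, role: str) -> dict:
        """Idempotently request a node with a given role from a deployment API."""
        existing = requests.get(f"{API}/nodes/{name}", timeout=10)
        if existing.status_code == 200:
            return existing.json()  # already provisioned; nothing to do
        created = requests.post(f"{API}/nodes", json={"name": name, "role": role}, timeout=10)
        created.raise_for_status()
        return created.json()

    if __name__ == "__main__":
        print(ensure_node("web-01", role="app-server"))

The point is not the specific tool: a team that can describe and drive its deployments through calls like this, rather than manual steps, has the control it needs to manage any supplier’s software.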

So, it’s not about cost…

When considering motivations for open source adoption, cost (or technology sizzle) should not be the primary factor.  In my experience, the most successful implementations focus first on operational readiness, project stability and program transparency.  Those priorities indicate companies are thinking with an IT supply chain focus.

PS: If you found this interesting, you’ll also like my upstream imperative post.

my lean & open source reading list – recommendations welcome!

I think it’s worth pulling together a list of essential books that should be required reading for people on Lean & open source teams (like mine):

  • Basis for the team values that we practice: The Five Dysfunctions of a Team: A Leadership Fable by Patrick Lencioni (amazon)
  • This is a foundational classic for team building: Peopleware: Productive Projects and Teams (Second Edition) by Tom DeMarco & Timothy Lister (amazon)
  • This novel is a good primer for Lean and DevOps: The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win by Gene Kim, Kevin Behr & George Spafford (amazon)
  • Business focus on Lean: The Lean Startup: How Today’s Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses by Eric Ries (amazon)
  • Foundational (and easy) reading about Lean: The Goal: A Process of Ongoing Improvement by Eliyahu M. Goldratt (amazon)
  • One of my favorites on Lean/Agile: Implementing Lean Software Development: From Concept to Cash by Mary & Tom Poppendieck (amazon)
  • Should be required reading for open source (as close to “Open Source for Dummies” as you can get): The Cathedral & the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary by Eric S. Raymond (amazon)
  • Culture change: Liquid Leadership: From Woodstock to Wikipedia–Multigenerational Management Ideas That Are Changing the Way We Run Things by Brad Szollose (amazon)
  • More Team Building – this one is INTERACTIVE! http://www.strengthsfinder.com/home.aspx

There are other notable candidates, but I think this is enough for now.  I’m always looking for recommendations!  Please post your favorites in the comments!

7 takeaways from DevOps Days Austin

I spent Tuesday and Wednesday at DevOpsDays Austin and continue to be impressed with the enthusiasm and collaborative nature of the DOD events.  We also managed to have a very robust and engaged twitter backchannel thanks to an impressive pace set by Gene Kim!

I’ve still got a 5+ post backlog from the OpenStack summit, but wanted to do a quick post while it’s top of mind.

My takeaways from DevOpsDays Austin:

  1. DevOpsDays spends a lot of time talking about culture.  I’m a huge believer in the importance of culture as the foundation for the type of fundamental changes that we’re making in the IT industry; however, it’s also a sign that we’re still in the minority if we have to talk about culture evangelism.
  2. Process and DevOps are tightly coupled.  It’s very clear that Lean/Agile/Kanban are essential for DevOps success (nice job by Dominica DeGrandis).  No one even suggested DevOps+Waterfall as a joke (but Patrick Debois had a picture of a xeroxed butt in his preso which is pretty close).
  3. Still need more dev people to show up!  My feeling is that we’ve got a lot of operators who are engaging with developers and fewer developers who are engaging with operators (the “opsdev” people).
  4. Chef Omnibus installer is very compelling.  This approach addresses issues with packaging that were created because we did not have configuration management.  Now that we have good tooling, we can separate the concerns of bits, configuration, services and dependencies (see the sketch after this list).  This is one thing to watch and something I expect to see in Crowbar.
  5. The old mantra still holds: If something is hard, do it more often.
  6. Eli Goldratt’s The Goal is alive again thanks to Gene Kim’s smart new novel, The Phoenix Project, about DevOps and IT (I highly recommend both; start with Kim).
  7. Not DevOps, but 3D printing is awesome.  This is clearly a game changing technology; however, it takes some effort to get right.  Dell brought a Solidoodle 3D printer to the event to try and print OpenStack & Crowbar logos (watch for this in the future).
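
Here is a conceptual sketch of the separation of concerns behind the Omnibus-style approach in item 4.  This is Python, not Chef, and every path, package name and setting is hypothetical: the “bits” install as a self-contained bundle that is never edited in place, while configuration and the service lifecycle are handled as separate steps.

    # Conceptual sketch (Python, not Chef): the "bits" ship as a self-contained
    # bundle that is never edited in place; configuration and the service
    # lifecycle are separate concerns.  All paths and names are hypothetical.
    import json
    import subprocess
    from pathlib import Path

    CONFIG = Path("/etc/myapp/app.json")  # config lives outside the bundle

    def install_bits(package: str) -> None:
        """Install the immutable, dependency-bundled package (e.g. under /opt)."""
        subprocess.run(["dpkg", "-i", package], check=True)

    def write_config(settings: dict) -> None:
        """Render configuration from data, separately from the bits."""
        CONFIG.parent.mkdir(parents=True, exist_ok=True)
        CONFIG.write_text(json.dumps(settings, indent=2))

    def restart_service(name: str = "myapp") -> None:
        """Service management is a third, independent concern."""
        subprocess.run(["systemctl", "restart", name], check=True)

    if __name__ == "__main__":
        install_bits("myapp_1.0.0_amd64.deb")
        write_config({"listen_port": 8080, "db_host": "db.example.local"})
        restart_service()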

I’d be interested in hearing what other people found interesting!  Please comment here and let me know.

Crowbar and our Pivot (or, how we slipped and shipped Grizzly)

My team at Dell uses Lean process because it forces us to be honest about making hard choices. Our recent decision to pivot back to Crowbar 1.x for the OpenStack Grizzly release is a great example of how the pivot process works.

4/24 note: I have a longer post and ISO for Grizzly on Crowbar waiting until we enter QA. The Crowbar community is already very active around this work and you’re encouraged to join.

Like any refactor, there was schedule risk when we started the Crowbar 2.x release. To mitigate this risk, we made two critical choices. First, we chose to advance the OpenStack barclamps on the 1.x code base in parallel with the 2.x work. Second, we set a pivot date for the team to decide whether to release Grizzly on the 1.x or 2.x trunk.

Choosing to jump back to 1.x was one of the hardest choices I’ve made in my career. I’m proud that we had the foresight to keep that as an option and prouder that our team rallied to make it happen.

I acknowledge that 1.x has gaps; however, getting Grizzly into the field for PoCs and pilots with 1.x provides substantial benefits to the community.  In addition, barclamps for HA deployments and other production features are under development on the 1.x branch and will be available in the community.

The 2.x code base provides important features, but we are building on the 1.x deployment recipes. This means that development, testing and tuning applied to the Grizzly barclamps will translate directly into Crowbar 2.x field readiness. In fact, a more complete OpenStack deployment dramatically simplifies Crowbar 2.x testing efforts.  This is especially true for the OpenStack Networking (fka Quantum) barclamps because they are new work.

Delivering solutions is a balance between features, timing and field experience.  The Crowbar team’s preference is to collaborate with operators in the field and that means making workable software available quickly.

I hope that you’ll agree with our approach and help us make Grizzly the most deployable OpenStack yet.

What foo is “contribution” to open source? Mik Kersten & Tasktop @ SXSW

How do we really know who has the most influence in a software project?  We can easily track code commits, but there are more bits to a project than the commits.

I had the good fortune to attend Mik Kersten’s Code Graph presentation at SXSW last week. Mik started the Eclipse Mylyn project and went on to found Tasktop. Both are built on the very intriguing concept that software development production (aka work) is organized around tasks.

His premise is that organizing around tasks provides a more manageable and actionable view of a project than more traditional application life-cycle management (ALM) approaches.  I’m a sucker for any presentation about lean development process that includes references to both DevOps and industrial engineering (I have an MS in IE), but Mik surprised me by taking his code graph concept to a whole ‘nutha level.

The software value chain is much deeper than just the people who write code. Mik’s approach included managers, testers and operators in the interaction graphs for his projects.

By including all of the ALM artifacts in the analysis, you get a much richer picture of the influencers for a project.

For example, the development manager may never show up as a code committer; however, they are hugely influential in which work gets prioritized. If your graph includes who is touching the work assignments and stories, then the manager’s influence jumps out immediately. That knowledge would completely change how, and with whom, you interact on a team. It effectively brings a shadow contributor into the light.

The same is true for QA members who are running tests and opening defects, and operators who are building deployment scripts. Ideally, it should also include users who exercise different parts of the application’s capabilities.

Mik’s graphs clearly showed the influence impact of managers because they touched all of the story cards for the project.  The people who own the story cards are the most potent influencers in a project, yet they are invisible in code repositories!

I would love to see an impact graph for a software project that equally reflected the wide range of contributions that people make to its life-cycle.  This type of information helps rebalance the power in a project.
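
As a thought experiment (not Tasktop’s implementation), here is a tiny sketch of how such an impact graph could weigh contributions across artifact types; the people, events and weights are invented sample data.

    # Invented sample data: artifact "touches" from across the life-cycle,
    # not just code commits.
    from collections import Counter

    events = [
        ("alice", "commit"), ("alice", "commit"), ("alice", "commit"),
        ("bob", "story"), ("bob", "story"), ("bob", "defect"),
        ("carol", "test_run"), ("carol", "defect"),
    ]

    # Assumed weights; the point is that non-code artifacts count too.
    weights = {"commit": 1.0, "story": 2.0, "defect": 1.5, "test_run": 1.0}

    influence = Counter()
    for person, artifact in events:
        influence[person] += weights[artifact]

    # A manager who only touches story cards can outrank a pure committer.
    for person, score in influence.most_common():
        print(f"{person}: {score:.1f}")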

Industrial engineering legend W. Edwards Deming’s advice is to look at production as a system.  Finding ways to show everyone’s contributions is an important step towards bringing lean processes fully into software manufacturing.

I respectfully disagree – we are totally aligned on your lack of understanding

Occasionally, my journeys into Agile and Lean process force me down to their foundation: cultural fit.  Frankly, there is nothing more central to the success of a team than culture. That’s especially true of Lean because of the humility and honesty required. If your team is not built on a foundation of trust and shared values, then it’s impossible to keep having a listening and responsive dialog with your customers.

Successful teams have to be honest about taking negative feedback and you cannot do that without trust.

Trust is built on working out differences. Ideally, it would be as simple as “we agree” or “we disagree.” In an ideal world, every team would be that binary.  Remember that no team always agrees – it’s how we resolve those differences that makes the team successful.  That’s something we know as “diversity” and it’s like annealing steel to increase its strength.

Unfortunately, there are four  modes of agreement and two are team poison.

  1. Yes: We agree! Let’s get to work!
  2. No: We disagree! Let’s figure out what’s different so that we’re stronger!
  3. Artificial Warfare:  We disagree!  While we are fundamentally aligned, everyone else thinks that the team does not have consensus and ignores the team’s decisions.  We also waste a lot of time talking instead of acting.
  4. Artificial Harmony: We agree!  But then we don’t support each other in getting the work done or aligning our messages.  We never spend time talking about the real issues, so we constantly have to redo our actions.

I’ve never seen a team that is as simple as agree/disagree but I’ve been at companies (Surgient) that tried to build a culture to support trust and conflict resolution (based on Lencioni’s excellent 5 dysfunctions book).  However, there’s a major gap between a team that needs to build trust through healthy conflict and one that wraps itself in the dysfunctions of artificial harmony and warfare.

If you find yourself on a team with this problem, then you’ll need management buy-in to fix it.  I have not seen it be a self-correcting problem.  I’d love to hear if you’ve gotten a team healthy after struggling with these issues.

Signs of artificial agreement syndrome include:

  1. Lack of broad participation – discussions are dominated by a few voices
  2. Discussions that always seem to run to the meta topic instead of the actual problem
  3. Issues are not resolved and come up over and over
  4. People are still upset after the meeting because issues have not been resolved
  5. People have different versions of events
  6. Lack of trust for some people to speak for the group
  7. Outcomes of decision making meetings are surprises
  8. Lack of results or missed commitments by the team

Lean Process’ strength is being Honest and Humble

Lean process and methodology is important to me because I think it is central to the work that we are doing in the community.  Even more, it’s changing how my team at Dell creates and delivers products for customers.
This post may be long, but my answer to “why Lean” ends up being very simple: Lean process is honest and humble.
I believe Lean process is more honest because it assumes a lack of knowledge.  It’s more “truthy” to admit there are a lot of things that we don’t know (we can’t know!) until we’ve started doing the work.  It’s very hard to admit we don’t have answers for things until we are further along, because we want to feel like experts and we want to lock in deliveries.
The “building software is like building a house” analogy is often used to claim that Lean lacks the design “blueprints” that other processes have.  The argument goes that builders need to understand how the entire house works: structural support, plumbing lines, electrical circuits and things like that.  However, if I were going to build a house, I would still leave a lot of things to the last minute.  The process of building a house evolves so that the basic outlines of the structural elements are known early.  In a lot of cases, the position of rooms, the outlets, the air conditioning ducts, many of the functional components, even windows and doors can easily be moved and changed as you go, even though they are placed in the design.  You can do a walk-through of a house after it’s been framed out and make all sorts of changes and adjustments.  As the design of the house goes forward, things become more and more difficult to change: once you are building a brick façade, moving the windows within that façade is very difficult, but interior ones are not.  Likewise, I don’t want to order my life-gem counter-tops from the blueprints – it’s much safer to order off actual measurements.
Software projects are also building projects. You build a façade, you build a structure and within that structure you have a lot of flexibility. As you go you make more decisions and your choices become more limited. But, that is the nature of building.  For that reason, saying “we don’t know everything we want” is not just good practice, it is much more honest.
But honesty is not enough for a strong Lean process.  The need for humility in Lean architects and business people really stands out.  The Lean process is humble because it starts with the assumption that we don’t really understand the value, drivers, interests and features that make our product special.
We need very strong ideas and a vision; however, we need to be motivated by making something that is significant to other people.  They are the ones who give it value.
We have to give up the idea that we can convince someone that our idea will be significant to them – we have to show and collaborate instead.  The most important thing in building any project and taking any product to market is listening to the people who are using your product and understanding their needs.  Instead of telling them what they need, show them something interesting, interact with them and get their opinion.
Contrast that to waterfall methodology, where the assumption is that we can put smart people in a room, have them figure out what the requirements are, build a team, get everything ready to go and then start executing.  That approach is highly optimized and seems very efficient, but it carries a huge amount of hubris.  The idea that we can sit down two years in advance of market need and identify what those features and capabilities are seems outrageous to me in the current technology market.  It is so much harder to get that information correct and then execute on it than to start with a directional statement, begin, and then get feedback and interact – it is a world of difference between the two processes.
Ultimately, Lean process is about accepting that requirements are less defined and less well-known.  It’s driven by giving respect to the people consuming the product: we can hear their ideas and reactions, and their input can be evaluated and taken into account.  It’s about collaboration.
Humility is not just about listening and collecting feedback: it is about interacting and building relationships.
So just as our customers are building a relationship with our product, they are also building a relationship with the people creating that product. That relationship is what drives the product forward, makes it a great product and gives you a strong and loyal customer base – rather than dictating, “This is what you wanted. Here it is. I hope you enjoy it.”
This is a completely different and powerful way of delivering product.  I believe that honesty and humility in a Lean process inherently create stronger products – ones that are both delivered faster and better suited to their markets.

The Atlantic magazine explains why Lean process rocks (and saves companies $$)

I’m certain that the Atlantic‘s Charles Fishman was not thinking about software and DevOps when he wrote the excellent article “The Insourcing Boom.”  However, I strongly recommend this report to anyone who is interested in a practical example of the inefficiencies that Lean process eliminates (if you are impatient, jump to page 2 and search for toaster).

It’s important to realize that this article is not about software! It’s an article about industrial manufacturing and the impact that lean process has when you are making stuff.  It’s about how US companies are using Lean to make domestic plants more profitable than Asian ones.  It turns out that how you make something really matters – you can’t really optimize the system if you treat major parts like a black box.

When I talk about Agile and Lean, I am talking about proven processes being applied broadly to companies that want to make a profit selling stuff.  That’s what this article is about.

If you are making software then you are making stuff! Your install and deploy process is your assembly line. Your unreleased code is your inventory.

This article does a good job explaining the benefits of being close to your manufacturing (DevOps) and being flexible in deployment (Agile) and being connected to customers (Lean).  The software industry often acts like it’s inventing everything from scratch. When it comes to manufacturing processes, we can learn a lot from industry.

Unlike software, industry has real costs for scrap and lost inventory. Instead of thinking of it as “old school,” perhaps we should be thinking of it as the school of hard knocks.

My Dilemma with Folsom – why I want to jump to G

These views are my own.  Based on 1×1 discussions I’ve had in the OpenStack community, I am not alone.

If you’ve read my blog then you know I am a vocal and active supporter of OpenStack who is seeking re-election to the OpenStack Board.  I’m personally and professionally committed to the project’s success. And, I’m confident that OpenStack’s collaborative community approach is out innovating other clouds.

A vibrant project requires that we reflect honestly: we also have an equal measure of challenges – shadow development during feature-freeze free fall, API vs. implementation, forking risk and others.  As someone helping users deploy OpenStack today, I find myself straddling a solid release (Essex) and an innovative one (Grizzly). Frankly, I’m finding it very difficult to focus on Folsom.

Grizzly excites me and clearly I’m not alone.  Based on the pace of development, I believe we saw a significant developer migration during the feature-freeze free fall.

In Grizzly, both Cinder and Quantum will have progressed to a point where they are ready for mainstream consumption. That means that OpenStack will have achieved the cloud API trifecta of compute-store-network.

  • Cinder will get beyond the “replace Nova Volume” feature set and expand the list of connectors.
  • Quantum will get to parity with Nova Network, address overlapping VM IPs and go beyond L2 with L3 features like load balancing as a service.
  • We are having a real dialog about upgrades while the code is still in progress.
  • And new projects like Ceilometer and Heat are poised to address real user problems in billing and application formation.

Everything I hear about Folsom deployment is positive with stable code and significant improvements; however, we’re too late to really influence operability at the code level because the Folsom release is done.  This is not a new dilemma.  As operators, we seem to be forever chasing the tail of the release.

The perpetual cycle of implementing deployment after release is futile, exhausting and demoralizing because we finish just in time for the spotlight to shift to the next release.

I don’t want to slow the pace of releases.  In Agile/Lean, we believe that if something is hard then we should do it more often.  Instead, I am looking at Grizzly and seeing an opportunity to break the cycle.  I am looking at Folsom and thinking that most people will be OK with Essex for a little longer.

Maybe I’m a dreamer, but if we can close the deployment time gap then we accelerate adoption, innovation and happy hour.  If that means jilting Folsom at the release altar to elope with Grizzly then I can live with that.