Continuous Release combats disruptions of “Free Fall” development

Since I posted the “Free Fall” development post, I’ve been thinking a bit about the pros and cons of this type of off-release development.

The OpenStack Swift project does not do free fall because they maintain a constant “ship ready” state for the project and only loosely follow the broader OpenStack release track.  My team at Dell also has minimal free fall development because we have a more frequent release clock and choose to have the team focus together through dev/integrate/harden cycles as much as possible.

From a Lean/Agile/CI perspective, I would work to avoid hidden development where possible.  New features are introduced by split test (they are in the code, but not active for most users) so that all changes are incremental.  That means that refactoring, rearchitecture and new capabilities appear less disruptively.  While this approach appears to take more effort in the short term, my experience is that it accelerates delivery because we are less likely to over-develop code.
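
To make the split test idea concrete, here is a minimal sketch (the feature name, rollout table and scheduler functions are hypothetical, not taken from any real project): the refactored code path ships with the release but only activates for a small, deterministic slice of users.

```python
import hashlib

# Hypothetical sketch: new code ships in the release but stays dark for most
# users, so changes land incrementally instead of in one big block.
FEATURE_ROLLOUT = {"new_scheduler": 0.05}  # expose the new path to ~5% of users


def is_enabled(feature: str, user_id: str) -> bool:
    """Deterministically place a user in the split-test bucket for a feature."""
    rollout = FEATURE_ROLLOUT.get(feature, 0.0)
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # stable value between 0 and 1
    return bucket < rollout


def legacy_scheduler(request: str) -> str:
    return f"legacy path handled {request}"


def new_scheduler(request: str) -> str:
    return f"new path handled {request}"


def schedule(request: str, user_id: str) -> str:
    # The refactored path is present in the code for everyone but active for few;
    # raising the rollout fraction is an operational change, not a big-bang merge.
    if is_enabled("new_scheduler", user_id):
        return new_scheduler(request)
    return legacy_scheduler(request)


if __name__ == "__main__":
    print(schedule("boot-node-42", user_id="alice"))
```

Raising the rollout fraction over successive releases is how the refactoring appears gradually rather than as one disruptive drop.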

Unfortunately, free fall development has the opposite effect.  Having code appear in big blocks is contrary to best practices, in my opinion.  Further, it rewards groups that work asynchronously.

While I think that OpenStack benefits from free fall work, I think that it is ultimately counter-productive.

Our Vision for Crowbar – taking steps towards closed loop operations

When Greg Althaus and I first proposed the project that would become Dell’s Crowbar, we had already learned first-hand that there was a significant gap in both the technologies and the processes for scale operations. Our team at Dell saw that the successful cloud data centers were treating their deployments as integrated systems (now called DevOps) in which the configuration of many components was coordinated and orchestrated; however, these approaches fell short of the mark in our opinion. We wanted to create a truly integrated operational environment from the bare metal through the networking up to the applications and out to the operations tooling.

Our ultimate technical nirvana is to achieve closed-loop continuous deployments. We want to see applications that constantly optimize new code, deployment changes, quality, revenue and cost of operations. We could find parts of the foundation for this vision, but not a complete, adequate one.

The business driver for Crowbar is systems thinking around improved time to value and flexibility. While our technical vision is a long-term objective, we see very real short-term ROI. It does not matter if you are writing your own software or deploying applications; the faster you can move that code into production, the sooner you get value from innovation. It is clear to us that the most successful technology companies have reorganized around speed to market and adapting to the pace of change.

System flexibility & acceleration were key values when lean manufacturing revolution gave Dell a competitive advantage and it has proven even more critical in today’s dynamic technology innovation climate.

We hope that this post helps define a vision for Crowbar beyond the upcoming refactoring. We started the project with the idea that new tools meant we could take operations to a new level.

While that’s a great objective, we’re too pragmatic in delivery to rest on a broad objective. Let’s take a look at Crowbar’s concrete strengths and growth areas.

Key strength areas for Crowbar

  1. Late binding – hardware and network configuration is held until software configuration is known (a small sketch follows this list).  This is a huge system concept.
  2. Dynamic and Integrated Networking – means that we treat networking as a 1st class citizen for ops (sort of like software defined networking but integrated into the application)
  3. System Perspective – no Application is an island.  You can’t optimize just the deployment, you need to consider hardware, software, networking and operations all together.
  4. Bootstrapping (bare metal) – while not “rocket science” it takes a lot of careful effort to get this right in a way that is meaningful in a continuous operations environment.
  5. Open Source / Open Development / Modular Design – this problem is simply too complex to solve alone.  We need to get a much broader net of environments and thinking involved.
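
To illustrate the late binding point above, here is a small hypothetical sketch (the role names and attributes are invented and are not Crowbar’s actual data model): nothing about a node’s RAID, bonding or VLAN layout is decided until the software roles assigned to that node are known.

```python
# Hypothetical sketch of late binding: hardware and network settings are derived
# from the software roles at deploy time, not fixed up front.
SOFTWARE_ROLES = {
    "swift-storage": {"raid": "jbod",  "bonding": "balance-rr",    "vlans": {"storage"}},
    "nova-compute":  {"raid": "raid1", "bonding": "active-backup", "vlans": {"admin", "public"}},
}


def bind_hardware(node: str, roles: list) -> dict:
    """Resolve a node's hardware/network configuration from its software roles."""
    config = {"node": node, "raid": None, "bonding": None, "vlans": set()}
    for role in roles:
        wants = SOFTWARE_ROLES[role]
        # First role to state a preference wins in this toy resolver.
        config["raid"] = config["raid"] or wants["raid"]
        config["bonding"] = config["bonding"] or wants["bonding"]
        config["vlans"] |= wants["vlans"]
    return config


# Binding happens only once the roles are assigned, e.g. at deploy time:
print(bind_hardware("node-d01", ["nova-compute"]))
print(bind_hardware("node-d02", ["swift-storage", "nova-compute"]))
```

The point is the ordering: the role assignment drives the RAID and network decisions, so changing the software plan does not strand hardware that was pre-configured for a different purpose.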

Continuing Areas of Leadership

  1. Open / Lean / Incremental Architecture – these are core aspects of our approach.  While we have a vision, we also are very open to ways that solve problems faster and more elegantly than we’d expected.
  2. Continuous deployment – we think the release cycles are getting faster and the only way to survive is to build change into the foundation of operations.
  3. Integrated networking – software defined networking is cool, but not enough.  We need to have semantics that link applications, networks and infrastructure together.
  4. Equivalent physical / virtual – we’re not saying that you won’t care if it’s physical or virtual (you should); we think that it should not impact your operations.
  5. Scale / Hybrid – the key element to scale is hybrid and to hybrid is scale.  The missing connection is being able to close the loop.
  6. Closed loop deployment – treating load management, code quality, profit, and cost of operations as factors in managed operations.

Seven Cloud Success Criteria to consider before you pick a platform

From my desk at Dell, I have a unique perspective.   In addition to a constant stream of deep customer interactions about our many cloud solutions (even going back pre-OpenStack to Joyent & Eucalyptus), I have been an active advocate for OpenStack, involved in many discussions with and about CloudStack, and regularly talk shop with Dell’s VIS Creator (our enterprise focused virtualization products) teams.  And, if you go back ten years to 2002, I patented the concept of hybrid clouds with Dave McCrory.

Rather than offering opinions in the Cloud v. Cloud fray, I’m suggesting that cloud success means taking a system view.

Platform choice is only part of the decision: operational readiness, application types and organization culture are critical foundations before platform.

Over the last two years at Dell, I have found that seven points outweigh customers’ choice of platform.

  1. Running clouds requires building operational expertise both at the application and infrastructure layers.  CloudOps is real.
  2. Application architectures matter for cloud deployment because they can redefine the SLA requirements and API expectations
  3. Development community and collaboration is a significant value because sharing around open operations offers significant returns.
  4. We need to build an accelerating pace of innovation into our core operating principles
  5. There are still significant technology gaps to fill (networking & storage) and we will discover new gaps as we go
  6. We can no longer discuss public and private clouds as distinct concepts.   True hybrid clouds are not here yet, but everyone can already see their massive shadow.
  7. There is always more than one right technological answer.  Avoid analysis paralysis by making incrementally correct decisions (committing, moving forward, learning and then re-evaluating).

Four alternatives to Process Interlock

Note: This is the third and final part of a 3 part series about the “process interlock dilemma.”

In post 1, I spelled out how evil Process Interlock causes well intentioned managers to add schedule risk and opportunity cost even as they appear to be doing the right thing. In post 2, I offered some alternative outcomes when process interlock is avoided. In this post, I attempt to provide alternatives to the allure of process interlock. We must have substitute interlock types to replace our de facto standard because there are strong behavioral and traditional reasons to keep broken processes. In other words, Process Interlock feels good because it gives you the illusion that your solution is needed and vital to other projects.

If your product is vital to another team then they should be able to leverage what you have, not what you’re planning to have.

We should focus on delivered code instead of future promises. I am not saying that roadmaps and projections are bad – I think they are essential. I am saying that roadmaps should be viewed as potential not as promises.

  1. No future commits (No interlock)

    The simplest way to operate without any process interlock is to never depend on other groups for future deliveries. This approach is best for projects that need to move quickly and have no tolerance for schedule risk. This means that your project is constrained to use the “as delivered” work product from all external groups. Depending on needs, you may further refine this to rely only on stable, released work.

    For example, OpenStack Cactus relied on features that were available in the interim 10.10 Ubuntu version. This allowed the project to advance faster, but also limited support because that OS version was not a long term support (LTS) release.

  2. Smaller delivery steps (MVP interlock)

    Sometimes a new project really needs emerging capabilities from another project. In those cases, the best strategy is to identify a minimum viable feature set (or “product”) that needs to be delivered from the other project. The MVP needs to be a true minimum feature set – one that’s just enough to prove that the integration will work. Once the MVP has been proven, a much clearer understanding of the requirements will help determine the required amount of interlock. My objective with an MVP interlock is to find the true requirements because IMHO many integrations are significantly over specified.

    For example, the OpenStack Quantum project (really, any incubated OpenStack project) focuses on delivering the core functionality first so that the ecosystem and other projects can start using it as soon as possible.

  3. Collaborative development (Shared interlock)

    A collaborative interlock is very productive when the need for integration is truly deep and complex. In this scenario, the teams share membership or code bases so that the needs of each team are represented in real time. This type of transparency exposes real requirements and schedule risk very quickly. It also allows dependent teams to contribute resources that accelerate delivery.

    For example, our Crowbar OpenStack team used this type of interlock with the Rackspace OpenStack team to ensure that we could get Diablo code delivered and deployed as fast as possible.

  4. Collaborative requirements (Fractal interlock)

    If you can’t collaborate or negotiate an MVP then you’re forced into working at the requirements level instead of development collaboration. You can think of this as a sprint-roadmap fast follow strategy because the interlocked teams are mutually evolving design requirements.

    I call this approach Fractal because you start at big concepts (road maps) and drill down to more and more detail (sprints) as the monitored project progresses. In this model, you interlock on a general capability initially and then work to refine the delivery as you learn more. The goal is to avoid starting delays or injecting false requirements that slow delivery.

    For example, if you had a product that required power from hamsters running in wheels then you’d start by saying that you needed a small fast running animal. Over the next few sprints, you’d likely refine that down to four legged mammals and then to short tailed high energy rodents. Issues like nocturnal behavior or biting operators could be addressed by the Hamster team or by the Wheel team as they arose. It could turn out that the right target (a Red Bull sipping gecko) surfaces during the short tail rodent design review. My point is that you can avoid interlocks by allowing scope to evolve.

Breaking Process Interlocks delivers significant ROI

I have been trying to untangle both the cause and solution of process interlock for a long time. My team at Dell has an interlock-averse culture and it accelerates our work delivery. I write about this topic because I have real world experience that eliminating process interlocks increases:

  1. team velocity
  2. collaboration
  3. quality
  4. return on investment

These are significant values that justify adoption of these non-interlock approaches; however, I have a more selfish motivation.

We want to work with other teams that are interlock-averse because the impacts multiply. Our team is slowed when others attempt to process interlock and accelerated when we are approached in the ways I list above.

I suspect that this topic deserves a book rather than a three part blog series and, perhaps, I will ultimately create one. Until then, I welcome your comments, suggestions and war stories.

How Good beats Great and avoids Process Interlock failure

Note: This is part 2 of a 3 part series about the “process interlock dilemma.”

This post addresses how to solve the Process Interlock dilemma I identified in part 1. It is critical to understand that the failure of Process Interlock comes from the interlocks turning assumptions into facts. We must accept that any forward looking schedule is a guess. If your guesses are accurate then your schedule should be accurate. That type of insight and $5 will get you a Venti Caramel Frappuccino.

The problem of predicting the future and promising to deliver on that schedule results in one of two poor outcomes.

  1. The better poor outcome is that you are accurate and committed to a schedule.

    To keep on the schedule, you must focus on the committed deliverables. While this sounds ideal, there is an opportunity cost to staying focused. Opportunity cost means that while your team is busy delivering on schedule, it is not doing work to pursue other opportunities. In a perfect world, your team picked the most profitable option before it committed to the schedule. If you don’t live in a perfect world then it’s likely that while you are working to deliver, you’ve learned about another opportunity. You may make your schedule but miss a more lucrative opportunity.

  2. The worse poor outcome is that you are not accurate and committed to a schedule.

    In that case, you miss both the opportunity you thought you had and the ones that you could not pursue while staying dedicated to your planning assumptions.

Let’s go back to our G.Mordler example and look at some better outcomes:

The “we’re going to try” outcome.

The Trans Ma’am team, Alpha, Omega and the supplier all get together and realize that the current design is not shippable; however, they realize that each team’s roadmap converges within the target time.  To reduce interlocks, Omega takes Alpha’s supply in its low power form and begins integration. During integration, Omega identifies that Alpha can produce sufficient power for short periods of time travel but causes the exhaust vent of the power module to melt. Alpha determines that a change to the cooling system will address the problem. In consulting with their supplier, Alpha asks them to stop design on the new supply and adjust the current design as needed. The resulting time drive does not meet GM’s initial design for 4 hour time jumps, but it is sufficient for lead footed mommies to retroactively avoid speeding tickets. GM decides it can still market the limited design.

The “we’re not ready” outcome.

The Trans Ma’am team, Alpha and Omega all get together and realize that their designs are not shippable in their current state. While they cannot commit, each realizes that there is a different market for their products: Alpha pursues dog poop power generation for high-rise condo towers (aka brown energy) and Omega finds military applications for time travel nuclear submarines. With the experience gained from delivering products to these markets, Alpha improves power delivery by 20% and Omega improves efficiency by 20%. These modest mutual improvements allow Alpha to meet Omega’s requirement. While the combined product is too late for the target date, GM is able to incorporate the design into the next design cycle.

While neither outcome delivers the desired feature on the original schedule, both provide better ROI for the company. One of the most common problems with process interlock is that we lose sight of ROI in our desire to meet an impractical objective.

Process interlock is a classic case of point optimization driving down system-wide performance.

If you’re interested in this effect, I recommend reading Eli Goldratt’s The Goal.

In this part, I’ve discussed some ways to escape from Process Interlock. I’ll talk about four alternative approaches in part 3 (to be published 3/16).

The Process Interlock Dilemma – where Roadmaps get lost and why Waterfalls suck

Note: This is part 1 of a 3 part series. I have been working on this series for nearly six months in an attempt to make this subtle but extremely expensive problem understandable. Rather than continue to polish the posts, I will post the series for your enjoyment. I hope that it is enlightening, humorous or (ideally) both. Comments are welcome!

I’ve been struggling to explain a subtle process fail that occurs every day at my company (Dell) and also at every company I’ve ever worked with or for. I call this demon “Process Interlock” and it is the invisible bane of projects big and small. It manifests by forcing well-meaning product managers and engineering directors to make trade-offs that they know are wrong because of schedule commitments. It means that product quality consistently drops to the bottom of the list in favor of getting in that one promised feature. It shows up when customers get products late because a prospect who decided not to buy demanded a feature a year ago. These are the symptoms of the process interlock dilemma.

Process Interlock occurs when another team depends on your team for a future feature.

That sounds pretty innocuous right? It makes sense that other teams, customers and partners should be able to ask you about your roadmap and then build your delivery schedule into their plans. That is the perfectly logical request that happens inside my group every single day. Unfortunately, that exact commitment is what creates the problem because it locks your team’s velocity into the future and eliminates agility.

Note: I was reading chapter 11 in Eric Ries’ Lean Startup and was surprised to find him making very similar arguments, but from a different perspective.

To hopefully help explain, I’m inventing a hypothetical project from the car division of the G.Mordler company. GM plans to add time travel as an option for their 2016 product line. They believe that there is a big market in minivans that can solve the proverbial “are we there yet” problem by simply skipping over the boring part of the trip. The trans-dimensional mommy mobile (or Trans Ma’am) will be part of a refresh of their 2014 model. The addition of a time circuit and power generator, developed by two internal divisions, Alpha and Omega, supports a critical marketing event for the company, so timing is important.

Let’s examine four outcomes of how these two divisions turn their assumed schedules into a rigidly locked conundrum.

Scenario 0: Ideal Case.

Alpha makes the fusion power supply and Omega is making the time circuits. Based on experimental data, Omega’s design calls for 3.14 Gigawatts to operate their time capacitor; however, Alpha’s available design is limited to 0.73 Gigawatts. Alpha expects to reach 3.5 Gigawatts in 9 months when their supplier releases an updated nitrogen cooled super conductor. Based on that commitment, Omega has enough information to make an informed decision about their timeline. Since Alpha commits to deliver in 12 months (9 for the new part + 3 for development), Omega expects to deliver a working time circuit in 20 months (12 for the supply + 8 for development). In this example, there are 3 levels of Process Interlock: Alpha interlocks with the supplier and then Omega interlocks with Alpha. From a PERT schedule perspective, the world is now under control! It’s a brand new day and the birds are singing…
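
As a rough illustration of how the interlocked dates in Scenario 0 add up, and how an upstream slip propagates through every commitment, here is a small arithmetic sketch (the numbers come from the scenario; the code is illustrative only, not a real scheduling tool):

```python
# Interlocked schedule from Scenario 0, in months.
SUPPLIER_PART = 9       # supplier ships the nitrogen cooled superconductor
ALPHA_DEVELOPMENT = 3   # Alpha integrates the part into its 3.5 Gigawatt supply
OMEGA_DEVELOPMENT = 8   # Omega builds the time circuit on top of Alpha's supply


def delivery_months(supplier_slip: int = 0, alpha_slip: int = 0) -> int:
    """End-to-end delivery when each team is interlocked on the upstream date."""
    return SUPPLIER_PART + supplier_slip + ALPHA_DEVELOPMENT + alpha_slip + OMEGA_DEVELOPMENT


print(delivery_months())                 # 20 months: the committed PERT answer
print(delivery_months(supplier_slip=3))  # 23 months: a three month supplier slip (as in Scenario 3)
                                         # lands directly on Omega and GM's launch window
```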

Scenario 1: Meet Schedule w/ Added Cost

Unfortunately, we now have a highly interlocked schedule. In the best case scenario (the one where we meet the schedule), Alpha has just signed up to meet an aggressive delivery timeframe. They have to put heavy pressure on the supplier to deliver their part, which causes the supplier to increase the price for the cooler component. When their product manager identifies available alternative markets (such as power generating pet waste incineration), they are not able to pursue the opportunities because they cannot risk the schedule impact of redirecting engineers. Meanwhile, Omega understands that a critical part is missing for 12 months and decides to reduce staffing while waiting for the needed part. In the process, they lose a key engineer who could have optimized the manufacturing process to halve the production defect rate. Overall, the project meets schedule but at added cost, reduced quality and missed opportunities. This happened because the interlocks eliminated flexibility in the schedule for upstream and downstream participants. GM meets the launch window for the Trans Ma’am but high costs for the upgrade limit sales.

Scenario 2: Meet Schedule w/ Lost Features

A more likely “on schedule” alternative is that Alpha’s supplier cuts some corners to meet the aggressive deadline; consequently, power generation for Alpha is not reliable. This issue is not revealed by load testing in Alpha’s labs or short time travel testing by Omega. Instead, the faulty generators fail in integration field testing, accidentally sending a DOT test driver home during rush hour traffic. Fixing the problem requires a redesign of the power plant. The new design does not fit into the space allowed by the Trans Ma’am design team, causing the entire program, while delivered “on time,” to be considered a failure and not shipped. GM misses the launch window for the Trans Ma’am.

Scenario 3: Miss Schedule

In the most likely scenario, the project is late. The schedule for Alpha slips because the supplier requires an extra three months to meet Alpha’s specs. In a common turn of fate, the supplier’s specs would have been sufficient for Alpha to proceed; however, Alpha’s risk manager bumped up the cooling requirements by 20% in order to ensure they had wiggle room in their own design. Because the supplier contract requires delivery per spec, the supplier could not ship a workable but contractually unacceptable product. Since the part is delayed, Alpha has to slip the schedule to Omega. Compounding the problem, Alpha’s manager is optimistic that it will work out and does not alert Omega until 2 weeks before the deadline. Omega, who has been testing their circuits using liquid sodium cooled nuclear fission power plants, attempts to make up the schedule delay by imposing 20 hour Mountain Dew fueled work days. The aggressive schedule results in quality issues for the time circuits, so that they can only be used during Mountain-time rebroadcasts of Seinfeld. After an unsuccessful bid to purchase the Denver cable TV station KDEV, GM misses the launch window for the Trans Ma’am.

I realize these examples are complicated, but I hope they humorously illuminate the problem.

In part 2, I’ll show an alternate approach for GM that addresses the process interlock.

Post Script

Of course, for this example, the entire project plan is a moot point since we’re talking about time machines! I’m offering two likely endings for the scenarios above:

The Pragmatists’ Ending: Once the project is finally complete, the manager simply drives the car back to the beginning of the project. Over white Russian martinis and sushi, her future self explains how the painful delivery schedule cost her the best years of her life, causing her to quit. Her replacement cannot maintain funding for the project, so it is eventually scrapped by G.Mordler six months before the working pieces can be assembled.

The Realists’ Ending: Once the project is finally complete, the manager simply drives the car back to the beginning of the project. Over lemonade vodka tonics and tapas, her future self provides a USB stick with the critical design data needed to complete the project on time and budget. When she examines the data, the resulting time paradox creates a rift in the Einstein-Jacob space-time fabric thus ending the universe.

Substituting Action for Knowledge – adopting “ready, fire, aim” as a strategy (and when to run like hell)

Today my mother-in-law (a practicing psychiatrist) was bemoaning the current medical practice of substituting action for knowledge. In her world, many doctors will make rapid changes to their patients’ therapy. Their goal is to address the issues immediately presented (patient feels sad so Dr prescribes antidepressants) rather than taking time to understand the patients’ history or make changes incrementally and measure impacts. It feels like another example of our cultural compulsion to fix problems as quickly as possible.

Her comments made me question the core way that I evangelize!

Do Lean and Agile substitute action for knowledge? No. We use action to acquire knowledge.

The fundamental assumption that drives poor decision-making is that we have enough information to make a design, solve a problem or define a market. Lean and Agile’s core tenet is that we must attack this assumption. We must assume that we can’t gather enough information to fully define our objective. The good news is that even without much analysis we know a lot! We know:

  • roughly what we want to do (road map)
  • the first steps we should take (tactics)
  • who will be working on the problem (team members)
  • generally how much effort it will take (time & team size)
  • who has the problem that we are trying to solve (market)

We also know that we’ll learn a lot more as we get closer to our target. Every delay in starting effectively pushes our “day of clarity” further into the future. For that reason, it is essential that we build a process that constantly reviews and adjusts its targets.

We need to build a process that acquires knowledge as progress is made and makes rapid progress.

In Agile, we translate this need into the decorations of our process: reviews for learning, retrospectives for adjustments, planning for taking action and short iterations to drive the feedback loop.  Agile’s mantra is “ready, fire, aim, fire, aim, fire, aim, …” which is very different from simply jumping out of a plane without a parachute and hoping you’ll find a haystack to land in.

For cloud deployments, this means building operational knowledge in stages.  Technology is simply evolving too quickly and best practices too slowly for anyone to wait for a packaged solution to solve all their cloud infrastructure problems.  We tried this and it does not work: clouds are a mixture of hardware, software and operations.  More accurately, clouds are an operational model supported by hardware and software.

Currently, 80% of cloud deployment effort is operations (or “DevOps”).

When I listen to people’s plans about building product or deploying cloud, I get very skeptical when they take a lot of time to aim at objects far off on the horizon.  Perhaps they are worried that they will substitute action for knowledge; however, I think they would be better served to test their knowledge with a little action.

My MIL agrees – she sees her patients frequently and makes small adjustments to their treatment as needed.  Wow, that’s an Rx for Agile!

Dell to spin bare iron into OpenStack gold

I’m at the CloudConnect conference today supporting my team’s initial OpenStack foray.   Our announcement is part of the Rackspace Cloud Builders announcement.

Tonight (3/8), we’re at the Rackspace Launch with a pony rack of servers (6 nodes) where we will run a LIVE DEMO of our cloud installer (codename “Crowbar”).  The initial offer includes my hyperscale white paper and our cloud foundation kit.

Interested in the details?  Here are background posts that talk about the Lean/Agile process we use, what is Crowbar, and my write up about hyperscale (“flat edge”) data centers.

Added 3/9: Links to articles about the release:

Here’s what Dell is saying about OpenStack on Dell.com/openstack:

Dell is one of the original partners in the OpenStack community, which has now grown to more than 50 companies and participants. To accelerate adoption of this powerful platform, Dell has worked to develop an effortless out-of-box OpenStack experience with:
  • Optimized PowerEdge™ C-based hardware configurations
  • A technical whitepaper that details the design of an OpenStack hyperscale cloud on PowerEdge C server technology
  • An OpenStack installer that allows bare metal deployment of OpenStack clouds in a few hours (vs. a manual installation period of several days)

Read more about the steps to design an OpenStack hyperscale cloud in a Dell technical whitepaper entitled “Bootstrapping OpenStack Clouds.”

Interested?  Contact OpenStack@Dell.com.

The Go-Fasterer OpenStack Cloud Strategy

Dell’s OpenStack strategy (besides being interesting by itself) brings together Agile and Lean approaches and serves as a good illustration of the difference between the two approaches.

Before I can start the illustration, I need to explain the strategy clearly enough that the discussion makes sense.   Of course, my group is selling these systems, so the strategy starts out as a sales pitch.  Bear with me: this is a long post and I promise we’ll get to the process parts as fast as possible.

Dell’s OpenStack strategy is to enter the market with the smallest possible working cloud infrastructure practical.  We have focused maniacally on eliminating all barriers and delays for customers’ evaluation processes.  Our targets are early adopters who want to invest in a real, hands-on OpenStack evaluation and understand they will have to work to figure out OpenStack.   White gloves, silver spoons and expensive licensed applications are not included in this offering.

We are delivering a cloud foundation kit: 7u hardware setup (6 nodes+switch), white paper, installer, and a dollop of consulting services.  It is a very small foot print system with very little integration.  The most notable deliverable is our target of going from boxes to working cloud in less than 4 hours (I was calling this “nuts to soup before lunch” but marketing didn’t bite).

Enough background?  Let’s talk about business process!

From this point on, our product offering is just an example.   You should imagine your product or service in these descriptions.  You should think about the internal reconfiguration needed to bring your product or service to market in the way I am describing.

There are two critical elements in the go-fasterer strategy:

  1. a very limited “lean” product and
  2. a very fast “agile” installation process.

The offering challenges the de facto definition of solutions as being complete packages bursting with features, prescriptive processes, licensed companion products and armies of consultants.  While Dell will eventually have a solution that meets (or exceeds) these criteria, our team did not think we should wait until we had all those components before engaging customers.

Our first offering is not for everyone by design.  It is highly targeted to early adopters who have specific needs (desire to move quickly) that outweigh all other feature requirements.  They are willing to invest in a less complete product because the core alone solves an important problem.

The concept of stripping back your product to the very core is the essence of Lean process.  Along this line of thinking, maintaining ship readiness is the primary mantra – if you can’t sell your product then your entire company’s existence is at risk.  I like the way the Poppendiecks describe it: you should consider product features as perishable inventory.  If we were selling fruit salad and you had bananas and apples but no cherries, then it makes sense to sell apple/banana medley while you work on the cherries.

Whittling back a product to the truly smallest possible feature set is very threatening and difficult.  It forces teams to take risks and guesses that leave you with a product that many customers will reject.  Let me repeat that: your objective is to create a product that many customers will reject.  You must do this because it:

  1. gets you into the market much faster for some customers (earning $ is wonderfully clarifying)
  2. lets you learn immediately what’s missing (fewer future guesses)
  3. lets you learn immediately what’s important to customers (less risk)
  4. builds credibility that you are delivering something (you’re building relationships)

Ironically, while lean approaches exist to reduce risk and guesswork, they will feel very risky and like gambling to organizations used to traditional processes.   This is not surprising: because our objective is to go faster, we will initially be uncomfortable that we have enough information to make decisions.

The best cure for lack of information is not more analysis!  The cure is interacting with customers.

Lean says that you need product if you want to interact meaningfully with customers.  This is because customers (even those who are not buying right away) will take you more seriously if you’ve got a product.  Talking about products that you are going to release is like talking about the person you wanted to take to prom but never asked.

To achieve product early, you need to find the true minimum product set.  This is not the smallest comfortable set.  It is the set that is so small, so uncomfortable, so stripped down that it seems to barely do anything at all.

In our case, we considered it sufficient if the current OpenStack release could be reliably and quickly installed on Dell hardware.  We believe there are early adopter customers who want to evaluate OpenStack right away and whose primary concern is starting their pilot and working toward eventual deployment.

Mixing Agile into Lean is needed to make the “skinny down” discipline practical and repeatable.

Agile brings in a few critical disciplines to enable Lean:

  1. Prioritized roadmaps help keep teams focused on what’s needed first but don’t lose sight of longer term plans.
  2. Predictable pace of delivery allows committed interactions with customers that give timelines for fixing issues or adding capabilities.
  3. Working out of order keeps the great from being the enemy of the good so that we don’t delay field testing while we solve imagined problems.
  4. Focus on quality / automation / repeatability reduces paying for technical debt internally and time firefighting careless defects when a product is “in the wild” with customers.
  5. Insistence on installable “ship ready” product ensures that product gets into the field whenever the right customer is found.  Note: this does not mean any customer.  Selling to the wrong customer can be deadly too, but that’s a different topic.
  6. Feedback driven iterations ensures that Lean engagements with customers are interactive and inform development.

These disciplines are important for any organization but vital when you go Lean.  To take your product early and aggressively to market, you must have confidence that you can continue to deliver after your customers get a taste of the product.

You cannot succeed with Lean if you cannot quickly evolve your initial offering.

The enabling compromise with Lean is that you will keep the train running with incremental improvements: Lean fails if you engage customers early and then disappear back into a long delivery cycle.  That means committing to an Agile product delivery cycle if you want Lean (note: the reverse is not true).

I think of Lean and Agile as two sides of the same results driven coin: Lean faces towards the customer and market while Agile faces internally to engineering.

Please let me know how your team is trying to accelerate product delivery.

Note: of course, you’re also welcome to contact me if you’re interested in being an early adopter for our OpenStack foundation kit.