Tweaking DefCore to subdivide OpenStack platform (proposal for review)

The following material will be a major part of the discussion for the OpenStack Board meeting on Monday 10/20.  Comments and suggestions welcome!

OpenStack in Parts

For nearly two years, the OpenStack Board has been moving towards creating a common platform definition that can help drive interoperability.  At the last meeting, the Board paused to further review one of the core tenets of the DefCore process (Item #3: Core definition can be applied equally to all usage models).

Outside of my role as DefCore chair, I see the OpenStack community asking itself an existential question: “are we one platform or a suite of projects?”  I’m having trouble believing “we are both” is an acceptable answer.

During the post-meeting review, Mark Collier drafted a Foundation supported recommendation that basically creates an additional core tier without changing the fundamental capabilities & designated code concepts.  This proposal has been reviewed by the DefCore committee (but not formally approved in a meeting).

The original DefCore proposed capabilities set becomes the “platform” level while capability subsets are called “components.”  We are considering two initial components, Compute & Object, and both are included in the platform (see illustration below).  The approach leaves the door open for new core components to exist both under and outside of the platform umbrella.

In the proposal, OpenStack vendors who meet either component or platform requirements can qualify for the “OpenStack Powered” logo; however, vendors using only a component (instead of the full platform) will have more restrictive marks and limitations on how they can use the term OpenStack.

This approach addresses the “is Swift required?” question.  For the platform, Swift capabilities will be required; however, vendors will be able to implement the Compute component without Swift and implement the Object component without Nova/Glance/Cinder.

It’s important to note that there is only one yardstick for components or the platform: the capabilities groups and designated code defined by the DefCore process.  From that perspective, OpenStack is one consistent thing.  This change allows vendors to choose sub-components if that serves their business objectives.

It’s up to the community to prove the platform value of all those sub-components working together.

OpenStack Goldilocks’ Syndrome: three questions to help us find our bearings

Goldilocks Atlas

Action: Please join Stefano, Allison, Sean and me in Paris on Monday, November 3rd, in the afternoon (schedule link)

If wishes were fishes, OpenStack’s rapid developer and user rise would include graceful process and commercial transitions too.  As a Foundation board member, it’s my responsibility to help ensure that we’re building a sustainable ecosystem for the project.  That’s a Goldilocks challenge because adding either too much or too little control and process will harm the project.

In discussions with the community, that challenge seems to break down into three key questions:

After last summit, a few of us started a dialog around Hidden Influencers that helps to frame these questions in an actionable way.  Now, it’s time for us to come together and talk in Paris in the hallways and specifically on Monday, November 3rd, in the afternoon (schedule link).   From there, we’ll figure out next steps using these three questions as a baseline.

If you’ve got opinions about these questions, don’t wait for Paris!  I’d love to start the discussion here in the comments, on twitter (@zehicle), by phone, with email or via carrier pigeons.

Need a physical ops baseline? Crowbar continues to uniquely fill the gap

Robots Everywhere!

I’ve been watching to see if other open “bare metal” projects would morph to match the system-level capabilities that we proved in Crowbar v1 and honed in the re-architecture of OpenCrowbar.  The answer appears to be that Crowbar simply takes a broader approach to solving the physical ops repeatability problem.

Crowbar Architect Victor Lowther says “What makes Crowbar a better tool than Cobbler, Razor, or Foreman is that Crowbar has an orchestration engine that can be used to safely and repeatably deploy complex workloads across large numbers of machines. This is different from (and better than, IMO) just being able to hand responsibility off to Chef/Puppet/Salt, because we can manage the entire lifecycle of a machine where Cobbler, Razor and Chef cannot, we can describe how we want workloads configured at a more abstract level than Foreman can, and we do it all using the same API and UI.”

Since we started with a vision of an integrated system to address the “apply-rinse-repeat” cycle, it’s no surprise that Crowbar remains the only open platform that’s managed to crack the complete physical deployment life-cycle.

The Crowbar team realized that it’s not just about automation setting values: physical ops requires orchestration to make sure the values are set in the correct sequence on the appropriate control surfaces, including DNS, DHCP, PXE, monitoring, et cetera.  Unlike architectures for aaS platforms, the heterogeneous nature of the physical control planes requires a different approach.

We’ve seen that making more and more complex kickstart scripts or golden images is not a sustainable solution.  There is simply too much hardware variation and dependency thrash for operators to collaborate with those tools.  Instead, we’ve found that decomposing the provisioning operations into functional layers with orchestration is much more repeatable across sites.
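
To make the “functional layers with orchestration” idea concrete, here is a minimal sketch of the pattern.  It is not Crowbar’s actual engine; the layer names, dependencies and runner are illustrative assumptions:

    # Illustrative only: a toy dependency-ordered runner for provisioning layers.
    # Layer names and dependencies are assumptions, not Crowbar's real workflow.
    from graphlib import TopologicalSorter  # Python 3.9+

    LAYERS = {
        "discover-hardware": [],
        "configure-raid-bios": ["discover-hardware"],
        "allocate-dhcp-dns": ["discover-hardware"],
        "pxe-install-os": ["configure-raid-bios", "allocate-dhcp-dns"],
        "configure-networking": ["pxe-install-os"],
        "register-monitoring": ["configure-networking"],
        "deploy-workload": ["configure-networking"],
    }

    def run_layer(name):
        # A real system would call out to DHCP/DNS/IPMI/config management here.
        print("applying layer: " + name)

    def provision(layers):
        # Topological order guarantees each layer's prerequisites ran first.
        for layer in TopologicalSorter(layers).static_order():
            run_layer(layer)

    if __name__ == "__main__":
        provision(LAYERS)

The point is the explicit ordering plus an idempotent “apply” step: re-running the whole sequence (the rinse-and-repeat) should converge on the desired state rather than require a fresh golden image.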

Accepting that physical ops (discovered infrastructure) is fundamentally different from cloud ops (created infrastructure) has been critical to architecting platforms that were resilient enough for the heterogeneous infrastructure of data centers.

If we want to start cleaning up physical ops, we need to stop looking at operating system provisioning in isolation and start looking at the full server bring up as just a part of a broader system operation that includes networking, management and operational integration.

Apply, Rinse, Repeat! How do I get that DevOps conditioner out of my hair?

I’ve been trying to explain the painful Tao of physical ops in a way that’s accessible to people without scale ops experience.   It comes down to a yin-yang of two elements: exploding complexity and iterative learning.

Science = Explosions!

Exploding complexity is pretty easy to grasp when we stack up the number of control elements inside a single server (OS RAID, two SSD cache levels, a 20-disk JBOD, and UEFI, oh dear), the networks that server is connected to, the multi-layer applications installed on the servers, and the change rate of those applications.  Multiply that by hundreds of servers and we’ve got a problem of unbounded scope even before I throw in SDN overlays.

But that’s not the real challenge!  The bigger problem is that it’s impossible to design for all those parameters in advance.

When my team started doing scale installs 5 years ago, we assumed we could ship a preconfigured system.  After a year of trying, we accepted the reality that it’s impossible to plan out a scale deployment; instead, we had to embrace a change tolerant approach that I’ve started calling “Apply, Rinse, Repeat.”

Using Crowbar to embrace the in-field nature of design, we discovered a recurring pattern of installs: we always performed at least three full cycle installs to get to ready state during every deployment.

  1. The first cycle was completely generic to provide a working baseline and validate the physical environment.
  2. The second cycle attempted to integrate to the operational environment and helped identify gaps and needed changes.
  3. The third cycle could usually interconnect with the environment and generally exposed new requirements in the external environment.
  4. The subsequent cycles represented additional tuning, patches or redesigns that could only be realized after load was applied to the system in situ.

Every time we tried to shortcut the Apply-Rinse-Repeat cycle, it actually made the total installation longer!  Ultimately, we accepted that the only defense was to focus on reducing A-R-R cycle time so that we could spend more time learning before the next cycle started.

Three critical ingredients for digital age relationships. [Collaborate Series 8/8]

Translation: Are you ready to apply these lessons?

This post is the final post in a collaborative eight-part series by Brad Szollose and me about how culture shapes technology.

End of Line

During this blog series, we’ve explored how important culture is in the workplace.  High-tech fields are especially sensitive because they disproportionately embrace millennial culture, which often causes conflicts.

Our world has changed, driven by technology, new thinking, and new methodologies, yet we may be using 20th century management techniques on 21st century customers and workers. There is an old business axiom that states, “If you can’t measure it, you can’t manage it.”  And yet how much of our process, interactions, successes, and failures never winds up on a spreadsheet but still impacts it?

Customers don’t leave bad companies; they leave companies that miss the mark when it comes to customer engagement. To better serve our customers we need to understand and adapt to the psychology of a new customer … one who has been trained to work as a Digital Native.

What would that look like? Tech people who interact with patience, collaboration, deep knowledge, and an openness to input, adapting to a customer’s needs in real-time. Wouldn’t that create a relationship that is second to none and unbreakable? Wouldn’t that be a leg up on the competition?

By understanding that new business culture has been influenced by the gaming experience, we have a deeper understanding of what is important to our customer base. And like a video game, if you cling to hierarchy, you lose. If you get caught up in linear time management, you lose. If you cling to bottlenecks and tradition you lose.

Three key takeaways: speed, adaptation, and collaboration

Those three words sum up today’s business environment. By now, you should not be surprised that those drivers are skills honed in video games.

We’ve explored the radically different ways that Digital Natives approach business opportunities. As the emerging leaders of the technological world, we must shift our operations to be more open, collaborative, iterative, and experience based.

Rob challenges you to get involved in his and other collaborative open source projects. Brad challenges you to try new leadership styles that engage with the Cloud Generation. Together, we challenge our entire industry to embrace a new paradigm that redefines how we interact and innovate. We may as well embrace it because it is the paradigm that we’ve already trained the rising generation of workers to intuitively understand.

What’s next?

Brad and Rob collaborated on this series with the idea of extending the concepts beyond a discussion of the “digital divide” and really looking at how culture impacts business leadership.  Lately, we’ve witnessed that the digital divide is not about your birthday alone.  We’ve seen that age alone does not drive all the cultural differences we’ve described here.  Our next posts will reflect the foundations for different ways that we’ve seen people respond to each other with a focus on answering “can digital age workers deliver?”

Like the conclusion?  Read the rest of the series! 1: Intro > 2: ToC > 3: Video Reality > 4: Authority > 5: On The Game Training > 6: Win by Failing > 7: Go Digital Native > 8: Three Takeaways


To improve flow, we must view the OpenStack community as a Software Factory

This post was sparked by a conversation at OpenStack Atlanta between OpenStack Foundation board members Todd Moore (IBM) and Rob Hirschfeld (Dell/Community).  We share a background in industrial and software process and felt that lean manufacturing lessons translate directly to the challenges OpenStack faces.

While OpenStack has done an amazing job of growing contributors, scale has caused our code flow processes to be bottlenecked at the review stage.  This blocks flow throughout the entire system and presents a significant risk to both stability and feature addition.  Flow failures can ultimately lead to vendor forking.

Fundamentally, Todd and I felt that OpenStack needs to address system flows to build an integrated product.  The post expands on the “hidden influencers” issue and adds an additional challenge because improving flow requires that community influencers better understand the need to optimize work across projects in a more systematic way.

Let’s start by visualizing the “OpenStack Factory”

Factory Floor from Alpha Industries Wikipedia page

Imagine all of OpenStack’s thousands of developers working together in a single giant start-up warehouse.  Each project has its own floor area with appropriate foosball tables, break areas and coffee bars.  It’s easy to visualize clusters of intent developers talking around tables or coding in dark corners while PTLs and TC members dash between groups coordinating work.

Expand the visualization so that we can actually see the code flowing between teams as little colored boxes.  Giving each project a unique color allows us to quickly see dependencies between teams.  Some features are piled up waiting for review inside teams while others sit on pallets between projects, waiting on cross-project features that have not completed.  At release time, we’d be able to see PTLs sorting through stacks of completed boxes to pick which ones were ready to ship.

Watching a factory floor from above is a humbling experience and a key feature of systems thinking enlightenment in both The Phoenix Project and The Goal.  It’s very easy to be caught up in a single project (local optimization) and miss the broader system implications of local choices.

There is a large body of work about Lean Process for Manufacturing

You’ve already visualized OpenStack code creation as a manufacturing floor: it’s a small step to accept that we can use the same proven processes for software and physical manufacturing.

As features move between teams (work centers), it becomes obvious that we’ve created a very highly interlocked sequence of component steps needed to deliver product; unfortunately, we have minimal coordination between the owners of the work centers.  If a feature needs a critical resource (think: a programmer) to progress, then we rely on that resource to allocate time to the work.  Since that person’s manager may not agree to the priority, we have a conflict between system flow and individual optimization.

That conflict destroys flow in the system.

The #1 lesson from lean manufacturing is that putting individual optimization over system optimization reduces throughput.  Since our product and people managers are often competitors, we need to work doubly hard to address system concerns.  Worse yet, our inventory of work in process and the interdependencies between projects are harder to discern.  Unlike the manufacturing floor, our developers and project leads cannot look down upon it and see the physical work as it progresses from station to station in one single holistic view.  The bottlenecks that throttle the OpenStack workflow are harder to see, but we can find them, as demonstrated later in this post.

Until we can engage the resource owners in balancing system flow, OpenStack’s throughput will decline as we add resources.  This same principle is at play in the famous aphorism: adding developers makes a late project later.

Is there a solution?

There are lessons from Lean Manufacturing that can be applied

  1. Make quality a priority (expand tests from function to integration)
  2. Ensure integration from station to station (prioritize working together over features)
  3. Make sure that owners of work are coordinating (expose hidden influencers)
  4. Find and manage from the bottleneck (classic Lean says find the bottleneck and improve that; see the sketch after this list)
  5. Create and monitor a system view
  6. Have everyone value finished product, not workstation output
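
As a back-of-the-envelope illustration of lesson #4 (system throughput is set by the constraint, not by total effort), here is a small sketch; the stage names and rates are made-up assumptions, not measured OpenStack data:

    # Toy model: every feature must pass every stage, so system throughput
    # is capped by the slowest stage.  Rates are illustrative assumptions.
    STAGES = {                      # features per week each stage can process
        "write code": 40,
        "review": 12,               # the constraint in this example
        "gate/CI": 25,
        "integrate and release": 30,
    }

    bottleneck = min(STAGES, key=STAGES.get)
    print("bottleneck:", bottleneck, "at", STAGES[bottleneck], "features/week")
    # Doubling "write code" capacity changes nothing; only raising the
    # bottleneck's rate (or reducing its load) improves system throughput.

Adding capacity anywhere except the constraint just grows the work-in-process pile, which is exactly the review backlog described above.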

Added postscript: I highly recommend reading Daniel Berrange’s email about this.

VMware Integrated OpenStack (VIO) is a smart move: it’s like using a Volvo to tow your ski boat

I’m impressed with VMware’s VIO (beta) play and believe it will have a meaningful positive impact in the OpenStack ecosystem.  In the short-term, it paradoxically both helps enterprises stay on VMware and accelerates adoption of OpenStack.  The long term benefit to VMware is less clear.

From VWVortex

Sure, you can use a Volvo to tow a boat

Why do I think it’s good tactics?  Let’s explore an analogy….

My kids think owning a boat will be super fun with images of ski parties and lazy days drifting at anchor with PG-13 umbrella drinks; however, I’ve got concerns about maintenance, cost and how much we’d really use it.  The problem is not the boat: it’s all of the stuff that goes along with ownership.  In addition to the boat, I’d need a trailer, a new car to pull the boat and driveway upgrades for parking.  Looking at that, the boat’s the easiest part of the story.

The smart move for me is to rent a boat and trailer for a few months to test my kids’ interest.  In that case, I’m going to be towing the boat using my Volvo instead of going “all in” and buying that new Ferd 15000 (you know you want it).  As a compromise, I’ll install a hitch on my trusty sedan and use it gently to tow the boat.  It’s not ideal and causes extra wear to the transmission, but it’s a very low-risk way to explore the boat-owning lifestyle.

Enterprise IT already has the Volvo (VMware vCenter) and likely sees calls for OpenStack as the illusion of cool ski parties without regard for the realities of owning the boat.  Pulling the boat for a while (using OpenStack on VMware) makes a lot of sense to these users.  If the boat gets used, then they will buy the truck and accessories (move off VMware).  Until then, they’re still learning about the open source boating lifestyle.

Putting open source concerns aside, this helps VMware lead the OpenStack play for enterprises, but it may ultimately backfire if they have not set up their long game to keep the customers.