OpenStack DefCore Process Flow: Community Feedback Cycles for Core [6 points + chart]

If you’ve been following my DefCore posts, then you already know that DefCore is an OpenStack Foundation Board managed process “that sets base requirements by defining 1) capabilities, 2) code and 3) must-pass tests for all OpenStack™ products. This definition uses community resources and involvement to drive interoperability by creating the minimum standards for products labeled OpenStack™.”

In this post, I’m going to be very specific about what we think “community resources and involvement” entails.

The draft process flow chart was provided to the Board at our OSCON meeting without additional review.  The chart boils down to a few key points:

  1. We are using the documents in the Gerrit review process to ensure that we work within the community processes.
  2. Going forward, we want to rely on the technical leadership to create, cluster and describe capabilities.  DefCore bootstrapped this process for Havana.  Further, capabilities are defined by tests in Tempest, so test coverage gaps (like Keystone v2) translate into Core gaps.
  3. We are investing in data driven and community involved feedback (via Refstack) to engage the largest possible base for core decisions.
  4. There is a “safety valve” for vendors to deal with test scenarios that are difficult to recreate in the field.
  5. The Board is responsible for approving the final artifacts based on the recommendations.  By having a transparent process, community input is expected in advance of that approval.
  6. The process is time sensitive.  There’s a need for the Board to produce Core definition in a timely way after each release and then feed that into the next one.  Ideally, the definitions will be approved at the Board meeting immediately following the release.
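The capability-to-test linkage in point 2 can be sketched in a few lines. This is purely an illustration: the capability names, test IDs, and data structure below are invented for this post and are not the actual DefCore artifact format.

```python
# Illustrative sketch: capabilities are defined by must-pass Tempest tests,
# so a capability with no test coverage cannot be part of Core.
# All names and test IDs here are hypothetical.

CAPABILITIES = {
    "compute-servers-create": ["tempest.api.compute.test_create_server"],
    "identity-v2-tokens": [],  # no Tempest coverage -> a Core gap
    "object-storage-put": ["tempest.api.object_storage.test_put_object"],
}

def core_gaps(capabilities):
    """Return capabilities that lack any must-pass test (i.e. Core gaps)."""
    return sorted(name for name, tests in capabilities.items() if not tests)

print(core_gaps(CAPABILITIES))  # → ['identity-v2-tokens']
```

In this sketch, closing a Core gap means contributing tests upstream, which is exactly the incentive the process intends.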

DefCore Process Draft

The process shows how the key components, designated sections and capabilities, start from the previous release’s version, with the DefCore committee managing the update process.  Community input is a vital part of the cycle.  This is especially true for identifying actual use of the capabilities through the Refstack data collection site.

  • Blue is for Board activities
  • Yellow is for user/vendor community activities
  • Green is for technical community activities
  • White is for process artifacts

This process is very much in draft form and any input or discussion is welcome!  I expect DefCore to take up formal review of the process in October.

Cloud Culture: Reality has become a video game [Collaborative Series 3/8]

This post is #3 in a collaborative eight-part series by Brad Szollose and me about how culture shapes technology.

DO VIDEO GAMES REALLY MATTER THAT MUCH TO DIGITAL NATIVES?

Yes. Video games are the formative computer user experience (a.k.a. UX) for nearly everyone born since 1977. Demographers call these people Gen X, Gen Y, or Millennials, but we use the more general term “Digital Natives” because they were born into a world surrounded by interactive digital technology starting from their toys and learning devices.

Malcolm Gladwell explains, in his book Outliers, that it takes 10,000 hours of practice to develop a core skill. In this case, video games have trained all generations since 1977 in a whole new way of thinking. It’s not worth debating if this is a common and ubiquitous experience; instead, we’re going to discuss the impact of this cultural tsunami.

Before we dive into impacts, it is critical for you to suspend your attitude about video games as a frivolous diversion. Brad explores this topic in Liquid Leadership, and Jane McGonigal, in Reality is Broken, spends significant time exploring the incredibly valuable real world skills that Digital Natives hone playing games. When they are “gaming,” they are doing things that adults would classify as serious work:

  • Designing buildings and creating machines that work within their environment
  • Hosting communities and enforcing discipline within the group
  • Recruiting talent to collaborate on shared projects
  • Writing programs that improve their productivity
  • Solving challenging mental and physical problems under demanding time pressures
  • Learning to persevere through multiple trials and iterative learning
  • Memorizing complex sequences, facts, resource constraints, and situational rules.

Why focus on video gamers?

Because this series is about doing business with Digital Natives and video games are a core developmental experience.

The impact of Cloud Culture on technology has profound implications and is fertile ground for future collaboration between Rob and Brad.  However, we both felt that the challenge of selling to gamers crystallized the culture clash in a very practical and financially meaningful sense.  Culture can be a “soft” topic, but we’re putting a hard edge on it by bringing it home to business impacts.

Digital Natives play on a global scale and interact with each other in ways that Digital Immigrants cannot imagine. Brad tells it best with this story about his nephew:

Years ago, in a hurry to leave the house, we called out to our video game playing nephew to join us for dinner.

“Sebastian, we’re ready.” I was trying to be as gentle as possible without sounding Draconian. Those were the parenting methods of my father’s generation. Structure. Discipline. Hierarchy. Fear. Instead, I wanted to be the Cool Uncle.

“I can’t,” he exclaimed as wooden drum sticks pounded out their high-pitched rhythm on the all too familiar color-coded plastic sensors of a Rock Band drum kit.

“What do you mean you can’t? Just stop the song, save your data, and let’s go.”

“You don’t understand. I’m in the middle of a song.” Tom Sawyer by RUSH to be exact. He was tackling Neil Peart. Not an easy task. I was impressed.

“What do you mean I don’t understand? Shut it off.” By now my impatience was noticeable. Wow, I lasted 10 seconds longer than my father would have in the same scenario. Progress, I guess.

And then my 17-year-old nephew hit me with some cold hard facts without even knowing it… “You don’t understand… the guitar player is some guy in France, and the bass player is this girl in Japan.”

In my mind the aneurysm that was forming just blew… “What did he just say?”

And there it was, sitting in my living room—a citizen of the digital age. He was connected to the world as if this was normal. Trained in virtualization, connected and involved in a world I was not even aware of!

My wife and I just looked at each other. This was the beginning of the work I do today. To get businesses to realize the world of the Digital Worker is a completely different world. This is a generation prepared to work in The Cloud Culture of the future.

A Quote from Liquid Leadership, Page 94, How Technology Influences Behavior…

In an article in the Atlantic magazine, writer Nicholas Carr (author of The Shallows: What the Internet Is Doing to Our Brains) cites sociologist Daniel Bell as claiming the following: “Whenever we begin to use ‘intellectual technologies’ such as computers (or video games)—tools that extend our mental rather than our physical capacities—we inevitably begin to take on the qualities of those technologies.”

In other words, the technology we use changes our behavior!

There’s another important consideration about gamers and Digital Natives. As we stated in post 1, our focus for this series is not the average gamer; we are seeking the next generation of IT decision makers. Those people will be the true digital enthusiasts who have devoted even more energy to mastering the culture of gaming and understand intuitively how to win in the cloud.

“All your base belongs to us.”

Translation: If you’re not a gamer, can you work with Digital Natives?

Our goal for this series is to provide you with actionable insights that do not require rewriting how you work. We do not expect you to get a World of Warcraft subscription and try to catch up. If you already are one then we’ll help you cope with your Digital Immigrant coworkers.

In the next posts, we will explain four key culture differences between Digital Immigrants and Digital Natives. For each, we explore the basis for this belief and discuss how to facilitate Digital Natives’ decision-making processes.

Keep Reading! Next post is 4: Authority  (previous is ToC)

Cloud Culture Series TL;DR? Generation Cloud Cheat sheet [Collaborative Series 2/8]

SUBTITLE: Your series is TOO LONG; I DID NOT READ IT!

This post is #2 in a collaborative eight-part series by Brad Szollose and me about how culture shapes technology.

Your attention is valuable to us! In this section, you will find the contents of this entire blog series distilled down into a flow chart and one-page table.  Our plan is to release one post each Wednesday at 1 pm ET.

Graphical table of contents

The following flow chart is provided for readers who are looking to maximize the efficiency of their reading experience.

If you are unfamiliar with flow charts, simply enter at the top left oval. Diamonds are questions: choose between the answers on the departing arrows. The curved bottom boxes are posts in the series.

Here’s the complete list: 1: Intro > 2: ToC > 3: Video Reality > 4: Authority > 5: On The Game Training > 6: Win by Failing > 7: Go Digital Native > 8: Three Takeaways

Culture conflict table (the Red versus Blue game map)

Our fundamental challenge is that the cultures of Digital Immigrants and Natives are diametrically opposed.  The Culture Conflict Table, below, maps out the key concepts that we explore in depth during this blog series.

Digital Immigrants (N00Bs) vs. Digital Natives (L33Ts)

Foundation: Each culture has different expectations in partners

  • Digital Immigrants: Obey Rules. They want us to prove we are worthy to achieve “trusted advisor” status. They are seeking partners who fit within their existing business practices.
  • Digital Natives: Test Boundaries. They want us to prove that we are innovative and flexible. They are seeking partners who bring new ideas that improve their business.

  1. Organizational Hierarchy, see No Spacesuits (Post 4)

  • Digital Immigrants: Permission Driven. Organizational hierarchy is efficient. Feel important talking high in the org. Higher ranks can make commitments. Bosses make decisions (slowly).
  • Digital Natives: Peer-to-Peer Driven. Organizational hierarchy is limiting. Feel productive talking lower in the org. Lower ranks are more collaborative. Teams make decisions (quickly).

  2. Communication Patterns, see MMOG as Job Training (Post 5)

  • Digital Immigrants: Formalized & Structured. Waits for permission. Bounded & linear. Requirements focused. Questions are interruptions.
  • Digital Natives: Casual & Interrupting. Does NOT KNOW they need permission. Open ended. Discovery & listening. Questions show engagement.

  3. Risks and Rewards, see Level Up (Post 6)

  • Digital Immigrants: Obeys Rules. Avoid risk: mistakes get you fired! Wait and see. Fear of “looking foolish.”
  • Digital Natives: Breaks Rules. Embrace risk: mistakes speed learning. Iterate to succeed. Risks get you “in the game.”

  4. Building your Expertise, see Becoming L33T (Post 7)

  • Digital Immigrants: Knowledge is Concentrated. Expertise is hard to get (diploma). Keeps secrets (keys to success). Quantitative: you can measure it.
  • Digital Natives: Knowledge is Distributed and Shared. Expertise is easy to get (Google). Likes sharing to earn respect. Qualitative: trusts intuition.
Hopefully, this condensed version got you thinking.  In the next post, we start to break this information down.

Keep Reading! Next post is Video Reality

Cloud Culture: New IT leaders are transforming the way we create and purchase technology. [Collaborative Series 1/8]

Subtitle: Why L33Ts don’t buy from N00Bs

Brad Szollose and I want to engage you in a discussion about how culture shapes technology [cross post link].  We connected over Brad’s best-selling book, Liquid Leadership, and we’ve been geeking about cultural impacts in tech since 2011.


In these 8 posts, we explore what drives the next generation of IT decision makers starting from the framework of Millennials and Boomers.  Recently, we’ve seen that these “age based generations” are artificially limiting; however, they provide a workable context for this series that we will revisit in the future.

Here’s the list of posts: 1: Intro > 2: ToC > 3: Video Reality > 4: Authority > 5: On The Game Training > 6: Win by Failing > 7: Go Digital Native > 8: Three Takeaways

Our target is leaders who were raised with computers as Digital Natives. They approach business decisions from a new perspective that has been honed by thousands of hours of interactive games, collaboration with global communities, and intuitive mastery of all things digital.

The members of this “Generation Cloud” are not just more comfortable with technology; they use it differently and interact with each other in highly connected communities. They function easily with minimal supervision, self-organize into diverse teams, dive into new situations, take risks easily, and adapt strategies fluidly. Using cloud technologies and computer games, they have become very effective winners.

In this series, we examine three key aspects of next-generation leaders and offer five points to get to the top of your game. Our goal is to find, nurture, and collaborate with them because they are rewriting the script for success.

We have seen that there is a technology-driven culture change that is reshaping how business is being practiced.  Let’s dig in!

What is Liquid Leadership?

“a fluid style of leadership that continuously sustains the flow of ideas in an organization in order to create opportunities in an ever-shifting marketplace.”

Forever Learning?

In his groundbreaking 1970 book, Future Shock, Alvin Toffler pointed out that in the not too distant future, technology would inundate the human race with all its demands, overwhelming those not prepared for it. He compared this overwhelming feeling to culture shock.

Welcome to the future!

Part of the journey in discussing this topic is to embrace the digital lexicon. To help with translations we are offering numerous subtitles and sidebars. For example, the subtitle “L33Ts don’t buy from N00Bs” translates to “Digital elites don’t buy from technical newcomers.”

Loosen your tie and relax; we’re going to have some fun together.  We’ve got 7 more posts in this cloud culture series: 2: ToC > 3: Video Reality > 4: Authority > 5: On The Game Training > 6: Win by Failing > 7: Go Digital Native > 8: Three Takeaways

We’ve also included more background about the series and authors…

Continue reading

Patchwork Onion delivers stability & innovation: the graphics that explains how we determine OpenStack Core

This post was coauthored by the DefCore chairs, Rob Hirschfeld & Joshua McKenty.

The OpenStack board, through the DefCore committee, has been working to define “core” for commercial users using a combination of minimum required capabilities (APIs) and code (Designated Sections).  These minimums are decided on a per project basis, so it can be difficult to visualize their overall effect on the Integrated Release.

We’ve created the patchwork onion graphic to help illustrate how core relates to the integrated release.  While this graphic is pretty complex, it was important to find a visual way to show how DefCore identifies distinct subsets of APIs and code from each project.  This graphic also tries to show that some projects have no core APIs and/or code.

For OpenStack to grow, we need to have BOTH stability and innovation.  We need to give clear guidance to the community what is stable foundation and what is exciting sandbox.  Without that guidance, OpenStack is perceived as risky and unstable by users and vendors. The purpose of defining “Core” is to be specific in addressing that need so we can move towards interoperability.

Interoperability enables an ecosystem with multiple commercial vendors which is one of the primary goals of the OpenStack Foundation.

Originally, we thought OpenStack would have “core” and “non-core” projects, and we baked that expectation into the bylaws.  As we’ve progressed, it’s clear that we need a less binary definition.  Projects themselves have a maturity cycle (ecosystem -> incubated -> integrated) and within each project some APIs are robust and stable while others are innovative and fluctuating.

Encouraging this mix of stabilization and innovation has been an important factor in our discussions about DefCore.  Growing the user base requires encouraging stability and growing the developer base requires enabling innovation within the same projects.

The consequence is that we are required to clearly define subsets of capabilities (APIs) and implementation (code) that are required within each project.  Designating 100% of the API or code as Core stifles innovation because stability dictates limiting changes while designating 0% of the code (being API only) lessens the need to upstream.  Core reflects the stability and foundational nature of the code; unfortunately, many people incorrectly equate “being core” with the importance of the code, and politics ensues.

To combat the politics, DefCore has taken a transparent, principles-based approach to selecting core.  You can read about it in Rob’s upcoming “Ugly Babies” post (check back on 8/14).

7 Open Source lessons from your English Composition class

We often act as if coding, and especially open source coding, is a unique activity, and that’s hubris.   Most human activities follow common social patterns that should inform how we organize open source projects.  For example, research papers are very social and community-connected activities, and written compositions, especially when published, are highly interconnected.  Even the most basic writing builds off other people’s work with due credit and tries to create something worth being used by later authors.

Here are seven principles to good writing that translate directly to good open source development:

  1. Research before writing – Take some time to understand the background and goals of the project; otherwise you re-invent or draw bad conclusions.
  2. Give credit where due – Your work has more credibility when you acknowledge and cross-reference the work you are building on. It also shows readers that you are not re-inventing.
  3. Follow the top authors – Many topics have widely known authors who act as “super nodes” in the relationship graph. Recognizing these people will help guide your work, lead to better research, and build community.
  4. Find proofreaders – All writers need someone with perspective to review their work before it’s finished. Since we all need reviewers, we all also need to do reviews.
  5. Rework to get clarity – Simplicity and clarity take extra effort, but they pay huge dividends for your audience.
  6. Don’t surprise your reader – Readers expect patterns and are distracted when you don’t follow them.
  7. Socialize your ideas – The purpose of writing/code is to make ideas durable. If it’s worth writing, then it’s worth sharing.  Your artifact does not announce itself; you need to invest time in explaining it to people and making it accessible.

Thanks to Sean Roberts (a Hidden Influences collaborator) for his contributions to this post.  At OSCON, Sean Roberts said “companies should count open source as research [and development investment]” and I thought he’d said “…as research [papers].”  The misunderstanding was quickly resolved and we were happy to discover that both interpretations were useful.

Back of the Napkin to Presentation in 30 seconds

I wanted to share a handy new process for creating presentations that I’ve been using lately that involves using cocktail napkins, smart phones and Google presentations.

Here’s the Process:

  1. Sketch an idea out with my colleagues on a napkin, whiteboard, or notebook during our discussion.
  2. Snap a picture and upload it to my Google Drive from my phone.
  3. Import the picture into my presentation using my phone.
  4. Tell my team that I’ve updated the presentation using Slack on my phone.

Clearly, this is not a finished presentation; however, it does serve to quickly capture critical content from a discussion without disrupting the flow of ideas.  It also alerts everyone that we’re adding content and helps frame what that content will be as we polish it.  When we immediately position the napkin into a deck, it creates clear action items and reference points for the team.

While blindingly simple, having a quick feedback loop and visual placeholders translates into improved team communication.

Share the love & vote for OpenStack Paris Summit Sessions (closes Wed 8/6)

This is a friendly PSA that OpenStack Paris Summit session community voting ends on Wednesday 8/6.  There are HUNDREDS (I heard >1k) submissions so please set aside some time to review a handful.

MY PLEA TO YOU > There is a tendency for companies to “vote up” sessions from their own employees.  I understand the need for the practice BUT encourage you to make time to review other sessions too.  Affiliation voting is fine; robot voting is not.

If you are interested in topics that I discuss on this blog, here’s a list of sessions I’m involved in:

 

 

DefCore Advances at the Core > My take on the OSCON’14 OpenStack Board Meeting

Last week’s day-long Board Meeting (Jonathan’s summary) focused on three major topics: DefCore, Contribution Licenses (CLA/DCO) and the “Win the Enterprise” initiative. In some ways, these three topics are three views into OpenStack’s top issue: commercial vs. individual interests.

But first, let’s talk about DefCore!

DefCore took a major step with the passing of the advisory Havana Capabilities (the green items are required). That means that vendors in the community now have Board-approved minimum requirements.  These are not enforced for Havana so that the community has time to review and evaluate.

For all that progress, we only have half of the Havana core definition complete. Designated Sections, the other component of Core, will be defined by the DefCore committee for Board approval in September. Originally, we expected the TC to own this part of the process; however, they felt it was related to commercial interests (not technical ones) and asked the Board to manage it.

The coming meetings will resolve the “is Swift code required” question and that topic will require a dedicated post.  In many ways, this question has been the challenge for core definition from the start.  If you want to join the discussion, please subscribe to the DefCore list.

The majority of the board meeting was spent discussing other weighty topics that are worth a brief review.

Contribution Licenses revolve around the developer vs. broader community challenge. This issue is surprisingly high stakes for many in the community. I see two primary issues:

  1. Tension between corporate (CLA) vs. individual (DCO) control and approval
  2. Concern over barriers to contribution (sadly, there are many, but this one is in the board’s control)

Win the Enterprise was born from product management frustration and a fragmented user base. My read on this topic is that we’re pushing on the donkey. I’m hearing serious rumbling about OpenStack operability, upgrade and scale.  This group is doing a surprisingly good job of documenting these requirements so that we will have an official “we need this” statement. It’s not clear how we are going to turn that statement into either carrots or sticks for the donkey.

Overall, there was a very strong existential theme for OpenStack at this meeting: are we companies collaborating or individuals contributing?  Clearly, OpenStack is both, but the proportions remain unclear.

Answering this question is ultimately at the heart of all three primary topics. I expect DefCore will be on the front line of this discussion over the next few weeks (meeting 1, 2, and 3). Now is the time to get involved if you want to play along.

OpenStack DefCore Review [interview by Jason Baker]

I was interviewed about DefCore by Jason Baker of Red Hat as part of my participation in OSCON Open Cloud Day (speaking Monday 11:30am).  This is just one of fifteen in a series of speaker interviews covering everything from Docker to Girls in Tech.

This interview serves as a good review of DefCore so I’m reposting it here:

Without giving away too much, what are you discussing at OSCON? What drove the need for DefCore?

I’m going to walk through the impact of the OpenStack DefCore process in real terms for users and operators. I’ll talk about how the process works and how we hope it will make OpenStack users’ lives better. Our goal is to take steps towards interoperability between clouds.

DefCore grew out of a need to answer hard and high stakes questions around OpenStack. Questions like “is Swift required?” and “which parts of OpenStack do I have to ship?” have very serious implications for the OpenStack ecosystem.

It was impossible to reach consensus about these questions in regular board meetings so DefCore stepped back to base principles. We’ve been building up a process that helps us make decisions in a transparent way. That’s very important in an open source community because contributors and users want ground rules for engagement.

It seems like there has been a lot of discussion over the OpenStack listservs over what DefCore is and what it isn’t. What’s your definition?

First, DefCore applies only to commercial uses of the OpenStack name. There are different rules for the integrated code base and community activity. That’s the place of most confusion.

Basically, DefCore establishes the required minimum feature set for OpenStack products.

The longer version includes that it’s a board managed process that’s designed to be very transparent and objective. The long-term objective is to ensure that OpenStack clouds are interoperable in a measurable way and that we also encourage our vendor ecosystem to keep participating in upstream development and creation of tests.

A final important component of DefCore is that we are defending the OpenStack brand. While we want a vibrant ecosystem of vendors, we must first have a community that knows what OpenStack is and trusts that companies using our brand comply with a meaningful baseline.

Are there other open source projects out there using “designated sections” of code to define their product, or is this concept unique to OpenStack? What lessons do you think can be learned from other projects’ control (or lack thereof) of what must be included to retain the use of the project’s name?

I’m not aware of other projects using those exact words. We picked up ‘designated sections’ because the community felt that ‘plug-ins’ and ‘modules’ were too limited and generic. I think the term can be confusing, but it was the best we found.

If you consider designated sections to be plug-ins or modules, then there are other projects with similar concepts. Many successful open source projects (Eclipse, Linux, Samba) are functionally frameworks that have very robust extensibility. These projects encourage people to use their code base creatively and then give back some (not all) of their lessons learned in the form of code contributions. If the scope of returning value to upstream is too broad, then sharing back can become onerous and forking ensues.

All projects must work to find the right balance between collaborative areas (which have community overhead to join) and independent modules (which allow small teams to move quickly). From that perspective, I think the concept is very aligned with good engineering design principles.
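The framework-plus-modules balance described above can be made concrete with a small sketch. This is a rough analogy (an invented example, not OpenStack code): a stable “core” API dispatches to independently developed plug-ins, so small teams can move quickly without touching the collaborative center.

```python
# Rough analogy (invented example, not OpenStack code): a framework keeps a
# small stable core and lets independent modules register as plug-ins.

PLUGINS = {}

def register(name):
    """Decorator: add an implementation to the framework's registry."""
    def wrapper(func):
        PLUGINS[name] = func
        return func
    return wrapper

@register("local")
def store_local(data):
    return f"stored {data!r} locally"

@register("swift")  # an alternative implementation can live out of tree
def store_swift(data):
    return f"stored {data!r} in object storage"

def store(backend, data):
    # The stable "core" API: callers never change, implementations can.
    return PLUGINS[backend](data)

print(store("local", "napkin.png"))  # → stored 'napkin.png' locally
```

The registry is the collaborative area with shared rules; each registered function is an independent module that can innovate on its own schedule.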

The key goal is to help the technical and vendor communities know where it’s safe to offer alternatives and where they are expected to work in the upstream. In my opinion, designated sections foster innovation because they allow people to try new ideas and to target specialized use cases without having to fight about which parts get upstreamed.

What is it like to serve as a community elected OpenStack board member? Are there interests you hope to serve that are different from the corporate board spots, or is that distinction even noticeable in practice?

It’s been like trying to row a dragon boat down class III rapids. There are a lot of people with oars in the water but we’re neither all rowing together nor able to fight the current. I do think the community members represent different interests than the sponsored seats but I also think the TC/board seats are different too. Each board member brings a distinct perspective based on their experience and interests. While those perspectives are shaped by their employment, I’m very happy to say that I do not see their corporate affiliation as a factor in their actions or decisions. I can think of specific cases where I’ve seen the opposite: board members have acted outside of their affiliation.

When you look back at how OpenStack has grown and developed over the past four years, what has been your biggest surprise?

Honestly, I’m surprised about how many wheels we’ve had to re-invent. I don’t know if it’s cultural or truly a need created by the size and scope of the project, but it seems like we’ve had to (re)create things that we could have leveraged.

What are you most excited about for the “K” release of OpenStack?

The addition of platform services like Database as a Service, DNS as a Service, and Firewall as a Service. I think these IaaS “adjacent” services are essential to completing the cloud infrastructure story.

Any final thoughts?

In DefCore, we’ve moved slowly and deliberately to ensure people have a chance to participate. We’ve also pushed some problems into the future so that we could resolve the central issues first. We need the community to speak up (either for or against) in order for us to accelerate: silence means we must pause for more input.