SUBTITLE: Your series is TOO LONG, I DID NOT READ IT!
This post is #2 in a collaborative eight-part series by Brad Szollose and me about how culture shapes technology.
Your attention is valuable to us! In this section, you will find the contents of this entire blog series distilled down into a flow chart and a one-page table. Our plan is to release one post each Wednesday at 1 pm ET.
Graphical table of contents
The following flow chart is provided for readers who are looking to maximize the efficiency of their reading experience.
If you are unfamiliar with flow charts, simply enter at the top-left oval. Diamonds are questions: choose an answer and follow the matching departing arrow. The curved-bottom boxes are posts in the series.
Here’s the complete list: 1: Intro > 2: ToC > 3: Video Reality > 4: Authority > 5: On The Game Training > 6: Win by Failing > 7: Go Digital Native > 8: Three Takeaways
Culture conflict table (the Red versus Blue game map)
Our fundamental challenge is that the cultures of Digital Immigrants and Natives are diametrically opposed. The Culture Conflict Table, below, maps out the key concepts that we explore in depth during this blog series.
| | Digital Immigrants (N00Bs) | Digital Natives (L33Ts) |
| --- | --- | --- |
| Foundation: each culture has different expectations of partners | Obey Rules. They want us to prove we are worthy to achieve "trusted advisor" status. They are seeking partners who fit within their existing business practices. | Test Boundaries. They want us to prove that we are innovative and flexible. They are seeking partners who bring new ideas that improve their business. |
| Organizational Hierarchy (see No Spacesuits, Post 4) | Permission Driven. Organizational hierarchy is efficient. Feel important talking high in the org. Higher ranks can make commitments. Bosses make decisions (slowly). | Peer-to-Peer Driven. Organizational hierarchy is limiting. Feel productive talking lower in the org. Lower ranks are more collaborative. Teams make decisions (quickly). |
| Communication Patterns (see MMOG as Job Training, Post 5) | Formalized & Structured. Waits for permission. Bounded & linear. Questions are interruptions. | Casual & Interrupting. Does NOT KNOW they need permission. Discovered & listening. Questions show engagement. |
| Risks and Rewards (see Level Up, Post 6) | Obeys Rules. Avoid risk: mistakes get you fired! Wait and see. Fear of "looking foolish". | Breaks Rules. Embrace risk: mistakes speed learning. Iterate to succeed. Risks get you "in the game". |
| Building your Expertise (see Becoming L33T, Post 7) | Knowledge is Concentrated. Expertise is hard to get (diploma). Keeps secrets (keys to success). Quantitative: you can measure it. | Knowledge is Distributed and Shared. Expertise is easy to get (Google). Likes sharing to earn respect. |
Hopefully, this condensed version got you thinking. In the next post, we start to break this information down.
Keep reading! The next post is Video Reality.
I’ve posted about the early DefCore core capabilities selection process before, and we’ve put the criteria into practice and discussed them with the community. The feedback was simple: tl;dr. You’ve got the right direction, but make it simpler!
So we pulled the 12 criteria into four primary categories:
- Usage: the capability is widely used (Refstack will collect data)
- Direction: the capability advances OpenStack technically
- Community: the capability builds the OpenStack community experience
- System: the capability integrates with other parts of OpenStack
These categories summarize critical values that we want in OpenStack, so it makes sense to use them as the primary factors when we select core capabilities. While we strive to make the DefCore process objective and quantitative, we must recognize that these choices drive community behavior.
With this perspective, let’s review the selection criteria. To make it easier to cross-reference, we’ve given each criterion a shortened name:
Shows Proven Usage
- “Widely Deployed” Candidates are widely deployed capabilities. We favor capabilities that are supported by multiple public cloud providers and private cloud products.
- “Used by Tools” Candidates are widely used capabilities: they should be included if supported by common tools (RightScale, Scalr, CloudForms, …)
- “Used by Clients” Candidates are widely used capabilities: they should be included if part of common client libraries (Fog, Apache jclouds, etc.)
Aligns with Technical Direction
- “Future Direction” Should reflect future technical direction (from the project technical teams and the TC) and help manage deprecated capabilities.
- “Stable” The test is required to be stable for more than two releases, because we don’t want core capabilities that lack dependable APIs.
- “Complete” Where the code being tested has a designated area of alternate implementation (extension framework) as per the Core Principles, there should be parity in capability tested across extension implementations. This also implies that the capability test is not configuration specific or locked to non-open technology.
Plays Well with Others
- “Discoverable” The capability being tested is service discoverable (it can be found in Keystone and via service introspection)
- “Doc’d” Should be well documented, particularly the expected behavior. This can be a very subjective measure and we expect to refine this definition over time.
- “Core in Last Release” A test that is a must-pass test should stay a must-pass test. This makes core capabilities sticky from release to release; leaving core is disruptive to the ecosystem.
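To make the “Discoverable” criterion concrete, here is a minimal sketch that checks whether a capability’s service type is advertised in a Keystone-style service catalog (shaped like a v3 token response). The sample catalog and the `is_discoverable` helper are hypothetical illustrations, not part of the DefCore or Refstack tooling.

```python
# Hypothetical sketch: a capability passes "Discoverable" only if its
# service type is advertised in the cloud's service catalog.
# SAMPLE_CATALOG mimics the shape of a Keystone v3 token's catalog.

SAMPLE_CATALOG = [
    {"type": "compute", "name": "nova",
     "endpoints": [{"interface": "public",
                    "url": "https://cloud.example/compute/v2.1"}]},
    {"type": "identity", "name": "keystone",
     "endpoints": [{"interface": "public",
                    "url": "https://cloud.example/identity/v3"}]},
]

def is_discoverable(catalog, service_type):
    """Return True if service_type is advertised with a public endpoint."""
    for service in catalog:
        if service.get("type") != service_type:
            continue
        if any(ep.get("interface") == "public"
               for ep in service.get("endpoints", [])):
            return True
    return False

print(is_discoverable(SAMPLE_CATALOG, "compute"))        # True
print(is_discoverable(SAMPLE_CATALOG, "orchestration"))  # False
```

A capability whose service never appears in the catalog would fail this check even if its API happens to respond at a well-known URL, which is exactly the interoperability gap the criterion guards against.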
Takes a System View
- “Foundation” The capability is required by other must-pass tests and/or depended on by many other capabilities.
- “Atomic” The capability is unique and cannot be built out of other must-pass capabilities.
- “Proximity” (sometimes called a Test Cluster) selects for capabilities that are related to core capabilities. This helps ensure that related capabilities are managed together.
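Read together, the twelve criteria can be treated as a simple scoring rubric: each criterion a capability meets adds to its score within one of the four categories. The sketch below is a thought experiment with equal weights; the actual DefCore evaluation was a community process, not this code.

```python
# Hypothetical sketch: the twelve selection criteria as an equal-weight
# scoring rubric, grouped under the four primary categories from the post.

CRITERIA = {
    "Usage":     ["Widely Deployed", "Used by Tools", "Used by Clients"],
    "Direction": ["Future Direction", "Stable", "Complete"],
    "Community": ["Discoverable", "Doc'd", "Core in Last Release"],
    "System":    ["Foundation", "Atomic", "Proximity"],
}

def score(capability_meets):
    """Count satisfied criteria per category and overall for one capability."""
    per_category = {
        category: sum(1 for name in names if name in capability_meets)
        for category, names in CRITERIA.items()
    }
    return per_category, sum(per_category.values())

# Example: a capability that satisfies seven of the twelve criteria.
per_cat, total = score({"Widely Deployed", "Used by Clients", "Stable",
                        "Discoverable", "Doc'd", "Foundation", "Atomic"})
print(per_cat, total)
```

A rubric like this makes the trade-offs visible: a capability strong on Usage but weak on System, for instance, scores the same as the reverse, so category-level scores matter as much as the total.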
Note: The 13th “non-admin” criterion has been removed because admin APIs cannot be used for interoperability and so cannot be considered core.