Jazz vs. Symphony: Why micromanaging digital work FAILS. [post 3 of 8]

In this third post of an 8 post series, Brad Szollose and Rob Hirschfeld invite you to share in our discussion about failures, fights and frightening transformations going on around us as digital work changes workplace deliverables, planning and culture.

Now that we’ve introduced music as a functional analogy for a stable 21st century leadership model and defined digital work, we’re ready to expose how work actually gets done in the information age.

First, has work really changed?  Yes.  Traditionally, there was a distinct difference between organized production and service-based or creative work such as advertising, accounting or medicine, where you solve a problem by looking for clues and coming up with creative solutions.

[Image: Jazz Hands by RevolvingRevolver on DeviantArt – http://revolvingrevolver.deviantart.com/]

Digital work, on the other hand – and more importantly, digital workers – live in a strange limbo: doing creative work inside business structures and management models that were developed during the industrial age.

In today’s multi-generational workforce, what appears to be a generational divide has transformed into a non-age-specific cultural rift. As Brad and Rob compared notes, we came to believe that what is really happening is a learned difference in the approach to work and work culture.

There is a learned difference in the approach to work and work culture that’s more obvious in, but not limited to, digital natives.

In most companies, the executives are traditionalists (Baby Boomers or hand-selected by Boomers).  While previous generations have been trained to follow hierarchy, the new culture values performance, flexibility and teamwork with a less top-down, control-oriented outlook.

It’s as if a symphonic conductor, used to picking the chair order and directing the tempo, were handing out sheet music to a Jazz ensemble.  So how is the traditional manager going to deliver a stellar performance when his performers are Jazz-trained?

In a traditional concert orchestra, each musician has to go to college, train hard, earn a shot to get into the orchestra, and over time, work very hard to earn the First Chair position (think earning the corner office).  Once in that position, they stay there until death or retirement.  Anyone who deviates is fired. Improv is only allowed during certain songs, by a select few.  It’s the workplace equivalent of climbing the corporate ladder.

Most digital workers think they belong to a Jazz ensemble.  

It’s a mistake to believe less organized means less skilled.  Workers in the Jazz model are also talented and trained professionals.  If you look at the careers of Thelonious Monk, Duke Ellington and Dizzy Gillespie, they all had formal training, and many started as children.  The same is true for digital workers: many started building job skills as children and then honed their teamwork playing video games.

But can a loosely organized group consistently deliver results? Yes. In fact, they deliver better results!

When a Jazz improv group plays, they have a rough composition to start with. Each member is given time for a solo.  To the uninitiated there appears to be no leader in this milieu of talent, but the leader is there.  They just refuse to control the performance; instead, they trust that each member will bring their A game and perform at 100% of their capacity.

In business, this is scary. Don’t we need someone to check each person’s work? People are just messing around, right? I mean, is this actual work? Who is in charge?

In business environments that operate more like Jazz, studies have proven a 32% increase in productivity over traditional command-and-control environments driven by hierarchy.

Age, experience and position are NOT the criteria for the Digital Worker. Output is.  And output is different for each product. Management’s role in this model is to get out of the way and let the musicians create. Instead of conforming to a single style and method, the people producing in this model each bring something unique and experience a high degree of ownership.

This is a powerful type of workplace diversity: by allowing different ways of problem solving to co-exist, we also make the workplace more inclusive and collaborative.

Sound too good to be true?  In our next post we’ll discuss trust as the critical ingredient for Jazz performance.  (Teaser)

What is digital “work”? Can we sell a cloud of smoke? Yes, and the impacts are very tangible. [post 2 of 8]

In this second post of an 8 post series, Brad Szollose and Rob Hirschfeld invite you to share in our discussion about failures, fights and frightening transformations going on around us as digital work changes workplace deliverables, planning and culture.

So, what is a Digital Worker?  Before we talk about managing them, we need to agree on the very concept of digital work.

A Digital Worker is someone who creates value primarily by creating virtual goods and services.  This creates a challenge for traditional ideas of work because, in the physical world, no material goods are created.

Back in the day, this type of work was equivalent to selling daydreams – it had no material value. It was intangible.

To today’s tech-savvy workforce of digital natives, digital work is tangible – even though their output exists simply as numbers in the “cloud.”

Tangible work is directly consumable. If I create something, I can see it, hold it in my hands, eat it and enjoy it in the three-dimensional meatverse we call “reality.” So, if I baked a pie and Brad ate it, then I produced consumable work. The same rule applies to digital work like this blog post that Brad and I produced and you are reading. It’s nothing more than photons on a screen, but the value is immense and you can see the tangible results of our work.

The entire industrial age up until now was driven by a basic premise that effort equals results, so eloquently stated by management consultant Peter Drucker: “If you can’t measure it, you can’t manage it.”

But much of what we do in the nascent stage of the Digital Age, the beginning of the 21st Century, can NOT be measured using traditional value placements.

Case in point: what happens if we only worked when our spouses told us it was time to stop playing Candy Crush and get back to writing? We’re still producing digital work, but now our spouses have taken on the role of managers. While they played an essential part in the content being created, their input is intangible and something that cannot be measured. Our spouses become the influencers in this model.

We need to revisit “If you can’t measure it, you can’t manage it.”  It is BS – it no longer applies to digital work.

This distinction is important because we want to distinguish between digital workers and managers. They do very similar actions (type on keyboards, send email, go to meetings), but one creates digital goods while the other coordinates the creation of digital goods.

In the world of physical goods, the people coordinating HOW the work gets done have a significant amount of power. They provide the raw materials, tools, capital, supply chains and other requirements to get the goods to market – in other words, logistics. The actions of any single worker cannot scale in a meaningful way without management being involved; consequently, management has a tremendous amount of power (and corresponding respect) in the worker-manager relationship paradigm. This is not just true for industrial work; the same applies to farming, singing, writing and other industries, and it defines most work in the pre-digital world of the Boomers, Traditionalists and earlier generations.

But let’s extend our simple example to a team of animators creating special effects for a movie – Pixar, for example. The work requires each member of the team to already be up to speed on their specific role in the animation process. Whether a sculptor, a character developer, a digital set designer or a character animator, each member knows what they need to do to be their very best, and how to reach their own deadlines. They are self-managed and the very best at their jobs. And each is in charge of creating, from the digital universe, the same logistics mentioned above. Instead of management providing that support, the digital worker is their own support within the team.

The digital world inverts the traditional worker-manager dynamic.

With digital goods, the raw materials, tools, capital, supply chains and other requirements to get the goods to market – again, the logistics – are readily available, so the worker’s creativity and effort become the critical resource.

A so-called “manager” in this framework has one job: to provide the support and right environment to get the work done. Like a beekeeper, the manager must trust that each bee knows how to create honey. His or her job is to make the workers’ jobs friction-free by making the environment the very best place to get that work done. Trust is the key word.

There is still a need for management and coordination, but the power dynamic has been radically altered. While anyone could follow Rob’s pie recipe, you cannot simply replace his role as co-author on this blog post. Even more radical, there’s often no perceived need for managers at all!  Digital workers simply order pizza and produce digital goods in their bathrobes and bunny slippers.

While this vision is held as a core belief by many digital natives, we don’t believe it entirely.

But wait, Rob and Brad – what about those YouTube millionaires who upload cat videos and cash in?  In those cases, there is a lot of invisible coordination in the distribution channel. The massive infrastructure needed to deliver Grumpy Cat is also digital work, and Google invests vast sums of money to reduce the friction connecting those content creators and consumers. [Google and YouTube are the beekeeper.]

We believe the need for coordination of digital work is a critical and necessary component for real digital work to get done on time. Unfortunately, the inversion of power means that managers have neither the authority nor the resource controls that were in place when “modern management techniques” were created.

Our focus here is not on the lone wolf digital workers; instead, we are focused on the collaborative digital worker – those people who must collaborate with each other to deliver their goods. For those workers, there is a need for capital, supply chains and coordination. Their work is just one piece of the larger digital whole.

If “modern management” does not work for digital workers then what does?

Let’s keep in mind as we explore this discussion that these are High Trust environments – the subject of our next 6 posts.  Read post 3.

Can Digital Workers Deliver? No. [cloud culture vs. traditional management, post 1 of 8]

In this 8 post series, Brad Szollose and Rob Hirschfeld invite you to share in our discussion about failures, fights and frightening transformations going on around us as digital work changes workplace deliverables, planning and culture.

Digital workers will not deliver – not if you force them into the 20th century management model. Do that and they (and you) will fail miserably; however, we believe they can outperform previous generations if guided correctly. In the 21st Century, digital technologies have fundamentally transformed both the way we work and, more importantly, how we have learned to work.

So far, we’ve framed this transformation as a generational (Boomers vs Millennials) challenge; however, workers today transcend those boundaries. We believe we need to reframe the debate around cultural viewpoints – the Boomer view (authority-driven leadership) and the Millennial view (action-driven leadership) – rather than around generations. In the global, digital workforce, these perspectives transcend age.

We looked to performing music as a functional analogy for leadership.

In music, we saw very different leadership cultures at work in symphonic and jazz performances. The symphony orchestra mirrors the Boomer culture expectation of clear leadership hierarchy and top-down directed effort. The jazz band typifies the Millennial cultural norms of fluid leadership based on technical competence where the direction is a general theme and the players evolve the details. Both require technical acumen and have very clear rules for interaction with the art form. More importantly, these two extremes both produce wonderful music, but they are miles apart in execution.

Today’s workforce generations often appear the same way – unable to execute together. We believe strongly that, like symphonies and jazz concerts, both approaches have strengths and weaknesses. The challenge is to understand and adapt your leadership to the cultural language of your performers.

That is what Brad and Rob have been discussing together for years and, now, we’d like to include you in our conversation about how Cloud Culture is transforming our work force.

Read Post #2!

All About That Loop. Lessons from the OpenStack Product Mid-Cycle

OpenStack loves to track developer counts and committers, but velocity without a feedback loop to set direction is unlikely to get us anywhere sustainable.

Last week, I attended the first day of the OpenStack Product Working Group meeting.   My modest expectations (I just wanted them talking) were far exceeded.  The group managed to cover both strategic and tactical items, including drafting a charter and discussing pending changes to the incubation process.

“OpenStack needs a strong feedback loop from users and operators back to developers and vendors” – a statement made during the PM meeting.

The most critical win from last week was the desire for the PM group to work more closely with the OpenStack technical leadership.  I’m excited to see the community continue to expand the scope of collaboration.

Why is this important?  Because developers and product managers need mutual respect to be effective.

The members of the Product team are leaders within their own organizations who are responsible for talking to users and operators.  We rely on them to close the communication loop by both collecting feedback and explaining direction.  To accomplish this difficult job, the Product team must own articulating a vision for the future.

For OpenStack to succeed, we need to be listening intently to feedback about both how we are doing and if we are headed in the right direction.  Both are required to create a feedback loop.

After seeing this group in action, I’m excited to see what’s next.

Want to read more?

Get involved!  Join the discussion on the OpenStack Product mailing list!

OpenStack PSA: Individual members, we need more help – Please Vote!

1/17 Update: We did it!  We reached quorum and approved all the changes!  Also, I am honored to have been re-elected to the Board.  Thank you for the support.

I saw the latest report and we’ve still got a LONG WAY TO GO to get to the quorum that we need.  Don’t let your co-worker or co-contributor be the one missing vote!

Note: If you think you should have gotten a ballot email but did not, contact the OpenStack Election Secretary for assistance.  OpenStack voting is via YOUR PERSONALIZED EMAIL only – you cannot use someone else’s ballot.

Here’s the official request that we’ve been forwarding in the community:

OpenStack Individual Members we need your help – Please Vote!

Included on the upcoming individual elections ballot is a set of proposed bylaw changes [note: I am also seeking re-election]. To be enacted, these changes require approval by the individual members. At least 25% of the Individual Members must participate in this election in order for the vote to take effect, which is why we are reaching out to you. The election will start Monday, January 12, 2015 and run through Friday, January 16, 2015.

The unprecedented growth, community size and active nature of the OpenStack community have precipitated the need for OpenStack Bylaw updates. The updates will enable our community to adapt to our continued rapid growth, change and diversity, while reflecting our success and market leadership. Although the proposed changes only affect a small amount of verbiage in the bylaws, they eliminate some of the hard-coded values and naive initial assumptions that found their way into the bylaws when they were initially created in 2013. Those initial assumptions did not anticipate that by 2015 we would have such a large, active community of over 17,000 individual members, over 430 corporate members, and a large, diverse set of OpenStack-based products and services.

Through many months of iterative community discussion and debate, the DefCore team and board have unanimously accepted a set of changes that are now placed before you for your approval. The changes replace the original hard-coded “core” definition with a process for determining the software elements required for use of the OpenStack commercial trademark – a process that will also account for future revisions and determinations of Core and Trademark Policy.

Note: Another change sets the quorum level at a more reasonable 10%, so these PSAs should not be required in the future.

Complete details on the proposed changes are located at:
https://wiki.openstack.org/wiki/Governance/Foundation/2014ProposedBylawsAmendment

Complete details on the 2015 Board Election are located at:
http://www.openstack.org/election/2015-individual-director-election/

Research showing that Short Lived Servers (“mayflies”) create efficiency at scale [DATA REQUESTED]

Last summer, Josh McKenty and I extended the puppies and cattle metaphor to limited-life cattle we called “mayflies.” It was an attempt to help drive the cattle mindset (I think of it as social engineering, or maybe PsychOps) by forcing churn. I’ve come to think of it as a step in between cattle and chaos monkeys (see Adrian Cockcroft).

While our thoughts were mainly on ops patterns, I’ve heard that there could be a real operational benefit from encouraging this behavior. The increased turnover in the environment improves scheduler optimization, planned load drains and coping with platform/environment migration.

Now we have a chance to quantify this benefit: a college student (disclosure: he’s my son) has created a data center emulation to see if Mayflies help with utilization. His model appears to work.
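For readers curious what such an emulation involves, here is a minimal, purely illustrative sketch in Python. It is not Alex’s model – the server counts, the synthetic request mix and the naive least-loaded placement rule are all my own assumptions – but it shows the moving parts: request arrivals, placement and lifespan-based retirement.

```python
# Toy mayfly emulation (illustrative only; parameters and placement policy
# are assumptions, not the actual study).
import random

SERVERS, CAPACITY, DAYS = 20, 32, 90   # hosts, cores per host, emulated days

def simulate(max_lifetime=None):
    """Run one emulation; max_lifetime (in days) turns VMs into mayflies."""
    hosts = [[] for _ in range(SERVERS)]   # each host holds (cores, age) pairs
    rejected, utilization = 0, []
    for _ in range(DAYS):
        # Age every VM and retire the ones that hit the mayfly lifespan.
        for host in hosts:
            host[:] = [(c, age + 1) for c, age in host
                       if max_lifetime is None or age + 1 < max_lifetime]
        # Synthetic arrivals: a random number of requests of random size.
        for _ in range(random.randint(5, 15)):
            cores = random.choice([1, 2, 4, 8])
            host = min(hosts, key=lambda h: sum(c for c, _ in h))  # least loaded
            if sum(c for c, _ in host) + cores <= CAPACITY:
                host.append((cores, 0))
            else:
                rejected += 1
        used = sum(c for h in hosts for c, _ in h)
        utilization.append(used / (SERVERS * CAPACITY))
    return round(sum(utilization) / DAYS, 3), rejected

print("mayflies (7 day lifespan):", simulate(max_lifetime=7))
print("immortal cattle          :", simulate())
```

Real request traces with sizes and lifetimes – exactly what Alex asks for below – would replace the random arrivals and make the comparison meaningful.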

Now he needs some real-world data. Here’s his request for assistance [note: he needs data by 1/20 to be included in this term]:

Hello!

I am Alexander Hirschfeld, a freshman at Rose-Hulman Institute of Technology. I am working on an independent study about Mayflies, a new idea in virtual machine management in cloud computing. Part of this management is load balancing and resource allocation for virtual machines across a collection of servers. The emulation that I am working on needs a realistic set of data to be the most accurate when modeling the results of using the methods outlined by the theory of mayflies.

Mayflies are an extension of the puppies versus cattle approach to machines; they are the extreme version of cattle because they have a known, limited lifespan, such as 7 days. This requires the users of the cloud to build inherently more automated and fault-resistant applications. If you could send me a collection of the requests for new virtual machines (per standard unit of time, along with their requested specs/size), an average lifetime for the virtual machines (or a graph or list of designated/estimated lifetimes), and a basic summary of the collection of servers running the virtual machines (number, RAM, cores), I would be better able to understand how Mayflies can affect a cloud.

Thanks,
Alexander Hirschfeld, twitter: @d-qoi

Needless to say, I’m really excited about the progress on demonstrating the impact of this practice and am looking forward to posting about his results in the near future.

If you post in the comments, I will make sure you are connected to Alex.

Delicious 7 Layer DIP (DevOps Infrastructure Provisioning) model with graphic!

Applying architecture and computer science principles to infrastructure automation helps us build better controls.  In this post, we create an OSI-like model that helps decompose the ops environment.

The RackN team discussions about “what is Ready State” have led to some interesting realizations about physical ops.  One of the most critical has been splitting the operational configuration (DNS, NTP, SSH Keys, Monitoring, Security, etc) from the application configuration.

Interactions between these layers are much more dynamic than developers and operators expect.

In cloud deployments, you can ask for the virtual infrastructure to be configured in advance via the IaaS and/or golden base images.  In hardware, the environment build-up needs to be more incremental because variations in physical infrastructure and operations have to be accommodated.

Greg Althaus, Crowbar co-founder, and I put together this 7 layer model (it started as 3 and grew) because we needed to be more specific in discussions about provisioning and upgrade activity.  The system view helps explain how layers 6 and 7 operate at the system level.

7 Layer DIP

The Seven Layers of our DIP:

  1. shared infrastructure – the base layer is about the interconnects between the nodes.  In this model, we care about the specific linkage to the node: VLAN tags on the switch port, which switch is connected, and which PDU ID controls its power.
  2. firmware and management – nodes have substantial driver (RAID/BIOS/IPMI) software below the operating system that must be configured correctly.   In some cases, these configurations have external interfaces (BMC) that require out-of-band access, while others can only be configured in pre-install environments (I call that side-band).
  3. operating system – while the operating system is critical, operators are striving to keep this layer as thin as possible to avoid overhead.  Even so, there are critical security, networking and device mapping functions that must be configured.  Critical local resource management items like mapping media or building network teams and bridges are functions of this layer.
  4. operations clients – this layer connects the node to the logical data center infrastructure in basic ways like time sync (NTP) and name resolution (DNS).  It’s also where more sophisticated operators configure things like distributed cache, centralized logging and system health monitoring.  CMDB agents like Chef, Puppet or Saltstack are installed at the “top” of this layer to complete ready state.
  5. applications – once the baseline is set up, this is the unique workload.  It can range from platforms for other applications (like OpenStack or Kubernetes) to the software itself, like Ceph, Hadoop or anything else.
  6. operations management – the external system references for layer 4 must be factored into the operations model because they often require synchronized configuration.  For example, registering a server name and IP addresses in DNS, updating an inventory database or adding its thresholds to a monitoring infrastructure.  For scale and security, it is critical to keep the node configuration (layer 4) constantly synchronized with the central management systems.
  7. cluster coordination – no application stands alone; consequently, actions on layer 5 must be coordinated with other nodes.  This ranges from database registration and load balancing to complex upgrades with live data migration. Working in layer 5 without layer 7 coordination creates unmanageable infrastructure.

This seven layer operations model helps us discuss which actions are required when provisioning a scale infrastructure.  In my experience, many developers want to work exclusively in layer 5 (applications) and overlook the need to have a consistent and managed infrastructure in all the other layers.  We enable this thinking in cloud and platform as a service (PaaS), and that helps improve developer productivity.
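To make the layering concrete, here is a small illustrative sketch (the layer names come from the list above; the ordering check and the example plan are my own, not a RackN tool). It treats the model as data and verifies that a provisioning plan builds from the bottom up instead of jumping straight to applications.

```python
# The seven DIP layers as an ordered structure, plus a trivial check that a
# provisioning plan walks them bottom-up. Purely illustrative.
DIP_LAYERS = [
    ("shared infrastructure",   ["VLAN tags", "switch port", "PDU ID"]),
    ("firmware and management", ["RAID", "BIOS", "IPMI/BMC"]),
    ("operating system",        ["security", "networking", "device mapping"]),
    ("operations clients",      ["NTP", "DNS", "logging", "CMDB agent"]),
    ("applications",            ["OpenStack", "Kubernetes", "Ceph", "Hadoop"]),
    ("operations management",   ["DNS records", "inventory", "monitoring"]),
    ("cluster coordination",    ["DB registration", "load balancing", "upgrades"]),
]

def check_plan(plan):
    """Require that provisioning configures the layers in order, none skipped."""
    order = {name: i for i, (name, _) in enumerate(DIP_LAYERS)}
    expected = 0
    for step in plan:
        if order[step] != expected:
            raise ValueError(f"'{step}' out of order; expected layer {expected + 1}")
        expected += 1
    return True

# Walking the full stack bottom-up passes; a plan that skips from layer 1
# straight to "applications" would raise a ValueError.
check_plan([name for name, _ in DIP_LAYERS])
```

In these terms, ready state is simply everything through layer 4 (operations clients) being complete before layer 5 (applications) work begins.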

We cannot overlook the other layers in physical ops; however, working to ready state helps us create more cloud-like boundaries.  Those boundaries are a natural segue to my upcoming post about functional operations (older efforts here).

OpenStack DefCore Enters Execution Phase. Help!

The OpenStack DefCore Committee has established the principles and first artifacts required for vendors using the OpenStack trademark.  Over the next release cycle, we will be applying these to the Icehouse and Juno releases.

Learn more?  Hear about it LIVE!  Rob will be doing two sessions about DefCore next week (both will be recorded):

  1. Tues Dec 16 at 9:45 am PST- OpenStack Podcast #14 with Jeff Dickey
  2. Thurs Dec 18 at 9:00 am PST – Online Meetup about DefCore with Rafael Knuth (optional RSVP)

At the December 2014 OpenStack Board meeting, we completed laying the foundations for the DefCore process that we started April 2013 in Portland. These are a set of principles explaining how OpenStack will select capabilities and code required for vendors using the name OpenStack. We also published the application of these governance principles for the Havana release.

The OpenStack Board approved:

  1. the DefCore principles explaining the landscape of core, including test-driven capabilities and designated code (approved Nov 2013)
  2. the twelve criteria used to select capabilities (approved April 2014)
  3. the creation of component and framework layers for core (approved Oct 2014)
  4. the ten principles used to select designated sections (approved Dec 2014)

To test these principles, we’ve applied them to Havana and expressed the results in JSON format: Havana Capabilities and Havana Designated Sections. We’ve attempted to keep the process transparent and community focused by keeping these files as text and using the standard OpenStack review process.

DefCore’s work is not done and we need your help!  What’s next?

  1. Vote on the bylaws changes to fully enable DefCore (changing from projects defining core to capabilities)
  2. Work out the going-forward process for updating capabilities and sections for each release (once authorized by the bylaws, it must be approved by the Board and TC)
  3. Bring the Havana work forward to Icehouse and Juno.
  4. Help drive the RefStack process to collect data from the field

Self-Exposure: Hidden Influencers become OpenStack Product Working Group

Warning to OpenStack PMs: If you are not actively involved in this effort then you (and your teams) will be left behind!

The Hidden Influencers (now called the “OpenStack Product Working Group”) had a GREAT and PRODUCTIVE session at the OpenStack Summit (full notes):

  1. Named the group!  OpenStack Product Working Group (now, that’s clarity in marketing) [note: I was incorrect saying “Product Managers” earlier].
  2. Agreed to use the mailing list for communication.
  3. Committed to a face-to-face mid-cycle meetup (likely in South Bay)
  4. Output from the meetup will be a STRATEGIC DIRECTION doc for the board (similar to, but broader than, “Win the Enterprise”)
  5. Regular meeting schedule – like developers but likely voice interactive instead of IRC.  Stefano Maffulli is leading.

PMs starting this group already direct the work for a super majority (>66%) of active contributors.

The primary mission for the group is to collaborate and communicate around development priorities so that we can ensure that project commitments get met.

It was recognized that the project technical leads are already strapped coordinating release and technical objectives.  Further, the product managers are already engaged, albeit independently, in setting strategic direction; we cannot rely on the existing OpenStack technical leadership to have the bandwidth.

This effort will succeed to the extent that we can keep the broader community tied in and connect development effort back to the dollars of the people paying for those developers.  In my book, that’s what product managers are supposed to do.  Hopefully, getting this group organized will help surface that discussion.

This is a big challenge considering that these product managers have to balance corporate, shared project and individual developers’ requirements.  Overall, I think Allison Randal summarized our objectives best: “we’re herding cats in the same direction.”

Leveling OpenStack’s Big Tent: is OpenStack a product, platform or suite?

Question of the day: What should OpenStack do with all those eager contributors?  Does that mean expanding features or focusing on a core?

In the last few months, the OpenStack technical leadership (Sean Dague, Monty Taylor) has been introducing two interconnected concepts: big tent and levels.

  • Big tent means expanding the number of projects to accommodate more diversity (both in breadth and depth) in the official OpenStack universe.  This change accommodates the growth of the community.
  • Levels is a structured approach to limiting integration dependencies between projects.  Some OpenStack components are highly interdependent and foundational (Keystone, Nova, Glance, Cinder) while others are primarily consumers (Horizon, Sahara) of lower-level projects.

These two concepts are connected because we must address integration challenges that make it increasingly difficult to make changes within the code base.  If we substantially expand the code base with big tent then we need to make matching changes to streamline integration efforts.  The levels proposal reflects a narrower scope at the base than we currently use.

By combining big tent and levels, we are simultaneously growing and shrinking: we grow the community and scope while we shrink the integration points.  This balance may be essential to accommodate OpenStack’s growth.
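As a thought experiment, the levels idea can be expressed as a simple rule: integration dependencies should only point downward toward more foundational projects. The sketch below is illustrative only – the level numbers and dependency lists are my rough reading of the discussion, not the actual proposal.

```python
# Illustrative "levels" check: a project may only depend on lower-level
# (more foundational) projects. Level assignments here are assumptions.
LEVELS = {"Keystone": 1, "Glance": 2, "Cinder": 2, "Nova": 3,
          "Horizon": 4, "Sahara": 4}

DEPENDS_ON = {
    "Nova":    ["Keystone", "Glance", "Cinder"],
    "Horizon": ["Keystone", "Glance", "Nova"],
    "Sahara":  ["Keystone", "Nova"],
}

def check_levels():
    """Print any integration dependency that points sideways or upward."""
    for project, deps in DEPENDS_ON.items():
        for dep in deps:
            if LEVELS[dep] >= LEVELS[project]:
                print(f"violation: {project} (level {LEVELS[project]}) "
                      f"depends on {dep} (level {LEVELS[dep]})")

check_levels()   # prints nothing: every dependency above points at a lower level
```

The fewer projects that sit at the foundational levels, the fewer integration points the rest of the tent has to gate on.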

UNIVERSALLY, the business side of the OpenStack community wants OpenStack to be a product.  Yet what’s included in that product is unclear.

Expanding OpenStack projects tends to turn us into a suite of loosely connected functions rather than a single integrated platform with an ecosystem.  Either approach is viable, but it’s not possible to be both simultaneously.

On a cautionary note, there’s an anti-Big Tent position I heard expressed at the Paris Conference.  It goes like this: until vendors start generating revenue from the foundation components to pay for developer salaries, expanding the scope of OpenStack is uninteresting.

Recent DefCore changes also reflect the Big Tent thinking by adding component and platform levels.  This change was an important and critical compromise to match real-world use patterns by companies like SwiftStack (Object), DreamHost (Compute+Ceph), Piston (Compute+Ceph) and others; however, it creates the need to explain “which OpenStack” these companies are using.

I believe we have addressed interoperability in this change.  It remains to be seen whether OpenStack vendors will choose to offer the broader platform or limit themselves to individual components.  If vendors chase components over the platform, then OpenStack becomes a suite of loosely connected products.  It’s ultimately a customer and market decision.

It’s not too late to influence these discussions!  I’m very interested in hearing from people in the community which direction they think the project should go.