Cloud Culture: Online Games, the real job training for Digital Natives [Collaborative Series 5/8]

Translation: Why do Digital Natives value collaboration over authority?

Kids Today

This post is #5 in a collaborative eight-part series by Brad Szollose and me about how culture shapes technology.

Before we start, we already know that some of you are cynical about what we are suggesting—Video games? Are you serious? But we’re not talking about Ms. Pac-Man. We are talking about deeply complex, task-driven games with rich storytelling that rely on multiple missions and worldwide player communities working together on a singular mission.

Leaders in the Cloud Generation don’t just know this environment, they excel in it.

The next generation of technology decision makers is made up of self-selected masters of the games. They enjoy the flow of learning and solving problems; however, they don’t expect to solve them alone or a single way. Today’s games are not about getting blocks to fall into lines; they are complex and nuanced. Winning is not about reflexes and reaction times; winning is about being adaptive and resourceful.

To an outsider, these environments can look like chaos. Digital workspaces and processes are not random; they are leveraging new-generation skills. In the book Different, Youngme Moon explains how innovations look crazy when they are first revealed. How is the work getting done? What is the goal here? These are called “results only work environments,” and studies have shown they increase productivity significantly.

Digital Natives reject top-down hierarchy.

These college educated self-starters are not rebels; they just understand that success is about process and dealing with complexity. They don’t need someone to spoon feed them instructions.

Studies at MIT and The London School of Economics have revealed that when high-end results are needed, giving people self-direction, the ability to master complex tasks, and the ability to serve a larger mission outside of themselves will garner groundbreaking results.

Gaming does not create mind-addled Mountain Dew-addicted unhygienic drone workers. Digital Natives raised on video games are smart, computer savvy, educated, and, believe it or not, resourceful independent thinkers.

Thomas Edison said:

“I didn’t fail 3,000 times. I found 3,000 ways how not to create a light bulb.”

Being comfortable with making mistakes thousands of times until mastery sounds counter-intuitive until you realize that is how some of the greatest breakthroughs in science and physics were discovered. Thomas Edison made 3,000 failed iterations in creating the light bulb.

Level up: You win the game by failing successfully.

Translation: Learn by playing, fail fast, and embrace risk.

Digital Natives have been trained to learn the rules of the game by just leaping in and trying. They seek out mentors, learn the politics at each level, and fail as many times as possible in order to learn how NOT to do something. Think about it this way: you gain more experience when you try and fail quickly than when you carefully plan every step of your journey. As long as you are willing to make adjustments to your plans, experience always trumps prediction. Just like in life and business, games no longer come with an instruction manual.

In Wii Sports, users learn the basics in-game and figure out the subtleties of the game as they level up. Tom Bissell, in Extra Lives: Why Video Games Matter, explains that the in-game learning model is core to the evolution of video games. Game design involves interactive learning through the game experience; consequently, we’ve trained Digital Natives that success comes from overcoming failure.

4 item OSCON report: no buzz winner, OpenStack is DownStack?, Free vs Open & the upstream imperative

Now that my PDX TriMet pass has expired, it’s time to reflect on OSCON 2014. Unfortunately, I did not ride my unicorn home on a rainbow; this year’s event seemed to be about raising red flags.

My four key observations:

  1. No superstar. Past OSCONs had at least one buzzy community superstar: 2013 was Docker and 2011 was OpenStack. This was not just my hallway-track perception; I asked around about this specifically. There was no buzz winner in 2014.
  2. People were down on OpenStack (“DownStack”). Yes, we did have a dedicated “Open Cloud Day” event, but there was something missing. OpenStack did not sponsor, there were no major parties or releases (compared to previous years), and there was little OpenStack buzz. Many people I talked to were worried about the direction of the community, fragmentation of the project and operational readiness. I would be more concerned about “DownStack” except that no open infrastructure project was a superstar either (e.g., Mesos, Kubernetes and CoreOS). Perhaps OSCON is simply not a good venue for open infrastructure projects compared to GlueCon or Velocity? Considering the rapid rise of container-friendly OpenStack alternatives, I think the answer may be that the battle lines for open infrastructure are being redrawn.
  3. Free vs. Open. Perhaps my perspective is more nuanced now (many in open source communities don’t distinguish between Free and Open source), but there’s a real tension between Free (do what you want) and Open (shared but governed) source. Now that open source is a commercial darling, there is a lot of grumbling in the Free community about corporate influence and heavy-handedness. I suspect this will get louder as companies try to find ways to maintain control of their projects.
  4. Corporate upstreaming becomes Imperative. There’s an accelerating upstreaming trend for companies that write lots of code to support their primary business (Google is a primary example) to ensure that code becomes used outside their company.   They open their code and start efforts to ensure its adoption.  This requires a dedicated post to really explain.

There’s a clear theme here: Open source is going mainstream corporate.

We’ve been building amazing software in the open that creates real value for companies. Much of that value has been created organically by well-intentioned individuals; unfortunately, that model will not scale with the arrival of corporate interests.

Open source is thriving, not dying: these companies value the transparency, collaboration and innovation of open development. Instead, open source is transforming to fit within corporate investment and governance needs. It’s our job to help with that metamorphosis.

All I Ever Wanted Is A Composable RunDeck

Rob H:

I love the “just enough orchestration” description of OpenCrowbar. It’s Goldilocks orchestration!

Originally posted on New Goliath:

I love RunDeck. It’s just enough orchestration to get many jobs done in such an easy way. I can take whatever’s working for me, wrap it up in RunDeck and give it to someone. I wrote a long writeup on hooking it to your Active Directory, through OpenLDAP. I’m old school.  But all I’ve ever wanted is a composable RunDeck.  To bring inter-tool composition right into my face.

So then I fell in love with Chef and Puppet as they grabbed the very necessary spotlight. Suddenly, arising from the past of CFEngine, I could now code my infrastructure! Different parts of the system worked predictably, across platforms, and with such clarity and simplicity. Chef saves me thousands of lines of code, and helps me reason about my systems on a whole other level.

But these two worlds never really fit together very well. I want my “just enough orchestration,”…


Just for fun, putting themes to OpenStack Conferences

I’ve been to every OpenStack summit and, in retrospect, each one has had a different theme. I see these as community themes, beyond the release train, that cover how the OpenStack ecosystem has changed.

The themes are, of course, highly subjective and intended to spark reflection and discussion.

City | Release | Theme | My Commentary
--- | --- | --- | ---
ATL | Icehouse | It’s my sandbox! | The new marketplace is great and there are also a lot of vendors who want to differentiate their offering and are not sure where to play.
HK | Havana | Project land grab | It felt like a PTL gold rush as lots of new projects were tossed into the ecosystem mix. I’m wary of perceived “anointed” projects that define “the way” to do things.
PDX | Grizzly | Shiny new things | We went from having a defined core set of projects to a much richer and more varied set of platforms, environments and solutions.
SD | Folsom | Breaking up is hard to do | Nova began to fragment (cinder & quantum, now neutron).
SF | Essex | New kids are here | Move over, Rackspace. Lots of new operating systems, providers, consulting and hosting companies participating. Stackalytics makes it into a real commit race.
BOS | Diablo | Race to be the first | Everyone was trying to show that OpenStack could be used for real work. Lots of startups launched.
SJC | Cactus | Oh, you like us! We need some process | This was real, so everyone was exploring OpenStack. We clearly needed to figure out how to work together. This is where we migrated to git.
SA | Bexar | We’re going to take over the world | We handed out rose-colored glasses that mostly turned out pretty accurate; however, some top names from that time are not in the community now (Citrix, NASA, Accenture, and others).
ATX | Austin | We choose “none of the above” | There was a building sense of potential energy while companies figured out that 1) there was a gap and 2) they wanted to fill it together.

Parable of Kitten Taming

It’s time to return to the story of Barney and Bailum. Last year, I wrote about their separate paths through the circus business: Bailum succeeding with a lean model and Barney failing with a “go big” strategy. This parable opens with Bailum taking pity on Barney and bringing him into her thriving animal training business.

Bailum had grown her lion-taming business from the ground up. She started from humble beginnings with untrained dogs; consequently, she’d learned about building rapport and trust with her performers. She never considered them to be mere animals. To her, everyone in her organization (especially the animals) was a valued contributor. She’d seen first-hand that just one bad link in the chain could cause a great performance team to turn sour. Her acts won awards and she was proud to have them in the spotlight while she focused on building trust and a sustainable culture.

Unfortunately, Barney did not share his sister’s experience or values. He only saw the name that she’d built for the company and felt that he could use his position and relationship to promote himself. Even though he knew nothing of animal training, he was eager to redirect his staff into new areas. Reading market data, and without consulting his trainers, he decided that cute kitten acts would attract more business than the company’s successful dog acts.

Overnight, he released the dogs and acquired kittens from a local shelter.  Some of his trainers simply quit while others made an attempt to follow the new direction.  Barney was impatient for success and started watching the trainers learning to work with the frisky felines.  Progress was slow and Barney vented his frustration by yelling at the trainers and ultimately putting shock collars on the kittens.  In short order, the trainers had left and Barney was being sprayed, scratched and bitten by the cats.

When Bailum learned about her brother’s management approach she was mortified; unfortunately, he had also signed contracts promising kitten acts to their customers. After restructuring her familial entanglements, she took a personal interest in training the kittens. She immediately recognized that, compared to dogs, cats require independence instead of direction. Starting from careful handling, then bringing in her lion tamers and rewarding positive results, she created a working troupe. The final results were so effective (and the logistics so much easier) that Bailum ultimately transformed her business to focus on cats exclusively.

Moral: you can’t force cats to bark, but with the right approach, kittens can outperform lions.

OpenStack DefCore Matrix Cheat Sheet

DefCore sets base requirements by defining 1) capabilities, 2) code and 3) must-pass tests for all OpenStack products. This definition uses community resources and involvement to drive interoperability by creating minimum standards for products labeled “OpenStack.”

In the last week, the DefCore committee released the results of 6 months of work. We chose getting input over cleanups and polish, so please be patient if some of the data is overwhelming.

We’ve got enough feedback to put together this capabilities matrix cheat sheet to help interpret all the colors and data on the page (the headers are links).
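To make the must-pass idea concrete, here is a minimal Python sketch of how a capability check could work. The capability names, test IDs, and data layout are hypothetical illustrations only, not the actual DefCore JSON schema or real Tempest test names.

```python
# Hypothetical sketch of a DefCore-style "must-pass" check.
# The capability names, test IDs, and data layout are illustrative only;
# they are NOT the actual DefCore JSON schema or real Tempest test names.

# Each capability maps to the tests that must all pass for it to count.
REQUIRED_CAPABILITIES = {
    "compute-servers-create": ["tempest.api.compute.servers.create_server"],
    "identity-token-auth": ["tempest.api.identity.tokens.create_token"],
    "objectstore-object-put": ["tempest.api.object_storage.objects.put_object"],
}


def evaluate_product(passed_tests):
    """Split required capabilities into those a product meets and those it misses."""
    met, missing = [], []
    for capability, tests in REQUIRED_CAPABILITIES.items():
        if all(test in passed_tests for test in tests):
            met.append(capability)
        else:
            missing.append(capability)
    return met, missing


if __name__ == "__main__":
    # Pretend this product's test run passed only the compute and identity tests.
    results = {
        "tempest.api.compute.servers.create_server",
        "tempest.api.identity.tokens.create_token",
    }
    met, missing = evaluate_product(results)
    print("must-pass capabilities met:", met)
    print("still missing:", missing)
```

The real matrix carries more nuance (advisory vs. required capabilities, scoring criteria, designated sections of code), but the basic pass/fail roll-up is the part the cheat sheet colors are communicating.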

[Figure: capabilities matrix explained]

Networking in Cloud Environments, SDN, NFV, and why it matters [part 2 of 2]

Scott Jensen is an Engineering Director and colleague of mine from Dell with deep networking and operations experience. He has first-hand experience deploying OpenStack and Hadoop and has a critical role in defining Dell’s Reference Architectures in those areas. When I saw this writeup about cloud networking (first post), I asked if it would be OK to post it here and share it with you.

GUEST POST 2 OF 2 BY SCOTT JENSEN:

So what is different about cloud, and how does it impact the network?

In a traditional data center this was not all that difficult (relatively). You knew what was going to be running on what system (physically) and could plan your infrastructure accordingly. The majority of the traffic moved in a North/South direction; basically, it came from outside the infrastructure (the internet, for example) to the inside and then responded back out. You knew that if you had to design a communication channel from an application server to a database server, it could be isolated from the other traffic because they did not usually reside on the same system.

[Figure: scj-net1]

Virtualization made this more difficult. In this model you are sharing system resources among different applications. From the network’s point of view, there are a large number of systems available behind a couple of links. Live Migration puts another wrinkle in the design, as you now have to deal with a specific system moving from one physical server to another. Network virtualization helps out a lot with this: you can now move virtual ports from one physical server to another to ensure that when a virtual machine moves between physical servers the network is still available. In many cases you managed these virtual networks the same way you managed your physical network. As a matter of fact, they were designed to emulate the physical as much as possible. The virtual machines still looked a lot like the physical ones they replaced and could be treated in very much the same way from a traffic flow perspective. The traffic is still primarily a North/South pattern.

Cloud, however, is a different ball of wax. Think about the characteristics of the Cattle described above. A cloud application is smaller and purpose-built. The majority of its traffic is between VMs, because different tiers which were traditionally on the same system or in the same VM are now spread across multiple VMs; therefore its traffic patterns are primarily East/West. You cannot forget that there is still a North/South pattern, the same as in the other models, which is typically user interaction. The application is stateless so that many copies of itself can run in tandem, allowing it to elastically scale up and down based on need; as such, VMs are appearing and disappearing from the network. As these VMs are spawned on the system they may be right next to each other, on different servers, or potentially in different data centers. But it gets even better.

[Figure: scj-net2]

Cloud architectures are typically multi-tenant. This means that multiple customers will utilize the infrastructure and need to be isolated from each other. And of course clouds are self-service: users and developers can design, build and deploy whenever they want, including designing the network interconnects that their applications need to function. All of this will cause overlapping IP address domains, multiple virtual networks at both L2 and L3, and requirements for dynamically configuring QoS, load balancers and firewalls. The last headache on our list is not the least: cloud systems tend to breed like rabbits or multiply like coat hangers in the closet. There are more and more systems as 10 servers become 40, which become 100, then 1,000 and so on.
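As a toy illustration of the overlapping-address problem, consider two tenants that both pick the same subnet. This is plain Python, not any OpenStack or Neutron API; the class name and VNI values are made up for the example.

```python
# Toy model (plain Python, not an OpenStack/Neutron API) of why multi-tenant
# clouds break plain IP-based thinking: two tenants can choose the same
# subnet, so the overlay must also key on a segment ID (e.g. a VXLAN VNI).
from ipaddress import ip_address, ip_network


class TenantNetwork:
    def __init__(self, tenant, cidr, vni):
        self.tenant = tenant          # who owns this virtual network
        self.cidr = ip_network(cidr)  # tenant-chosen address range
        self.vni = vni                # segment ID keeping the traffic isolated


# Both tenants picked 10.0.0.0/24 -- perfectly legal in a self-service cloud.
networks = [
    TenantNetwork("tenant-a", "10.0.0.0/24", vni=5001),
    TenantNetwork("tenant-b", "10.0.0.0/24", vni=5002),
]


def lookup(dest_ip, vni):
    """Forwarding must match on (VNI, IP); the IP alone is ambiguous."""
    for net in networks:
        if net.vni == vni and ip_address(dest_ip) in net.cidr:
            return net.tenant
    return None


print(lookup("10.0.0.5", 5001))  # tenant-a
print(lookup("10.0.0.5", 5002))  # tenant-b -- same IP, different segment
```

The same destination address resolves to two different tenants, which is exactly why the physical network alone cannot sort this traffic out.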

 

So what is a poor Network Engineer to do?

First, get a handle on what this cloud thing is supposed to be for. If you are one of the lucky ones who can dictate the use of the infrastructure, then rock on! Unfortunately, that does not seem to be the way it goes for many. In the case where you just cannot predict how the infrastructure will be used, I am reminded of the phrase “there is no replacement for displacement.” Fast links, non-blocking switches and network fabrics are all necessary for the physical network, but they will not get you there on their own. Since you as a network administrator cannot predict the traffic patterns, who can? The developer and the application itself.

This is what SDN is all about. It provides a programmatic interface to what is called an overlay network: a series of tunnels/flows which build virtual networks on top of the physical network, giving that pesky application what it was looking for. In some cases you may want to make changes to the physical infrastructure, for example changing the configuration of a firewall, load balancer or other network equipment; SDN vendors are creating plug-ins that can make those types of configurations.

But if this is not good enough for you, there is NFV. The basic idea here is: why have specialized hardware for your core network infrastructure when we can run it virtualized as well? Run those functions in VMs too, hook them into the virtual network, use SDN to configure them, and we can now virtualize the routers, load balancers, firewalls and switches. These technologies are very much in a state of flux right now, but they are promising nonetheless. Now if we could just virtualize the monitoring and troubleshooting of these environments, I’d be happy.
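To make the idea of a programmatic overlay interface concrete, here is a hedged Python sketch of the sort of call a developer or orchestration tool might make against a controller. The OverlayController class, its methods, and the tunnel naming are invented for illustration; they do not correspond to any real SDN vendor, OpenDaylight, or Neutron API.

```python
# Hypothetical sketch of the kind of programmatic interface an SDN controller
# exposes for overlay networks. The OverlayController class, its methods, and
# the tunnel naming are invented for illustration; they do not correspond to
# any real vendor, OpenDaylight, or Neutron API.


class OverlayController:
    def __init__(self):
        self.tunnels = {}  # (host_a, host_b) -> tunnel name
        self.flows = []    # installed flow rules, newest last

    def connect_vms(self, vm_a, vm_b, vni):
        """Build (or reuse) a tunnel between the VMs' hosts and steer their traffic."""
        key = tuple(sorted((vm_a["host"], vm_b["host"])))
        tunnel = self.tunnels.setdefault(key, "vxlan-%d" % (len(self.tunnels) + 1))
        # East/West flow: match on segment ID plus VM addresses, forward via the tunnel.
        self.flows.append({
            "match": {"vni": vni, "src": vm_a["ip"], "dst": vm_b["ip"]},
            "action": "forward via " + tunnel,
        })
        return tunnel


controller = OverlayController()
web = {"host": "compute-01", "ip": "10.0.0.5"}
db = {"host": "compute-07", "ip": "10.0.0.9"}
print(controller.connect_vms(web, db, vni=5001))  # vxlan-1
print(controller.flows[0]["action"])              # forward via vxlan-1
```

The point is not the particular API shape but that the application (or its deployment tooling) describes the connectivity it needs and the controller translates that into tunnels and flows on top of whatever physical fabric happens to be underneath.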