Why I’m learning open source best practices from middle school students

Engineering in open source projects requires a different skill set than most of us have ever been trained for; happily, there is a rising cohort of engineers and scientists who have been learning to work in exactly the ways that industry is now demanding.  Here’s the background…

I’ve been helping mentor two FIRST Robotics teams (FLL & FTC) this season and had the privilege to accompany the FLL team (which includes my daughter) to the FIRST World Festival, where a global mix of students ages 6 to 18 competed, collaborated and celebrated for a wide range of awards and recognition.  The experience is humbling – these students are taking on challenges (for fun) that would scare off most adults.

While I could go on and on about my experience and the FIRST mission, I’d rather share some of what my 12-year-old daughter wrote to her coach after the competition:

Thank you Coach for all of the lessons and advice you have shared with me this season. I really appreciate all of the time and effort you have put into making this team the best we could be. You have taught us so much and we will definitely walk away from this season with the new skills and experiences. You were an amazing coach and not only did you help and support us, you also gave us the freedom to be independent so we can learn skills like leadership, time management and meeting with busy schedules. I loved being on this team and I hope this will not be the last of the Hedgehogs.

FIRST designs the program so that these experiences are the norm, not the exception.

Here are some of the critical open source engineering skills I observed firsthand at all levels of the competition.

  • Collaboration: at all levels, participants are strongly rewarded for collaborating, mentoring and working together.  Teams simply cannot advance without mastering this skill.
  • Consensus: judges actively test and watch for consensus behavior.  This is coached and encouraged because the teams quickly learn to appreciate a diversity of strengths.
  • Risk Taking with Delivery: the very nature of competition encourages teams to think big and balance risk with delivery.
  • Celebration: this has to be experienced but the competitions are often compared to rock concerts.  Everyone is involved and every aspect is celebrated.   FIRST is a culture.
  • Situational Judgment: this competition is fast and intense, so participants learn to think on their feet.  This type of experience is amazingly valuable and hard to get in classroom settings.

In my experience, everyone in open source needs more practice and experience DOING open source work.  I suggest getting involved in these programs as a mentor, judge or volunteer because it’s the most effective hands-on open source training I can imagine.

DevOps Concept: “Ready State” Infrastructure as hand-off milestone

Working for Dell, it’s no surprise that I have a lot of discussions around building up and maintaining the physical infrastructure to run data centers at scale.  Generally the context is around OpenCrowbar, Hadoop or OpenStack Ironic/TripleO/Heat, but the concerns are really universal in my cloud operations experience.

Three Teams

Typically, deployments have three distinct phases: 1) mechanically plug together the systems, 2) get the systems ready to the OS and network level and then 3) install the application.  Often these phases are so distinct that they are handled by completely different teams!

That’s a problem because errors or unexpected changes from one phase are very expensive to address once you change teams.  The solution has been to become more and more prescriptive about what the system looks like between the second (“ready”) and third (“installed”) phase.  I’ve taken to calling this hand-off achieving a ready state infrastructure.

I define a “ready state” infrastructure as having been configured so that the application lay down steps are simple and predictable.

In my experience, most application deployment guides start with a ready state assumption.  They read like “Step 0: rack, configure, provision and tweak the nodes and network to have this specific starting configuration.”   If you are really lucky then “specific configuration” is actually a documented and validated reference architecture.
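
To make that “Step 0” contract concrete, here’s a minimal sketch (my own illustration in Python, with made-up checks and node data) of validating that nodes have actually reached ready state before the application install begins:

```python
# Minimal sketch: validating a "ready state" contract before application install.
# The checks and node structure below are illustrative assumptions, not a real tool.

EXPECTED = {
    "os": "ubuntu-14.04",
    "nic_count": 2,
    "raid_level": "raid10",
}

def is_ready(node):
    """Return True if this node matches the ready state contract."""
    return (
        node["os"] == EXPECTED["os"]
        and len(node["nics"]) == EXPECTED["nic_count"]
        and node["raid_level"] == EXPECTED["raid_level"]
    )

nodes = [
    {"name": "node1", "os": "ubuntu-14.04", "nics": ["em1", "em2"], "raid_level": "raid10"},
    {"name": "node2", "os": "ubuntu-14.04", "nics": ["em1"], "raid_level": "raid10"},
]

not_ready = [n["name"] for n in nodes if not is_ready(n)]
if not_ready:
    raise SystemExit("Not in ready state, fix before installing: %s" % not_ready)
print("Ready state confirmed - hand off to the application install team.")
```

The specific checks don’t matter; the point is that the hand-off becomes testable instead of tribal.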

The magic of cloud IaaS is that it always creates ready state infrastructure.  If I request 10 servers with 2 NICs running Ubuntu 14.04 then that’s exactly what I get.  The fact that cloud always provisions a ready state infrastructure has become an essential operating assumption for cloud orchestration and configuration management.

Unfortunately, hardware provisioning is messy.  It takes significant effort to configure a physical system into a ready state.  This is caused by a number of factors:

  1. You can’t alter physical infrastructure with programming (an API) – for example, if the server enumerates the NICs differently than you expected, you have to adapt to that.
  2. You have to respect the physical topology of the system – for example, production deployments use teamed NICs that have to be connected to different switches for redundancy.  You can’t make assumptions; you have to set up the team based on the specific configuration.
  3. You have to build up the configuration in sequence (see the sketch after this list) – for example, you can’t set up the RAID configuration after the operating system is installed.  If you made a bad choice then you’ll likely have to repeat the whole sequence of the deployment, and some bad choices (like using the wrong subnets) result in a total system rebuild.
  4. Hardware fails and is non-uniform – for example, in any order of sufficient size you will have NIC failures due to everything from simple mechanical card seating issues to BIOS interface mismatches.  Troubleshooting these issues can occupy significant time.
  5. Component configurations are interlocked – for example, a change to the switch settings could result in DHCP failures when systems are rebooted (real experience).  You cannot always work node-to-node; you must deal with the infrastructure as an integrated system.
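
Here is the sequencing sketch referenced in item 3 above – purely my own illustration, not Crowbar code – showing how a dependency graph naturally enforces the build-up order:

```python
# Sketch: provisioning steps expressed as a dependency graph so they always
# run in a valid order (BIOS before RAID, RAID before OS, and so on).
# Step names are illustrative, not tied to any specific tool.
from graphlib import TopologicalSorter

deps = {
    "bios_config": set(),
    "raid_config": {"bios_config"},
    "os_install": {"raid_config"},
    "nic_teaming": {"os_install"},
    "app_handoff": {"nic_teaming"},
}

for step in TopologicalSorter(deps).static_order():
    print("run:", step)
# run: bios_config, raid_config, os_install, nic_teaming, app_handoff
```

It’s also a hint of why directed-graph orchestration fits this problem space so well.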

Being consistent at turning discovered state into ready state is a complex and unique problem space.  As I explore this bare metal provisioning space in the community, I am more and more convinced that it has a distinct architecture from applications built for ready state operations.

My hope in this post is to test whether the concept of “ready state” infrastructure is helpful in describing the transition point between provisioning and installation.  Please let me know what you think!

OpenStack automated high-availability deploy reality, SUSE shows off chops with Crowbar

While I’ve been focused on delivering next-generation kick-aaS-i-ness with Crowbar v2 (now called OpenCrowbar) and helping Dell and Red Hat co-engineer an OpenStack Powered Cloud, SUSE has been continuing to expand and polish the OpenStack deployment on Crowbar v1.  I’m always impressed by their commit activity (SUSE is the top committer in the Crowbar project) and was excited to see their Havana launch announcement.

Using Crowbar v1, SUSE is delivering a seriously robust automated OpenStack Havana implementation.  They have taken the time to build high availability (HA) across the framework including for Neutron, Heat and Ceilometer.

As an OpenStack Foundation board member, I hear a lot of hand-wringing in the community about ops practices and asking “is OpenStack ready for the enterprise?”  While I’m not sure how to really define “enterprise,” I do know that SUSE’s Cloud Havana release shows that it’s possible to deliver a repeatable and robust OpenStack deployment.

This effort shows some serious DevOps automation chops and, since Crowbar is open, everyone in the community can benefit from their tuning.   Of course, I’d love to see these great capabilities migrate into the very active StackForge Chef OpenStack cookbooks that OpenCrowbar is designed to leverage.

Creating HA automation is a great achievement and an important milestone in capturing the true golden fleece – automated release-to-release upgrades.  We built the OpenCrowbar annealer with this objective in mind and I feel like it’s within reach.

Can’t Contain(erize) the Hype – is Docker real or a bubble?

Editorial Note: This was written in April 2014.  Check out how we are using Docker in our latest architectures.

The new application portability darling, Docker, was so popular at this week’s Red Hat Summit that I was expecting Miley Cyrus’ flock of paparazzi to abandon her in favor of Ben Golub.

Personally, I find Docker to be a useful tool and we’ve been embedding it into our dev and test processes for DefCore TCUP (at the conference), OpenCrowbar Admin and Dev nodes.  To me, these are concrete and clear use cases.

There are clearly a lot more great use-cases for Docker, but I can’t help but feel like it’s being thrown into architectural layer cakes and markitectures as a substitute for the non-words “cloud”, “amazing” and “revolutionary.”

How do I distinguish hot from hype?  I look for places where Docker is solving just one problem set instead of being a magic wand solution to a raft of systemic issues.

Places where I think Docker is potent and disruptive

  • Creating a portable and consistent environment for dev, test and delivery
  • Helping Linux distros keep updating the kernel without breaking user space (RHEL 7 anyone?)
  • Reducing the virtualization overhead of tenant isolation (containers are lighter)
  • Reducing the virtualization overhead for DevOps developers testing multi-node deployments

But I’m concerned that we’re expecting too many silver bullets

  • Packaging is still tricky:  Creating a locked box helps solve part of downstream problem (you know what you have) but not the upstream problem (you don’t know what you depend on).
  • Container sprawl: Breaking deployments into more functional discrete parts is smart, but that means we have MORE PARTS to manage.   There’s an inflection point between separation of concerns and sprawl.
  • PaaS Adoption: Docker helps with PaaS, but it solves neither the “you have to model your apps for a PaaS” problem nor the “PaaS needs scalable data services” problem

Speaking of Miley Cyrus, it’s not the container that matters, but what’s on the inside.  Docker can take a lesson from Miley: attention is great but you’ve still got to be able to sing.    I’m not sure about Miley, but I am digging the tracks that Docker is laying down.  Docker is worth putting on your play list.

Rocking Docker – OpenCrowbar builds solid foundation & life-cycle [VIDEOS]

Docker has been gathering a substantial amount of interest as an additional way to solve application portability and dependency hell.  We’ve been enthusiastic participants in this fledgling community (Docker in OpenStack) and in my work on DefCore’s Tempest in a Container (TCUP).

In OpenCrowbar, we’ve embedded Docker much deeper to solve a few difficult & critical problems: speeding up development of multi-node deployments and building the environment for the containers.  Check out my OpenCrowbar does Docker video or the community demo!

Bootstrapping Docker into a DevOps management framework turns out to be non-trivial because integrating new nodes into a functioning operating environment is very different on Docker than it is with physical servers or VMs.  Containers don’t PXE boot and have more limited configuration options.

How did we do this?  Unlike other bare metal provisioning frameworks, we made sure that Crowbar did not require DHCP+PXE as the only node discovery process.  While we default to and fully support PXE with our sledgehammer discovery image, we also allow operators to pre-populate the Crowbar database using our API and make configuration adjustments before the node is discovered/created.
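
As a rough illustration only – the endpoint, fields and credentials below are simplified stand-ins, not the documented OpenCrowbar API – pre-registering a node ahead of discovery looks something like this:

```python
# Hypothetical sketch only: pre-registering a node via an admin API before
# PXE discovery.  The URL, fields and auth below are illustrative stand-ins,
# not the documented OpenCrowbar API.
import requests

node = {
    "name": "docker-node-01",
    "admin": False,
    "alive": False,        # not yet discovered
    "bootenv": "local",    # skip the PXE/sledgehammer cycle
}

resp = requests.post(
    "http://crowbar-admin:3000/api/v2/nodes",   # assumed endpoint
    json=node,
    auth=("crowbar", "crowbar"),                # assumed credentials
)
resp.raise_for_status()
print("pre-registered node:", resp.json())
```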

We even went a step further and enabled the Crowbar dependency graph to take alternate routes (we call it the “provides” role).  This enhancement is essential for dealing with “alike but different” infrastructure like Docker.

The result is that you can request Docker nodes in OpenCrowbar (using the API only for now) and it will automatically create the containers and attach them into Crowbar management.  It’s important to stress that we are not adding existing containers to Crowbar by adding an agent; instead, Crowbar manages the container’s life-cycle and then the work inside the container.

Getting around the PXE cycle by using containers as part of Crowbar substantially improves the Ops development cycle time because we don’t have to wait for boot > discovery > reboot > install to create a clean environment.  Bringing fresh Docker containers into a dev system takes seconds instead.
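
To give a feel for the difference (this is just an illustration using the Docker SDK for Python, not how Crowbar drives Docker internally):

```python
# Illustration only: creating a fresh, disposable "node" as a container takes
# seconds, versus a full boot > discovery > reboot > install cycle on metal.
import docker

client = docker.from_env()
container = client.containers.run(
    "ubuntu:14.04",
    command="sleep infinity",   # keep the container alive like a long-lived node
    detach=True,
    name="dev-node-01",
)
print("fresh dev node ready:", container.short_id)

# ...run your deployment tests against it, then throw it away just as quickly:
container.remove(force=True)
```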

The next step is equally powerful: Crowbar should be able to configure the Docker host environment on host nodes (not just the Admin node as we are now demonstrating).  Setting up the host can be very complex: you need to have the correct RAID, BIOS, operating system and multi-NIC networking configuration.  All of these factors must be handled with a system perspective that matches your Ops environment.  Luckily, this is exactly Crowbar’s sweet spot!

Until we’ve got that pulled together, OpenCrowbar’s ability to use upstream cookbooks and this latest Dev/Test focused step provides remarkable out-of-the-gate advantages for everyone building multi-node DevOps tools.

Enjoy!

PS: It’s worth noting that we’ve already been using Docker to run & develop the Crowbar Admin server.  This extra step makes Crowbar even more Dockeriffic.

Running with scissors > DefCore “must-pass” Road Show Starts [VIDEOS]

The OpenStack DefCore committee has been very active during this cycle turning the core definition principles into an actual list of “must-pass” capabilities (working page).  This in turn gives the community something tangible enough to review and evaluate.

TL;DR!  We appreciate those in the community who have been patient enough to help define and learn the process we’re using to make selections; however, we also recognize that most people want to jump to the results.

This week, we started a “DefCore roadshow” with the goal of learning how to make this huge body of capabilities, process and impact easier to digest (draft write-up for review & Troy Toman’s notes).  So far we’ve had two great sessions on this topic.  We took notes and recorded at both meetups (San Francisco & Austin).

My takeaways of these initial meetups are:

  • Jump to the Capabilities right away, the process history is not needed up front
  • You need more graphics – specifically, one for the selection criteria (what do you think of my 1st attempt?)
  • Work from some examples of scored capabilities
  • Include some specific use-cases with a user, 2 types of private cloud and a public cloud to help show the impact

Overall, people like what they are hearing.  It makes sense and decisions are justified.

We need more feedback!  Please help us figure out how to explain this for the broader community.

Mayflies and Dinosaurs (extending Puppies and Cattle)

Josh McKenty and I were discussing the common misconception of the “Puppies and Cattle” analogy. His position is not anti-puppy! He believes puppies are sometimes unavoidable and should be isolated into portable containers (VMs) so they can be shuffled around seamlessly. His more provocative point is that we want our underlying infrastructure to be cattle so it remains highly elastic and flexible. More cattle means a more resilient system. To me, this is a fundamental CloudOps design objective.

We realized that the perfect cloud infrastructure would structurally discourage the creation of puppies.

Imagine a cloud in which servers were automatically decommissioned after a week of use. In a sort of anti-SLA, any VM running for more than 168 hours would be (gracefully) terminated. This would force a constant churn of resources within the infrastructure that enables true cattle-like management. This cloud would be able to very gracefully rebalance load and handle disruptive management operations because the workloads are designed for the churn.

We called these servers mayflies due to their limited life span.
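
A mayfly policy is easy to sketch. Here’s a rough, hypothetical reaper using openstacksdk – the names, cloud config and one-week cutoff are illustrative, and a real version would need grace periods and exclusions:

```python
# Hypothetical "anti-SLA" reaper: gracefully retire any server older than 168 hours.
# Illustrative sketch using openstacksdk; a real policy needs grace periods,
# notifications and exclusion lists.
from datetime import datetime, timedelta, timezone
import openstack

MAX_AGE = timedelta(hours=168)              # one week
conn = openstack.connect(cloud="mycloud")   # assumed clouds.yaml entry

now = datetime.now(timezone.utc)
for server in conn.compute.servers():
    created = datetime.fromisoformat(server.created_at.replace("Z", "+00:00"))
    if now - created > MAX_AGE:
        print("retiring mayfly:", server.name)
        conn.compute.delete_server(server)  # workloads must be built to tolerate this
```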

While this approach requires a high degree of automation, the most successful cloud operators I have met are effectively building workloads with this requirement. If we require application workloads to be elastic and fault-resilient then we have a much higher degree of flexibility with the underlying infrastructure. I’ve seen this in practice with several OpenStack clouds: operators who helped applications deploy using automation were able to decommission “old” clouds much more gracefully. They effectively turned their entire cloud into a cow. Sadly, the ones without that investment puppified™ the ops infrastructure and created a much more brittle environment.

The opposite of a mayfly is the dinosaur: a server that is so brittle and locked that the slightest disturbance wipes out everything it touches.

Dinosaurs are puppies grown into a T-Rex with rows of massive razor-sharp teeth and tiny manicured hands. These are systems that are so unique and historical that there’s no way to recreate them if there’s a failure. The original maintainers’ exit happy hour was celebrated by people who were laid off two CEOs ago. The impact of dinosaurs goes beyond their operational risk; they are typically impossible to extend or maintain and, consequently, ossify other servers around them. This type of server drains elasticity from your ops team.

Puppies do not grow up to become dogs, they become dinosaurs.

It’s a classic lean adage to do hard things more frequently. Perhaps it’s time to start creating mayflies in your ops infrastructure.

OpenCrowbar reaches critical milestone – boot, discover and forge on!

We started the Crowbar project because we needed OpenStack deployments to be fast, repeatable and sharable.  We wanted a tool that looked at deployments as a system and integrated with our customers’ operations environment.  Crowbar was born as an MVP and quickly grew into a more dynamic tool that could deploy OpenStack, Hadoop, Ceph and other applications, but most critically we recognized that our knowledge gaps were substantial and we wanted to collaborate with others on the learning.  The result of that learning was a rearchitecture effort that we started at OSCON in 2012.

After nearly two years, I’m proud to show off the framework that we’ve built: OpenCrowbar addresses the limitations of Crowbar 1.x and adds critical new capabilities.

So what’s in OpenCrowbar?  Pretty much what we targeted at the launch and we’ve added some wonderful surprises too:

  • Heterogeneous Operating Systems – choose which operating system you want to install on the target servers.
  • CMDB Flexibility – don’t be locked into a DevOps toolset.  Attribute injection allows clean abstraction boundaries so you can use multiple tools (Chef and Puppet, playing together); see the sketch after this list.
  • Ops Annealer – the orchestration at Crowbar’s heart combines the best of directed graphs with late binding and parallel execution.  We believe annealing is the key ingredient for repeatable upgrades and shared OpenOps code.
  • Upstream Friendly – infrastructure as code works best as a community practice, and Crowbar uses upstream code without injecting the “crowbarisms” that were previously required.  So you can share your learning with the broader DevOps community even if they don’t use Crowbar.
  • Node Discovery (or not) – Crowbar maintains the same proven discovery image based approach that we used before, but we’ve streamlined and expanded it.  You can use Crowbar’s API outside of the PXE discovery system to accommodate Docker containers, existing systems and VMs.
  • Hardware Configuration – Crowbar maintains the same optional, hardware-neutral approach to RAID and BIOS configuration.  Configuring hardware with repeatability is difficult and requires much iterative testing.  While our approach is open and generic, my team at Dell works hard to validate on a specific set of gear: it’s impossible to make statements beyond that test matrix.
  • Network Abstraction – Crowbar dramatically extended our DevOps network abstraction.  We’ve learned that networking is the key to success for deployment and upgrade, so we’ve made Crowbar networking flexible and concise.  Crowbar networking works with attribute injection so that you can avoid hardwiring networking into DevOps scripts.
  • Out of band control – when the Annealer hands off work, Crowbar gives the worker implementation flexibility to do it on the node (using SSH) or remotely (using an API).  Making agents optional allows operators and developers to make the best choices for the actions that they need to take.
  • Technical Debt Paydown – we’ve also updated the Crowbar infrastructure to use the latest libraries like Ruby 2, Rails 4 and Chef 11.  Even more importantly, we’ve dramatically simplified the code structure, including in-repo documentation and a Docker-based developer environment that makes building a working Crowbar environment fast and repeatable.
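
Here’s the attribute-injection sketch referenced in the CMDB bullet above – an illustration of the idea rather than actual Crowbar code: the framework computes deployment data and hands it to whatever CMDB you use as plain attributes, so cookbooks and manifests never hardwire environment details.

```python
# Sketch of the attribute-injection idea: deployment data is computed by the
# framework and handed to the CMDB (Chef, Puppet, ...) as attributes instead of
# being hardwired into cookbooks or manifests.  All names below are illustrative.
import json

def network_attributes(node):
    """Turn framework network knowledge into plain attributes for any tool."""
    return {
        "admin_address": node["networks"]["admin"]["address"],
        "storage_address": node["networks"]["storage"]["address"],
        "bond_members": node["networks"]["admin"]["team"],
    }

node = {
    "networks": {
        "admin": {"address": "192.168.124.81", "team": ["em1", "em2"]},
        "storage": {"address": "10.10.10.81", "team": []},
    }
}

# Hand the same data to Chef as node attributes or to Puppet as Hiera data;
# the cookbook/manifest only ever reads attributes, never the framework internals.
print(json.dumps({"injected": network_attributes(node)}, indent=2))
```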

Why change to OpenCrowbar?  This new generation of Crowbar is structurally different from Crowbar 1, and we’ve invested substantially in refactoring the tooling, paying down technical debt and cleaning up documentation.  Since Crowbar 1 is still being actively developed, splitting the repositories allows both versions to progress with less confusion.  The majority of the principles and deployment code is very similar, so I think of Crowbar as a single community.

Interested?  Our new Docker Admin node is quick to setup and can boot and manage both virtual and physical nodes.

Anyone else find picking OpenStack summit sessions overwhelming?!

The scope and diversity of sessions for the upcoming OpenStack conference in Atlanta are simply overwhelming.  As a board member, that’s a positive sign of our success as a community; however, it’s also a challenge as we attempt to pick topics.  That’s why we turn to you, the OpenStack community, to help sift and select the content.

Even if you are not attending, we need your help in selection!  Content from the summits is archived and has a much larger reach outside of the two conference days.  Your voice matters for the community.

While it’s a simple matter to ask you to vote for my DefCore presentation and some excellent ones from my peers at Dell, I’d also like to share some of my thoughts about general trends I saw illustrated by the offerings:

  • Swift has a strong following as a solution outside of other products
  • Ceph seems to be emerging as a critical component with Cinder
  • Neutron has breadth but not depth in practice
  • HA and Upgrades remain challenges
  • We are starting to see specializations emerge (like NFV)
  • OpenStack case studies!  There are many – some of uncertain utility as references
  • Some community members and companies are super prolific in submitting sessions.  Perhaps these sessions are all great but on first pass it seems out of balance.
  • Vendor pitch or conference session?  You often get both in the same session.  We’re still not certain how to balance this.

The number and diversity of sessions is staggering – we need your help on voting.

We also need you to be part of the dialog about the conference and summits to make sure they are meeting the community needs.  My review of the sessions indicates that we are trying to serve many different audiences in a very limited time window.  I’m interested in hearing yours!  Review some sessions and let me know.

OpenStack Board Elections: What I’ll do in 2014: DefCore, Ops, & Community

OpenStack Community,

The time has come for you to choose who will fill the eight community seats on the Board (ballot links went out Sunday evening CST).  I’ve had the privilege to serve you in that capacity for 16 months and would like to continue.  I have a leadership role in Core Definition and want to continue that work.

Here are some of the reasons that I am a strong board member:

  • Proven & Active Leadership on Board – I have been very active and vocal representing the community on the Board.  In addition to my committed leadership in Core Definition, I have played important roles shaping the Gold Member grooming process and trying to adjust our election process.  I am an outspoken yet pragmatic voice for the community in board meetings.
  • Technical Leader but not on the TC – The Board needs members who are technical yet detached from the individual projects enough to represent outside and contrasting views.
  • Strong User Voice – As the senior OpenStack technologist at Dell, I have broad reach through the Dell and Red Hat partnership, with exposure to a truly broad and deep part of the community.  This makes me highly accessible to a lot of people both in and entering the community.
  • Operations Leadership – Dell was an early leader in OpenStack Operations (via OpenCrowbar) and continues to advocate strongly for key readiness activities like upgrade and high availability.  In addition, I’ve led the effort to converge advanced cookbooks from the OpenCrowbar project into the OpenStack StackForge upstreams.  This is not a trivial effort but the right investment to make for our community.
  • And there’s more… you can read about my previous Board history in my 2012 and 2013 “why vote for me” posts or my general OpenStack comments.

And now a plea to vote for other candidates too!

I had hoped that we could change the election process to limit blind corporate affinity voting; however, the board was not able to make this change without a more complex set of bylaws changes.  Based on the diversity and size of the OpenStack community, I hope that this issue may no longer be a concern.  Even so, I strongly believe that the best outcome for the OpenStack Board is to have voters look beyond corporate affiliation and consider a range of factors including business vs. technical balance, open source experience, community exposure, and ability to dedicate time to OpenStack.