A Ready State analogy: “roughed in” brings it home for non-ops-nerds

I’ve been seeing great acceptance of the concept of an ops Ready State. Technologists from both ops and dev immediately understand the need to “draw a line in the sand” between system prep and installation. We also admit that getting physical infrastructure to Ready State is largely taken for granted; however, it often takes multiple attempts to get it right, and even small application changes can require a full system rebuild.

Since even small changes can redefine the Ready State requirements, changing Ready State can feel like being told to tear down your house just to remodel the kitchen.

[Photo: raw foundation] A friend asked me to explain “Ready State” in non-technical terms. So far, the best analogy that I’ve found is when a house is “roughed in.” The analogy works best if you’ve ever been part of a house construction project, but it may not be universally accessible, so I’ll explain.

[Photo: foundation poured] Getting to Rough In means that all of the basic infrastructure of the house is in place but nothing is finished. The foundation is poured, the plumbing lines are placed, the electrical mains are ready, the roof is on, and the walls are up. The house has been built according to architectural plans, and major decisions like how many rooms there are and what each room is for (bathroom, kitchen, great room, etc.) have already been made. For Ready State, that’s like having the servers racked and set up with disk, BIOS, and network configured.

[Photo: framed out] While we’ve built a lot, rough-in is a relatively early milestone in construction. Even major items like the type of roof, siding, and windows can still be changed. Speaking of windows, this is like installing an operating system in Ready State. We want to treat this as a distinct milestone because there’s still room to make changes; once the roof and exteriors are added, changes become much more disruptive and expensive to make.

[Photo: roughed in] Once the house is roughed in, the finishing work begins. Almost nothing from the rough-in will be visible to the people living in the house. Like a Ready State setup, the users interact with what gets layered on top of the infrastructure. For homes, it’s the walls, counters, fixtures, and flooring. For operators, it’s applications like Hadoop, OpenStack or CloudFoundry.

Taking this analogy back to where we started, what if we could make rebuilding an entire house take just a day?!  In construction, that’s simply not practical; however, we’re getting to a place in Ops where automation makes it possible to reconstruct the infrastructure configuration much faster.

While we can’t re-pour the foundation (aka swap out physical gear) instantly, we should be able to build up from there to ready state in a much more repeatable way.

OpenCrowbar Design Principles: Emergent services [Series 5 of 6]

This is part 5 of 6 in a series discussing the principles behind the “ready state” and other concepts implemented in OpenCrowbar.  The content is reposted from the OpenCrowbar docs repo.

Emergent services

We see data center operations as a duel between conflicting priorities. On one hand, the environment is constantly changing and systems must adapt quickly to these changes. On the other hand, users of the infrastructure expect it to provide stable and consistent services for consumption. We’ve described that as “always ready, never finished.”

Our solution to this duality is to expect that the infrastructure Crowbar builds is decomposed into well-defined service layers that can be (re)assembled dynamically. Rather than require any component of the system to be in a ready state, Crowbar’s design principles assume that we can automate the construction of every level of the infrastructure, from BIOS to network to application. Consequently, we can hold off (re)making decisions at the bottom levels until we’ve figured out what we’re doing at the top.

Effectively, we allow the overall infrastructure services configuration to evolve or emerge based on the desired end use. These concepts are built on computer science principles that we have appropriated for Ops use; since we also subscribe to Opscode’s “infrastructure as code,” we believe these terms are fitting in a DevOps environment. In the next pages, we’ll explore the principles behind this approach, including concepts around simulated annealing, late binding, attribute injection and emergent design.

Emergent (aka iterative or evolutionary) design challenges the traditional assumption that all factors must be known before starting:

  • Dependency graph – a multidimensional set of relationships (sketched below)
  • High degree of reuse via abstraction and isolation of service boundaries
  • Increasing complexity of deployments means more dependencies
  • Increasing revision rates of dependencies, but with higher stability of APIs
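To make the dependency-graph bullet concrete, here is a minimal sketch of how a layered infrastructure could be assembled by walking a dependency graph instead of following a fixed script. The role names and the `ROLE_DEPS` mapping are invented for illustration; this is not OpenCrowbar code, just the underlying idea.

```python
from graphlib import TopologicalSorter  # stdlib in Python 3.9+

# Hypothetical service layers and their dependencies (illustrative names only).
ROLE_DEPS = {
    "bios-config": [],
    "raid-config": ["bios-config"],
    "network-config": ["bios-config"],
    "os-install": ["raid-config", "network-config"],
    "hadoop-datanode": ["os-install"],
}

def required(roles):
    """All roles needed, directly or transitively, by the requested roles."""
    seen, stack = set(), list(roles)
    while stack:
        role = stack.pop()
        if role not in seen:
            seen.add(role)
            stack.extend(ROLE_DEPS.get(role, []))
    return seen

def plan(roles):
    """Order the required roles so every dependency runs before its dependents."""
    needed = required(roles)
    return [r for r in TopologicalSorter(ROLE_DEPS).static_order() if r in needed]

print(plan(["hadoop-datanode"]))
# e.g. ['bios-config', 'raid-config', 'network-config', 'os-install', 'hadoop-datanode']
```

Because the plan is derived from the graph at request time, adding or changing a top-level workload simply changes which parts of the graph get pulled in, which is the “emergent” behavior described above.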

OpenCrowbar Design Principles: Late Binding [Series 3 of 6]

This is part 3 of 6 in a series discussing the principles behind the “ready state” and other concepts implemented in OpenCrowbar.  The content is reposted from the OpenCrowbar docs repo.

Ops Late Binding

In terms of computer science languages, late binding describes a class of 4th generation languages that do not require programmers to know all the details of the information they will store until the data is actually stored. Historically, computers required very exact and prescriptive data models, but later generation languages embraced a more flexible binding.

Ops is fluid and situational.

Much DevOps tooling leverages eventual consistency to create stable deployments. This iterative approach assumes that repeatedly executing the same idempotent scripts will deliver that result; however, it does not deliver predictable upgrades in situations where there are circular dependencies to resolve.

It’s not realistic to predict the exact configuration of a system in advance –

  • the operational requirements recursively impact how the infrastructure is configured
  • ops environments must be highly dynamic
  • resilience requires configurations to be change tolerant

Upgrades are even more complex when the steps cannot be determined in advance, because the specifics of the deployment direct the upgrade.
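As an illustration of the late-binding idea itself (the class and attribute names below are hypothetical, not OpenCrowbar’s API), the core trick is to store a reference to a value and resolve it at apply time rather than at plan time:

```python
# Minimal late-binding sketch: attribute values are looked up when the
# configuration is applied, not when it is declared.
class Deferred:
    """A placeholder that resolves against whatever data exists at apply time."""
    def __init__(self, path):
        self.path = path  # e.g. "network/admin/address"

    def resolve(self, system_data):
        value = system_data
        for key in self.path.split("/"):
            value = value[key]
        return value

# Declared early, while the real values are still unknown.
db_config = {"bind_address": Deferred("network/admin/address"), "port": 5432}

def apply(config, system_data):
    """Resolve deferred attributes against the current system state."""
    return {k: (v.resolve(system_data) if isinstance(v, Deferred) else v)
            for k, v in config.items()}

# Later, once discovery has populated the admin network...
system_data = {"network": {"admin": {"address": "192.168.124.10"}}}
print(apply(db_config, system_data))
# {'bind_address': '192.168.124.10', 'port': 5432}
```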

Late Binding is a foundational topic for Crowbar that we’ve been talking about since mid-2012. I believe that it’s an essential operational consideration for handling resiliency and upgrades. We’ve baked it deeply into OpenCrowbar’s design.

Continue Reading > post 4

OpenCrowbar Design Principles: The Ops Challenge [Series 2 of 6]

This is part 2 of 6 in a series discussing the principles behind the “ready state” and other concepts implemented in OpenCrowbar.  The content is reposted from the OpenCrowbar docs repo.

The operations challenge

A deployment framework is key to solving the problems of deploying, configuring, and scaling open source clusters for cloud computing.

Deploying an open source cloud can be a complex undertaking. Manual processes can take days or even weeks of work to get a cloud fully operational. Even then, a cloud is never static; in the real world, cloud solutions are constantly on an upgrade or improvement path. There is a continuous need to deploy new servers, add management capabilities, and track upstream releases, all while keeping the cloud running and providing reliable services to end users. Service continuity requirements dictate a need for automation and orchestration; there is no other way to reduce cost while improving the uptime and reliability of a cloud.

These were among the challenges that drove the development of the OpenCrowbar software framework from its roots as an OpenStack installer into a much broader orchestration tool. Because of this evolution, OpenCrowbar has a number of architectural features to address these challenges:

  • Abstraction Around Orchestration – OpenCrowbar is designed to simplify the operations of large scale cloud infrastructure by providing a higher level abstraction on top of existing configuration management and orchestration tools, based on a layered deployment model.
  • Web Architecture – OpenCrowbar is implemented as a web application server, with a full user interface and a predictable and consistent REST API.
  • Platform Agnostic Implementation – OpenCrowbar is designed to be platform and operating system agnostic. It supports discovery and provisioning from a bare metal state, including hardware configuration, updating and configuring BIOS and BMC boards, and operating system installation. Multiple, heterogeneous operating systems are supported. OpenCrowbar enables use of time-honored tools, industry standard tools, and any form of scriptable facility to perform its state transition operations.
  • Modular Architecture – OpenCrowbar is designed around modular plug-ins called Barclamps. Barclamps allow for extensibility and customization while encapsulating layers of deployment in manageable units.
  • State Transition Management Engine – The core of OpenCrowbar is a state machine (we call it the Annealer) that tracks nodes, roles, and their relationships in groups called deployments. The state machine is responsible for analyzing dependencies and scheduling state transition operations (transitions); a toy version is sketched after this list.
  • Data Model – OpenCrowbar uses a dedicated database to track system state and data. As discovery and deployment progress, system data is collected and made available to other components in the system. Individual components can access and update this data, reducing dependencies through a combination of deferred binding and runtime attribute injection.
  • Network Abstraction – OpenCrowbar is designed to support a flexible network abstraction, where physical interfaces, BMCs, VLANs, bonding, teaming, and other low level features are mapped to logical conduits, which can be referenced by other components. Networking configurations can be created dynamically to adapt to changing infrastructure.
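To make the Annealer bullet a little more tangible, here is a toy sketch of that kind of dependency-aware transition loop. The states, role names and methods are invented for illustration; the real engine tracks deployments, node/role bindings, error handling and much more that is omitted here.

```python
# Toy "annealer" loop: each node-role only transitions once the roles it
# depends on have already reached "active".
class NodeRole:
    def __init__(self, name, requires=()):
        self.name = name
        self.requires = list(requires)
        self.state = "proposed"

    def runnable(self, graph):
        return self.state != "active" and all(
            graph[dep].state == "active" for dep in self.requires)

def anneal(graph):
    """Keep transitioning runnable roles until nothing is left to do."""
    progress = True
    while progress:
        progress = False
        for node_role in graph.values():
            if node_role.runnable(graph):
                print(f"transitioning {node_role.name}")
                node_role.state = "active"  # stand-in for the real work
                progress = True

anneal({
    "raid":   NodeRole("raid"),
    "os":     NodeRole("os", requires=["raid"]),
    "hadoop": NodeRole("hadoop", requires=["os"]),
})
```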

Continue Reading > post 3

Networking in Cloud Environments, SDN, NFV, and why it matters [part 1 of 2]

Scott Jensen is an Engineering Director and colleague of mine from Dell with deep networking and operations experience. He has first-hand experience deploying OpenStack and Hadoop and plays a critical role in defining Dell’s Reference Architectures in those areas. When I saw this write-up about cloud networking, I asked if it would be OK to share it with you.

Guest Post 1 of 2 by Scott Jensen:

Coming from a background in enterprise data center networking and cloud computing, I have many conversations with customers implementing a cloud infrastructure. The design of their networking infrastructure can and should be different from a classic network configuration, and many do not understand why, either due to a lack of knowledge about networking or a lack of understanding of why cloud computing is different from virtualization. Once you understand both of these areas, you can begin to see why emerging technologies such as SDN (Software Defined Networking) and NFV (Network Function Virtualization) begin to address some of the issues that cloud computing can cause with your network.

Networking is all about traffic flows.  In order to properly design your infrastructure you need to understand where traffic is originating, where it is going and how much traffic will be following a specific route and at what times.

There are many differences between Cloud Computing and virtualization. In many cases, people I talk to think of cloud as virtualization in a different environment. Of course, this will work just fine; however, it does not take advantage of the goodness that a cloud infrastructure can bring. Some of the major differences between virtualization and Cloud Computing have profound effects on how the network is utilized. This all has to do with the application; that is really what it is all about anyway. Rob Hirschfeld has a great post on the difference between Pets and Cattle which describes this well.

Pets and Cattle as a workload evolution

In typical virtualized infrastructures, the applications have a fairly common pattern. Many people describe these as Pets, and they are managed largely the same as a physical system. They have a name, they are one of a kind, they are cared for, and when they die it can be traumatic (I know, I have been there).

  • They run on large stateful VMs
  • They have a lifecycle which is typically very long such as years
  • The applications themselves are not designed to tolerate failures.  Other technologies are brought in to ensure uptime.
  • The application is scaled up when demands increase.  This is done by adding more memory or CPU to the VM.

Cloud applications are different.  Some people describe them as cattle and they are treated like cattle in many ways.  They do not necessarily have a name and if one dies it is sad but not a really big deal.  We should probably figure out what killed it but life goes on.

  • They run on smaller stateless VMs
  • They have a lifecycle measured in hours or months.  Sometimes even less than an hour.
  • The application is designed to expect failures
  • The application scales out by increasing the number of running instances when demand increases.
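To put the scale-up versus scale-out difference in concrete terms, here is a toy sketch (hypothetical names, no real cloud API) of the two responses to increased demand:

```python
# Illustrative only: two ways of reacting to increased demand.

def scale_up(vm, extra_ram_gb, extra_vcpus):
    """Pet-style scaling: make the one VM bigger (usually means downtime)."""
    vm["ram_gb"] += extra_ram_gb
    vm["vcpus"] += extra_vcpus
    return vm

def scale_out(instances, template, count):
    """Cattle-style scaling: add more small, stateless instances."""
    instances.extend(dict(template) for _ in range(count))
    return instances

pet = {"name": "erp-prod-01", "ram_gb": 64, "vcpus": 8}
print(scale_up(pet, extra_ram_gb=32, extra_vcpus=4))

cattle = [{"role": "web", "ram_gb": 2, "vcpus": 1}]
print(len(scale_out(cattle, {"role": "web", "ram_gb": 2, "vcpus": 1}, count=3)))  # 4
```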

In his follow-up post next week, Scott discusses how this impacts the network and how SDN and NFV promise to help.


My insights from OpenStack “what is core” Spider > we need pluggable architectures

THIS POST IS #3 IN A SERIES ABOUT “WHAT IS CORE.”

So what did we learn from the spider map exercise? Above all else, the spider confirmed to me that OpenStack is a world of paradox. The perfect definition of core may be elusive, but I believe we can find one that is sufficient.

The goal was understanding, not philosophical truth. In a diverse and vibrant community with many objectives, understanding leads to consensus, while being “right” can become very lonely.

Since our goal was not to answer the question, what did we want to accomplish?  Spider success was defined as creating a framework, really a list of agreed positions (post #4), that narrows the scope of the “what is core” dialog.

Too vague a framework leads to uncertainty about what’s included, stable and working, while too rigid a baseline could drive away innovation and lead to forking. Being too aggressively open could discourage commercial investment, yet too proprietary an approach contradicts our collaboration and community values.

Having a workable framework that accommodates these diverse positions allows us to move forward.

So what did Alan and I learn from the spider to help the discussion?

  • “Plug-ins” are essential to the definition of core because they create safe places for innovation  [note: there has been much refinement of what "plug-in" means here]
  • It is possible to balance between stability and innovation if we have a way to allow implementations to evolve
  • OpenStack has a significant commercial ecosystem that needs to be accommodated in core
  • We need an approach that allows extension and improvement without having to incubate new projects
  • We need to ensure that we use brand and culture to combat forking
  • Interoperability is a worthy goal
  • Everyone thinks testing is good, but it’s still a sidebar
  • There are multiple distinct audiences with conflicting goals: some want stability and durability while others want innovation and flexibility.

Of these insights, the need to discuss how OpenStack promotes a plug-in architecture seemed to address the most points of tension.  [update: in the course of discussion, we've defined plug-in more generally to be something like "a designated section of implementation code that can be altered without negatively changing the base function of the project."]
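As a purely illustrative example of what that working definition allows, here is a minimal plug-in registry in Python: the base project depends only on the interface, so alternate implementations can be swapped in without altering the base function. The driver names and interface are invented for this sketch, not OpenStack code.

```python
# Minimal plug-in registry sketch; names and interfaces are hypothetical.
from abc import ABC, abstractmethod

class SchedulerDriver(ABC):
    """The 'designated section' that implementations may replace."""
    @abstractmethod
    def pick_host(self, hosts, request):
        ...

_DRIVERS = {}

def register(name):
    def wrap(cls):
        _DRIVERS[name] = cls
        return cls
    return wrap

@register("fill_first")
class FillFirst(SchedulerDriver):
    def pick_host(self, hosts, request):
        return max(hosts, key=lambda h: h["used"])  # pack hosts tightly

@register("spread")
class Spread(SchedulerDriver):
    def pick_host(self, hosts, request):
        return min(hosts, key=lambda h: h["used"])  # spread the load out

def get_driver(name):
    return _DRIVERS[name]()

hosts = [{"name": "a", "used": 0.7}, {"name": "b", "used": 0.2}]
print(get_driver("spread").pick_host(hosts, request={}))  # {'name': 'b', 'used': 0.2}
```

The point of the sketch is that innovation happens inside the registered drivers while the calling code, and therefore the base function of the project, stays stable.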

This is not the only item worth discussing, but it was the one that made the most sense to cover first based on the spider map.  Our idea was that having the community find agreement on how we approach plug-ins would lead us closer to a common ground for the “what is core” discussion.

Finding a common thread shrinks the problem space so the Board, TC and Community can advance in discussion.  So far, that assessment has proven accurate enough to move the dialog forward.

READ POST #4: TWELVE POSITIONS

DevOps approaches to upgrade: Cube Visualization

I’m working on my OpenStack summit talk about DevOps upgrade patterns and got to a point where there are three major vectors to consider:

  1. Step Size (shown as X axis): do we make upgrades in small frequent steps or queue up changes into larger bundles? Larger steps mean that there are more changes to be accommodated simultaneously.
  2. Change Leader (shown as Y axis): do we upgrade the server or the client first? Regardless of the choice, the followers should be able to handle multiple protocol versions if we are going to have any hope of a reasonable upgrade.
  3. Safeness (shown as Z axis): do the changes preserve the data and productivity of the entity being upgraded? It is simpler to assume that we just add new components and remove old ones; however, this approach carries significant risks or redundancy requirements.

I’m strongly biased towards continuous deployment because I think it reduces risk and increases agility; however, laying out all the vertices of the upgrade cube helps to visualize where the costs and risks are being added in the traditional upgrade models.

Breaking down each vertex:

  1. Continuous Deploy – core infrastructure is updated on a regular (usually daily or faster) basis
  2. Protocol Driven – like changing to HTML5, the clients are tolerant to multiple protocols and changes take a long time to roll out
  3. Staged Upgrade – a tightly coordinated migration between major versions over a short period of time, in which all of the components in the system step from one version to the next together.
  4. Rolling Upgrade – the system operates a small band of versions simultaneously; the components with the oldest versions are in the process of being removed and their capacity replaced with new nodes running the latest versions.
  5. Parallel Operation – two server systems operate and clients choose when to migrate to the latest version.
  6. Protocol Stepping – roll out clients that support multiple versions, then upgrade the server infrastructure only after all clients can support both versions.
  7. Forced Client Migration – change the server infrastructure and then force the clients to upgrade before they can reconnect.
  8. Big Bang – you have to shut down all components of the system to upgrade it
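One way to see how the three vectors generate these eight vertices is to enumerate the cube programmatically. The sketch below only enumerates the combinations; matching each combination to the named strategies above is what the visualization itself does.

```python
# The three upgrade vectors span a cube with eight vertices, one per combination.
from itertools import product

STEP_SIZE     = ["small/frequent", "large/batched"]      # X axis
CHANGE_LEADER = ["server first", "client first"]         # Y axis
SAFENESS      = ["preserves data", "replace & discard"]  # Z axis

for i, (x, y, z) in enumerate(product(STEP_SIZE, CHANGE_LEADER, SAFENESS), start=1):
    print(f"vertex {i}: step={x}, leader={y}, safeness={z}")
```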

This type of visualization helps me identify costs and options. It’s not likely to get much time in the final presentation so I’m hoping to hear in advance if it resonates with others.

PS: like this visualization? check out my “magic 8 cube” for cloud hosting options.

Continuous Release combats disruptions of “Free Fall” development

Since I posted the “Free Fall” development post, I’ve been thinking a bit about the pros and cons of this type of off-release development.

The OpenStack Swift project does not do free fall because they maintain a constant “ship ready” state for the project and only loosely follow the broader OpenStack release track.  My team at Dell also has minimal free fall development because we have a more frequent release clock and choose to have the team focus together through dev/integrate/harden cycles as much as possible.

From a Lean/Agile/CI perspective, I would work to avoid hidden development where possible.  New features are introduced by split test (they are in the code, but not active for most users) so that all changes are incremental.  That means that refactoring, rearchitecture and new capabilities appear less disruptively.  While this approach appears to take more effort in the short term, my experience is that it accelerates delivery because we are less likely to over-develop code.
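For readers unfamiliar with the split-test technique mentioned above, here is a bare-bones sketch (hypothetical flag names, no real feature-flag service) of how new code can ship dark and then be enabled for a growing slice of users:

```python
# Sketch of shipping a feature "dark": the new code path is merged and
# deployed, but only enabled for a small slice of users.
import hashlib

FLAGS = {"new_scheduler": 0.05}  # roll out to ~5% of users

def enabled(flag, user_id):
    """Deterministically bucket a user into the rollout percentage."""
    pct = FLAGS.get(flag, 0.0)
    bucket = int(hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest(), 16) % 100
    return bucket < pct * 100

def old_schedule(job): return f"old:{job}"
def new_schedule(job): return f"new:{job}"   # refactored path, already merged

def schedule(job, user_id):
    if enabled("new_scheduler", user_id):
        return new_schedule(job)
    return old_schedule(job)     # current behaviour for everyone else

print(schedule("backup", user_id="alice"))
```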

Unfortunately, free fall development has the opposite effect.  Having code that appears in big blocks is contrary to best practices in my opinion.  Further, it rewards groups that work asynchronously.

While I think that OpenStack benefits from free fall work, I think that it is ultimately counter-productive.

Why doesn’t Chef call them “bowls” instead of roles?

The extended Crowbar team (my employer Dell and the community) recently had a bit of a heated discussion over the renaming of “proposals” to “configurations.” It was pretty clear that the term “proposal” confused users because an “active proposal” seems like a bit of an oxymoron. Excepting Scott Jensen, our schedule-czar and director of engineering, we had relatively few die-hard “I love proposal” advocates; however, deciding on an alternative was not quite so easy.

Ultimately, the question came down to “do we use an invented term or an intuitive term.”  The answer lies in when to use or avoid the congruity theory as articulated by Roger Cauvin.

We considered many alternatives like calling them “fixtures” to go along with the Crowbar & Barclamp tool theme. Even “Chuck Norris” was considered until copyright issues were flagged. The top alternative, “configuration” seemed just too bland. Frankly and amazingly, we originally considered it “too descriptive!”

The crux of the argument really revolved around the users’ ability to intuitively grasp a concept versus forcing them to learn a new term. For example, we specifically chose “barclamp” instead of “module” because we felt that there were more components to a barclamp than just being a Crowbar module. In many ways, module would have been sufficiently descriptive; however, we saw enough benefit to justify the user tax of introducing a new term. It also fit nicely within our tool theme.

Opscode Chef is an example of investing heavily in a naming theme. For example, the concept of “cookbooks” and “recipes” seems relatively intuitive for users but starts getting stretched for “knife” because it is not immediately clear to users what that component does (it executes instructions on nodes and the server). After learning Chef, I appreciate “knife” as the universal tool but still remember having to figure it out.

A good theme is awesome, but it can quickly encumber usability.

For example, what if Chef had used “bowl” instead of “role”? It’s logical: you put a group of ingredients to mix into a bowl, which acts as a container. While it may be logical to the initiated, it mainly extends the learning curve for new users. A role is a commonly accepted term for an operational classification, so it is a much better term for users. The same is true for “node” and “data bag.”

I love a good incongruent theme as much as any meme-enabled tech geek but themes must not hinder usability. After all, we all fight for the users.

The real workloads begin: Crowbar’s Sophomore Year

Given Crowbar’s frenetic Freshman year, it’s impossible to predict everything that Crowbar could become. I certainly aspire to see the project gain a stronger developer community, and the seeds of this transformation are sprouting. I also see that community-driven work is positioning Crowbar to break beyond being the platform for the OpenStack and Apache Hadoop solutions that pay the bills for my team at Dell to invest in Crowbar development.

I don’t have to look beyond the summer to see important development for Crowbar because of the substantial goals of the Crowbar 2.0 refactor.

Crowbar 2.0 is really just around the corner so I’d like to set some longer range goals for our next year.

  • Growing acceptance of Crowbar as an in data center extension for DevOps tools (what I call CloudOps)
  • Deeper integration into more operating environments beyond the core Linux flavors (like virtualization hosts, closed and special purpose operating systems)
  • Improvements in dynamic networking configuration
  • Enabling more online network connected operating modes
  • Taking on production ops challenges of scale, high availability and migration
  • Formalization of our community engagement with summits, user groups, and broader developer contributions.

For example, Crowbar 2.0 will be able to handle downloading packages and applications from the internet. Online content is not a major benefit without being able to stage and control how those new packages are deployed; consequently, our goal remains tightly focused on improvements in orchestration.

These changes create a foundation that enables a more dynamic operating environment. Ultimately, I see Crowbar driving towards a vision of fully integrated continuous operations; however, Greg & Rob’s Crowbar vision is the topic for tomorrow’s post.