OpenCrowbar Design Principles: Reintroduction [Series 1 of 6]

While “ready state” as a concept has been getting a lot of positive response, I forget that much of the innovation and learning behind that concept never surfaced as posts here.  For the Anvil (2.0) release, the OpenCrowbar team cataloged our principles in the docs.  Now it’s time to repost the team’s work here as a short series over the next three days.

In architecting the Crowbar operational model, we’ve consistently adapted traditional computer science concepts like late binding, simulated annealing, emergent behavior, attribute injection and functional programming to create a repeatable platform for sharing open operations practice (post 2).

Functional DevOps aka “FuncOps”

Ok, maybe that’s not going to be the 70’s-era hype bubble name, but… the operational model behind Crowbar is entering its third generation, and it’s important to understand that the state isolation and integration principles behind that model are closer to functional than to declarative programming.
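
To make the distinction concrete, here is a minimal, purely illustrative Python sketch (not Crowbar code) of the “functional” idea: the next set of actions is computed as a pure function of the observed and desired state, with no hidden mutation along the way.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NodeState:
    name: str
    os_installed: bool
    services: frozenset

def plan(observed: NodeState, desired: NodeState) -> list:
    """Pure function: the same inputs always yield the same action list."""
    actions = []
    if desired.os_installed and not observed.os_installed:
        actions.append(("install_os", observed.name))
    for svc in desired.services - observed.services:
        actions.append(("deploy_service", observed.name, svc))
    return actions

# The planner never mutates state; an executor would apply the actions and
# report back a *new* observed state, which feeds the next planning pass.
current = NodeState("node1", os_installed=False, services=frozenset())
target = NodeState("node1", os_installed=True, services=frozenset({"swift-proxy"}))
print(plan(current, target))
```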

Parliament is Crowbar’s official FuncOps sound track

The model is critical because it shapes how Crowbar approaches the infrastructure at a fundamental level, so it is easier to interact with the platform if you see how we are approaching operations. Crowbar’s goal is to create emergent services.

We’ll explore those topics in this series to explain Crowbar’s core architectural principles.  Before we get into that, I’d like to review some history.

The Crowbar Objective

Crowbar delivers repeatable best practice deployments. Crowbar is not just about installation: we define success as a sustainable operations model where we continuously improve how people use their infrastructure. The complexity and pace of technology change are accelerating, so we must have an approach that embraces continuous delivery.

Crowbar’s objective is to help operators become more efficient, stable and resilient over time.

Background

When Greg Althaus (github @GAlthaus) and Rob “zehicle” Hirschfeld (github @CloudEdge) started the project, we had some very specific targets in mind. We’d been working towards using organic emergent swarming (think ants) to model continuous application deployment. We had also been struggling with the most routine foundational tasks (BIOS, RAID, O/S install, networking, ops infrastructure) when bringing up early scale cloud & data applications. Another key contributor, Victor Lowther (github @VictorLowther), has critical experience in Linux operations, networking and dependency resolution that led to significant contributions around the Annealer and networking model. These backgrounds heavily influenced how we approached Crowbar.

First, we started with the best-of-field DevOps infrastructure: Opscode Chef. There was already a remarkable open source community around this tool and an enthusiastic following among cloud and scale operators. Using Chef to do the majority of the installation left the Crowbar team free to focus on the key features below.

Key Features

  • Heterogeneous Operating Systems – choose which operating system you want to install on the target servers.
  • CMDB Flexibility (see picture) – don’t be locked into a DevOps toolset. Attribute injection allows clean abstraction boundaries so you can use multiple tools (Chef and Puppet, playing together).
  • Ops Annealer – the orchestration at Crowbar’s heart combines the best of directed graphs with late binding and parallel execution. We believe annealing is the key ingredient for repeatable and shared OpenOps code upgrades (see the sketch after this list).
  • Upstream Friendly – infrastructure as code works best as a community practice, and Crowbar uses upstream code without injecting the “crowbarisms” that were previously required. That means you can share your learning with the broader DevOps community even if they don’t use Crowbar.
  • Node Discovery (or not) – Crowbar maintains the same proven discovery-image-based approach that we used before, but we’ve streamlined and expanded it. You can use Crowbar’s API outside of the PXE discovery system to accommodate Docker containers, existing systems and VMs.
  • Hardware Configuration – Crowbar maintains the same optional, hardware-neutral approach to RAID and BIOS configuration. Configuring hardware with repeatability is difficult and requires much iterative testing. While our approach is open and generic, the team at Dell works hard to validate on a specific set of gear: it’s impossible to make statements beyond that test matrix.
  • Network Abstraction – Crowbar dramatically extended our DevOps network abstraction. We’ve learned that networking is the key to success for deployment and upgrade, so we’ve made Crowbar networking flexible and concise. Crowbar networking works with attribute injection so that you can avoid hardwiring networking into DevOps scripts.
  • Out of band control – when the Annealer hands off work, Crowbar gives the worker implementation the flexibility to do it on the node (using SSH) or remotely (using an API). Making agents optional allows operators and developers to make the best choices for the actions that they need to take.
  • Technical Debt Paydown – we’ve also updated the Crowbar infrastructure to use the latest platforms like Ruby 2, Rails 4 and Chef 11. Even more importantly, we’ve dramatically simplified the code structure, including in-repo documentation and a Docker-based developer environment that makes building a working Crowbar environment fast and repeatable.
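
To make the Annealer and attribute-injection ideas above more concrete, here is a tiny illustrative Python sketch (this is not Crowbar’s engine, and the role names are invented): roles form a directed dependency graph, attributes are bound late at execution time, and every role whose dependencies are satisfied runs in parallel.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical role graph: each role lists the roles it depends on.
GRAPH = {
    "discover": [],
    "bios": ["discover"],
    "raid": ["discover"],
    "os-install": ["bios", "raid"],
    "network": ["os-install"],
}

def run_role(role, attrs):
    # Late binding: attributes are injected at execution time,
    # not hardwired into the role definition.
    print(f"running {role} with {attrs.get(role, {})}")
    return role

def anneal(graph, attrs):
    done = set()
    with ThreadPoolExecutor() as pool:
        while len(done) < len(graph):
            ready = [r for r, deps in graph.items()
                     if r not in done and set(deps) <= done]
            # Run every role whose dependencies are satisfied, in parallel.
            for finished in pool.map(lambda r: run_role(r, attrs), ready):
                done.add(finished)

anneal(GRAPH, {"network": {"vlan": 100}})
```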

OpenCrowbar (CB2) vs Crowbar (CB1)?

Why change to OpenCrowbar? This new generation of Crowbar is structurally different from Crowbar 1, and we’ve invested substantially in refactoring the tooling, paying down technical debt and cleaning up documentation. Since Crowbar 1 is still being actively developed, splitting the repositories allows both versions to progress with less confusion. The majority of the principles and deployment code is very similar, so I think of Crowbar as a single community.

Continue Reading > post 2

Ballistic Release Cycles: Tracking the Trajectory of OpenStack Milestones

I’ve been watching a pattern emerge on the semiannual OpenStack release cycles for a while now. There is a hidden but crucial development phase that accelerates projects faster than many observers realize. In fact, I believe that substantial work is happening outside of the “normal” design cycle during what I call “free fall” development.

Understanding when the cool, innovative stuff happens is essential to getting (and giving) the most from OpenStack.

The published release cycle looks like a 6-stage ballistic trajectory. Launching at the design summit, the release features change and progress the most in the first 3 milestones. At the apogee of the release, maximum velocity is reached just as we start having to decide which features are complete enough to include in the release. Since many are not ready, we have to jettison (really, defer) partial work to ensure that we can land the release on schedule.

I think of the period where we lose potential features as free fall because things can go in any direction. The release literally reverses course: instead of expanding, it is contracting. This process is very healthy for OpenStack. It favors code stability and “long” hardening times. For operators, this means that the code stops changing early enough that we have more time to test and operationalize the release.

But what happens to the jettisoned work? In free fall, objects in motion stay in motion. The code does not just disappear! It continues on its original upward trajectory.

The developers who invested time in the code do not simply take a 3 month sabbatical, nor do they stop their work and start testing the code that was kept. No, after the short in/out sorting pause, the free fall work continues onward with rockets blasting. The challenge is that it is now getting outside of the orbit of the release plan and beyond the radar of many people who are tracking the release.

The consequence of this ongoing development is that developers (and the features they are working on) show up at the summit with 3 extra months of work completed. It also means that OpenStack starts each release cycle with a bucket of operationally ready code. Wow, that’s a huge advantage for the project in terms of delivered work, feature velocity and innovation. Even better, it means that the design summit can focus on practical discussions of real prototypes and functional features.

Unfortunately, this free fall work has hidden costs:

  • It is relatively hidden because it is outside of the normal release cycle.
  • It makes true design discussions less productive because the implemented code is more likely to make the next release cycle
  • Integration for the work is postponed because it continues before branching
  • Teams that are busy hardening a core feature can be left out of work on the next iteration of the same feature
  • Forking can make it hard to capture bugs caught during hardening

I think OpenStack greatly benefits from free fall development; consequently, I think we need to acknowledge and embrace it to reduce its costs. A more explicit mid-release design synchronization when or before we fork may help make this hidden work more transparent.

Open Community Access to Crowbar 2 Efforts

We’re moving along on the Crowbar2 refactoring work (`/release/rails3anddb/master` branch for now) and it’s time to start making it easier for you to participate if you are interested.

We are planning to start having TWO weekly community sprint meetings.

NOTE: Times can still shift depending on community input! We are trying to build a truly global community, so we need your input.

The weekly design discussions will be on Tuesdays @ 10am Central (GMT -6). The topics will be relevant to the coming sprint and we expect dialog. The topics will be determined by Greg Althaus based on progress on the refactor. You’re welcome to contact him or the list with suggestions.

The purpose of these meetings is to

  • discuss/resolve design questions related to the refactor
  • document use-cases
  • identify issues that need to be addressed in the next sprint

The weekly coordination meetings will be on Thursdays @ 8am Central (GMT -6). We want to respect everyone’s time and will strictly limit these to 1 hour. This meeting is in between our internal sprint review and planning so we have flexibility to adjust our plans based on your input. It is important to us that we make it possible for you to contribute and we need your input to make sure that you are not blocked!

The coordination meetings will be structured as follows (times approximate)

  • Voice & Screencast: https://join.me/dellcrowbar
  • 25 minutes for review of current progress
  • 10 minutes for feedback/adjustments on process and workflow
  • 25 minutes for planning of next sprint
  • Online discussion & notes on the http://crowbar.sync.in/sprintMMDD etherpads
  • Identification of working groups for further discussion and coding collaboration

These meetings are the primary way that we will be making sure the community is not blocked by our development efforts.

The purpose of these meetings is to

  • synchronize quickly so that we can connect the people who should be collaborating
  • eliminate blocking items for Crowbar contributors

Of course, we will attempt to record and post all of these meetings.

Stay tuned! We will likely announce additional meetings for community collaboration.

PS: These are design meetings, they are NOT Crowbar training meetings. Please consult http://bit.ly/crowbarwiki for links and videos about learning Crowbar.

Crowbar 2.0 Objectives: Scalable, Heterogeneous, Flexible and Connected

The seeds for Crowbar 2.0 have been in the 1.x code base for a while and were recently accelerated by SuSE.  With the Dell | Cloudera 4 Hadoop and Essex OpenStack-powered releases behind us, we will now be totally focused on bringing these seeds to fruition in the next two months.

Getting the core Crowbar 2.0 changes working is not a major refactoring effort in calendar time; however, it will impact current Crowbar developers by changing and improving the programming APIs. The Dell Crowbar team decided to treat this as a focused refactoring effort because several important changes are tightly coupled. We cannot solve them independently without causing a larger disruption.

All of the Crowbar 2.0 changes address issues and concerns raised in the community and are needed to support the expansion of our OpenStack and Hadoop application deployments.

Our technical objective for Crowbar 2.0 is to simplify and streamline development efforts as the development and user community grows. We are seeking to:

  1. simplify our use of Chef and eliminate Crowbar requirements in our Opscode Chef recipes. This will:
    1. reduce the initial effort required to leverage Crowbar
    2. open Crowbar to a broader audience (see Upstreaming)
  2. provide heterogeneous / multiple operating system deployments. This enables:
    1. multiple versions of the same OS running for upgrades
    2. different operating systems operating simultaneously (and dealing with heterogeneous packaging issues)
    3. accommodation of no-agent systems like locked systems (e.g.: virtualization hosts) and switches (aka external entities)
    4. UEFI booting in Sledgehammer
  3. strengthen networking abstractions
    1. allow networking configurations to be created dynamically (so that users are not locked into choices made before Crowbar deployment)
    2. better manage connected operations
    3. enable pull-from-source deployments that are ahead of (or forked from) available packages.
  4. improvements in Crowbar’s core database and state machine to enable
    1. larger scale concerns
    2. controlled production migrations and upgrades
  5. other important items
    1. make documentation more coupled to current features and easier to maintain
    2. upgrade to Rails 3 to simplify code base, security and performance
    3. deepen automated test coverage and capabilities

Beyond these great technical targets, we want Crowbar 2.0 to address barriers to adoption that have been raised by our community, customers and partners. We have been tracking concerns about the learning curve for adding barclamps, the complexity of networking configuration, and packaging into a single ISO.

We will kick off the community part of this effort with an online review on 7/16 (details).

PS: why a refactoring?

My team at Dell does not take on any refactoring changes lightly because they are disruptive to our community; however, a convergence of requirements has made it necessary to update several core components simultaneously. Specifically, we found that desired changes in networking, operating systems, packaging, configuration management, scale and hardware support all required interlocked changes. We have been bringing many of these changes into the code base in preparation and have reached a point where the next steps require changing Crowbar 1.0 semantics.

We are first and foremost an incremental architecture & lean development team – Crowbar 2.0 will have the smallest footprint needed to begin the transformations that are currently blocking us. There is significant room during and after the refactor for the community to shape Crowbar.

#OpenStack Blueprint for Cloud Installer (#crowbar, #apache2)

Tonight I submitted a formal OpenStack Common blueprint for Crowbar as a cloud installer. My team at Dell considers this to be our first step towards delivering the code as open source (next few weeks), and we want to show the community the design thinking behind the project.  Crowbar currently only embodies a fraction of this scope, but we have designed it looking forward.

I’ve copied the text of our initial blueprint here until it is approved.  The living document will be maintained at the OpenStack Launchpad and I will update links appropriately.

Here’s what I submitted:

Note: Installer is used here because of convention. The scope of this blueprint is intended to include expansion and maintenance of the OpenStack infrastructure.

Summary

This blueprint creates a common installation system for OpenStack infrastructure and components. The installer should be able to discover and configure physical equipment (servers, switches, etc.) and then deploy the OpenStack software components in an optimum way for the discovered infrastructure. Minimal manual steps should be needed for setup and maintenance of the system.

Users should be able to leverage and contribute to components of the system without deploying 100% of the system. This encourages community collaboration. For example, installation scripts that deploy and configure OpenStack components should be usable without using bare metal configuration and vice-versa.

The expected result will be installations that are 100% automated after racking gear with no individual touch of any components.

This means that the installer will be able to

  • expand physical capacity
  • update software components
  • add new software components
  • cope with heterogeneous environments (hardware, OpenStack components, hypervisors, operating systems, etc)
  • handle rolling upgrades (due to the scale of OpenStack target deployments)

 

Release Note

Not currently released. Reference code (“Crowbar”) to be delivered by Dell via GitHub.

Rationale / Problem Statement

A complete deployment system is an essential component to ensure adoption; it also fosters sharing and encoding of operational methods by the community. This follows an “Open Ops” strategy that encourages OpenStack users to create and share best practices.

The installer addresses the following needs

  • Community collaboration on deployment scripts and architecture.
  • Bare metal installation – this is different from, but possibly related to, Nova bare metal provisioning
  • OpenStack is evolving (Ops Model, CloudOps)
  • Provide a common installation platform to facilitate consistent deployments

It is important that the installer does NOT

  • constrain architecture to limit scale
  • create extra effort to re-balance as system capacity grows

This design includes an “Ops Infrastructure API” for use by other components and services. This REST API will allow trusted applications to discover and inspect the operational infrastructure to provide additional services (a hypothetical client sketch follows the list below). The API should expose

  • Managed selection of components & requests
  • Expose internal infrastructure (not for customer use, but to enable Ops tools)
    • networks
    • nodes
    • capacity
    • configuration
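
As a sketch of how an Ops tool might consume such an API (the endpoint paths and field names here are hypothetical, since the blueprint does not define them), reading the infrastructure map could be a pair of simple REST calls:

```python
import requests

# Hypothetical endpoint names; the blueprint does not specify a URL scheme.
BASE = "http://admin.example.com/ops/v1"

nodes = requests.get(f"{BASE}/nodes").json()        # e.g. [{"name": ..., "roles": [...]}]
networks = requests.get(f"{BASE}/networks").json()  # admin, public, storage, ...

for node in nodes:
    print(node["name"], node.get("capacity", {}), node.get("roles", []))
```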

 

Assumptions

 

  • OpenStack code base will not limit development based on current architecture practices. Cloud architectures will need to adapt.
  • Expectation to use IP-based system management tools to provide out of band reboot and power controls.

 

Design

The installation process has multiple operations phases: 1) bare metal provisioning, 2) component deployment, and 3) upgrade/redeployment. While each phase is distinct, they must act in a coordinated way.

A provisioning state machine (PSM) is a core concept for this overall installation architecture. The PSM must be extensible so that new capabilities and sequences can be added.
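
One minimal way to picture an extensible PSM (this is an illustrative Python sketch, not the blueprint’s concrete design) is a transition table that new capabilities can register into without modifying existing states:

```python
class ProvisioningStateMachine:
    def __init__(self):
        # state -> {event: next_state}; extended by registering new transitions
        self.transitions = {}

    def register(self, state, event, next_state):
        self.transitions.setdefault(state, {})[event] = next_state

    def next_state(self, state, event):
        return self.transitions.get(state, {}).get(event, state)

psm = ProvisioningStateMachine()
psm.register("discovered", "bios_done", "hardware-ready")
psm.register("hardware-ready", "os_installed", "ready")
# A new capability (e.g. burn-in) plugs in without touching existing states:
psm.register("discovered", "burnin_requested", "burn-in")
psm.register("burn-in", "burnin_done", "hardware-ready")

print(psm.next_state("discovered", "bios_done"))  # hardware-ready
```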

It is important that the installer support IPv6 as an end state. It is not required that the entire process be IPv4 or IPv6, since changing address schemes may be desirable depending on the task to be performed.

Modular Design Objective

  • should have a narrow focus for installation – a single product or capability.
  • may have pre-requisites or dependencies but as limited as possible
  • should have system, zone, and node specific configuration capabilities
  • should not interfere with operation of other modules

 

Phase 1: Bare Metal Provisioning

  • For each node:
    • Entry State: unconfigured hardware with network connectivity and PXE boot enabled.
    • Exit State: minimal node config (correct operating system installed, system named and registered, checked into OpenStack install manager)

The core element for Phase 1 is a “PXE State Machine” (a subset of the PSM) that orchestrates node provisioning through multiple installation points. This allows different installation environments to be used while the system is prepared for its final state. These environments may include BIOS & RAID configuration, diagnostics, burn-in, and security validation.

It is anticipated that nodes will pass through phase 1 provisioning FOR EACH boot cycle. This allows the Installation Manager to perform any steps that may be dictated based on the PSM. This could include diagnostic and security checks of the physical infrastructure.

Considerations:

  • REST API for updating to new states from nodes (see the sketch after this list)
  • PSM changes PXE image based on state updates
  • PSM can use IPMI to force power changes
  • DHCP reservations assigned by MAC after discovery so nodes have a predictable IP
  • Phase 1 images may change IP addresses during this phase.
  • Discovery phase would use short term DHCP addresses. The size of the DHCP lease pool may be restricted but should allow for provisioning a rack of nodes at a time.
  • Configuration parameters for Phase 1 images can be passed
    • via DHCP properties (preferred)
    • REST data
  • Discovery phase is expected to set the FQDN for the node and register it with DNS
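
As an illustration of the first consideration above, a Phase 1 image might report a state change back to the admin node like this. The endpoint, node name, and field names are hypothetical; the blueprint does not fix this API.

```python
import requests

ADMIN = "http://admin.example.com/ops/v1"  # hypothetical admin-node endpoint
NODE = "d00-26-b9-51-6c-2e"                # node name derived from its MAC

# The Phase 1 image calls home after each step; the PSM decides which PXE
# image or action comes next based on the reported state.
resp = requests.post(f"{ADMIN}/nodes/{NODE}/transition",
                     json={"state": "bios_configured"})
print(resp.status_code, resp.json().get("next_state"))
```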

 

Phase 2: Component Deployment

  • Entry State: set of nodes in minimal configuration (number required depends on components to deploy, generally >=5)
  • Requirements:
  • Exit State: one or more

During Phase 2, the installer must act on the system as a whole. The focus shifts from single node provisioning, to system level deployment and configuration.

Phase 2 extends the PSM to comprehend the dependencies between system components. The use of a state machine is essential because system configuration may require that individual nodes return to Phase 1 in order to change their physical configuration. For example, a node identified for use by Swift may need to be set up as JBOD while the same node could be configured as RAID 10 for Nova. The PSM would also be used to handle inter-dependencies between components that are difficult to script in stages, such as rebalancing a Swift ring.
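
For example, the role-to-hardware requirements could be expressed as data that the PSM checks to decide whether a node must drop back to Phase 1. This is a purely illustrative sketch; the role names and schema are invented:

```python
# Hypothetical role-to-hardware requirements; not the blueprint's actual schema.
ROLE_HW = {
    "swift-storage": {"raid": "jbod"},
    "nova-compute": {"raid": "raid10"},
}

def needs_phase1(node, role):
    """Return True if the node's hardware must be reconfigured for the role."""
    wanted = ROLE_HW.get(role, {})
    return any(node.get(key) != value for key, value in wanted.items())

node = {"name": "node7", "raid": "raid10"}
print(needs_phase1(node, "swift-storage"))  # True  -> send back to Phase 1
print(needs_phase1(node, "nova-compute"))   # False -> proceed with deployment
```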

Considerations:

  • Deployments must be infrastructure aware so they can take network topology, disk capacity, fault zones, and proximity into account.
  • System must generate a reviewable proposal for the roles nodes will perform.
  • Roles (nodes may have >1 role) define the OS & prerequisite components that execute on nodes
  • Operations on nodes should be idempotent for individual actions (multiple state operations will violate this principle by definition); see the sketch after this list
  • System wide configuration information must be available to individual configuration nodes (e.g.: Scheduler must be able to retrieve a list of all nodes and that list must be automatically updated when new nodes are added).
  • Administrators must be able to centrally override global configuration on an individual, rack and zone basis.
  • Scripts must be able to identify other nodes and find which roles they were executing
  • Must be able to handle non-OS components such as networking, VLANs, load balancers, and firewalls.
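
Idempotency simply means that applying the same action twice leaves the node in the same state as applying it once. A tiny, generic Python illustration (not Crowbar or Chef code):

```python
import os

def ensure_line(path, line):
    """Idempotent: appends `line` only if it is not already present."""
    existing = []
    if os.path.exists(path):
        with open(path) as f:
            existing = f.read().splitlines()
    if line not in existing:
        with open(path, "a") as f:
            f.write(line + "\n")

# Running this any number of times leaves exactly one copy of the entry.
ensure_line("/tmp/hosts.demo", "192.168.124.10 admin.crowbar.local")
ensure_line("/tmp/hosts.demo", "192.168.124.10 admin.crowbar.local")
```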

 

Phase 3: Upgrade / Redeployment

The ultimate objective for Phase 3 is to foster a continuous deployment capability in which updates from OpenStack can be frequently and easily implemented in a production environment with minimal risk. This requires a substantial amount of self-testing and automation.

Phase 3 maintains the system when new components arrive. Phase 3 includes the added requirements:

  • rolling upgrades so that system operation is not compromised during a deployment
  • upgrade/patch of modules
  • new modules must be aware of current deployments
  • configuration and data must be preserved
  • deployments may extend the PSM to pre-stage operations (move data and VMs) before taking action.

 

Ops API

This needs additional requirements.

The objective of the Ops API is to provide a standard way for operations tools to map the internal cloud infrastructure without duplicating discovery effort. This will allow tools that can:

  • create billing data
  • audit security
  • rebalance physical capacity
  • manage power
  • audit & enforce physical partitions between tenants
  • generate ROI analysis
  • IP Address Management (possibly integration/bootstrap with the OpenStack network services)
  • Capacity Planning

 

User Stories

 

Personas:

  • Oscar: Operations Chief
    • Knows of Chef or Puppet. Likely has some experience
    • Comfortable and likes Linux. Probably prefers CentOS
    • Can work with network configuration, but does not own network
    • Has used VMware
  • Charlie: CIO
    • Concerned about time to market and ROI
    • Is working on commercial offering based on OpenStack
  • Denise: Cloud Developer
    • Working on adding features to OpenStack
    • Working on services to pair w/ OpenStack
    • Comfortable with Ruby code
  • Quick: Data Center Worker
    • Can operate systems
    • In charge of rack and replacement of gear
    • Can supervise, but not create automation

 

Proof of Concept (PoC) use cases

 

Agrees to POC

  • Charlie agrees to be in POC by signing agreements
  • Dell gathers information about shipping and PO delivery
  • Quick provides shipping information to Dell
  • Oscar downloads the ISO and VMPlayer image from the Dell-provided site.

 

Get Equipment Setup to base

Event: The Dell equipment has just arrived.

  • Quick checks the manifest to make sure that the equipment arrived.
  • Quick racks the servers and switch following the wiring chart provided by Oscar
  • Quick follows the installation guide’s BIOS and RAID configuration parameters for the Admin Node
  • Quick powers up the servers to make sure all the lights blink then turns them back off
  • Oscar arrives with his laptop and the crowbar ISO
  • As per instructions, Oscar wires his laptop to the admin server and uses VMplayer to bootstrap the ISO image
  • Oscar logs into the VMPlayer image and configures base admin parameters
    • Hostname
    • networks (admin and public required)
      • admin ips
      • routers
      • masks
      • subnets
      • usable ranges (mostly for public).
    • Optional: ntp server(s)
    • Optional: forwarding nameserver(s)
    • passwords and accounts
    • Manually edits files that get downloaded.
  • System validates configuration for syntax and obvious semantic issues.
  • System clears switch config and sets port fast and lldp med configuration.
  • Oscar powers on the system and selects network boot (the system may automatically do this out of the “box”, but can be reset if need be).
  • Once the bootstrap and installation of the Ubuntu-based image is completed, Oscar disconnects his laptop from the Admin server and connects into the switch.
  • Oscar configures his laptop for DHCP to join the admin network.
  • Oscar looks at the Chef UI and verifies that it is running and he can see the Admin node in the list.
    • The Install guide will describe this first step and initial passwords.
    • The install guide will have a page describing a valid visualization of the environment.
  • Oscar powers on the next node in the system and monitors its progress in Chef.
    • The install guide will have a page describing this process.
    • The Chef status page will show the node arrive, and it can be monitored from there. Completion occurs when the node is “checked in”. Intermediate states can be viewed by checking the node’s state attribute.
    • Node transitions through defined flow process for discovery, bios update, bios setting, and installation of base image.
  • Once Oscar sees the node report into Chef, Oscar shows Quick how to check the system status and tells him to turn on the rest of the nodes and monitor them.
  • Quick monitors the nodes while they install. He calls Oscar when they are all in the “ready” state. Then he calls Oscar back.
  • Oscar checks their health in Nagios and Ganglia.
  • If there are any red warnings, Oscar works to fix them.

 

Install OpenStack Swift

Event: System checked out healthy from base configuration

  • Oscar logs into the Crowbar portal
  • Oscar selects swift role from role list
  • Oscar is presented with a current view of the swift deployment.
    • Which starts empty
  • Oscar asks for a proposal of swift layout
    • The UI returns a list of storage, auth, proxy, and options.
  • Oscar may take the following actions:
    • He may tweak attributes to better set deployment
      • Use admin node in swift
      • Networking options …
    • He may force a node out or into a sub-role
    • He may re-generate proposal
    • He may commit proposal
  • Oscar finishes configuration proposal and commits proposal.
  • Oscar may validate progress by watching:
    • Crowbar main screen to see that configuration has been updated.
    • Nagios to validate that services have started
    • Chef UI to see raw data.
  • Oscar checks the swift status page to validate that the swift validation tests have completed successfully.
  • If Swift validation tests fail, Oscar uses troubleshooting guide to correct problems or calls support.
    • Oscar uses re-run validation test button to see if corrective action worked.
  • Oscar is directed to Swift On-line documentation for using a swift cloud from the install guide.

 

Install OpenStack Nova

Event: System checked out healthy from base configuration

  • Oscar logs into the Crowbar portal
  • Oscar selects nova role from role list
  • Oscar is presented with a current view of the nova deployment.
    • Which starts empty
  • Oscar asks for a proposal of nova layout
    • The UI returns a list of options, and current sub-role usage (6 or 7 roles).
    • If Oscar has already configured swift, the system will automatically configure glance to use swift.
  • Oscar may take the following actions:
    • He may tweak attributes to better set deployment
      • Use admin node in nova
      • Networking options …
    • He may force a node out or into a sub-role
    • He may re-generate proposal
    • He may commit proposal
  • Oscar finishes configuration proposal and commits proposal.
  • Oscar may validate progress by watching:
    • Crowbar main screen to see that configuration has been updated.
    • Nagios to validate that services have started
    • Chef UI to see raw data.
  • Oscar checks the nova status page to validate that the nova validation tests have completed successfully.
  • If nova validation tests fail, Oscar uses troubleshooting guide to correct problems or calls support.
    • Oscar uses re-run validation test button to see if corrective action worked.
  • Oscar is directed to Nova On-line documentation for using a nova cloud from the install guide.

 

Pilot and Beyond Use Cases

 

Unattended refresh of system

This is a special case, for Denise.

  • Denise is making daily changes to OpenStack’s code base and needs to test them. She has committed changes to their git code repository and started the automated build process
  • The system automatically receives the latest code and copies it to the admin server
  • A job on the admin server sees there is new code, resets all the worker nodes to “uninstalled”, and reboots them.
  • Crowbar reimages and reinstalls the images based on its cookbooks
  • Crowbar executes the test suites against OpenStack when the install completes
  • Denise reviews the test suite report in the morning.

 

Integrate into existing management

Event: System has passed lab inspection, is about to be connected into the corporate network (or hosting data center)

  • Charlie calls Oscar to find out when PoC will start moving into production
  • Oscar realizes that he must change from Nagios to BMC on all the nodes or they will be blacklisted on the network.
  • Oscar realizes that he needs to update the SSH certificates on the nodes so they can be accessed remotely. He also has to change the accounts that have root access.
  • Option 1: Reinstall.
    • Oscar updates the Chef recipes to remove Nagios and add BMC, copy the cert and configure the accounts.
    • Oscar sets all the nodes to “uninstalled” and reimages the system.
    • Repeat the above step until the system is configured correctly
  • Option 2: Update Recipes
    • Oscar updates the Chef recipes to remove Nagios and add BMC, copy the cert and configure the accounts.
    • Oscar runs the Chef scripts and inspects one of the nodes to see if the changes were made

 

Implementation

We are offering Crowbar as a starting point. It is an extension of Opscode Chef Server that provides the state machine for phases 1 and 2. Both code bases are Apache 2 licensed.

Test/Demo Plan

TBD

Rob’s rules for good APIs (with explanations!):

or “being a provider in an age of mass consumption.”

While blogging about abstraction, I kept thinking of ways I should have avoided the awkward and painful APIs that I’ve written in the past.  Short of a mouthful of bad-tasting SOAP, here are my top API considerations.

Before you get started:

  1. Take care to design the semantics of your API because APIs are abstractions that hide complexity from, simplify the work of, and encourage good behaviors in their consumers.
  2. Think security and auditing first because it’s difficult to “see” all the options that an API will expose in a quick review.  You don’t want to deal with the mess when Venkman crosses the streams.
  3. Write tests for your API because you want to find out that you broke it before your consumers do.  Even better, also add simulators that pump data into your API so that you can run system tests.
  4. Use versioned APIs even at 1.0 because you and your consumer must expect APIs to evolve.
  5. Separate verbs (taking actions) from nouns (editing things) in your API because they have fundamentally different use, validation, and notification models. This also helps eliminate side-effects (see the sketch after this list).
  6. Do not anticipate consumers’ appetite for data because there is a cost for returning information and maintaining compatibility.
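
To make rules 4 and 5 concrete, here is a small hypothetical sketch (Flask is used only for illustration) of a versioned API that keeps nouns (resource edits) separate from verbs (actions with side-effects):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Nouns: resources are edited with standard REST methods under a versioned path.
@app.route("/api/v1/nodes/<name>", methods=["GET", "PUT"])
def node(name):
    if request.method == "PUT":
        return jsonify(request.get_json()), 200
    return jsonify({"name": name})

# Verbs: actions get their own explicit endpoints instead of hiding
# side-effects inside an object update.
@app.route("/api/v1/nodes/<name>/reboot", methods=["POST"])
def reboot(name):
    return jsonify({"action": "reboot", "node": name}), 202

if __name__ == "__main__":
    app.run()
```

Because every path carries /api/v1/, a later /api/v2/ can change behavior without breaking existing consumers.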

Now that you’re committed:

  1. Offer the minimum options to service the consumer’s use-case because it’s easier to add than remove options.
  2. Avoid exposing details to your consumer because it will lock-in your implementation and limit your flexibility.
  3. Use natural keys over system-assigned keys because users will have to make extra calls to figure out your assignments and it will lock in your implementation.
  4. Never break your API’s existing calls when adding options because you never really know who is using which API calls.
  5. Object versioning (timestamping is enough!) reduces confusion because you can’t control how long a user will hang onto an object before they decide to send it back to you.
  6. Fail loudly with obvious delight because you want to make it hard for bad data to get into your system and you also want consumers to know what went wrong (see the error-handling sketch after this list).
  7. And, when failing loudly use HTTP error codes correctly because they really do mean something.
  8. Finally, be text tolerant because strong typing makes your API harder to use and text is very robust and flexible.
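
And to illustrate rules 6 through 8 together, a hypothetical endpoint (again Flask, purely for illustration) can be text tolerant about its input while still failing loudly with the correct HTTP status and an explicit reason:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/v1/volumes", methods=["POST"])
def create_volume():
    data = request.get_json(silent=True) or {}
    size = data.get("size_gb")
    try:
        # Text tolerant: accept "10" as readily as 10.
        size = int(size)
    except (TypeError, ValueError):
        # Fail loudly with the right status code and an explicit reason.
        return jsonify({"error": f"size_gb must be an integer, got {size!r}"}), 400
    if size <= 0:
        return jsonify({"error": "size_gb must be positive"}), 422
    return jsonify({"size_gb": size, "status": "creating"}), 201
```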

Whew.  Now for a REST.